A Practical Look at AI for Modern Enterprise Teams

Written by HexaGroup | Jan 7, 2026

AI is already inside your company. Maybe not officially. But practically? It’s everywhere.

It’s on every desk, in every browser tab, and in more “quick questions” than leadership would like to admit. People are using AI at home, on their phones, and yes, at work, because it’s fast, helpful, and the path of least resistance.

That creates a risky reality: your team is already experimenting with AI, just not inside a space you control. And when AI usage happens in the wild, every prompt, pasted paragraph, and uploaded file becomes a potential leak of internal thinking.

To unpack how enterprise teams can adopt AI safely (without freezing up or handing over IP by accident), Arnaud Dasprez, CEO and Founder of HexaGroup, sat down on the Hex-Files Energy Marketing Podcast with Igor Carron, CEO and Co-Founder of LightOn, Editor at Nuit Blanche, and Co-Organizer of the Paris Machine Learning Meetup.

Keep reading for Igor’s clearest takeaways on why AI is different from past tech waves, why data control matters, how “search and reason” changes knowledge work, and what it really means to deploy AI safely. 

Listen to the full podcast here > 

"Now is the time to equip yourselves."

The most common enterprise mistake with AI is waiting for perfect certainty.

Leaders want time to evaluate vendors, draft policies, and “study the landscape.” Meanwhile, employees are already using public tools to do their jobs faster—because they have deadlines, not roadmaps.

And here’s the part that matters: prompts are not harmless.

Every question someone asks carries context: about customers, pricing, internal processes, engineering decisions, commercial strategy, and how your company thinks.

If you don’t provide a controlled internal environment, your people will find their own. And that’s how knowledge leaks quietly — one helpful chat at a time.

Igor’s idea of “equipping yourself” is simple: create a safe space where AI usage can happen inside the walls. That includes:

  • An internal AI workspace that employees can actually access
  • Policies that treat prompts, outputs, and logs as protected assets
  • Controls on what data can be uploaded, stored, or retained (see the sketch after this list)
  • Visibility into usage patterns so leaders can spot process gaps
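
To make the "controls" bullet concrete, here is a minimal sketch in Python of a pre-prompt check that redacts obviously sensitive strings before a prompt leaves the internal environment. Everything in it is an assumption for illustration: the patterns, the redact_sensitive helper, and the category labels are hypothetical, not HexaGroup's or LightOn's tooling, and a real deployment would use data-loss-prevention rules tuned to the organization.

```python
import re

# Illustrative patterns only (hypothetical); a real policy would also cover
# customer identifiers, project code names, contract terms, and more.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "dollar_amount": re.compile(r"\$\s?\d[\d,]*(?:\.\d+)?"),
}

def redact_sensitive(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders and report what fired."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Quote ACME $48,000 and reply to jane.doe@acme.com (key sk-abc123def456ghi789)"
    cleaned, hits = redact_sensitive(raw)
    print(cleaned)  # sensitive strings replaced with [REDACTED:...] placeholders
    print(hits)     # ['email', 'api_key', 'dollar_amount']
```

A check like this also feeds the visibility bullet: the findings list can be logged without the raw text, showing leaders which kinds of sensitive data people are tempted to paste into prompts.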

When you control the environment, you’re not just reducing risk. You’re creating trust and laying the groundwork for higher-value use cases.