The use of generative AI in the enterprise has spread rapidly over the past several months. The rise of paid offerings, marketed as more secure or better governed, has helped cement the idea that these tools are now fit for professional use. That reading deserves scrutiny, because it rests on an implicit assumption: that choosing the right tool is enough to ensure compliance.
In practice, when it comes to generative AI in the enterprise, the issue lies elsewhere.
A persistent confusion between tool, responsibility, and governance
A generative AI service, even an “enterprise” tier, is still an external service. Using it means transmitting data outside your information systems, having that data processed on infrastructure you don’t control, and relying heavily on individual behavior.
In that context, compliance doesn’t hinge on the service tier you’ve subscribed to. It depends on whether your organization has built an operational governance framework. All the more so given that, under European law, the principle is clear and consistent: responsibility for processing cannot be delegated.
Neither the GDPR nor the EU AI Act provides for any transfer of liability to the tool vendor.
Responsibility shifts toward how the tools are used
The terms of service for generative AI platforms are generally explicit: the content you submit and the uses you make of it are your responsibility.
That includes the lawfulness of the data transmitted, whether it’s necessary and proportionate, and compliance with confidentiality obligations.
In a professional setting, this means unmanaged usage alone is enough to create a compliance breach, regardless of any technical failure on the vendor’s side. The risk has shifted from the technology to the practices around it.
Incidents that reveal process failures, not technical ones
Several recent incidents illustrate this dynamic. In 2023, Samsung employees exposed sensitive information through ChatGPT, including internal code and confidential data. Separately, researchers have demonstrated that certain models, including Anthropic’s, can be manipulated through prompt injection techniques to exfiltrate data.
These weren’t sophisticated attacks. Quite the opposite: they reflect the absence of a usage framework.
The myth of “built-in” protections
Some solutions offer options designed to limit data use. These mechanisms have value, but their effectiveness depends on configuration choices and user maturity, and centralized oversight is often absent.
European requirements operate on a logic of compliance by design and by default. Compliance that depends on individual settings is, by nature, fragile.
A blind spot: understanding what kind of data is actually being processed
A recurring risk factor is a poor grasp of the basics. The distinction between anonymized and pseudonymized data remains widely underestimated.
Anonymized data can no longer be linked to an identifiable person, and the break is irreversible. It falls outside the scope of the GDPR. Pseudonymized data, on the other hand, is still indirectly identifiable. It can be linked back to a person through supplementary information (a mapping table, an internal identifier, etc.). It remains personal data, fully subject to GDPR requirements.
A concrete example:
- Replacing a name with “Client_4587” in an internal file is pseudonymization: as long as the organization holds a mapping table, the person is still identifiable (see the sketch after this list).
- Permanently severing all links between the data and the identity, with no possibility of re-identification, is anonymization.
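To make the distinction tangible, here is a deliberately simplified Python sketch (all names hypothetical). One caveat: in practice, genuine anonymization is harder than hashing a value and requires assessing the residual risk of re-identification; the code only illustrates the presence or absence of a way back.

```python
import hashlib
import secrets

# Pseudonymization: reversible through a mapping table the organization keeps.
mapping_table = {}  # the "supplementary information" the GDPR refers to

def pseudonymize(name: str) -> str:
    """Replace a name with a stable token; the way back is preserved."""
    if name not in mapping_table:
        mapping_table[name] = f"Client_{4587 + len(mapping_table)}"
    return mapping_table[name]

def re_identify(token: str) -> str:
    """Whoever holds the mapping table can reverse the substitution."""
    return {v: k for k, v in mapping_table.items()}[token]

# Anonymization (simplified): the link to the identity is destroyed.
def anonymize(name: str) -> str:
    """Hash with a random salt that is never stored: no table, no way back."""
    salt = secrets.token_bytes(16)  # discarded as soon as the function returns
    return hashlib.sha256(salt + name.encode()).hexdigest()[:12]

token = pseudonymize("Jane Doe")  # "Client_4587": still personal data
print(re_identify(token))         # "Jane Doe": re-identification is possible
print(anonymize("Jane Doe"))      # opaque value, no link back to the person
```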
In the context of generative AI, this distinction is critical. Sending pseudonymized data to an external service still constitutes processing personal data, with all the obligations that entails. Without training, users adopt risky practices without understanding the implications.
Heterogeneous risk levels, rarely mapped
Not all uses carry the same level of exposure. Three levels generally emerge:
- Generic uses involving no sensitive data
- Uses involving internal information
- Uses involving personal, contractual, or strategic data
The issue isn’t the tool itself; it’s the nature of the data and the context of use. Without a usage classification, there is no risk management.
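To be usable, that classification has to be made explicit rather than left to each user’s judgment. As an illustration only, a minimal Python sketch with hypothetical level names and an arbitrary rule:

```python
from enum import Enum

class UsageLevel(Enum):
    GENERIC = "generic"      # no sensitive data, e.g. rephrasing public text
    INTERNAL = "internal"    # internal information, e.g. summarizing a meeting note
    SENSITIVE = "sensitive"  # personal, contractual, or strategic data

# Hypothetical policy: only generic uses may reach an external service.
ALLOWED_FOR_EXTERNAL_AI = {UsageLevel.GENERIC}

def may_use_external_ai(level: UsageLevel) -> bool:
    """Return True if this class of use may leave the information system."""
    return level in ALLOWED_FOR_EXTERNAL_AI

assert may_use_external_ai(UsageLevel.GENERIC)
assert not may_use_external_ai(UsageLevel.SENSITIVE)  # blocked pending review
```

The value isn’t in the code itself; it’s that the rule exists, is written down, and can be audited.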
Diffuse risk, hard to audit
Unlike classic cybersecurity incidents, the risks associated with generative AI are rarely visible. They occur in ordinary actions: rewriting documents, analyzing internal content, summarizing data. That diffuse quality makes them difficult to detect, and it requires a shift in posture from technical controls to governance of practices.
Repositioning the issue where it belongs: governance, not tooling
Integrating generative AI into an organization is not a technology choice. It requires a structured framework: defining which use cases are permitted, classifying and controlling data flows, putting operational guardrails in place, maintaining a trace of interactions, training users, and establishing appropriate human oversight.
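As a sketch of what one operational guardrail might look like (names and patterns hypothetical, and pattern matching alone is not a sufficient control): a gateway that screens outbound prompts, keeps a trace of every interaction, and escalates doubtful cases to a human.

```python
import datetime
import re

# Illustrative deny-list: patterns suggesting personal or pseudonymized data.
PERSONAL_DATA_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),  # email addresses
    re.compile(r"\bClient_\d+\b"),            # internal pseudonymous identifiers
]

audit_log = []  # in practice: an append-only, centrally monitored store

def submit_prompt(user: str, prompt: str) -> str:
    """Screen every outbound prompt, trace it, then (and only then) forward it."""
    flagged = any(p.search(prompt) for p in PERSONAL_DATA_PATTERNS)
    audit_log.append({
        "user": user,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "flagged": flagged,
    })
    if flagged:
        raise PermissionError("Blocked: possible personal data; human review required")
    return call_vendor_api(prompt)

def call_vendor_api(prompt: str) -> str:
    """Stand-in for the actual vendor call."""
    return f"(model response to: {prompt[:40]})"
```

The decision to transmit is made by a rule the organization owns, and every interaction leaves a trace that can be reviewed.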
In other words, it calls for an AI risk governance system, not simply the adoption of a tool. At this point, the latter still seems to be the norm.
Closing thoughts on generative AI in the enterprise
Using a paid generative AI solution is not a compliance guarantee. The question isn’t just which vendor you chose; it’s whether your organization can govern its uses, control its data, and own its responsibilities. My consistent observation: technology is not the risk. It’s a risk amplifier. It strengthens organizations that have mastered their processes, and exposes those that haven’t built them yet.
Read more: The impact of AI on the business: a question of process before being a question of productivity