Here is the fundamental dilemma of Artificial Intelligence in any professional setting: you cannot use the most powerful AI tools without putting your data into them, and you cannot put your data into them without risking its confidentiality. This is not a bug; it is a feature of how cloud AI works. For any business that handles sensitive information – which in Jersey means every trust company, law firm, bank, and government department – this creates an impossible choice between capability and confidentiality.
This is not a theoretical risk. In April 2023, employees at Samsung, one of the world's most sophisticated technology companies, accidentally leaked confidential source code and internal meeting notes by pasting them into ChatGPT. [1] They were not malicious; they were simply using the tool as designed. This article explores the dilemma from a Jersey perspective, examining the legal and professional obligations that make this a particularly acute problem for our jurisdiction.
The Horns of the Dilemma
The dilemma has two parts, and both are equally sharp.
On the one hand, the capability is undeniable. Cloud AI models like GPT-5.4 and Claude Opus 4.6 are astonishingly powerful. They can summarise a 200-page trust deed in seconds, draft a complex client letter, or analyse a portfolio for compliance issues. The potential productivity gains are enormous. To ignore these tools is to accept a permanent competitive disadvantage.
On the other hand, the risk is structural. When you use a cloud AI tool, your data is sent to a third party – typically a US company like OpenAI, Google, or Microsoft. At that moment, you lose control. Your data may be stored on servers in the US, used to train future AI models, or accessed by US law enforcement under the CLOUD Act.
The Legal Minefield: GDPR and the CLOUD Act
Jersey's Data Protection (Jersey) Law 2018, which closely mirrors the EU GDPR, is clear: you cannot transfer personal data to a jurisdiction without adequate safeguards. The US has no blanket adequacy finding; only companies certified under the EU–US Data Privacy Framework are covered, and that framework remains subject to ongoing legal challenge. While mechanisms like Standard Contractual Clauses exist, they are complex and do not resolve the deeper problem of the US CLOUD Act, which allows US authorities to compel US companies to hand over data stored anywhere in the world. This creates what legal experts have called an "irreconcilable conflict" between US and European law. [3]
For a Jersey trust company, this is a nightmare scenario. Imagine explaining to a client that their beneficial ownership information, stored with a US cloud provider, has been accessed by a foreign government. The reputational damage would be catastrophic.
The Professional Minefield: Privilege and Confidentiality
For lawyers, the situation is even worse. In a landmark ruling in February 2026, Judge Jed S. Rakoff of the Southern District of New York held that documents generated by a defendant using a consumer AI platform were not protected by attorney-client privilege or the work product doctrine. [4] The court's reasoning was straightforward: because the platform's terms of service disclosed that user inputs could be used for model training and shared with third parties, there was no "reasonable expectation of confidentiality." The implications are chilling. Any professional — not just lawyers — who uses a public AI tool to analyse confidential client matters may have destroyed the very confidentiality that protects that work.
This is not a uniquely legal problem. Every regulated professional in Jersey, from accountants to fund administrators, has a duty of confidentiality to their clients. Sending client data to a third-party AI provider is, at best, a grey area, and at worst, a clear breach of that duty.
Can the Risk Be Mitigated?
There are several ways to try to square this circle, but none are perfect.
| Mitigation Strategy | How it Works | The Problem |
|---|---|---|
| Enterprise Accounts | Use business-focused versions like Microsoft Copilot for M365 or OpenAI Enterprise, which have stronger data protection terms. | Data is not used for training, but it still leaves your premises and is subject to the CLOUD Act. |
| Data Anonymisation | Remove all personal identifiers from data before sending it to the cloud. | True anonymisation is extremely difficult. AI can often re-identify individuals from seemingly anonymous data. |
| On-Premise AI | Run a smaller AI model on your own hardware, so the data never leaves your control. | Fully solves the confidentiality problem, but on-premise models are less capable than their cloud counterparts and require hardware investment and in-house expertise to run. |
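To see why the anonymisation row above is harder than it looks, consider what pattern-based redaction can and cannot catch. The sketch below is illustrative only: the patterns are invented for this example and cover just emails, UK-style phone numbers, and Jersey postcodes, nowhere near a complete inventory of personal identifiers.

```python
import re

# Illustrative redaction patterns only -- real anonymisation must also
# handle names, addresses, account numbers, and indirect identifiers.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?44|0)\d[\d ]{8,12}\d\b"),
    "JY_POSTCODE": re.compile(r"\bJE\d\s?\d[A-Z]{2}\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Even this works only for identifiers with a predictable shape. A client's name, a distinctive asset, or a combination of dates and amounts can still re-identify an individual, which is exactly the re-identification risk the table flags.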
On-premise AI deserves a closer look than the table above might suggest. The gap between cloud and on-premise capability has narrowed significantly in the past twelve months. Models such as Meta's Llama 3, Microsoft's Phi-4, and Mistral's latest releases can now run on a modest server and deliver genuinely useful results for document summarisation, drafting, and analysis — the everyday tasks that matter most to a professional services firm. For a Jersey trust company or law firm, the calculus is increasingly straightforward: the capability trade-off is acceptable, and the confidentiality benefit is absolute. The data never leaves the building. It is not subject to the CLOUD Act. It cannot be used to train a foreign AI model. For high-sensitivity work, on-premise is no longer a compromise; it is the right answer.
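The "data never leaves the building" property is easy to verify in practice, because the only network hop is to localhost. The sketch below assumes a local Ollama server on its default port with a model named `llama3` pulled; the endpoint, port, and model name are assumptions, and would differ for llama.cpp, vLLM, or another local runtime.

```python
import json
from urllib import request

# Assumed local endpoint: Ollama's default generate API on this machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(document_text: str) -> dict:
    """Construct the request payload for a local summarisation call.
    The key property: the destination is localhost, so the document
    is never transmitted to a third party."""
    return {
        "model": "llama3",
        "prompt": f"Summarise the following document:\n\n{document_text}",
        "stream": False,
    }

def summarise_locally(document_text: str) -> str:
    """Send the request to the local server (requires Ollama running)."""
    payload = json.dumps(build_request(document_text)).encode("utf-8")
    req = request.Request(OLLAMA_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]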
Conclusion: A Dilemma with No Easy Answer
There is no magic bullet here. The confidentiality dilemma is real, and it requires a conscious, risk-based decision for every single use case. The answer is not to ban AI, nor is it to blindly adopt it. The answer is to understand the trade-offs.
For low-risk tasks involving public information, the power of cloud AI is a clear winner. For high-risk tasks involving confidential client or citizen data, the security of on-premise AI is the only responsible choice. The challenge for every Jersey business is to draw that line, to create a clear policy that every employee understands, and to accept that for now, the most powerful tools are also the most dangerous.
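Drawing that line can be made concrete. The sketch below is a hypothetical routing rule, not a real policy: the marker list is invented for illustration, and any actual policy would be defined by a firm's compliance function rather than hard-coded.

```python
# Hypothetical markers of confidential work -- illustrative only.
CONFIDENTIAL_MARKERS = (
    "client", "beneficiary", "beneficial owner", "trust deed", "kyc",
)

def route_task(description: str, contains_personal_data: bool) -> str:
    """Apply a simple risk-based policy: anything touching personal
    or client data stays on-premise; everything else may use cloud AI."""
    text = description.lower()
    if contains_personal_data or any(m in text for m in CONFIDENTIAL_MARKERS):
        return "on-premise"
    return "cloud"
```

The point is not the code but the discipline: the decision is made once, written down, and applied consistently, rather than left to each employee at the moment of pasting.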
This is not a comfortable position, but it is an honest one. The first step in solving a dilemma is to admit that it exists.
References
[1] Samsung Employees Leaked Confidential Data to ChatGPT - Gizmodo
[2] CLOUD Act vs. GDPR: The Conflict About Data Access - Exoscale
[3] Prevent CLOUD Act Risks: Secure European Data - Kiteworks
[4] AI Privilege Waivers: SDNY Rules Against Privilege Protection for Consumer AI Outputs - Gibson Dunn