AI Does Not Change Your GDPR Obligations
This is the most important thing to understand: using AI tools does not create a separate set of rules. The same GDPR principles that apply to any data processing activity apply when you use AI. You still need a lawful basis for processing personal data, you still need to be transparent about how you use it, and you still need to keep it secure.
What AI does add is complexity. When you paste customer data into ChatGPT, where does it go? When an AI chatbot collects customer information, how is it stored? When you use AI to analyse employee data, what are the implications?
These are answerable questions, and this guide will help you navigate them.
The Key GDPR Principles Applied to AI
1. Lawful Basis
You need a legal reason to process personal data, whether you use AI or not. The most common bases for AI processing are:
- Legitimate interests: You have a genuine business reason for using AI to process the data, and that interest is not overridden by the individual's rights and freedoms. This covers most business AI use cases.
- Consent: The individual has given clear, informed consent. Needed if you are processing data in ways individuals would not reasonably expect.
- Contract: The processing is necessary to fulfil a contract with the individual.
2. Transparency
You must tell people if AI is processing their data. This means updating your privacy policy to cover AI tools. You do not need to list every tool, but you should explain:
- That you use AI tools to process certain types of data
- What data is processed and why
- Where the data goes (which third-party AI providers)
- How long data is retained
3. Data Minimisation
Only put the data into AI tools that you actually need. Do not paste an entire customer database into ChatGPT when you only need to analyse purchasing patterns. Strip out personal identifiers when they are not necessary.
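Stripping identifiers can often be partly automated before anything reaches an AI tool. A minimal sketch in Python, using illustrative regex patterns for emails, UK phone numbers, and postcodes (pattern-based redaction is a starting point, not full anonymisation, so review the output before sending it anywhere):

```python
import re

# Illustrative patterns for common personal identifiers. These are
# assumptions, not a complete anonymisation solution: always review
# redacted text before passing it to a third-party tool.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "uk_phone": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s?\d[A-Z]{2}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with a [REDACTED:<type>] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

message = "Contact jane.doe@example.com on 07700 900123, SW1A 1AA."
print(redact(message))
# Contact [REDACTED:email] on [REDACTED:uk_phone], [REDACTED:postcode].
```

Names and free-text references to individuals will not be caught by patterns like these, which is why a human check remains part of the process.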
4. Purpose Limitation
If you collected data for one purpose, you cannot use AI to process it for a completely different purpose without additional consent or a compatible legitimate interest.
5. Accuracy
AI sometimes generates inaccurate information. If AI output is used to make decisions about individuals (hiring, credit, customer profiling), you must ensure accuracy and give individuals the right to challenge those decisions.
6. Storage Limitation
AI tool providers often retain data. Check your provider's data retention policies and ensure they align with your own.
Practical Steps for UK Businesses
Step 1: Audit Your AI Tool Usage
Create a simple register of every AI tool your business uses:
- Tool name and provider
- What personal data (if any) is processed
- Where the data is stored (UK, EU, US, etc.)
- The provider's data processing agreement (DPA)
- Whether data is used to train the provider's models
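A spreadsheet is perfectly adequate for this register. If you prefer to generate it programmatically, a minimal sketch (the tool entries and "ExampleBot Ltd" are purely illustrative; check your own providers' terms before recording DPA or training details):

```python
import csv
from io import StringIO

# Columns mirroring the audit checklist above.
FIELDS = ["tool", "provider", "personal_data", "storage_region",
          "dpa_in_place", "used_for_training"]

# Example rows only; verify each field against the provider's actual terms.
register = [
    {"tool": "ChatGPT Team", "provider": "OpenAI",
     "personal_data": "customer enquiry emails", "storage_region": "US",
     "dpa_in_place": "yes", "used_for_training": "no"},
    {"tool": "Website chatbot", "provider": "ExampleBot Ltd",
     "personal_data": "visitor names, emails", "storage_region": "EU",
     "dpa_in_place": "yes", "used_for_training": "no"},
]

# Write the register to CSV so it can be shared and reviewed.
buffer = StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(register)
print(buffer.getvalue())
```

Keeping the register in a reviewable format makes it easy to update when a tool changes plans or a new one is adopted.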
Step 2: Check Data Processing Agreements
Every AI tool that processes personal data on your behalf needs a Data Processing Agreement. Most enterprise-tier AI tools provide these. Check for:
- OpenAI: Team and Enterprise plans include a DPA. Data is not used for training on these plans.
- Anthropic (Claude): Team plans include a DPA. Data is not used for training.
- Microsoft (Copilot): Business plans include Azure data protection commitments.
- Google (Gemini): Workspace plans include Google Cloud DPA.
Free-tier plans typically do NOT include adequate data protection commitments. Do not use free AI tools for personal data.
Step 3: Update Your Privacy Policy
Add a section covering AI tools. A simple, honest statement works best:
"We use AI-powered tools to help us process certain business data more efficiently. This may include [summarising enquiries / categorising support requests / generating draft responses]. Personal data processed by these tools is handled in accordance with our data protection obligations and the tool providers' data processing agreements. We do not share personal data with AI tools for purposes other than those described in this policy."
Step 4: Implement Internal Guidelines
Create a simple AI usage policy for your team. Cover:
- What data can and cannot be put into AI tools
- Which tools are approved for use
- How to anonymise or redact personal data before AI processing
- Who to contact with questions or concerns
Step 5: Conduct a Data Protection Impact Assessment (DPIA)
If you are using AI to make automated decisions about individuals, or processing sensitive data (health, financial, etc.), you may need a DPIA. This is a structured assessment of the risks and mitigations. The ICO provides free templates and guidance.
Common Scenarios and What to Do
Putting customer emails into ChatGPT
Risk: Customer emails often contain personal data (names, addresses, account details).
Solution: Use a business plan (Team or Enterprise) with a DPA. Alternatively, redact personal details before pasting.
AI chatbot on your website
Risk: The chatbot collects personal data from visitors (names, email addresses, questions that may contain personal information).
Solution: Ensure the chatbot provider has a DPA, update your privacy policy, and add a notice at the start of the chat explaining data handling.
Using AI to screen job applications
Risk: CVs contain personal data. Automated screening may constitute automated decision-making under GDPR (Article 22), requiring human oversight.
Solution: Always have a human review AI-shortlisted candidates before decisions are made. Inform applicants that AI is used in the screening process.
Analysing customer data for insights
Risk: Personal data used for a new purpose (analytics) may not be covered by the original consent.
Solution: Anonymise or aggregate data before analysis where possible. If personal data is needed, ensure you have a legitimate interest assessment on file.
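Aggregation often removes the personal data entirely. A minimal sketch with hypothetical purchase records, showing category-level counts computed after dropping identifiers (whether the result is truly anonymous still depends on whether individuals could be re-identified from it):

```python
from collections import Counter

# Hypothetical purchase records. The email addresses are identifiers we
# do NOT need for category-level insight, so we never include them in
# the analysis output.
purchases = [
    {"customer_email": "a@example.com", "category": "stationery", "amount": 12.5},
    {"customer_email": "b@example.com", "category": "stationery", "amount": 8.0},
    {"customer_email": "a@example.com", "category": "furniture", "amount": 150.0},
]

# Aggregate to counts per category. The result contains no identifiers,
# though small counts can still risk re-identification in niche datasets,
# so sanity-check before treating aggregates as anonymous.
category_counts = Counter(p["category"] for p in purchases)
print(category_counts)  # e.g. Counter({'stationery': 2, 'furniture': 1})
```

If the analysis genuinely needs individual-level data, this is the point at which to document a legitimate interest assessment rather than skipping straight to the tool.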
The ICO's Position on AI
The Information Commissioner's Office (ICO) has been increasingly active on AI guidance. Key points from their 2025-2026 publications:
- AI is not exempt from data protection law
- Transparency about AI use is essential
- Automated decision-making requires human oversight
- Data Protection Impact Assessments are recommended for higher-risk AI applications
- The ICO will take enforcement action against organisations that misuse personal data in AI systems
The ICO website (ico.org.uk) has free guidance documents specifically covering AI and data protection. They are worth reading if you are planning significant AI adoption.
Frequently Asked Questions
Do I need consent to use AI tools?
Not necessarily. Legitimate interests is the most common lawful basis for routine business AI use. Consent is needed if you are processing data in unexpected ways or handling special category data.
Can I use the free version of ChatGPT for business?
I would not recommend it for anything involving personal data. Free-tier data may be used for model training, and there is no DPA. Use a Team or Enterprise plan instead.
What if I use AI but do not process any personal data?
If the data you put into AI tools contains no personal data (no names, emails, addresses, or anything that could identify an individual), GDPR does not apply to that specific processing. Many business uses of AI involve no personal data at all.
What fines can I face for GDPR violations involving AI?
The same fines as any GDPR violation: up to £17.5 million or 4% of global annual turnover, whichever is higher. In practice, the ICO is more likely to issue warnings and improvement notices first, but the fines are real.
Should I get legal advice?
For routine AI tool usage with proper DPAs and a sensible internal policy, you probably do not need a solicitor. For large-scale automated decision-making, sensitive data processing, or complex data sharing arrangements, legal advice is worthwhile.