DaveKnowsAI
Compliance · March 2026

How to Write an AI Policy for Your Company: UK Template & Guide (2026)

Your team is probably already using AI, whether you have a policy or not. Here is how to create one that protects your business without killing innovation.


Dave

AI Consultant, DaveKnowsAI

Why You Need an AI Policy

If you have more than a handful of employees, some of them are already using ChatGPT, Claude, or other AI tools for work. They might be drafting emails, generating reports, or processing customer data. Without a policy, you have no visibility into what data is being shared with which tools, and no way to manage the associated risks.

An AI policy is not about restricting your team. It is about giving them clear guidelines so they can use AI confidently and safely.

What to Include

A good AI policy for a small business does not need to be 50 pages long. It needs to be clear, practical, and easy to follow. Here are the essential sections:

1. Purpose and Scope

State why the policy exists and who it applies to.

"This policy provides guidelines for the use of artificial intelligence tools by [Company Name] employees. It applies to all staff who use AI tools in the course of their work, whether provided by the company or used independently."

2. Approved AI Tools

List the tools your business has approved for use, and specify any that are explicitly not approved.

Approved tools:

  • ChatGPT Team (for general business tasks)
  • Claude Team (for document analysis and writing)
  • Zapier (for workflow automation)
  • [Your other approved tools]

Not approved for business use:

  • Free-tier AI chatbots (no data processing agreement in place)
  • AI tools that have not been reviewed by management
  • Any tool that requires uploading customer personal data without a data processing agreement (DPA)

3. Data Protection Rules

This is the most important section. Be specific about what data can and cannot be used with AI tools.

Never put the following into AI tools:

  • Customer personal data (names, addresses, phone numbers, email addresses) without explicit approval
  • Financial information (bank details, payment card numbers, account information)
  • Health or medical information
  • Employee personal data (performance reviews, salary details, disciplinary records)
  • Confidential business information (trade secrets, unreleased product details, financial forecasts)
  • Passwords, API keys, or access credentials

Acceptable to use with approved tools:

  • General business queries with no personal data
  • Anonymised or aggregated data
  • Publicly available information
  • Draft content that does not contain confidential details
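One practical way to support the "anonymised data" rule is a simple pre-screening step that strips obvious personal-data patterns from text before anyone pastes it into an AI tool. The sketch below is illustrative only: the `redact_for_ai` helper and its patterns are examples invented for this article, not a compliance-grade anonymiser, and pattern matching cannot catch everything (names, for instance, still need a human eye).

```python
import re

# Illustrative patterns for common UK personal-data formats.
# These are examples, not an exhaustive or compliance-grade set.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"\b(?:\+44\s?\d{4}|\(?0\d{4}\)?)\s?\d{3}\s?\d{3}\b"),
    "uk_postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s?\d[A-Z]{2}\b"),
}

def redact_for_ai(text: str) -> str:
    """Replace recognised personal-data patterns with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact_for_ai("Contact Jo on jo@example.com or 07700 900123, SW1A 1AA."))
```

A helper like this belongs alongside the policy, not in place of it: it reduces accidental leakage, but the policy's human-review rule still applies because no pattern list can recognise every piece of personal data.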

4. Quality and Accuracy

Make clear that AI output must always be reviewed by a human before being used externally.

"All content, communications, and documents generated with AI assistance must be reviewed for accuracy, appropriateness, and brand consistency before being shared externally. Employees are responsible for the accuracy of any AI-assisted work they produce."

5. Transparency

State your position on disclosing AI use to clients and customers.

"We do not need to disclose that AI tools assisted in creating internal documents. For customer-facing communications, [decide your position: always disclose / disclose when asked / no disclosure required]."

6. Intellectual Property

Address ownership of AI-generated content.

"Content created using AI tools in the course of employment belongs to [Company Name]. Employees should not assume that AI-generated content is free from intellectual property concerns. All AI-assisted work should be reviewed and substantially edited before external use."

7. Reporting and Concerns

Tell employees what to do if something goes wrong.

"If you accidentally share sensitive data with an AI tool, or if an AI tool produces output that appears harmful or inappropriate, report it immediately to [name/role]. Do not attempt to cover up data incidents."

8. Review and Updates

AI changes fast. Commit to regular review.

"This policy will be reviewed quarterly. Employees will be notified of any changes. Suggestions for policy improvements are welcome."

Implementation Tips

Keep it short

A 2 to 3 page document is far more likely to be read and followed than a 30-page manual. Save the detail for an appendix if needed.

Train your team

A policy document sitting in a shared drive achieves nothing. Run a 30-minute session to walk through the key points, answer questions, and demonstrate approved tools.

Lead by example

If leadership uses AI tools responsibly and openly, the team will follow. If leadership ignores the policy, so will everyone else.

Be realistic

A policy that bans all AI use will simply be ignored. People will use tools secretly, which is worse than having no policy at all. Focus on safe use, not prohibition.

Get feedback

After a month, ask your team what is working and what is not. Update the policy based on real-world experience.

Template: Quick-Start AI Policy

Here is a minimal, practical template you can adapt for your business:

[Company Name] AI Usage Policy

Approved tools: [List your approved tools]

Rules:

  1. Never put customer personal data, financial data, or confidential business information into AI tools without management approval.
  2. Always review AI output for accuracy before using it externally.
  3. Use approved tools only. If you want to try a new AI tool for work, ask [name/role] first.
  4. Report any data incidents immediately to [name/role].
  5. AI tools assist your work. You are responsible for the quality and accuracy of everything you produce.

Questions? Contact [name/role].

That is genuinely enough for most small businesses to start with. You can expand it as your AI usage matures.

Frequently Asked Questions

Do I legally need an AI policy?

There is no UK law that specifically requires an AI usage policy. However, your obligations under the UK GDPR and the Data Protection Act 2018 require you to control how personal data is processed, which effectively means you need guidelines for AI tool usage if your team handles personal data.

What if employees are already using AI without a policy?

That is very common. Do not panic. Introduce the policy, explain the reasoning, and move forward. Unless there has been a specific data breach, retrospective enforcement is counterproductive.

Should I ban AI tool usage entirely?

No. Banning AI will not stop people using it; it will just make them use it secretly, without oversight. A better approach is to approve specific tools and set clear guidelines.

How often should I update the policy?

Quarterly is a good cadence for the first year. After that, every six months or when significant new tools or regulations emerge.

Can a consultant help with this?

Yes. If you want a policy tailored to your specific industry, data types, and team structure, get in touch and I can help you create one that actually works.

Want to Put This Into Practice?

Book a free 30-minute discovery call. We will talk through your specific situation and identify the highest-impact AI opportunities for your business. No obligation, no jargon.

Book a Free Call