We spoke to Dr. Anna McLaughlin of Sci-Translate, a neuroscientist who specialises in helping companies navigate AI. She has deep experience in health-tech and life sciences, two sectors where data integrity, compliance and ethical use of technology are critical. We asked her about the use of, and the need for, AI policies.
Here’s what she told us:
Why do you need an AI policy?
The evolution of AI has outpaced many businesses’ ability to govern it. In your organisation, do you know:
- who is using AI (which teams, which roles)?
- what tasks they are using it for (e.g. minute-taking, writing letters, drafting contracts, analysing data, recruitment)?
- how they are using it (as an occasional cross-check, for research only, or regularly to generate work)?
- which AI tools they are using, and whether those tools are closed or open source (ChatGPT, GitHub Copilot)?
- the features and terms of use of all the AI tools being employed in your business?
Innovation and experimentation may be particularly prized in start-ups and fast-growth businesses, so AI’s potential for quick results and efficiencies is understandably attractive.
A lack of full understanding and oversight matters because:
Bias and fairness:
AI models can reflect or amplify hidden biases. Left unchallenged, this can creep into recruitment decisions, marketing materials, or data analysis.
Compliance and confidentiality:
Putting client or personal data into public AI systems can lead to security breaches and to breaches of legal obligations, including under the GDPR (e.g. does your privacy notice reflect how you are using AI to process personal data?).
Reputation and trust:
Misuse of AI can lead to inaccurate outputs, IP disputes, or brand-damaging mistakes.
Governance and control:
Without clear rules, staff won’t know what’s acceptable – and leaders won’t know what risks they’re carrying.
A clear AI policy sets the tone for responsible innovation. It empowers staff to use AI confidently and productively, while ensuring the business protects itself against misuse. Think of it as part risk management, part culture-setting: giving employees a green light to explore AI’s potential, but with guardrails that keep your company safe.
It’s easy to overlook the risks until it’s too late: privacy breaches, compliance failures, data leakage, misuse by staff and reputational damage are all inherent in business use of AI. See our recent blog: Have you checked how your team is using AI and why you should care.
But having an AI policy can help you mitigate these risks.
Where do I start with my policy?
LegalEdge lawyer Jo Osborne shares some practical guidance on creating a realistic AI policy for businesses starting their AI risk management journey:
1. Define what “AI use” means for your business:
Everyone throws the term around, so be specific. Are you talking about tools that generate text, analyse data, automate workflows, or free online resources like ChatGPT – or all of the above? Spell it out in your policy so there’s no confusion.
2. Map current use:
Start by finding out what’s actually happening. Create a simple table showing information such as: Tool, Business owner, Purpose, Users, Software integration, Input/output types and Legal terms location (see the illustrative example below). This gives you a clear picture of what’s in use and, as a bonus, is your first step towards understanding your AI operational risk.
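By way of illustration (the tools, owners and details below are hypothetical, not recommendations), a first-pass inventory might look something like this:

| Tool | Business owner | Purpose | Users | Software integration | Input/output types | Legal terms location |
| --- | --- | --- | --- | --- | --- | --- |
| ChatGPT (free tier) | Head of Marketing | Drafting first-pass blog copy | Marketing team | None (browser only) | Text in / text out | Provider’s published terms of use |
| GitHub Copilot | CTO | Code suggestions | Engineering | IDE plug-in | Code in / code out | GitHub customer terms |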
3. Set some ground rules:
This is where your policy really earns its keep. List approved tools, prohibited ones, usage guidelines and approval processes. Give approvers a risk-benefit checklist. Keep it concise and practical; people only follow policies they can understand.
4. Cover the key legal and practical points:
Build these into your policy and the approval checklist, so accountability belongs to everyone:
✔ Data and privacy: No confidential or personal data in public AI tools.
✔ Intellectual property: Monitor input risks (your IP assets) and third-party content in outputs.
✔ Transparency: Set disclosure expectations for clients/colleagues.
✔ Human oversight: Require human review for accuracy and fairness.
✔ Accountability: Assign owners for each tool’s updates and usage.
5. Share and train:
Don’t just park your shiny new AI policy on the intranet. Bring it to life: walk teams through it, explain why it matters and make it easy for people to ask questions. The aim is confident risk management, not compliance theatre.
Act sooner rather than later
Putting in place an AI policy will help your business to:
- Decide where you do and do not want to use AI.
- Define acceptable use of AI tools.
- Set out your expectations for how staff should use AI.
- Manage the AI risks you have identified.
Adopting an AI policy early signals responsible innovation. Don’t wait for a security breach or a PR incident – you can always start simply and expand your policy as your business scales.
Can we help you?
We can help you draft your AI policy and provide training to cover these risks – get in touch for a chat.
Sci-Translate helps organisations translate complex research and technical concepts into clear, actionable insights. They specialise in explaining how AI really works in practice and what that means for teams, leaders and clients, and they help companies navigate AI with clarity so they can innovate confidently without falling into the common traps. Get in touch if you’d like an intro to Anna.
