By LegalEdge News

AI adoption: compliance and ethics


It feels like every company is rushing to build AI features into its products and tools. No one wants to miss out, and there is little to be done to stop staff using generative AI anyway. But we're finding the rush is also creating a whole new debate around compliance and ethics.

AI has been around for a while, yet until recently even the best AI technologies and tools struggled to attract much interest. Recent advances in neural networks, combined with access to powerful GPU hardware, have enabled the creation of models such as GPT-4. Major providers such as OpenAI, IBM, and Google now offer models that can be adapted to almost any downstream task, and plenty of resellers are customising those models for client needs.

Interestingly, Samsung recently banned ChatGPT internally following an internal data leak, only to then announce it was developing its own AI tools.

So what do you need to be aware of?

Alex Dittel, Partner in Technology at Wedlake Bell LLP, helps answer this for us, highlighting the key areas below:

• Due diligence on the AI provider
• Model purity
• Use of AI and legal compliance
• Auditability / explainability
• Personal data
• Confidentiality and intellectual property
• Security
• Workforce and societal risk
• AI governance within the organisation

In short, if you’re adopting AI within your business, you’ll need an AI governance programme that assesses usage and risks, and you’ll need to ensure staff are trained on it regularly.

With growing awareness of the legal and ethical risks, increasing regulatory interest in the technology and, more generally, uncertainty about what the near future holds, ignoring the compliance and legal risks of AI is not a sensible option. If you want to discuss this in more detail, get in touch at info@legaledge.co.uk.
