It feels like companies are racing to implement AI features in their products and tools. No one wants to miss out, and there's little that can be done to stop staff using generative AI anyway. But we're finding this rush is also creating a whole new debate around compliance and ethics.
AI has been around for a while, and until recently even the best AI technologies and tools could barely attract interest. But recent advances in neural networks and access to powerful GPU-based hardware have enabled the creation of models such as GPT-4. Major providers such as OpenAI, IBM and Google now offer models that can be adapted to a wide range of downstream tasks, and there are plenty of resellers customising models for client needs.
Interestingly, Samsung recently banned ChatGPT internally after staff leaked sensitive data through it, but then announced it was developing its own AI tools.
So what do you need to be aware of?
Alex Dittel, Partner in Technology at Wedlake Bell LLP, helps answer this for us below.
Due diligence on the AI provider.
Any AI system should undergo the usual scrutiny applied to a tech provider. Look at the provider's policies, capability statements, compliance documentation, terms and conditions and SLAs. Standard terms do not currently exist, so consider carefully what you'll be using AI for across the different parts of your organisation; the same system can pose different risks to different organisations.
Model purity.
Bad input data leads to a bad AI model. Enquire about how the model was, and is, trained so you understand its potential bias, inaccuracy and other risks. If the AI system produces fictional or offensive outputs, or fails to produce an output due to bias, it could lead to liability issues. (As seen recently when lawyers were fined for citing cases they believed were legitimate but which the AI had invented, known as 'hallucinations'.) Also be aware that some resellers' models are 'closed', so they cannot be further trained on your own data and/or that of others.
Use of AI and legal compliance.
Understand how the use of AI for your intended purposes complies with the law. For example, if AI is deployed to make decisions about hiring individuals, you will need to take extra steps to understand its logic and build in appropriate human supervision.
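To make 'human supervision' concrete, here is a minimal sketch of a human-in-the-loop gate in Python. All the names, fields and the decision flow are hypothetical and illustrative only: the point is that the model merely recommends, every recommendation is logged for audit, and a person makes the final call.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    candidate_id: str
    score: float      # model's suitability score, 0.0 to 1.0
    rationale: str    # model-supplied explanation, kept for audit

def decide(rec: Recommendation, reviewer_approves) -> str:
    """Never act on the model alone: log every recommendation and
    route it to a human reviewer before it takes effect."""
    print(f"[audit] candidate={rec.candidate_id} "
          f"score={rec.score:.2f} rationale={rec.rationale!r}")
    if reviewer_approves(rec):  # a person makes the final call
        return "progressed to interview"
    return "rejected by human reviewer despite model recommendation"

# Usage: the reviewer is a person; simulated here with a callback
# (in practice this would be a review UI or ticket queue).
rec = Recommendation("c-042", 0.81, "relevant experience in similar role")
print(decide(rec, reviewer_approves=lambda r: True))
```

Logging the rationale alongside the score is what later makes each decision auditable.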
Auditability / explainability.
By its very nature, the hidden layers of an AI's neural network are a black box. Nevertheless, it's important to ask the AI provider at the outset how its system works, and to ensure users are informed.
Personal data.
If users are going to put personal data (about customers, staff, etc.) into the AI model, ensure its purpose is limited and there's an appropriate data sharing agreement in place. You might have to use privacy-enhancing measures to minimise data protection risks. And consider whether individuals should have the right to opt out of having their data included.
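One common privacy-enhancing measure is to pseudonymise obvious identifiers before any text leaves your systems. The sketch below is illustrative only: the two regex patterns and the placeholder format are assumptions for the example, and a real deployment would use a dedicated PII-detection tool covering names, addresses, ID numbers and much more.

```python
import re

# Illustrative patterns only: real PII detection needs a dedicated
# tool (names, addresses, IDs won't be caught by two regexes).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w.-]+\.\w+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def pseudonymise(text: str) -> tuple[str, dict[str, str]]:
    """Replace personal identifiers with placeholders before text is
    sent to an external AI provider. The mapping stays internal so
    responses can be re-identified on your own infrastructure."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, value in enumerate(set(pattern.findall(text))):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = value
            text = text.replace(value, placeholder)
    return text, mapping

safe_text, mapping = pseudonymise(
    "Contact Jo Bloggs on jo.bloggs@example.com or +44 20 7946 0000."
)
print(safe_text)  # identifiers replaced with placeholders
print(mapping)    # retained internally, never sent to the provider
```

Note that the name 'Jo Bloggs' survives this naive pass, which is exactly why a proper PII tool, not a couple of regexes, is needed in practice.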
Confidentiality and intellectual property.
When using a public AI model, data input by users might become part of the model's training data. If that data includes confidential information or trade secrets, for example, it could lead to liability issues, e.g. a data breach, a breach of confidentiality or an infringement of intellectual property rights. Users need to be informed of these risks at the outset.
Security.
Most technology is vulnerable to attack, so you might want to deploy the AI system on secure infrastructure and avoid sharing input data with the provider. A security assessment will be very important.
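For illustration, 'not sharing input data with the provider' can mean serving the model inside your own network boundary. The sketch below assumes a hypothetical self-hosted endpoint and response shape; it is not any real provider's API.

```python
import json
from urllib import request

# Hypothetical internal endpoint: a model self-hosted inside your own
# network boundary, so prompts are never posted to a third party.
INTERNAL_MODEL_URL = "https://llm.internal.example.com/v1/generate"

def ask_internal_model(prompt: str) -> str:
    """Send the prompt to a self-hosted model over the internal
    network; input data never leaves your infrastructure."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    req = request.Request(
        INTERNAL_MODEL_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    # Assumes the endpoint returns JSON like {"output": "..."} -- a
    # placeholder shape, not any real provider's API.
    with request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["output"]
```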
Workforce and societal risk.
Rolling out AI too quickly could hollow out certain professions. The loss of qualified people in a sector could mean there is no one left to supervise the use of AI, leading to overreliance on it and an inability to audit and improve the quality of AI-generated output.
AI governance within the organisation.
Stakeholders from across the organisation should be involved in addressing the risks of using AI throughout the AI lifecycle, including its operation and improvement by users. It’s not something that can be assessed once and forgotten. This is particularly important if you’re in a regulated industry. Keep up to date with your regulator’s views on AI.
In short, if you’re adopting AI within your business, you’ll need an AI governance programme which assesses usage and risks, and you’ll then need to ensure staff are trained on it regularly.
With growing awareness of the legal and ethical risks, increasing regulatory interest in the new technology and, generally, uncertainty about what the near future holds, ignoring compliance and legal risks for AI is not a sensible option. If you want to discuss this in more detail, get in touch at info@legaledge.co.uk.