Five ways to manage AI risk

The use of generative AI is on the rise in businesses across the world.

However, in a PwC Trust Survey which asked 500 executives how they prioritised major risks that could erode trust in their company, the threats associated with AI ranked well below other cyber-related ones such as a data breach or a ransomware attack. The results suggest that many business leaders may not yet have grasped the challenges that generative AI can pose. Such risks include, at the very least, content reproduced illegally from copyrighted material, the inadvertent sharing of intellectual property, inaccurate output, and deepfakes intended to spread misinformation.

How, then, do businesses manage that risk? Below, we outline five practical steps:

1. AI AUDIT: As a starting point, businesses should perform an audit of where and to what extent they are actually using AI. The uses are of course wide-ranging and will vary in scale depending on the size and nature of the business. Obvious examples include targeted advertising, content personalisation, risk modelling, data forecasting and claims handling within the insurance industry, document generation, and of course the use of AI search engines and chatbots. AI is also used in schools: the Information Commissioner’s Office recently reprimanded a school in Essex over its use of fingerprint technology to manage cashless catering (the school’s use of the technology infringed the UK GDPR).

2. DATA AWARENESS: Digital transformation brings a greater requirement for businesses to stay on top of data management and can raise a series of complex questions, including who owns the data, who hosts it, who collects it, and who stands to benefit from it. Other questions relate to safe storage and transfer, and how and to what extent data can be shared within the business and outwith it. Businesses and organisations should, at the very least:

  • Complete a data protection impact assessment before processing any personal data. In the case of the school in Essex, for example, an impact assessment should have been completed given i) the processing of biometric data and ii) the fact that the data was at least partly that of vulnerable data subjects (in this case, children).
  • Obtain valid and explicit consent from anyone whose data is being gathered. It is worth bearing in mind that the UK GDPR requires affirmative action, i.e., consent must be given rather than assumed. In the case of the school, students should have been offered the chance to “opt in”, failing which they should have been treated as having “opted out”.
  • Appoint a Data Protection Officer who is specifically trained in managing data risk (and therefore likely to also be involved in managing AI risk).
  • Ensure cyber, data and privacy policies are in place.

3. PAPER TRAIL: Perhaps an obvious housekeeping matter, but businesses should ensure that they obtain and record consents from data subjects, and keep copies of licences and contracts.

The EU has drafted standard model contractual AI clauses, which have been made available to public organisations wishing to procure AI systems developed by external suppliers. There are no equivalent model clauses in the UK (at least not yet), but businesses using AI software should be aware of AI clauses and take advice on what they mean in practice.

4. IP: Businesses, and those using AI within them, should be aware of intellectual property and the scope for infringing the IP rights of others. Media reporting on AI and IP often focuses on the materials used to train generative AI systems (i.e., the input): in the US, for example, authors, artists and major media organisations have filed lawsuits against AI companies accused of using their copyrighted works to train systems that generate new material. Similarly, in the UK, Getty Images (“Getty”) is in the midst of litigation against Stability AI concerning the use of images owned by Getty to train Stability AI’s model (enabling it to generate content).

Businesses should also be mindful that AI-generated output may itself be deemed to infringe IP rights, for example if the output is identical to the original work or if the original can be recognised in it. In such scenarios, complex questions of liability for infringement also require to be addressed, including the extent to which the AI operator is liable for any infringing output. These questions may to some extent be answered by the terms and conditions of the company providing the AI, or by carefully drafted AI clauses.

5. REGULATORY COMPLIANCE: Businesses should be aware of the regulatory landscape. For those with an EU presence, that requires awareness of the provisions of the EU AI Act, implemented as part of the EU’s digital strategy and designed to regulate AI to ensure better conditions for its development and use. There is no parallel legislation in the UK (at least not yet), though the UK government has adopted a cross-sector, outcome-based framework for regulating AI which is underpinned by five core principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

Managing AI risk is likely to be a moveable feast for most UK businesses as the technologies available continue to evolve. Following the steps above will enable most businesses to put in place flexible frameworks, leaving them well positioned to cope with the demands of AI and with any future regulatory requirements.

Should you have any questions about the use of AI, AI clauses in commercial contracts, or how to manage the legal risks that might arise from the use of AI technology within your business, please contact our IP, Tech and Data Protection Team.
