Emerging AI issues affecting life sciences companies in the EU and UK

How the EU is leading the way in the regulatory development of artificial intelligence

The panel began by noting that a year had passed since the European Commission (the Commission) issued its proposal for a European Union regulatory framework on artificial intelligence (AI). The proposal, released in April 2021, represents the first regulation of its kind across sectors, creating a comprehensive framework that addresses challenging ethical issues such as bias and transparency, as well as risks arising from automated decision-making. According to the panel moderator, Imogen Ireland, a senior associate in Hogan Lovells' Intellectual Property, Media and Technology practice, the AI legal landscape requires that we step out of our sector silos and work across them.

Dan Whitehead, counsel in Hogan Lovells' Privacy and Cybersecurity practice, noted that in recent years the European Union has focused heavily on digital regulation, and that the proposed AI Act will have a profound impact on AI governance in healthcare. Mr Whitehead noted that penalties under the proposed AI Act are even greater than those under the General Data Protection Regulation (GDPR) and could amount to €30 million, or 6% of a company's annual global turnover. Mr Whitehead also noted that the GDPR and other existing regulations (such as anti-discrimination and product safety laws) already indirectly address some of the major risks associated with AI, such as risks of bias or performance inaccuracies (false positives or negatives), risks to patient safety when artificial intelligence is used in a healthcare context, and challenges in explaining complex technology and its impact on real-world decisions and actions. The new EU regulatory framework on AI will go further in specifically addressing these risks in the context of AI technology.

Bonilla Ramsay, a senior adviser in the firm's Global Regulatory practice, provided an overview of AI regulation in the context of medical devices and in vitro diagnostics (IVDs). She noted that the new EU Medical Devices Regulation (EU MDR) has applied since May 2021 and the In Vitro Diagnostic Medical Devices Regulation (IVDR) since May 2022. Both are subject to transitional arrangements, but in the context of software as a medical device, AI will automatically be considered at least a Class IIa medical device, and possibly even a Class IIb or Class III device, making the conformity assessment for CE marking more complex. However, the EU regulations do not explicitly address AI as a medical device, which raises questions about how AI products should be treated under the current regulatory framework and the proposed AI Act.

Is the UK keeping up with the European Union?

Mr Whitehead noted that while the EU is leading the way, the UK also published a National Artificial Intelligence Strategy in the past year, which includes ambitious plans for investing in and regulating AI across all sectors. It remains to be seen how these plans will be implemented in practice.

Ms Ireland noted that in the context of intellectual property, the UK is already keeping pace, looking at ways in which intellectual property laws can and/or should address the complexities posed by AI. In 2021, the UK Intellectual Property Office (UK IPO) opened a consultation asking, among other questions, whether inventions created by artificial intelligence should be patentable and, if so, how. Guidance from the UK IPO is expected soon.

Practical next steps

Louise Crawford, a senior associate in Hogan Lovells' Technology practice, provided an overview of how the current liability regime, which relies on a mixture of tort, product liability, discrimination, privacy and contract laws, may not be sufficient to provide appropriate remedies for those who suffer losses due to errors or defects in artificial intelligence. Key to this analysis is the need to establish a link between the error and the loss, which can be particularly challenging when many parties are involved in developing and operating a complex solution. This liability regime is under review by the European Commission, and significant changes are expected in the near future.

Referring to a 2020 European Commission preliminary document, Ms Crawford noted that while it is still early days, the EU is likely to take a two-pronged approach: 1) extending its existing product liability regime to digital products; and 2) introducing a specific regime for AI operators that distinguishes between high-risk and low-risk systems and tailors liability accordingly. When it comes to legal reform in this area, the Commission's priorities will be 1) coordination among member states; and 2) ensuring that the liability framework is robust enough to foster confidence in AI technology and encourage continued development in the field.

Turning to “top tips” for clients in addressing AI liability risks, the panel stressed that having the right governance framework in place will be critical. Whether the AI technology is developed in-house or sourced from an external supplier, companies using it will need a framework of policies and protocols covering cost/benefit analyses and mechanisms for assessing AI risk factors. For companies that use vendors to provide AI technology, conducting due diligence on the vendor, the technology and the data is also critical. Vendors must be able to explain the effectiveness of their products as well as the risks, and how those risks can be mitigated. Appropriate contractual obligations must also be put in place to address the liability risks associated with the technology.

The panel concluded its session by advising stakeholders to pay close attention to developments in this fast-moving space: significant investment in AI technology is being made in tandem with a changing regulatory environment. Ms Ramsay advised life sciences companies to stay abreast of the regulations as they apply to every stage of AI product design and implementation. Mr Whitehead said he advises clients to take AI governance seriously and not simply fold AI compliance into another set of policies. Mr Whitehead also advised stakeholders to engage with regulators on policy proposals while there is still an opportunity to do so: the AI Act is not yet in force, and the proposals could change significantly before they become law. Ms Ireland said that people remain central, not only in the research and development process, but also in controlling and developing assets such as intellectual property.