The Heads of Medicines Agencies (HMA) and the European Medicines Agency (EMA) have recently published new guiding principles for the use of large language models (LLMs) in regulatory affairs. In the context of the guidance, artificial intelligence (AI) refers to generative computing systems that produce predictive output based on the input they are given; LLMs are AI models designed specifically for text-based output.
The new guidance largely focuses on defining the legal, ethical and safe use of AI, especially LLMs. It is a living document, intended to be updated and revised as AI technology, and the understanding of it, develop. It follows the more comprehensive but less up-to-date Ethics Guidelines for Trustworthy AI, published by the European Commission’s High-Level Expert Group on AI in 2019.
The guidance also makes recommendations for organisations that wish to support their staff in using LLMs ‘only where they can be used in a safe and responsible manner’.
AI technologies are powerful tools that can improve both the speed and the quality of scientific and regulatory writing; for examples, see ‘Breaking barriers: the future of CMC powered by AI’ in October’s issue of Regulatory Rapporteur. However, they also carry risks, including bias, copyright infringement and misinformation.