Date/Time: 11:00 - 12:00
Location: Webinar
Timezone: America/New_York
Overview:
AI models, and their more advanced Large Language Model (LLM) counterparts, are being evaluated for detecting adverse events in scientific literature and for streamlining post-market surveillance workflows. Establishing the utility of artificial intelligence techniques in adverse event detection, however, requires a richer analysis of their accuracy and precision. This session examines the different approaches organisations should take when exploring the efficacy of AI in detecting adverse events in literature, as well as the regulatory considerations associated with its use.
Key learning objectives:
- Leverage AI models and advanced Large Language Models (LLMs) to detect adverse events in scientific literature and streamline post-market surveillance workflows.
- Evaluate the nuances of assessing the accuracy and precision of AI techniques in adverse event detection, and the key considerations for conducting a robust analysis of AI model performance (an illustrative metrics sketch follows this list).
- Consider the different approaches organisations can take to explore the efficacy of AI in detecting adverse events in literature, including methodologies for evaluation and comparison.
- Address regulatory considerations associated with the use of AI in adverse event detection, including key compliance aspects when integrating AI technologies into post-market surveillance.
- Optimise post-market surveillance workflows through the effective integration of AI technologies, fostering efficiency and accuracy in adverse event detection processes.
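The accuracy and precision analysis referenced above is typically grounded in standard classification metrics. The following is a minimal, illustrative Python sketch, not drawn from the session materials: it assumes a hypothetical gold standard of human-annotated abstracts and a model's binary adverse-event/no-adverse-event predictions, and the helper precision_recall_f1 is an invented name used only for this example.

```python
# Minimal sketch (illustrative only): comparing a hypothetical AE-detection
# model against human-annotated gold labels for literature abstracts.

def precision_recall_f1(gold, predicted):
    """Compute precision, recall, and F1 for binary AE-detection labels.

    gold, predicted: lists of booleans, one per abstract, where True means
    "contains a reportable adverse event".
    """
    tp = sum(1 for g, p in zip(gold, predicted) if g and p)       # true positives
    fp = sum(1 for g, p in zip(gold, predicted) if not g and p)   # false positives
    fn = sum(1 for g, p in zip(gold, predicted) if g and not p)   # false negatives

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1


if __name__ == "__main__":
    # Hypothetical labels for five abstracts (True = adverse event present).
    gold = [True, False, True, True, False]
    model = [True, False, False, True, True]
    p, r, f1 = precision_recall_f1(gold, model)
    print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

In this framing, recall reflects how many true adverse events the model surfaces for review, while precision reflects how much reviewer effort is spent on false alarms; both matter when deciding whether an AI screen can safely sit in front of human assessment.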
Speaker:
Andrew Purchase
Director, Pharmacovigilance Specialised Services and UK QPPV, Pharmacovigilance & Safety Reporting Services