WHAT HAPPENED TO REAL INTELLIGENCE?
There has been a great deal written on the use of artificial intelligence in medical practice and, more particularly, in the field of pharmacovigilance. Claims are made for reduced case processing times, more efficient recording and storage of data, and better use of algorithms for the detection of safety signals. Whilst this is undoubtedly the case, there remains a need for real intelligence, viz. a human input rather than a machine.
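As an illustration of the kind of signal-detection algorithm referred to above, the proportional reporting ratio (PRR) is one widely used disproportionality statistic: it compares how often an event is reported with a drug of interest against how often it is reported with all other drugs. The sketch below uses invented counts purely for illustration, not real data.

```python
def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 contingency table.

    a: reports mentioning both the drug and the event
    b: reports mentioning the drug but not the event
    c: reports mentioning the event with other drugs
    d: reports with other drugs not mentioning the event
    """
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts: 20 of 1,000 reports for the drug mention the
# event, versus 50 of 20,000 reports for all other drugs.
signal = prr(a=20, b=980, c=50, d=19_950)
print(round(signal, 1))  # prints 8.0
```

A PRR of 8 would be well above the commonly cited screening threshold of 2, so the drug-event pair would be flagged for review. The point of the example is that the statistic only highlights a potential signal; a human assessor must still judge whether it reflects a genuine causal relationship.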
If anyone suffers an adverse event, whether in a clinical trial or in routine clinical practice, someone must consider the potential relationship between the event and any medicinal product that person has used. One of the weaknesses of real-world drug safety is under-reporting of events: despite initiatives such as Prescription Event Monitoring in the UK, we are still critically dependent on human evaluation and reporting. First of all, the patient must consider the event possibly linked to having taken a product. In the unlikely event that they inform their prescriber or anyone else in the healthcare chain (from reception staff through to the pharmacist), a decision still has to be taken as to reportability.
In the clinical trial situation, there are strict requirements on the staff involved in the investigation to be aware of any events that might potentially be adverse reactions to the Investigational Medicinal Product. Most trials now have a backstop in the form of a Data Monitoring Committee, which can review the data in real time, request additional information and, ultimately, has the power to modify or even halt the study on safety grounds.
Lest I be thought a complete Luddite holding back the march of machine intelligence, I must add that the vast amount of data we generate in everyday life means that there is a need for AI systems to support complex decision making, and I referred to this for financial systems in a previous post. However, there is always a place for real people to review data and make decisions. This has been brought into sharp focus in the coronavirus pandemic, where reductions in face-to-face interactions between patients and clinical staff have reportedly led to many missed diagnoses.
If you ever fill in those irritating questionnaires about your experience with a car service, hospital appointment, carpet fitting etc. (and I hereby confess to not doing so!) then you will have noted that the range of options to be ticked has now been changed to an even number, cunningly removing the most popular human choice of going for the middle. In drug safety we have in many cases adopted a binary choice on relationship: related or unrelated. This was an attempt to reduce the grey areas of "possibly" or "unlikely" related and to improve reporting. However, there is still a need for a human interaction to assess, evaluate and decide.
So, whilst accepting that AI, machine learning and other such techniques are enormously valuable in our modern, data-filled world, let's remember there's still a place for human input.