
Explain it to me as if I were Human

My car has an alert system that warns me of collisions.


There is this one spot where the alarm goes crazy - beeping faster and louder until we pass the spot and it relaxes.


The first time we passed there, I hit the brakes by instinct, not knowing what was going on. After a couple of times, though, I figured this was a false alarm, and slowly but surely the system trained me to ignore it completely (to the terror of occasional fellow passengers).


To this day I have no idea what triggers it in that specific spot. I tried looking in the manual, but there are literally dozens of possible situations for this alarm, and it wasn't clear which one applied.


I wish it could tell me what pattern it saw that triggered the alarm (e.g. a pattern matching “person on a motorcycle”), in which case I would reprogram it to recognize that this is just an old tree growing by the road. But then comes the question - should I? Maybe it's better to have an overly sensitive system and not risk a false negative (a real person coming off as an old tree).
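To make that tradeoff concrete, here is a toy sketch in Python - the scores, the threshold values, and the should_alarm helper are all made up for illustration - showing how moving a single alarm threshold trades false alarms against the false negative I'd be risking:

```python
# Toy sketch (hypothetical numbers): the tradeoff behind an "overly
# sensitive" collision alert. Lowering the threshold means more false
# alarms (tree flagged as pedestrian); raising it risks a miss
# (pedestrian dismissed as tree).

def should_alarm(pedestrian_score: float, threshold: float) -> bool:
    """Fire the alarm when the detector's pedestrian score clears the threshold."""
    return pedestrian_score >= threshold

# Suppose the detector scores the old tree at 0.62 and a real pedestrian at 0.70.
tree_score, pedestrian_score = 0.62, 0.70

for threshold in (0.50, 0.65, 0.80):
    tree = "ALARM (false positive)" if should_alarm(tree_score, threshold) else "quiet"
    person = "ALARM" if should_alarm(pedestrian_score, threshold) else "quiet (false negative!)"
    print(f"threshold={threshold:.2f}: tree -> {tree}, pedestrian -> {person}")

# threshold=0.50: both alarm  -- annoying but safe.
# threshold=0.65: only the pedestrian alarms -- the setting I wish I could pick.
# threshold=0.80: neither alarms -- the false negative to be afraid of.
```

The catch, of course, is that without an explanation of what the system saw, I have no idea which of these regimes I am actually in.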


Even though I always say I will be the first to hand over the keys to Driverless cars when they become reality, this makes me wonder how soon we will trust those systems with our lives. As long as my foot is on the pedal, it remains in the realm of philosophy and rants about how terrible Human drivers are.


In Pharma, we will be facing the same dilemmas very soon.


When the AI tells us there is an issue with the Fermenter, or that something is off in the way material flows - we will have to trust the machine's “gut feeling” as opposed to ours. The basic requirement for such trust would be that we UNDERSTAND what it saw - was it a Tree or a Pedestrian? Do we really have to stop, or could we just slow down?


AI, especially in highly regulated environments like Pharmaceutical manufacturing, will have to be bundled with an Explanation layer (xAI) that articulates the reasons why the algorithm chose one feedback loop over another. And to make it even trickier - the explanation will have to vary depending on the person interacting with the system.


The cleanroom operator will need to have confidence in the Red light going off, and to understand what to do (and whether to do anything at all). The QA person will want assurance that the process remained within the design space. Regulatory will look for comparability and state of control.


All stakeholders will be referencing the same sources of data in their ML algorithms, but with entirely different analyses and outcomes, and entirely different narratives behind them.
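As a rough sketch of what that could look like - all signal names, limits, and values below are invented for illustration - the same attribution output from one model could be rendered as three different narratives:

```python
# Hypothetical sketch of an explanation layer: one set of model
# attributions, three stakeholder-specific narratives. Every signal
# name, limit, and value here is made up for illustration.

# What the model "saw": each signal's contribution to the alert score.
attributions = {
    "fermenter_dissolved_oxygen": 0.55,
    "feed_flow_variability": 0.30,
    "ambient_temperature": 0.05,
}

design_space_limits = {"fermenter_dissolved_oxygen": (30.0, 60.0)}  # % saturation
current_values = {"fermenter_dissolved_oxygen": 27.4}

def explain(role: str) -> str:
    """Render the same model evidence as a narrative for a given stakeholder."""
    top_signal, weight = max(attributions.items(), key=lambda kv: kv[1])
    if role == "operator":
        return (f"Red light: {top_signal} is driving {weight:.0%} of the alert. "
                f"Check the sensor and hold the batch pending QA review.")
    if role == "qa":
        low, high = design_space_limits[top_signal]
        value = current_values[top_signal]
        return (f"{top_signal} = {value} is outside the design space "
                f"[{low}, {high}]; alert attribution {weight:.0%}. "
                f"Deviation record recommended.")
    if role == "regulatory":
        return (f"Model flagged drift in {top_signal}; trending data support "
                f"an assessment of comparability and state of control.")
    raise ValueError(f"unknown role: {role}")

for role in ("operator", "qa", "regulatory"):
    print(f"[{role}] {explain(role)}")
```

Same data, same model, three answers to "was it a Tree or a Pedestrian?" - each phrased in the language of the person who has to act on it.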


As always, Pharma will probably be the last to join this party, but the low-hanging fruit and the narrow AI modules are already here, capable of augmenting our abilities immensely, at least outside the GxP environment.


This is the substance of ongoing industry discussions with the Regulatory authorities, but at some point, we will have to decide whether the machines can handle it better than we can, and take our foot off the pedal.


 
