Volume 11, Issue No. 1, February 2024
As the fever pitch of excitement and expectation around GenAI continues to reverberate into 2024, more scientific papers, op-eds, blogs, podcasts, roundtables, webinars, keynotes, and panel discussions will ensue. New phrases will be coined by the evangelists, prognostications will abound, and the familiar hype-versus-reality and man-versus-machine debates will continue in an attempt to see what lies beyond the bend in the road. Granted, the AI genie has long since left its bottle (and has been rather busy building these tools), but regulatory oversight has not kept pace with these technological advancements, and we still lack the robust foundations needed to future-proof what has come to be known as the Fifth Industrial Revolution.
We have to understand that GenAI is simply a tool. I don’t doubt it will bring transformational change in helping cure disease as its multimodal capabilities crunch through vast datasets of biological and clinical data, but it cannot occupy the same status as science itself, nor as human creativity. It augments; it is not singularly the answer. It also brings with it issues of explainability: the wider and deeper the language nets are cast across these multiple parameters, the more difficult it becomes to assess the sequential decision-making process. We will inevitably create a culture of overreliance if we fail to recognize how these tools should be adopted: as tools that allow the human mind to see things it otherwise could not, facilitating further human interrogation, analysis, and deduction. Let’s try to avoid Lazy Language Models and, indeed, lazy science.
Accusations of potential scientific misconduct reared their ugly head again in recent weeks. We need, at all costs, to stand united in our fight against fraud and manipulation. Moreover, it would hardly be fair to accuse GenAI of hallucination when it is gobbling up our mistruths and fabrications. Paradoxically, LLMs are now being used to add a layer of scrutiny over our digital footprints, so perhaps this will serve as a deterrent to those of a nefarious disposition seeking personal gain.
We have seen the EU take the lead on AI regulation, which is very promising indeed, and other regions should follow suit, though we must recognize that not all healthcare delivery systems are equal as we traverse borders and continents. Again, the perennial tension between regulation and innovation persists, but, as ever, a measured, pragmatic approach needs to be found.
We are already witnessing how AI is reducing physician workload, accelerating decision making, improving operational efficiencies, and delivering more predictive and precise analyses in the screening, diagnosis, and treatment of disease across many modalities. We are seeing the huge untapped potential in drug discovery and development, but as Michael Liebman rightly points out in his article, disease evolves; it is a process, not a state. Clinical guidelines vary considerably, yet they need to reflect this evolution, and the AI tools we are starting to implement must address the complexity of the problem, factoring in that we often have only a fragmented and partial snapshot of disease. More crucially, as both participants and pioneers in this endeavour, we cannot lose sight of what is needed to enable the success of AI; willful ignorance of those facts is simply not an option.
Damian Doherty
Editor in Chief