Dr Yiannis Kourmpetis dispels some common myths about artificial intelligence (AI) and talks about our relationship with the ‘black box’. When it comes to scientific discovery, will AI or human beings hold the reins?
Many people share misconceptions about AI, and I’d like to clear these up. The first misconception is that AI is a novel thing that recently emerged from a computer research lab. It’s not. A huge proportion of what we refer to as AI today is merely computational modelling, which has been around for decades. Take AI-driven drug discovery as an example. Yes, the computational models used to discover new drugs have become faster, more accurate and more complex, but these models are fundamentally based on scientific methods developed in the 1990s.
What is changing, however, is that there’s an explosion of companies adopting AI today. This is driven by greater data availability, open-source libraries and cheaper access to the computational power needed to build AI systems. From the development of personalised therapies to regenerative medicine, AI’s role in the life sciences is already significant and will become increasingly impactful in the coming decade.
Another misconception is that if we pour a lot of data – often simply referred to as ‘big data’ – into a computational model, AI will figure out whatever we need it to without human input. This is not the case. Human domain knowledge and insight cannot always be encoded as data, and for most challenges we are faced with today, human expertise is the key factor in developing successful solutions. I use data modelling to facilitate innovation. I believe that catalysing human–AI interaction is the key to going beyond the distinction between traditional and AI-driven approaches and bringing the best of both worlds together.
Our team at Siftlink is working towards this to help companies innovate and launch impactful products. For instance, our AI conversational agent Sensi is designed to interact naturally with professionals involved in innovation processes to help them make better and faster decisions. Sensi provides responses and guidance but also asks for specific information from these professionals to improve its models rather than churning out more and more data that may not be useful for the humans it converses with.
Of course, that opens up some fascinating questions on the role of AI going forward. To what extent will humans allow AI to be in the driver’s seat? Should we accept a ‘black box’ that can do our jobs for us?
For me, there are cases where it is perfectly fine to entrust important decisions to a black box. If an AI model can diagnose a patient’s disease, saving time and perhaps the patient’s life, or virtually screen libraries of millions of compounds against a given target, it’s acceptable to use that model without knowing exactly how it works.
But when it comes to scientific discovery, sustainable progress relies on our ability to explain what’s happening, as we aim to progressively understand the causal factors and mechanisms behind a disease rather than just screening for drugs. We need AI methods designed to help us gain clarity in our reasoning and decisions rather than black boxes that ‘just work’. Even if AI one day becomes able to model diseases itself, such a system will be so complex that we will no longer understand the underlying parameters that lead to its predictions.
This is problematic for several reasons. Who is ethically responsible if a patient dies when AI has called all the shots? Who can investigate and understand what went wrong with the model and fix it? If AI is doing everything for me, how do I fit into the equation when it comes to ownership and responsibility? Who is liable if a model spits out answers but we fail to understand how they were generated? Can we as humans maintain and advance our expertise? For all these reasons, we need transparency going forward.
After all, scientific discovery has always meant humans figuring out the laws of nature while pushing the boundaries of technology. If I drop my pencil onto my desk 1,000 times and measure the time it takes to fall, I can use the data to create a model that tells me how long the pencil will take to hit the desk. It will be accurate, but it won’t help me understand the law of gravity that Newton figured out. In a world with no Isaac Newtons and only AI models instead, we would no longer understand how the world around us works intrinsically. For example, would we really have had the confidence to go to the moon relying on an accurate model but without understanding the law of gravity?
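The pencil-drop thought experiment can be made concrete with a small simulation. This is a hypothetical sketch, not anything from the original text: the “measurements” are generated here from the known law t = √(2h/g) plus noise, and the desk height and noise level are assumed values. The point is that a purely empirical model fitted to the data predicts the fall time accurately while containing no notion of gravity.

```python
import math
import random

random.seed(0)

# Assumed setup: simulate 1,000 noisy measurements of a pencil falling
# from a desk. The simulator uses t = sqrt(2h/g); the fitted model never
# sees that formula.
g = 9.81        # m/s^2, used only to generate the "measurements"
height = 0.75   # assumed desk height in metres
true_time = math.sqrt(2 * height / g)

measurements = [true_time + random.gauss(0, 0.01) for _ in range(1000)]

# A purely data-driven "model": the average of the observations.
predicted = sum(measurements) / len(measurements)
print(f"empirical prediction: {predicted:.4f} s")
print(f"value from the law:   {true_time:.4f} s")
```

The empirical prediction matches the law-based value closely, yet the model is just a number: change the desk height and it has nothing to say, while t = √(2h/g) generalises to any height, which is exactly the understanding Newton’s law provides and the data model lacks.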
I’m confident that AI will affect every aspect of our lives. But as science and scientific discovery drive humanity forward, it is our responsibility to stay on top of how AI is developing and to shape our future. AI is the technology that will enable us to expand our human ability to understand and explore the unknown. It’s a technology that, in a strange way, will start feeling more and more like us.