In December 2023, the European Union reached a historic agreement on an Artificial Intelligence (AI) Act to regulate the technology and to protect users’ health, safety, and rights across its 27 member countries.

The act is currently in the provisional phase and is expected to become law in around two years’ time. It is the first bill of its kind anywhere in the world, created with the sole purpose of regulating and controlling the use of AI.


With AI being increasingly used in healthcare and precision medicine, ensuring that it is safe, effective, and unbiased has become a growing concern for users. Although many in the tech industry acknowledge that these are important considerations, developers of AI-based technology fear that overly stringent regulation could stifle innovation in the sector.

Sandeep Reddy is an entrepreneur and academic at Deakin University in Australia. Coming from a clinical background, he now specializes in medical informatics, digital health, and the use of AI in healthcare. He has been closely following the development of the EU’s AI Act and other related health tech regulations. He discussed the importance of AI in healthcare, the impact of the AI Act, and its potential effects on the industry with Inside Precision Medicine’s senior editor, Helen Albert.

Q: Why is AI important for improving healthcare?

Sandeep Reddy: When we look at the healthcare landscape, immaterial of where you’re located or which health system you are part of, there are significant challenges that need to be overcome.

I’ll use the example of the Australian healthcare system … we have a lot of similarities with the National Health Service in the U.K., but we also have some differences. One of the biggest challenges we have here in Australia is maintaining continued funding of the public healthcare system … it’s come to a point where many services have to be cut back due to lack of funding or staff. I can also see that happening in many other healthcare systems that rely on public government funding.

When you look at the U.S., which is a different healthcare system, there is a different issue … there are about 40 million Americans who don’t have health insurance, and they can’t access many hospital services, or even primary health care services.

If you look at both aspects, sustainability of the healthcare system and access to healthcare, there is no single solution other than AI that can address these issues. For example, with the advent of generative AI and the emergence of large language models, it’s very likely that you could have an AI assistant or a semi-autonomous AI agent triaging patients … you’d still need human clinicians to be able to manage patients and then continue the care that has commenced with the AI agents, but at least patients would have better access than they do now.

When you look at populations across the world, and you ask them whether they would accept AI as a potential alternative to human clinicians, you will see that there is a huge amount of positivity in under-resourced developing countries, because there is absolutely no access to any sort of healthcare. Forget tertiary or secondary healthcare, even primary healthcare is very difficult to access. In those situations, I truly believe that AI can be a solution. Even in better resourced, developed countries, AI can help by filling in insurance forms, addressing administrative issues linked to electronic health records, and giving physicians more time to spend with their patients, among many other things.

Q: What were your takeaways from the EU’s recent AI Act agreement?

SR: The EU’s AI Act is unlike any other act in the world. We haven’t seen this kind of act specifically for a particular technology before. There have been laws and regulations, but to have so many countries come together and agree on something on a single technology is unprecedented.

Generally, I like the principles of the act. I think if implemented in a way that is very practical and allows for innovation and supports small and medium enterprises, it will go a long way in ensuring the quality of AI in healthcare and in other sectors too.

One thing I like about it is that it includes risk stratification. It’s a general act, so it covers AI overall and is not specific to healthcare, but it does identify that the highest-risk applications need to go through the process of getting regulated and approved. That would apply very much to healthcare applications. This means there would be much more stringent and closer monitoring of AI applications in healthcare as opposed to AI for retail or finance, for example. From that point of view, it might be good for preventing any risky outcomes from AI applications or stopping shoddily built AI applications from reaching the market.

It also looks at AI transparency. I strongly advocate for explainability in AI technologies … if we have an AI agent, being unable to explain how it arrived at a decision is very risky in healthcare. The EU AI Act emphasizes the transparency requirements. It also prohibits AI from being used for facial recognition, social scoring, and those kinds of areas where it invades privacy, which I think is welcome to a lot of people.

Q: Why were countries such as France and Germany objecting to the act to begin with?

SR: I think Germany and France worried about their tech startups and related small or medium-sized businesses being left behind because of the act. They were worried about strict regulations inhibiting or stifling innovation. The big tech companies have in-house lawyers and “unlimited” pockets, so they can more easily navigate the compliance process.

I am an entrepreneur as well as an academic. I’ve been through the process of trying to get our software as a medical device approved by our local regulatory agency. Having gone through it, I realize it’s not just about providing the information, which I think is fine. It’s having the ability to sustain the costs and the resourcing required to go through that process. This is beyond the capacity of most small and medium enterprises.

I was in Malta last month and I was talking to a couple of European startups. They said that getting CE approval [a current requirement for medical devices] takes a long time. It also involves significant costs, and there are a lot of complexities involved in achieving compliance. So, if you have a much more complex act coming into play, it pretty much stops many businesses or small startups from being able to introduce their products into the market.

Some amendments were made to the act to address those concerns. But I still think that if the bureaucracy and the regulations become quite complex, it may affect the ability of companies to release new products in the EU. That then puts European Union member nations behind the U.S. or other countries where regulation is less stringent.

Q: Do you think there is a way that you could still have AI regulation, but keep innovation alive?

SR: I have argued in the past for creating precise regulations for the use of AI in healthcare. You could potentially have a much more nuanced type of subregulation, which is very customized to the healthcare sector.

In the healthcare sector, I don’t think we should worry so much about the technology per se, what kind of algorithm was used, or what the technical metrics are; we should instead be worried about outcomes. That is where we can use the analogy of pharmaceutical medications: it’s not so much the ingredients in the medications, it’s the outcomes that the medication achieves that we’re concerned about. And that’s what we monitor in a randomized controlled trial. I think we can do the same with AI software as a medical device, where we look at the clinical outcomes and assess the risk based on that.

I think we need to include transparency and explainability and then focus on the overall outcomes of the AI-based technology, as opposed to monitoring each and every aspect of the AI model, which would make it really burdensome for many businesses to get that approval.

Q: How does this kind of AI regulation compare with that seen in other countries like the U.S.?

SR: This is unprecedented. No broad, sector-wide act like this has been introduced in any part of the world. Having said that, it sets a precedent for other nations to start looking at and seeing how they can implement their own AI acts.

Within the U.S., President Biden already issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI in October 2023. There are a lot of commonalities with the EU’s proposed AI Act because we’re dealing with the same technology, so you would naturally cover those aspects. But the important difference is that it’s a directive, not a law or legislation. The focus is on the National Institute of Standards and Technology to issue standards around AI, but it’s not a legal requirement for companies.

Despite this, there are existing AI regulations in the U.S. The FDA has been a pioneer in issuing regulations around AI in healthcare. It’s really the benchmark for other countries to follow, because they’ve been very innovative in ensuring that there is a specialized pathway for AI-based software as a medical device, even though it takes some time and money to progress through this pathway.

Q: Do you think other countries and regions are watching what happens in Europe to see if it works well before they introduce anything similar themselves?

SR: Yes. One of the concerns that a lot of my connections in the EU member nations, who are all entrepreneurs, have is that this act could stifle innovation and result in the flight of many AI startups to countries like the U.S. or other parts of the world. For example, one entrepreneur I met recently was trying to shift to Southeast Asia even before the AI Act was agreed.

The U.K. could be a positive beneficiary of the act, with a lot of businesses moving there, but there is pressure on the government to introduce its own regulations. Indeed, they recently had a meeting to discuss AI regulation and ensuring the safe delivery of AI applications.

Q: Do you think it’s important to have these regulations?

SR: Yes, absolutely. Don’t mistake me, I’m actually pro-regulation. But I’m worried about the application aspect making it really difficult for innovation to happen or creating artificial barriers to the entry of AI into healthcare. I welcome regulations, but over-regulation can lead to pockets of the world where there is an absence of AI and others where there is abundant use of AI. And without any regulation, it’s a wild west situation. If we can come together and understand the risks associated with AI applications in healthcare and how to contain them, while not stopping the entry of AI into healthcare, I think that’s really important.

The stakeholders are not just the government and the healthcare organizations, but the patients too. A lot of patients are asking for AI and are already accessing AI-based healthcare apps. People are already using AI for health-related issues, and we need to be aware of this. We can’t stop people using it, but I think we should have some agreement on how we apply regulations around AI applications in healthcare and make them really practical to follow rather than burdensome.


Read More

  1. Navigating the AI Revolution: The Case for Precise Regulation in Health Care, September 2023
  2. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, October 2023
  3. UK government to publish ‘tests’ on whether to pass new AI laws, January 2024


Helen Albert is senior editor at Inside Precision Medicine and a freelance science journalist. Prior to going freelance, she was editor-in-chief at Labiotech, an English-language, digital publication based in Berlin focusing on the European biotech industry. Before moving to Germany, she worked at a range of different science and health-focused publications in London. She was editor of The Biochemist magazine and blog, but also worked as a senior reporter at Springer Nature’s medwireNews for a number of years, as well as freelancing for various international publications. She has written for New Scientist, Chemistry World, Biodesigned, The BMJ, Forbes, Science Business, Cosmos magazine, and GEN. Helen has academic degrees in genetics and anthropology, and also spent some time early in her career working at the Sanger Institute in Cambridge before deciding to move into journalism.
