Russell Regulatory Consultants Ltd

Artificial Intelligence

Harnessing AI’s full potential responsibly and ethically, whilst guarding against potential unintended consequences

Summary

Many would suggest that the beginnings of ‘Artificial Intelligence (AI)’ can be traced back thousands of years: ancient Greek, Egyptian and Chinese cultures all make reference to robots and engineered machinery with capabilities beyond the ordinary. Robots with human-like features and functions have been present in the minds of science fiction writers since the first half of the twentieth century, and by the 1950s the idea had become culturally familiar through its growing popularity in books, television and film. The scientific foundations were laid by the British mathematician Alan Turing, whose 1950 paper ‘Computing Machinery and Intelligence’ discussed how to build intelligent machines and how to test their intelligence; the term ‘artificial intelligence’ itself was coined in 1956 at the Dartmouth workshop. Research has continued ever since, and AI has slowly but consistently gathered pace. Nowadays, AI is entrenched in our daily lives, from entertainment to banking and, of course, healthcare. It is widely considered that the further integration of AI into our healthcare landscape has triggered something of a ‘healthcare revolution’. This has led to widespread debate surrounding AI, particularly how safe it is and how it should be properly regulated. This white paper will look at the perceived benefits and challenges of AI, the genuine concerns they raise, and how we, as a society, are looking to address them.

Benefits of AI, both current and potential

The world has witnessed the rapid advancement of AI in healthcare, leading to continued debate about what it all means, and what it could come to mean. Some believe that AI has matched human capability and is on track to surpass it, with human doctors eventually being replaced. General public opinion has gradually become more favourable towards AI, likely as understanding grows. However, a lack of trust in AI, as well as its inability to match humans on a ‘human’ level, remains prominent among the pitfalls the public perceives.

When laid out, the potential benefits of AI in healthcare are undeniable and could be ‘transformative’ for global health. The evidence shows that AI could augment our existing clinical capabilities. It could redefine what is possible in medical treatment and device functionality, leading to greatly improved patient outcomes. This is, in turn, likely to reduce the burden on our health services and clinical personnel, and to reduce health spending.

Design revolution – AI algorithms can rapidly generate multiple prototypes in medical device design, outpacing the human-led design teams we have been accustomed to. This could dramatically reduce waiting times for innovative treatments and devices, again leading to improved medical outcomes.

Personalised medicine – AI could make patient-customised medicine and treatment more achievable and accessible. Algorithms can be trained to consider individual anatomical traits and adjust their outputs accordingly. In orthopaedics, for example, AI can help design bespoke implants that fit patients individually, taking into account each patient’s bone structure and physiology and increasing the likelihood of favourable and consistent outcomes following surgical intervention. Another example is AI-generated cardiac stents matched to an individual patient’s vascular structures, making success much more likely.

Reduced procedure complications – with the predictive capabilities of AI, as well as its potential ability to evaluate individuals’ anatomical differences, AI could lead to a reduction in the medical complications experienced by patients, again leading to improved personal and financial outcomes.

Reductions in recovery times – more successful procedures, tailored implants and fewer complications could all shorten recovery times, saving money for the health service and improving patient outcomes.

Reducing regulatory delays – the predictive nature of AI could provide a robust framework to apply to the regulation of different medical devices. This could mean reduced approval times for new devices, giving the public more timely access to medical device innovation. It could also be argued that this is a safer method of regulating devices, as it is less subjective and more methodical.

Challenges

AI is an increasingly prominent topic in global current affairs as society grapples with understanding its full capabilities. Global tech leaders, although heralding its potential benefits, are quick to warn of the dangers and continually push for AI to be more tightly regulated. Elon Musk, entrepreneur and CEO of Tesla, SpaceX and Neuralink, and considered by many a visionary, advocates the ‘responsible development’ of AI and robust regulation so that AI can be used safely and ethically. He has urged governments to protect the public from the dangers AI could pose, and has emphasised the importance of not undervaluing the human capacity to strategise, create and empathise. Musk has called for a dedicated regulatory body for AI, while other global leaders suggest that, much as we need more regulation of AI, we are not yet ready to write that legislation because we struggle to keep up with AI’s rapid development.

Safety issues and concerns relating to AI in medical device regulation

Although public perception of the integration of AI into our healthcare systems, medical treatments and devices has improved, trepidation remains about how disruptive AI will be to our existing health practices and treatments. Generally, patients are much more trusting of a human clinician than of AI and yearn for the traditional doctor-patient relationship. Added to this are concerns over how ethical AI is and how secure our health data truly is.

However, with health concerns and spending both rising, harnessing AI’s potential responsibly and ethically will inevitably mean less time with human clinicians, but it could also mean improved outcomes.

Potential for discrimination – studies have shown that discrimination against minority groups, albeit often unintended, already exists within our health services and culture. If AI-enabled devices are trained on inherently biased data, the devices born out of that data will inevitably be biased too. Studies have found, for example, that AI tools detect skin cancer less reliably in darker-skinned patients than in lighter-skinned patients, thought to be because the algorithms were trained predominantly on images of lighter skin. We must ensure that minority groups are not underrepresented in training datasets if we are to strive for and achieve equity in all medical device development. To do this we must adopt an equitable approach from the outset and incorporate it into algorithm development and machine learning; the sketch below illustrates how such a performance gap might be measured.
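To make the equity point concrete, here is a minimal sketch in Python of how a performance gap between patient groups might be surfaced during validation. The group labels, synthetic data and the choice of sensitivity as the audited metric are illustrative assumptions, not a prescribed method.

```python
# Illustrative audit of a diagnostic model's sensitivity per patient group.
# All data and group labels below are synthetic, for demonstration only.
from collections import defaultdict

def per_group_sensitivity(y_true, y_pred, groups):
    """Sensitivity (true positive rate) for each subgroup."""
    tp = defaultdict(int)  # correctly detected cancers per group
    fn = defaultdict(int)  # missed cancers per group
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1:  # only positive cases count towards sensitivity
            if pred == 1:
                tp[group] += 1
            else:
                fn[group] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

# Synthetic labels: 1 = cancer present, 0 = absent.
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0]
groups = ["lighter", "lighter", "lighter",
          "darker", "darker", "darker", "lighter", "darker"]

rates = per_group_sensitivity(y_true, y_pred, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)                          # e.g. {'lighter': 1.0, 'darker': 0.0}
print(f"sensitivity gap: {gap:.2f}")  # a large gap signals inequity
```

A manufacturer or regulator could run a check of this kind on every candidate model and treat any large gap as a reason to rebalance the training data before approval.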

The provenance and quality of the data collected and used to train AI algorithms must also be called into question. How can we be sure that the data used in these instances is of sufficiently high quality? The system needs to be transparent enough to reassure service users of the objectivity of the data used, and even basic automated checks, of the kind sketched below, can catch obvious defects before training begins.
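The sketch below shows what such quality gates might look like in Python. The field names, acceptable ranges and records are invented for illustration; a real pipeline would validate against the device’s own data dictionary.

```python
# Illustrative quality gates applied to a dataset before algorithm training.
# Field names and acceptable ranges are hypothetical examples.
records = [
    {"age": 54, "systolic_bp": 128, "skin_tone": "IV"},
    {"age": 61, "systolic_bp": None, "skin_tone": "II"},
    {"age": -3, "systolic_bp": 140, "skin_tone": "VI"},
]

RANGES = {"age": (0, 120), "systolic_bp": (60, 250)}

def quality_report(rows):
    """Return (record index, field, problem) for every defect found."""
    issues = []
    for i, row in enumerate(rows):
        for field, (lo, hi) in RANGES.items():
            value = row.get(field)
            if value is None:
                issues.append((i, field, "missing"))
            elif not lo <= value <= hi:
                issues.append((i, field, f"out of range: {value}"))
    return issues

for row_idx, field, problem in quality_report(records):
    print(f"record {row_idx}: {field} {problem}")
# record 1: systolic_bp missing
# record 2: age out of range: -3
```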

Data ethics – the premise of informed consent is that the individual giving consent fully understands what they are consenting to. With AI-enabled devices, it may be difficult for a layperson to understand enough to make an informed decision about their care. Data privacy has also been raised as a concern: many suggest that AI-enabled devices may hold patient-identifiable data that is vulnerable to data breaches, hacking or attacks on the AI network.

In terms of data ownership, the individual to whom the health data relates ultimately owns the data. However, healthcare providers and companies own the right to access it, and, as is so often the case, that access may then go to the highest bidder, to be used in the advancement of their own objectives. It cannot be guaranteed that this will always be in the best interests of the data owner, in other words the patient.

Lacking expertise – AI in medicine is a case of the virtual colliding with the physical, and a shortage of cross-disciplinary expertise is likely to cause problems. For example, the software engineers qualified to train algorithms are unlikely to be medically trained; likewise, clinicians are unlikely to be trained in software engineering. This may well lead to technical illiteracy on both sides, which may in turn reduce efficiency and increase errors.

It is also suggested that the original algorithms will eventually drift as disease progression and patterns evolve and vary. It is vital that we do not become too reliant on AI, and that it is regulated and monitored effectively to ensure it remains accurate; the sketch below shows one simple way such drift can be flagged. Currently, there seems to be a lack of standardised processes and regulation to protect against these pitfalls.
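One widely used way to watch for drift is to compare the distribution of a model’s inputs or scores in service against the distribution seen at approval. The Python sketch below uses the population stability index (PSI) for this; the data, bin count and the 0.2 alert threshold are illustrative assumptions rather than regulatory requirements.

```python
# Illustrative drift monitor: population stability index (PSI) between the
# score distribution at approval time and the distribution seen in service.
import math

def population_stability_index(baseline, live, bins=10):
    """PSI between two samples of a model score (higher = more drift)."""
    lo = min(min(baseline), min(live))
    hi = max(max(baseline), max(live))
    width = (hi - lo) / bins or 1.0  # guard against zero-width bins
    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # A small floor avoids log(0) when a bin is empty.
        return [max(c / len(sample), 1e-6) for c in counts]
    base, curr = fractions(baseline), fractions(live)
    return sum((c - b) * math.log(c / b) for b, c in zip(base, curr))

baseline_scores = [0.10, 0.20, 0.25, 0.30, 0.35, 0.40, 0.50, 0.60]  # approval
live_scores = [0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.90]      # in service
psi = population_stability_index(baseline_scores, live_scores)
if psi > 0.2:  # common rule-of-thumb threshold for material shift
    print(f"PSI = {psi:.2f}: distribution shift detected, review the model")
```

A monitor like this costs little to run continuously and gives a quantifiable trigger for the kind of post-market review the paper argues is currently missing.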

Human factors – as mentioned, the algorithms guiding AI devices will be trained and implemented by software engineers, and there will always be an element of human error which could unwittingly be trained into the device. In addition, human error on the clinician’s part, such as a lack of proficiency in using the AI-enabled device, could also have detrimental effects.

Ethics of clinical practice – rule-orientated robots may well be more reliable, but when faced with ‘human’ situations requiring empathy, humans are more ethically dependable. Patients would much rather receive news of a poor prognosis from a human clinician than from an AI platform.

Social acceptance – although increased public exposure to AI has allowed its shortcomings to be discussed and addressed, most patients report that they remain more trusting and accepting of a diagnosis delivered by a human rather than by AI. Concerns from the clinical sector include fears that clinicians will be replaced by AI technology; others fear clinicians will become too reliant on AI and, in effect, begin to be de-skilled. Experts have warned that clinicians must remain sceptical about AI outputs: they must not blindly accept AI results, and should continue to let their clinical judgement and experience ultimately guide them.

Evolving legislation in the UK and globally

Many are calling for a structured legal system to be developed to address AI; others have suggested setting up independent ethics committees to oversee AI activity. Experts advise that these systems should be flexible, fluid and adaptable to changing times, regulations and trends.

We should aim for safe, reliable and equitable AI

Discussions like this give rise to the need for greater contribution from patients and clinicians on the real issues in healthcare, in order to prioritise effectively and create feasible solutions to healthcare problems.

We should place more weight on the quality of the data we collect, as it will inevitably be used to train our AI-enabled devices, and any discrepancy in its reliability will feed through into the device and beyond.

We can add value to our data by engaging with more diverse groups of patients in order to create AI devices that are transparent, fair and truly equitable.

Global discussion on AI regulation

The European Commission proposed a regulatory framework for AI in 2021, and the resulting AI Act was adopted in March 2024. The regulation applies across the 27 EU member states but will also apply to organisations outside the EU, including in the UK, whose AI systems are placed on the EU market or use EU customers’ data. The AI Act prohibits certain AI practices considered to pose unacceptable risk (those thought to negatively affect safety or fundamental rights), such as cognitive behavioural manipulation, social scoring, predictive policing and certain uses of biometric identification, while imposing strict obligations on high-risk systems. The new framework will see a dedicated AI Office opened within the European Commission, as well as an AI Board of member state representatives and an advisory forum for stakeholders.

The global AI Safety Summit was held in November 2023, during which many areas were highlighted as potentially harmful. The summit recognised and discussed the prevalence of underrepresented datasets in AI algorithm training, and it underlined the importance of fairness metrics and equitable AI. Concerns were also raised about healthcare practitioners’ AI literacy and skills, with many fearing AI advancement will overtake human caregivers and clinically skilled workers. The potential for AI to change the doctor-patient relationship in unpredictable ways was also discussed. The summit raised interesting points about how health problems and disease treatments are selected and prioritised for AI device development; failure to structure this properly could also lead to unfair bias against some patient groups. Furthermore, the summit reiterated concerns about how data is selected in the development and testing of devices, and how we can gauge the quality of the original data and thus predict the quality of the AI output. Another interesting point raised was how the impact of AI-enabled devices is monitored and quantified once they are in use.

UK Discussion

The Medicines and Healthcare products Regulatory Agency (MHRA) has called for balance in how we approach AI in healthcare: we need a framework which ensures safe and effective regulation of AI without stifling innovation and potentially blocking service users’ access to the latest technology. The MHRA has launched AI Airlock, a regulatory initiative aimed at AI as a medical device (AIaMD). It is designed to be a collaborative space for innovators, regulatory organisations, approved bodies, government bodies, academia, the NHS and the wider healthcare industry to connect and face the challenges and complexities of AI together. The MHRA has further underpinned this by forming alliances with the International Medical Device Regulators Forum (IMDRF) and forging partnerships with the US FDA and Health Canada, ensuring that the UK is at the forefront of adopting international best practice in AI MedTech regulation. In light of the desired balance, the MHRA states that it is committed to taking a ‘proportionate approach to regulating AI medical products’.

The Whitehead report, commissioned by the UK Government in response to findings of inherent bias within the healthcare system, called for a ‘whole system approach’ to ensuring AI-enabled systems and devices are safe and equitable. The report recommended that patients and clinicians be more involved in discussions surrounding healthcare priorities and experiences, so that this information can be harnessed and used to our benefit. Diverse groups of patients and clinicians should be consulted in order to promote fairness, equity and transparency. The ultimate goal should be to improve our understanding of AI-assisted medical devices.

The Future

The UK Government aims to make Britain a global leader in AI safety and regulation. To build a more agile and robust AI regulatory framework, the UK has pledged £10 million to upskill regulators so they can address the risks of AI; the hope is that this will also foster leadership in a safe, responsible AI innovation environment. Furthermore, £90 million will be invested in opening nine research hubs across the UK focused on achieving responsible AI. A further £19 million will be spent on 21 projects across the UK to develop safe and trusted AI tools and aid their responsible deployment, and £100 million will go towards developing the AI Safety Institute (AISI), which will examine, evaluate and test new types of AI. With a view to international collaboration, £9 million has been invested through the International Science Partnerships Fund to bring the UK and US together to focus on the development of safe, responsible and trustworthy AI.

Global collaboration – the UK hosted the first AI Safety Summit at Bletchley Park in November 2023, where attendees signed the Bletchley Declaration on AI Safety, a list of pledges to ensure AI is “designed, developed, deployed and used in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible”. The follow-up AI Seoul Summit was held in Seoul, Republic of Korea, in May 2024 and saw attendees sign an agreement to “collaborate on quantifying potential AI risks, including what would constitute a ‘severe risk’”. The third event, the AI Action Summit, will take place in Paris, France in early 2025.

How can RRC help you?

We have a particular interest in AI and AIaMD which, combined with our existing regulatory expertise, could make us the consultancy you have been looking for. We would be delighted to help you navigate the ever-changing regulatory landscape in this area; get in touch for more information.

Sources