–Jake Okechukwu Effoduh
Artificial intelligence (AI) has brought a paradigm shift to healthcare, thanks to the increasing availability of healthcare data and the rapid progress of analytics. Whether it’s being used to provide early warning for coronavirus, diagnose breast cancer, perform robotic surgeries, or repurpose drugs in fields such as neuroscience, the healthcare ecosystem is experiencing an AI revolution. With the several ways in which AI is outperforming humans in detection, diagnosis, prediction and even prognosis evaluation, health insurance companies may soon offer their clients the option of being treated either by a human physician or by an AI.
If I am ever placed in a situation where I must choose between an AI doctor and a human physician (and can’t choose both), I’ll choose the AI. This does not mean less work for the human physician. Given the paucity of medical personnel relative to current global demand, there should always be work for both the human physician and the AI. Whilst it is possible for human labour, in aggregate, to decline in relevance because of technological progress, doctors are said to be one of the “safe” professions that will not be diminished by automation. AI physicians will help lift the much-protested workload off human physicians and excuse them from the mundane aspects of their work. Just as automobiles haven’t rendered horses useless, and digital music hasn’t limited human artistry, the value of human doctors will remain beyond the reach of AI technology. Therefore, I won’t feel bad for choosing an AI over my fellow human. It is actually justifiable.

Health is a critical aspect of one’s life. For me, I am inclined to consider where (and from whom) I can acquire the quickest and most reliable health service. Since AI can learn from previous situations to provide input and automate complex future decision-making processes, it makes it easier and faster to arrive at concrete conclusions based on data and past experience (and family history too). This matters when a person has a medical emergency such as an accident or a heart attack. If Watson can compress into three seconds a process that normally takes an ordinary physician several weeks, this timesaving ability could enable early diagnosis and treatment intervention that saves more lives (while eliminating the costs associated with such delay and with post-treatment complications). An AI physician may be able to “tap the brains” of thousands of human doctors all at once.
Medical AI can perform with expert-level accuracy by sorting through (and prioritising) the unprecedented amount of medical data available today, combined with advances in natural language processing and social-awareness algorithms. An AI physician could even better customise my treatment by identifying optimal options based on my specific health needs and formulating a personalised approach to my care.
Machine learning (ML) can determine whether I am at risk of certain diseases faster than a human would, and long before the risk becomes critical. I believe that an AI physician is bound to provide better and faster health services than a human would. With an AI physician, patients may not have to deal with the shame and judgement they could feel with a human when talking about their sexual health. They may also not have to deal with the “irrational exuberance” or stereotyping that some human doctors exhibit.
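To illustrate, in a deliberately simplified way, what such risk prediction involves: a trained ML model ultimately reduces to learned weights plus a scoring function applied to a patient’s data. The sketch below is hypothetical throughout; the feature names, weights and bias are illustrative stand-ins for what a real system would learn from past patient records.

```python
import math

# Hypothetical weights a model might have learned from historical records.
# In a real system these would come from training, not be hand-written.
WEIGHTS = {"age": 0.04, "bmi": 0.09, "smoker": 1.2, "family_history": 0.8}
BIAS = -6.0

def risk_score(patient: dict) -> float:
    """Combine a patient's features with the learned weights and map the
    result through a logistic function to a risk value between 0 and 1."""
    z = BIAS + sum(WEIGHTS[k] * patient.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# Two hypothetical patients of the same age, with different risk factors.
low = risk_score({"age": 45, "bmi": 22, "smoker": 0, "family_history": 0})
high = risk_score({"age": 45, "bmi": 31, "smoker": 1, "family_history": 1})
assert low < high  # elevated risk is flagged before any symptom appears
```

The point of the sketch is not the arithmetic but the speed: once trained, scoring a patient is a handful of multiplications, which is why such a model can flag risk in milliseconds rather than weeks.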
However, I know that AI systems will be limited by several factors, such as implicit bias, the probability of malfunction, privacy breaches, and a lack of creative “common sense”. Slight changes in input signals can wreck ML models, perhaps because one of the challenges still facing AI is its inability to solve the “common sense” problem or to replicate situational awareness. The ability of an AI physician to take appropriate action based on situational context, and to decide without having to train on vast data pools, is perhaps not yet possible. But human doctors have similar limitations too. Human physicians have their biases and are prone to error. Even more, they are subject to fatigue, ill health, and phobias. With an AI doctor, we could perhaps witness a reorganisation of medical bureaucracies. For example, a patient should be able to see a doctor with shorter waiting times and perhaps from any location. AI will not limit people by space and time, as they could “log in” from their homes (and may not need to deal with the smell of hospitals either).
One obvious concern I’ll have with an AI doctor is accountability. If I am ill-advised or treated negligently, it would be difficult to determine who takes responsibility. Would I sue the AI manufacturer, the hospital, or my insurance company? Would the law apply strict liability, vicarious liability or product liability to the use of AI physicians? Would I have to sign a liability waiver even when I don’t understand how the AI physician arrives at a decision or instruction? (The ways in which ML algorithms make decisions often remain opaque to us, which raises questions about the acceptability of delegating responsibility to them.) In medicine, justification and trust are deeply linked, so this gap will be problematic not only for determining accountability, but also for situating causation and remedy in law.
By making this choice, I may also subject myself to discrimination (there may be increased inequalities in health outcomes between those who use human physicians and those who use AI). It may also put unnecessary pressure on human doctors, as patients will begin to compare (and rate) human services against those of AI systems. This may foster competition, or initiate tensions (much like the drivers’ protests, or the congressional debates on labour rights in today’s gig economy). Unlike a human physician, my AI doctor is susceptible to hacking, the result of which could be terminal.
Another concern is that AI may not be able to provide the human-to-human connection that a human physician would. Caregiving is not defined only by who saves lives better or best palliates suffering. Delivery also matters. Linking technical competence to caregiving, compassion and consolation is a central task of good medicine. Therefore, values of love, empathy and human kindness are invaluable to healthcare delivery. These values, however, are not obtained only from a physician.
The majority of the caregiving that goes on in this world is administered not by doctors or nurses, but by families and communities (most of it unpaid and uninsured).
As we navigate our lives through this Fourth Industrial Revolution, our health won’t be guaranteed by how intelligently we choose between a human physician and an AI, but by our access to quality, comprehensive social medicine, which includes mental, physical and emotional health provisions. To secure these, we need both technological and human services alike. The extraordinary promise of AI medicine shouldn’t just be about options; it should be about reaching the half of the world’s population who lack access to healthcare with at least one of them: be it human or AI.