Medical treatment by Artificial Intelligence (AI) is becoming a reality.
Just recently, Tokyo University Hospital reported that Watson, an AI program developed by IBM, had successfully diagnosed a woman suffering from leukemia. Doctors had previously diagnosed her with a typical form of leukemia, but pharmacotherapy was unsuccessful. Watson examined her condition and identified her illness as a rare form of leukemia. Based on Watson's recommendation, the medical team altered the treatment strategy and was able to offer her better care.
SiliconAngle: Watson correctly diagnoses woman after doctors were stumped
One of the strengths of AI is the range of information it can draw on to reach the correct answer. An AI doctor can find an answer, if one exists, in a short time. Watson can refer to 20 million oncology studies when examining a patient's condition. We humans, by contrast, rely on experience and intuition, and the amount of experience any one person can accumulate is limited. No matter how industrious you are, it is impossible for a human to learn as much as an AI does.
On April Fools' Day last year, I wrote a joke article about a Google psychiatrist. But it will soon be no mere fantasy. Since evidence in psychiatric treatment is limited, it will take a while for AI to replace human decision making, but the time will certainly come.
My past entry: Google Psychiatrist launched: losing my job
The article above raised two concerns about the use of medical AI. One is privacy; the other is that AI would not work in regions where reliable data is lacking.
However, I do not think these issues are serious obstacles to the adoption of medical AI. Protecting patients' privacy matters in every medical context, not only this one. And deep learning methods may offer a way to find good answers even in regions with sparse data. Besides, these two issues are challenges for human doctors as well.
I believe medical AI will spread soon. The issue most likely to be debated is who is responsible for a mistaken treatment. So far, the decision of the doctor in charge has been respected. But in the near future, we will not be able to understand the AI's reasoning process. We will have to decide whom we trust, human or AI, in a critical situation.