Artificial intelligence and treatment decisions

Published: 24 May 2019
Dr Pratik Shah - Massachusetts Institute of Technology, Cambridge, USA

Dr Pratik Shah speaks to ecancer at the 2019 MyKE Myeloma meeting in Barcelona about the use of artificial intelligence (AI) to guide treatment decisions.

Dr Shah gives examples of how AI can be used in the healthcare setting - which include the use of computational images for pathological staining and algorithms to determine dose recommendations to limit toxicity in glioblastoma patients.

He also describes some of the ethical concerns that should be considered while using this type of technology in this way and the benefits AI can provide to healthcare professionals and patients.

Within the next 5 years, Dr Shah hopes that this technology will become more accessible, scalable and safer for both patients and physicians to use.

ecancer's filming has been kindly supported by Amgen through the ecancer Global Foundation. ecancer is editorially independent and there is no influence over content.

I was an invited speaker, and my group at MIT invents artificial intelligence and machine learning technologies for many health-related problems; some of our work overlaps with oncology. Today in my talk I shared three examples of my research at MIT. One of them was generating images that allow physicians to do computational staining of pathology slides at the point of care. These are algorithms that can take photographs of unstained tissue biopsies and computationally dye-stain them without using the tissue. We also have algorithms that can take existing photographs of stained tissues and de-stain them. This allows physicians to avoid using up biopsies at the point of care and instead use photographs to make some of these decisions.
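The transcript does not describe how the staining algorithms work, so the following is only a toy illustration, not Dr Shah's actual method: virtual staining can be framed as learning a pixel-wise mapping from unstained to stained intensities. Here that mapping is a simple least-squares fit on synthetic data (all values are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: grayscale "unstained" pixel intensities and their stained
# RGB counterparts, related here by a known (hypothetical) linear map.
n = 1000
unstained = rng.random((n, 1))
true_map = np.array([[0.8, 0.2, 0.6]])  # hypothetical stain response
stained = unstained @ true_map + 0.01 * rng.normal(size=(n, 3))

# Fit a least-squares "virtual stain": stained ~= unstained @ W
W, *_ = np.linalg.lstsq(unstained, stained, rcond=None)

# Apply the learned stain to a new unstained pixel
new_patch = np.array([[0.5]])
virtually_stained = new_patch @ W
```

Real systems use far richer image-to-image models trained on paired slide photographs, but the principle is the same: learn the stain transformation from data, then apply it computationally instead of chemically.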

I also shared some of our work on reducing toxicity in end-of-life glioblastoma patients. We have a digital algorithmic system that makes dosing recommendations for these patients, where the machine learning algorithm determines the fewest doses these patients need to shrink their tumour without experiencing toxicity. These are new digital clinical trials – clinical algorithmic systems we invented at MIT.
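The talk does not detail the dosing algorithm, so this is a deliberately simplified sketch of the underlying idea, not the MIT system: given a model of how each dose shrinks the tumour and adds toxicity, search for the fewest doses that reach a shrinkage target within a toxicity budget. All parameters here are illustrative:

```python
def simulate(n_doses, shrink_per_dose=0.8, tox_per_dose=1.0):
    """Return (final tumour fraction, cumulative toxicity) for a
    toy model where each dose shrinks the tumour multiplicatively."""
    tumour = 1.0
    for _ in range(n_doses):
        tumour *= shrink_per_dose
    return tumour, n_doses * tox_per_dose

def fewest_doses(target=0.5, tox_budget=5.0, max_doses=20):
    """Smallest dose count that meets the shrinkage target within
    the toxicity budget, or None if no schedule qualifies."""
    for n in range(max_doses + 1):
        tumour, toxicity = simulate(n)
        if tumour <= target and toxicity <= tox_budget:
            return n
    return None
```

In practice the learned system adapts doses to individual patient responses rather than exhaustively searching a fixed model, but the objective is the same: the least treatment that still achieves the clinical goal.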

In the third part I talked about regulation and inclusivity of AI and machine learning, and how we should regulate this technology – what are the standards, ethics and explainability of this new technology. That way, the oncology care physicians who come here, the patients' foundations and the government agencies we work with can benefit from AI without having to worry too much about all these other issues.

What are the ethical considerations?

I think there’s a broader set of ethical conversations, but in medicine some of the key questions are: are the datasets on which your algorithm trains inclusive of all the patient populations that usually suffer from the disease? More than the ethics of AI, the ethics of humans is also being examined here. Humans have not always been the most ethical actors in some of our clinical decision-making; we still operate with the small amount of data that’s available and do the best we can. I think with machine learning systems we have a chance to get rid of those biases and create a new kind of ethical algorithmic intervention.

That’s one, and a second is obviously: when an algorithm makes decisions, are those decisions recommendations to a physician or are they actually actions that the algorithm is asking the physician to take? That should be carefully separated out.

What are the advantages to using AI? Does it help you personalise treatments?

We showed some work today in the talk where we have personalised dosing regimens for many of our oncology patients: the machine learning system can learn individual patient characteristics and, after learning them, make recommendations for personalised precision medicine dosing. That’s one of the advantages of using machine learning: human physicians may find this hard because of time constraints, the large amounts of data they have to process, or because the pharmacological development model they were given didn’t consider those things. Machine learning can help you get around and solve those problems.

How do you expect to see this field develop in the next few years?

In my research and my group at MIT, we broadly work with patients, foundations and individual citizens. I should have mentioned at the beginning that I have no conflicts and no significant financial interests. Over the next five years we are trying to create benchmarking for some of this work and to make the technology more accessible, explainable, safer and inclusive – really useful for the patients and physicians who need it; that’s the broader vision. In the field there are internal conversations about how we make sure machine learning and AI reach people in communities like this one, who might want the technology but don’t know how to access it, and about what we, as machine learning people, can learn from this community of clinicians and physicians about what we need to do in our research groups to invent technology that’s scalable.