Application of artificial intelligence to monitor disease progression in multiple sclerosis

This is part 2 of a 2-part interview.

As an emerging phenomenon, artificial intelligence (AI) technologies have the potential to transform many aspects of patient care and administrative processes across provider, payer, and pharmaceutical organizations. There are different types of AI, including machine learning, which learns from experience without being explicitly programmed, and deep learning, which learns from raw or near-raw data without the need for feature engineering.

Researchers at the Buffalo Neuroimaging Analysis Center (BNAC) have been at the forefront of developing and validating AI algorithms to improve patient care, particularly for patients with multiple sclerosis (MS). At the 2023 Americas Committee for Treatment and Research in Multiple Sclerosis (ACTRIMS) Forum, held February 23-25 in San Diego, California, Michael Dwyer, PhD, presented on the use of AI and MRI in MS care. In his presentation, Dwyer discussed several potential application areas, including MRI acquisition, image segmentation, diagnostics, and prognosis, among others.

At the forum, Dwyer, director of IT and neuroinformatics development at BNAC, spoke about clinicians' increasing exposure to AI and whether these techniques should become part of medical education in the future. In addition, he discussed the many ways AI can help both patients with MS and their physicians track the long-term course of the disease and identify patterns within it.

NeurologyLive®: Should learning from AI be more integrated into neurology education?

Michael Dwyer, PhD: That’s a good question; I don’t know if I have a short answer for that. I think they should be aware of it. So I would answer that in two ways. First, I’m going to give a very boring answer, because I think there is no substitute for the basics: basic statistics, basic familiarity with hypothesis testing. AI is a wonderful, powerful tool, but it is also so powerful that it can fool us very easily.


We’ve seen a lot of AI techniques that seemed promising and then fell flat because the statistical foundation didn’t necessarily exist, we didn’t test them properly, or we trained them on one dataset and they didn’t translate to another. I think it should be part of a more holistic framework for statistics and general research methods. Clinicians and the general public don’t need to know how to do deep learning. You don’t need to know how to sit there and code something in PyTorch. What I think they need to know now – because of the explosion we were talking about – is how to separate the wheat from the chaff and recognize what a reliable AI tool is. What can I trust, and what can I not trust?

The editors of Radiology and the more recent journal Radiology: Artificial Intelligence have issued guidelines for clinicians, dubbed the CLAIM guidelines, and checklists for publishers, to support the proper, ethical use of AI in these areas. This is very important for people to understand: What is good AI? What is bad AI? How much should you use it, and how should you use it? Take ChatGPT, something everyone is so excited about. It’s an amazing tool when you’re trying to write a document. If you look at clinicians and the real world of clinical life, they spend a lot of time filling out templates. Where they fill out forms, ChatGPT-type technology can probably help make those templates much better. But you must use it in such a way that an expert verifies everything it says; you cannot rely on it. That’s the key. We see the negative side where students use it to write their homework. But it’s just a tool that can be used badly or with great value. We have to balance these things.


Are there ways in which AI can be used specifically to monitor disease progression?

There are a few areas where it can help with that. Many people see AI as mimicking what humans do: we’re training a model to do something faster or maybe more reliably, but not fundamentally differently. This is called supervised learning – we tell it what we want it to learn. In unsupervised learning, we instruct an AI tool to examine data and see if it can find patterns on its own. There have been some really interesting advances in clustering, for example. Arman Eshaghi and his group in the UK were able to identify latent MS clusters and different types of disease pathology and say, “This one might take a different trajectory in the future.” If we identify these subtypes early on, we may be able to intervene earlier and know whether people respond differently to different treatments.
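The supervised/unsupervised distinction Dwyer describes can be sketched with a toy clustering example. The code below is purely illustrative – a minimal k-means written in NumPy, run on synthetic two-feature "patients" – and is not the method Eshaghi's group actually used; the feature axes and group structure are invented for the demo:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # distance from every point to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Synthetic "patients": two hypothetical latent subtypes in a 2-D
# feature space (axes are illustrative only, not real MS measurements).
rng = np.random.default_rng(42)
group_a = rng.normal([1.0, 1.0], 0.2, size=(50, 2))
group_b = rng.normal([3.0, 3.0], 0.2, size=(50, 2))
X = np.vstack([group_a, group_b])

# No one tells the algorithm the group memberships -- it recovers
# the two latent clusters from the data alone.
labels, centroids = kmeans(X, k=2)
```

Nothing here is labeled in advance; the algorithm discovers the two subgroups itself, which is the defining property of unsupervised learning.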

Another area is the ability to synthesize data from many different sources. Humans are good at analyzing and reducing; we’re not always good at putting lots of data points together in the same way. These AI tools can be a very helpful assistant for integrating genomics, connectomics, other serum markers, and imaging to make predictions based on many data points, as opposed to clinical algorithms where we only look at 2 or 3 things. That’s another possible way that we can [use AI], and we’re already beginning to see a shift in that.
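The idea of combining many data points into a single prediction, rather than the 2 or 3 a clinical rule might use, can be illustrated with a simple logistic score. Every detail below – the feature names, values, weights, and bias – is made up for illustration; a real multimodal model would learn its parameters from data:

```python
import math

# Hypothetical multimodal inputs for one patient (values invented).
patient = {
    "lesion_volume": 0.8,   # imaging
    "brain_atrophy": 0.6,   # imaging
    "serum_marker":  0.7,   # serum
    "risk_allele":   1.0,   # genomics
    "network_loss":  0.4,   # connectomics
}

# Invented weights; in practice these would be fit to outcome data.
weights = {
    "lesion_volume": 0.9, "brain_atrophy": 1.1, "serum_marker": 0.8,
    "risk_allele": 0.5, "network_loss": 0.7,
}

def progression_score(features, weights, bias=-2.0):
    """Squash a weighted sum of many inputs into a 0-1 score."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

score = progression_score(patient, weights)
```

The point of the sketch is structural: one function ingests arbitrarily many modalities at once, whereas a hand-built clinical algorithm typically branches on only a couple of thresholds.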

I should also mention that deep learning has been a big buzzword for a while. What sets deep learning apart from traditional machine learning is that it learns on raw data. With traditional machine learning, it would still learn the rules, but you had to decide what it was going to look at. You had to take an image and say, “I’m going to measure the thalamus, the cortex, the amount of lesions,” or, “I’m going to do a clinical assessment; I’m going to include the EDSS and these 4 scores, or these specific subscores.” With deep learning, you feed in much rawer data. For us, it works on raw MRI. Instead of extracting those features, we just tell it to look at who’s progressing and who’s not, and try to find predictors from the images. There are people doing gait analysis, people making wearables, and it’s all based on the raw data, so we don’t have to have someone sitting there being a gatekeeper of the information and saying, “These are the features we should pull out.”
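The contrast Dwyer draws can be sketched in code: traditional machine learning starts from a short vector of expert-chosen measurements, while deep learning consumes the (near-)raw image and learns its own filters. The snippet below is a toy illustration – the feature names are hypothetical, the "MRI slice" is random noise, and a hand-written convolution stands in for a learned layer:

```python
import numpy as np

# Traditional ML input: an expert pre-selects the features.
# These names and values are illustrative, not a real pipeline.
engineered_features = np.array([
    14.2,  # thalamic volume (mL)
    7.0,   # lesion count
    3.5,   # EDSS score
])

# Deep learning input: the (near-)raw image itself. The model learns
# its own internal features, e.g. via convolutional filters.
raw_mri_slice = np.random.default_rng(0).random((256, 256))  # toy 2-D slice

def conv2d(image, kernel):
    """Naive valid-mode 2-D convolution -- stand-in for a learned layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal edge detector -- the kind of filter a network would
# discover on its own rather than being told to measure the thalamus.
edge_kernel = np.array([[1.0, -1.0]])
feature_map = conv2d(raw_mri_slice, edge_kernel)
```

The engineered vector has 3 numbers an expert chose; the feature map is derived from every pixel, with no human gatekeeping which measurements matter.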


It’s very powerful there because it can pick up on things that we’re missing. It can see subtleties that we might not see: maybe one part of the thalamus is important but another part isn’t, and if we just extract the thalamus as a whole, we lose that. It’s potentially powerful, and it’s a very exciting field. We’ll see a lot more in the future. I understand we have to be careful. We must go in with open eyes. We have to go step by step and carefully validate these tools. But there is tremendous value here.

Transcript edited for clarity.