
MIT Machine Learning Tool Reduces Cancer Treatment Toxicity

A machine learning algorithm developed at MIT could reduce the toxicity of brain cancer treatment by creating a personalized delivery regimen.


By Jessica Kent

MIT researchers have developed a machine learning algorithm that could proactively adjust chemotherapy and radiotherapy dosing for patients with glioblastoma, minimizing the toxic effects of these treatments.

Glioblastoma, an aggressive, malignant tumor of the brain or spinal cord, requires patients to receive radiation therapy and multiple drugs each month. Although providers generally dispense safe drug doses to shrink tumors as much as possible, these strong therapies can still cause debilitating side effects in patients.

Using a “self-learning” machine learning technique, MIT Media Lab researchers created a model that could make dosing regimens less toxic but equally effective.

The algorithm examines current treatment regimens and repeatedly adjusts the doses until it finds an optimal treatment plan: one with the lowest possible potency and frequency of doses that still reduces tumor size.

“We kept the goal, where we have to help patients by reducing tumor sizes but, at the same time, we want to make sure the quality of life — the dosing toxicity — doesn’t lead to overwhelming sickness and harmful side effects,” said Pratik Shah, a Principal Investigator at the Media Lab.

Researchers tested the model on 50 simulated patients and found that the model designed treatment plans that reduced the doses’ potency by a quarter or a half while still retaining the same tumor-shrinking potential.

Additionally, the model often skipped doses completely, scheduling administrations only twice a year instead of monthly.

The team used a reinforcement learning technique to train the model, in which the algorithm received a reward or penalty depending on whether its action worked toward the desired outcome. The algorithm then adjusted its actions to achieve that outcome.

The model analyzed the treatment regimens, and at each planned dosing interval, it chose to either dispense or withhold a dose. If it did administer a dose, it decided whether the entire dose or only a portion was necessary. If the action shrank the average size of the tumor, the algorithm received a reward.

However, researchers had to ensure that the model didn’t simply choose to administer full doses each time in an effort to reduce tumor size. Thus, the team penalized the algorithm if it chose to dispense full doses to all patients, which caused the model to choose smaller, less frequent doses instead.
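The reward-and-penalty loop described above can be sketched with a toy tabular Q-learning agent. Everything here — the tumor-growth dynamics, the dose effect, the toxicity weight, and the state discretization — is an illustrative assumption, not the MIT model, which was trained on simulated glioblastoma patients with far richer dynamics. The sketch only shows the mechanism: shrinking the tumor earns a reward, administering a dose incurs a penalty, so the agent learns to dose heavily only when the tumor is large.

```python
import random

# Illustrative constants for a toy tumor/dose simulation
# (assumed values, not taken from the MIT study).
ACTIONS = [0.0, 0.25, 0.5, 1.0]  # fraction of a full dose to administer
GROWTH = 1.05                    # untreated per-interval growth factor
EFFECT = 0.5                     # fractional shrinkage per full dose
TOX_WEIGHT = 0.2                 # penalty weight on administered dose

def step(size, dose):
    """One dosing interval: the tumor grows, then the dose shrinks it.
    Reward = shrinkage achieved minus a toxicity penalty for dosing."""
    new_size = max(0.0, size * GROWTH * (1.0 - EFFECT * dose))
    reward = (size - new_size) - TOX_WEIGHT * dose
    return new_size, reward

def bucket(size):
    """Discretize tumor size into a small tabular state space."""
    return min(int(size * 10), 20)

def train(episodes=3000, horizon=12, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning: learn a dose fraction for each tumor-size state."""
    q = {}
    rng = random.Random(0)
    for _ in range(episodes):
        size = 1.0  # normalized starting tumor size
        for _ in range(horizon):
            s = bucket(size)
            if rng.random() < eps:  # explore a random dose
                a = rng.randrange(len(ACTIONS))
            else:                   # exploit the best known dose
                a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
            size, r = step(size, ACTIONS[a])
            s2 = bucket(size)
            best_next = max(q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return q

def policy(q, size):
    """Greedy dose choice for a given tumor size."""
    s = bucket(size)
    return ACTIONS[max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))]
```

Because the toxicity penalty is charged whenever a dose is given, the learned policy tends to administer full or partial doses while the simulated tumor is large and to withhold doses once it is small — the same skipping-and-reducing behavior the article describes.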

“If all we want to do is reduce the mean tumor diameter, and let it take whatever actions it wants, it will administer drugs irresponsibly,” Shah explained.

“Instead, we said, ‘We need to reduce the harmful actions it takes to get to that outcome.’”

The research team also designed the model to treat each patient individually, which could lead to more personalized treatments and improved outcomes.

Traditionally, a single dosing regimen is applied to groups of patients without considering their different tumor sizes, medical histories, or genetic profiles. This can result in individuals responding poorly to therapies that are not targeted to their needs.

The algorithm provides an alternative method, offering the opportunity to tailor treatments to each patient’s specific disease characteristics.

“We said to the model, ‘Do you have to administer the same dose for all the patients?’ And it said, ‘No. I can give a quarter dose to this person, half to this person, and maybe we skip a dose for this person,’” said Shah.

“That was the most exciting part of this work, where we are able to generate precision medicine-based treatments by conducting one-person trials using unorthodox machine-learning architectures.”

The team stated that this algorithm is an improvement over current methods of glioblastoma therapy, where providers administer doses to patients and wait to see how they respond.

“Humans don’t have the in-depth perception that a machine looking at tons of data has, so the human process is slow, tedious, and inexact,” said Nicholas J. Schork, Professor and Director of Human Biology at the J. Craig Venter Institute, and an expert in clinical trial design.

“Here, you’re just letting a computer look for patterns in the data, which would take forever for a human to sift through, and use those patterns to find optimal doses.”

