Deep learning analysis of digital histopathology to predict tumor phenotype and enhance precision medicine
The current clinical paradigm for cancer assessment involves manual evaluation of hematoxylin and eosin (H&E)-stained histopathologic features, which are used to risk-stratify patients for therapeutic decision making. At present, H&E analysis centers mostly on facets of tumor cell biology (e.g., tumor invasion, anaplasia, necrosis) and lacks a rigorous assessment of the tumor microenvironment (TME), which recent studies suggest is an important determinant of treatment response and outcomes. This project will focus on developing deep learning-based image analysis methods to identify patterns in the TME that can be used to infer gene expression and predict treatment outcomes.
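As a point of orientation for applicants: whole-slide H&E images are typically far too large to feed to a model directly, so a common preprocessing step is to split the slide into fixed-size tiles that are scored independently. The sketch below illustrates that tiling step only; the tile size, array shapes, and function name are arbitrary choices, not this project's pipeline.

```python
import numpy as np

def tile_image(image, tile=256):
    """Split an H&E image array of shape (H, W, 3) into
    non-overlapping square tiles of shape (tile, tile, 3)."""
    h, w, _ = image.shape
    tiles = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            tiles.append(image[y:y + tile, x:x + tile])
    return np.stack(tiles)

# Toy stand-in for a (downsampled) slide region.
slide = np.zeros((1024, 768, 3), dtype=np.uint8)
tiles = tile_image(slide)
print(tiles.shape)  # (12, 256, 256, 3): 4 rows x 3 columns of tiles
```

A deep model would then score each tile, and tile-level predictions would be aggregated back to a slide-level phenotype.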
Leveraging cell-free DNA and machine learning to enhance precision medicine for brain metastases
While precision medicine approaches for brain metastases (BM) have demonstrated impressive intracranial responses, most patients are unable to benefit from this treatment paradigm because molecular analysis of BM tissue is often not feasible. To address this unmet need, we are developing multi-modal deep learning (DL) models that integrate cell-free DNA (cfDNA) genomic profiling and brain MRI into rigorous and reproducible approaches for detecting targetable vulnerabilities within BM. Given the wide availability of conventional MRI and the minimally invasive nature of cfDNA sampling, our novel approach should be accessible to virtually all patients and offers the opportunity to track BM evolution non-invasively over time.
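One common way to integrate two modalities like these is late (feature-level) fusion: each modality is summarized as a feature vector, the vectors are concatenated, and a classifier operates on the joint representation. The sketch below shows this pattern with a linear classifier; all dimensions, names, and the choice of fusion strategy are illustrative assumptions, not the project's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

cfdna_features = rng.normal(size=16)  # e.g. a mutation/fragmentomic profile
mri_embedding = rng.normal(size=32)   # e.g. output of an imaging encoder

def fused_prediction(cfdna, mri, weights, bias=0.0):
    """Concatenate the two modality vectors and apply a linear
    classifier, returning a probability-like score in (0, 1)."""
    x = np.concatenate([cfdna, mri])
    logit = x @ weights + bias
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid

weights = rng.normal(size=16 + 32)  # one weight per fused feature
score = fused_prediction(cfdna_features, mri_embedding, weights)
```

In practice the weights would be learned from paired cfDNA/MRI data, and each modality branch would itself be a trained encoder rather than raw features.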
Deep learning model monitoring in radiology
Deep learning models are poised to play an increasing role in clinical decision making within radiology. However, once a trained and validated model is deployed into the real world, the environment in which it operates is constantly evolving as imaging protocols, patient demographics, and disease prevalence shift. This project will explore methods to automatically monitor model performance and will develop and validate approaches for determining when shifts in the input data are degrading that performance.
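A simple baseline for this kind of monitoring is to compare the distribution of some input statistic seen in deployment against a reference distribution captured at training time, and flag drift when a two-sample test rejects the hypothesis that they match. The sketch below uses a Kolmogorov-Smirnov test on a hypothetical scalar image statistic; the statistic, threshold, and simulated shift are all illustrative assumptions, not the project's method.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical per-scan statistic (e.g. mean intensity after normalization).
reference = rng.normal(loc=0.0, scale=1.0, size=1000)   # training-time data
production = rng.normal(loc=0.5, scale=1.0, size=200)   # shifted protocol

def drift_detected(ref, prod, alpha=0.01):
    """Flag drift when the KS test rejects 'same distribution'
    at significance level alpha."""
    _, p_value = ks_2samp(ref, prod)
    return p_value < alpha

print(drift_detected(reference, production))  # the 0.5-sd shift is flagged
```

Real monitoring systems extend this idea to high-dimensional inputs and to model outputs, since a distribution shift does not always translate into a performance drop.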
Generalizable deep learning models for radiological images
Radiological images, particularly volumetric modalities such as computed tomography (CT) and magnetic resonance imaging (MRI), contain vast amounts of information about patients and can be used to identify a wide range of abnormalities, medical conditions, and quantitative biomarkers. Despite this, most AI models operating on radiological images are narrow and specialize in a single task. This project will explore how unsupervised and self-supervised learning methods that exploit the specific characteristics of radiological images can learn representations that are reusable across multiple tasks, increasing the efficiency of deep learning model development and moving closer to more general artificial intelligence models within radiology.
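The reuse pattern described above can be sketched very simply: a pretrained encoder maps each volume to a fixed-length embedding once, and lightweight task-specific heads consume that shared embedding. In the toy version below, the "encoder" is just pooled intensity statistics and the heads are random linear maps; every name and shape is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

def pretrained_encoder(volume):
    """Stand-in for a frozen self-supervised encoder: maps a 3-D
    volume to a fixed-length embedding (here, pooled statistics)."""
    return np.array([volume.mean(), volume.std(),
                     volume.max(), volume.min()])

def linear_head(embedding, weights, bias=0.0):
    """A task-specific head: cheap to train because the shared
    encoder is frozen and reused across tasks."""
    return float(embedding @ weights + bias)

scan = rng.normal(size=(32, 64, 64))  # hypothetical CT/MRI volume
z = pretrained_encoder(scan)          # computed once, reused below

# Two different downstream tasks share the same embedding z.
abnormality_score = linear_head(z, rng.normal(size=4))
biomarker_estimate = linear_head(z, rng.normal(size=4), bias=1.0)
```

The efficiency gain comes from amortizing the expensive encoder across tasks: each new task only requires training a small head on the stored embeddings.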
Contact: jkalpathy-cramer [at] mgh.harvard.edu, cpb28 [at] nmr.mgh.harvard.edu (Christopher Bridge) and akim46 [at] mgh.harvard.edu (Albert Kim)