Project Title: Interpretable Deep Learning Framework for Predictive Modeling
Tags: machine learning, AI framework, prediction, knowledge distillation, model interpretability, mimic learning, multitask learning, reinforcement learning, gradient boosting trees, personalized healthcare, phenotyping
Posted Date: Oct 9, 2017 1:09 PM
The exponential growth of Electronic Health Records (EHR) has created an urgent need in computational phenotyping research to discover meaningful data-driven representations and patterns of disease. Deep learning models deliver robust, state-of-the-art prediction on computational phenotyping tasks, but their lack of interpretability is a serious obstacle for clinicians involved in decision-making.
USC researchers have developed a novel knowledge-distillation approach, called Interpretable Mimic Learning, that learns interpretable features for robust prediction while mimicking the performance of deep learning models. The resulting interpretable models help primary care providers, physicians, and clinical experts monitor patients and make care decisions. Beyond healthcare, the technology can be applied to other domains such as speech processing, computer vision, finance, and marketing.
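The general mimic-learning recipe implied above can be sketched as follows. This is a minimal illustration of the standard knowledge-distillation pattern, not the patented framework itself: a neural network (the "teacher") is trained on the original labels, and gradient boosting trees (the interpretable "student") are then trained to regress onto the teacher's soft predictions. All model choices and parameters here are illustrative assumptions using scikit-learn.

```python
# Hypothetical sketch of the mimic-learning (knowledge distillation) recipe.
# Step 1: train a neural-network teacher on the original hard labels.
# Step 2: train an interpretable gradient-boosting-tree student on the
#         teacher's soft predictions instead of the hard labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in for an EHR-derived feature matrix and phenotype label.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Teacher: a small neural network (stands in for the deep model).
teacher = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
teacher.fit(X, y)

# Soft labels: the teacher's predicted probability of the positive class.
soft_labels = teacher.predict_proba(X)[:, 1]

# Student: gradient boosting trees regress onto the soft labels; the trees'
# splits and feature importances provide interpretable decision rules.
student = GradientBoostingRegressor(n_estimators=100, random_state=0)
student.fit(X, soft_labels)

# Fidelity: how closely the interpretable student tracks the teacher.
fidelity = np.corrcoef(student.predict(X), soft_labels)[0, 1]
```

Because the student sees the teacher's soft probabilities rather than hard 0/1 labels, it inherits some of the teacher's learned decision boundary while remaining a tree ensemble that clinicians can inspect.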
Benefits
- Achieves or exceeds state-of-the-art prediction performance
- Provides interpretable features and decision rules
- Enables phenotype discovery for clinical decision-making
- Improves quality of patient care
- Speeds adoption among clinical staff
Stage of Development
Provisional patent application filed
Nikolaus Traitler, Licensing Associate
USC Stevens Center for Innovation
NCD 2017-057 - Interpretable Deep Learning Framework for Predictive Modeling.pdf