About Our Research

We have been researching as a team for a few years in collaboration with professors, researchers, and leaders from prestigious universities such as Stanford, Berkeley, UPenn, and UCSF. Below is a list of relevant papers and the Machine Learning topics we are currently researching.



Expert-Augmented Machine Learning (EAML)

The success of Machine Learning is often limited by the quality and quantity of available data, while its adoption is limited by the level of trust that models afford users. Human and machine performance are commonly compared empirically to decide whether a certain task should be performed by a computer or an expert. In reality, the optimal learning strategy may involve combining the complementary strengths of humans and machines. Here we present Expert-Augmented Machine Learning (EAML), an automated method that guides the extraction of expert knowledge and its integration into machine-learned models.

Efstathios D. Gennatas, Jerome H. Friedman, Lyle H. Ungar, Romain Pirracchio, Eric Eaton, Lara Reichman, Yanet Interian, Charles B. Simone II, Andrew Auerbach, Elier Delgado, Mark J. Van der Laan, Timothy D. Solberg, and Gilmer Valdes


Paper (coming soon)
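
To give a concrete flavor of the idea, here is a minimal sketch in Python of what combining data-driven and expert-provided knowledge can look like. This is not our implementation of EAML: the rule extractor (shallow trees), the expert penalties (random numbers standing in for expert feedback), and the weighting scheme are all illustrative assumptions.

# Minimal sketch of the general idea behind EAML (not the authors' method):
# 1) extract candidate decision rules from data,
# 2) have a domain expert assign each rule a penalty,
# 3) keep the rule that balances empirical error against expert penalty.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: shallow randomized trees serve as a stand-in rule extractor.
rules = [DecisionTreeClassifier(max_depth=2, max_features=2,
                                random_state=seed).fit(X_tr, y_tr)
         for seed in range(10)]

# Step 2: hypothetical expert penalties in [0, 1]; in practice these would
# come from querying domain experts about each extracted rule.
rng = np.random.default_rng(0)
expert_penalty = rng.uniform(size=len(rules))

# Step 3: rank rules by a weighted mix of empirical error and expert penalty.
lam = 0.5  # assumed trade-off between data and expert knowledge
scores = [lam * (1 - r.score(X_te, y_te)) + (1 - lam) * p
          for r, p in zip(rules, expert_penalty)]
best = rules[int(np.argmin(scores))]
print(export_text(best))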

The Additive Tree

Building more accurate decision trees with the Additive Tree

Current decision trees, such as Classification and Regression Trees (CART), have played a predominant role in fields such as medicine, due to their simplicity and intuitive interpretation. However, such trees suffer from intrinsic limitations in predictive power. We developed the additive tree, a theoretical approach to generate a more accurate and interpretable decision tree, which reveals connections between CART and gradient boosting. The additive tree exhibits superior predictive performance to CART, as validated on 83 classification tasks.

José Marcio Luna, Efstathios D. Gennatas, Lyle H. Ungar, Eric Eaton, Eric S. Diffenderfer, Shane T. Jensen, Charles B. Simone II, Jerome H. Friedman, Timothy D. Solberg, and Gilmer Valdes


Read Paper
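
The additive tree itself is not available in scikit-learn, so the short sketch below only sets up the two endpoints it connects: a single CART-style tree and a gradient-boosted ensemble. The dataset and hyperparameters are illustrative.

# Baseline comparison of the two model families the additive tree relates:
# a single interpretable tree (CART) and a gradient-boosted ensemble.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

cart = DecisionTreeClassifier(max_depth=4, random_state=0)
boosting = GradientBoostingClassifier(max_depth=2, n_estimators=100,
                                      random_state=0)

for name, model in [("CART", cart), ("Gradient boosting", boosting)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")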

MediBoost

A Patient Stratification Tool for Interpretable Decision Making in the Era of Precision Medicine

Machine learning algorithms that are both interpretable and accurate are essential in applications such as medicine, where errors can have dire consequences. Unfortunately, there is currently a tradeoff between accuracy and interpretability among state-of-the-art methods. Decision trees are interpretable and are therefore used extensively throughout medicine for stratifying patients. Current decision tree algorithms, however, are consistently outperformed in accuracy by other, less interpretable machine learning models, such as ensemble methods.

José Marcio Luna, Eric Eaton, Charles B. Simone II, Lyle H. Ungar, Timothy D. Solberg, and Gilmer Valdes


Read Paper