Low-dimensional Representations Lab

This lab develops methodologies that extract and exploit latent low-dimensional structure when learning predictive models from high-dimensional data sources. The lab brings together tools from probability and statistics, geometry, topology, and computer science to study techniques such as variable selection, graphical modeling, classification, dimensionality reduction, matrix estimation, and manifold learning, in concert with other projects and labs in CPCP.
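As a minimal illustration of the low-dimensional-structure theme (a sketch on synthetic data, not drawn from the lab's publications), the snippet below generates high-dimensional observations that actually live near a low-dimensional subspace and recovers that subspace with PCA via the SVD; all variable names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 500, 100, 3            # samples, ambient dimension, latent dimension
Z = rng.normal(size=(n, k))      # latent low-dimensional factors
W = rng.normal(size=(k, d))      # loadings mapping latent -> ambient space
X = Z @ W + 0.1 * rng.normal(size=(n, d))  # noisy high-dimensional data

# PCA via SVD of the centered data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Most of the variance is captured by the top-k principal components,
# reflecting the latent k-dimensional structure in the d-dimensional data.
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"variance captured by top {k} components: {explained:.3f}")
```

In this setting the top three singular values dominate the spectrum, so a 3-dimensional representation retains nearly all of the signal in the 100-dimensional observations.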

Related CPCP Publications

When can Multi-Site Datasets be Pooled for Regression? Hypothesis Tests, L2-consistency and Neuroscience Applications. Zhou HH, Zhang Y, Ithapu VK, Johnson SC, Wahba G, Singh V. Proceedings of the International Conference on Machine Learning (ICML), 2017

Structure-leveraged methods in breast cancer risk prediction. Fan J, Wu Y, Yuan M, Page D, Liu J, Ong IM, Peissig P, Burnside E. Journal of Machine Learning Research 17:1-15, 2016

Hypothesis testing in unsupervised domain adaptation with applications in neuroimaging. Zhou H, Ravi S, Ithapu V, Johnson S, Wahba G, Singh V. Advances in Neural Information Processing Systems (NIPS), 2016

Minimax optimal rates of estimation in high dimensional additive models. Yuan M, Zhou D-X. Annals of Statistics 44(6):2564-2593, 2016

Degrees of freedom in low rank matrix estimation. Yuan M. Science China Mathematics 59(12):2485-2502

Lead

Ming Yuan

Investigators

Grace Wahba

Shulei Wang

Han Chen

Resources

CPCP 2017 Retreat: Phenotype Models for Breast Cancer Screening Symposium Video