Low-dimensional Representations Lab

This lab develops methodologies that extract and exploit latent, low-dimensional structure when learning predictive models from high-dimensional data. The lab brings together tools from probability and statistics, geometry, topology, and computer science to study techniques such as variable selection, graphical modeling, classification, dimensionality reduction, matrix estimation, and manifold learning, in concert with other projects and labs in CPCP.
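As a minimal illustration of the theme (not code from the lab), the sketch below shows one of the techniques listed above, dimensionality reduction: synthetic high-dimensional data that secretly lies near a 3-dimensional subspace is generated, and principal component analysis via the singular value decomposition recovers that latent low-dimensional representation.

```python
import numpy as np

# Illustrative sketch only: recovering latent low-dimensional structure
# from high-dimensional data with PCA. All names and parameters here are
# hypothetical choices for the demo, not the lab's methods.

rng = np.random.default_rng(0)

# Synthetic data: 200 samples in 50 dimensions lying near a
# 3-dimensional subspace, plus small Gaussian noise.
n, d, k = 200, 50, 3
latent = rng.normal(size=(n, k))   # hidden low-dimensional coordinates
basis = rng.normal(size=(k, d))    # embedding into the 50-dim ambient space
X = latent @ basis + 0.01 * rng.normal(size=(n, d))

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# The singular value spectrum drops sharply after the first k values,
# revealing the latent dimension.
explained = (s ** 2) / np.sum(s ** 2)

# Project onto the top-k principal directions to obtain the
# low-dimensional representation of the data.
Z = Xc @ Vt[:k].T                  # shape (n, k) = (200, 3)
```

With the noise level used here, the top three principal components capture essentially all of the variance, which is how a sharp spectral drop signals usable low-dimensional structure in practice.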

Related CPCP Publications

Structure-leveraged methods in breast cancer risk prediction. Fan J, Wu Y, Yuan M, Page D, Liu J, Ong IM, Peissig P, Burnside E. Journal of Machine Learning Research 17:1-15, 2016

Hypothesis testing in unsupervised domain adaptation with applications in neuroimaging. Zhou H, Ravi S, Ithapu V, Johnson S, Wahba G, Singh V. Advances in Neural Information Processing Systems (NIPS), 2016

Minimax optimal rates of estimation in high dimensional additive models. Yuan M, Zhou D-X. Annals of Statistics 44(6):2564-2593, 2016

Degrees of freedom in low rank matrix estimation. Yuan M. Science China Mathematics 59(12):2485-2502, 2016

On tensor completion via nuclear norm minimization. Yuan M, Zhang C-H. Foundations of Computational Mathematics 16(4):1031-1068, 2016

Lead

Ming Yuan

Investigators

Grace Wahba

Shulei Wang

Han Chen