Results for: Training
CPCP Seminar: Big Data in Behavioral Medicine Seminar Video
Complex chronic diseases are creating a growing burden on society. This burden affects the quality of life of many individuals, in addition to the financial costs associated with treatment. Every year, a large percentage of deaths in the United States are caused by poor diet, physical inactivity, or substance abuse (primarily tobacco). These problems are fundamentally behavioral in nature. In addition, developmental disorders such as autism are diagnosed through the assessment of behaviors. Dr. James Rehg talks about the role of Big Data in these types of behavioral health disorders. Dr. Rehg and his colleagues work with the new types of sensors that are becoming increasingly available to measure behavioral patterns. They have developed a number of computational models to improve the analysis of these measurements. These models allow them to make quantitative statements about which types of therapies have the greatest effects on behavior.
CPCP Privacy Symposium 2016: Privacy Preserving Federated Biomedical Data Analysis Symposium Video
Learn about the challenges associated with technical approaches for utilizing data from multiple sources to build more accurate machine learning algorithms from Dr. Xiaoqian Jiang. We know that having more types of data, and data from distributed sources, provides a stronger platform for research and discovery with machine learning. To address privacy in this context, Dr. Jiang proposes a privacy-preserving distributed data framework and describes various models implemented to solve such problems. Dr. Jiang's research group has produced versions of this framework in R and Java, as well as an online web service. All versions of this framework are available for other researchers to use in their own analyses.
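The core idea behind such privacy-preserving distributed analysis can be sketched in a few lines: each site computes aggregate statistics locally and shares only those aggregates with a coordinator, so patient-level records never leave the site. This is an illustrative sketch, not Dr. Jiang's actual framework; the `ldl` field and the simple mean computation are hypothetical stand-ins for the models his group implements.

```python
# Illustrative sketch of federated analysis (not Dr. Jiang's framework):
# sites exchange only aggregate sufficient statistics, never raw records.

def local_summary(records):
    """Each site computes (sum, count) of a variable on its own data."""
    values = [r["ldl"] for r in records]  # "ldl" is a hypothetical field
    return sum(values), len(values)

def federated_mean(site_summaries):
    """Coordinator combines the aggregates; no patient record is shared."""
    total = sum(s for s, _ in site_summaries)
    count = sum(n for _, n in site_summaries)
    return total / count

site_a = [{"ldl": 120}, {"ldl": 140}]
site_b = [{"ldl": 100}, {"ldl": 160}, {"ldl": 130}]
summaries = [local_summary(site_a), local_summary(site_b)]
print(federated_mean(summaries))  # → 130.0
```

The same pattern generalizes from a mean to richer models: as long as a model can be fit from sufficient statistics or aggregated updates, each site can contribute without disclosing individual records.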
CPCP Privacy Symposium 2016: Privacy is an Essentially Contested Concept Symposium Video
What does privacy mean in the context of Big Data? Dr. Deirdre Mulligan discusses various definitions of privacy in law, philosophy, and computer science. Traditional approaches to privacy in data place most of the responsibility for controlling the flow of private information on the individual, through mechanisms such as consent. This idea, known as informational self-determination, has limitations that have been exposed by machine learning on big data. These limitations lead to violations of privacy, such as uncovering the identity of individuals when it has been withheld, or unexpected inferences drawn from data that were intentionally disclosed. Dr. Mulligan suggests new ways of viewing privacy that evolve as social life and technology change.
CPCP Privacy Symposium 2016: Proving that Programs Do Not Discriminate Symposium Video
As the field of Artificial Intelligence (AI) continues to advance, an increasing number of predictions about humans are made by computer programs. These predictions affect decisions in a wide variety of areas, including who should get a job, a bank loan, or early release from prison. As we increasingly rely on AI programs to help make decisions about people's lives, it becomes vitally important that we are able to ensure the programs we depend on are not unfairly biased against certain groups of people. Dr. Aws Albarghouthi of the University of Wisconsin-Madison Computer Sciences department uses his expertise in programming languages to address this issue of fairness.
CPCP Privacy Symposium 2016: Panel Discussion Symposium Video
Dr. Pilar Ossorio from the Morgridge Institute for Research at the University of Wisconsin-Madison, and Dr. Peggy Peissig from the Biomedical Informatics Research Foundation, join Dr. Aws Albarghouthi, Dr. Deirdre Mulligan, and Dr. Xiaoqian Jiang to answer questions from the audience about privacy and fairness in the context of computational analysis.
CPCP Privacy Symposium 2016: Welcome Symposium Video
Dr. Pilar Ossorio and Dr. Mark Craven welcome attendees to the 2016 Big Privacy Symposium. At the Center for Predictive Computational Phenotyping (CPCP), we have expertise in the ethical and legal aspects of analyzing complex datasets. Experts in these fields work with our experts in computational analysis to ensure that the new methods for improving human health developed at CPCP are discovered and used in a legal and ethical manner.
CPCP Retreat 2016: High-Throughput Computing in Support of High-Throughput Phenotyping Symposium Video
Dr. Miron Livny describes the High-Throughput Computing opportunities available at UW-Madison. In the past year, his team worked with close to 200 research teams, utilizing a total of 320 million computing hours. The High-Throughput Computing group facilitates data processing by "submitting locally and running globally" using many resources, including the Open Science Grid (OSG). One of the most important resources the group has to offer is its team of expert consultants and liaisons, who help scientists learn how to use High-Throughput Computing to effectively and efficiently accomplish their research goals.
CPCP Retreat 2016: High-Throughput Predictive Phenotyping from Electronic Health Records Symposium Video
Ross Kleiman describes his work, as part of the EHR-based Phenotyping project at the CPCP, creating predictive models of diseases, such as heart attack or breast cancer, from electronic health records. Using the extensive medical record data collected by Marshfield Clinic in Marshfield, WI over the past 40 years, together with high-throughput computing resources, the EHR-based Phenotyping project is able to make predictions about the medical outcomes of patients. Recent work predicts the risk of specific patients developing specific diseases, as well as the risk of patients being readmitted to the hospital within the next thirty days. This model serves as a machine learning pipeline for forming diagnoses from EHR records and is an initial baseline for this new area of pan-diagnostic machine learning research.
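To make the idea of such a predictive pipeline concrete, here is a hypothetical sketch (not the CPCP pipeline): a patient's diagnosis history becomes a bag-of-codes feature vector, which a logistic model turns into a risk score. The codes, weights, and bias below are illustrative stand-ins; in a real system the weights would be learned from the clinical data.

```python
import math

# Hypothetical EHR risk-scoring sketch (not the CPCP pipeline): count
# diagnosis codes in a patient's history, then apply a logistic model
# whose weights would normally be learned from training data.

def featurize(events, vocabulary):
    """Count how often each diagnosis code appears in the history."""
    return [sum(1 for e in events if e == code) for code in vocabulary]

def risk(features, weights, bias):
    """Logistic model: map a weighted feature sum to a (0, 1) risk score."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

vocabulary = ["I21", "E11", "J18"]   # illustrative ICD-10-style codes
weights = [1.2, 0.4, 0.8]            # hand-set for the example, not learned
history = ["E11", "I21", "E11"]      # one patient's past diagnoses
print(round(risk(featurize(history, vocabulary), weights, bias=-2.0), 3))  # → 0.5
```

Real pipelines add many more feature types (labs, medications, demographics, timing), but the featurize-then-score shape is the same.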
CPCP Retreat 2016: Using Active Learning to Phenotype Electronic Medical Records Symposium Video
In the analysis of Electronic Medical Records, labeled examples are records for which we know which medical conditions, such as cataracts or diabetes, are indicated by a specific patient's health record. The availability of this type of labeled record is essential for the development of robust machine learning models, because the labels serve as a ground truth against which researchers can test their models. Unfortunately, these labeled examples are difficult and expensive to obtain, because the labeling is typically done by a medical expert who may spend anywhere from 30 minutes to 6 hours determining each disease label for each patient. In this talk, Ari Biswas describes how his research in the EHR-based Phenotyping research group at the CPCP addresses this labeling problem with an active learning approach. This active learning method learns to label EHRs by iteratively fitting a labeling model and then improving it by querying a medical expert for the labels of the examples about which the model is most uncertain. The results of this research on a test example show that the active learning method learns to label patients using fewer labeled examples than a model that learns from randomly selected labeled records.
CPCP Retreat 2016: Entity Matching for EHR- and Transcriptome-based Phenotyping Symposium Video
Dr. AnHai Doan describes the task of entity matching across EHR- and transcriptome-based data and introduces a new tool called Magellan that allows non-experts to perform entity matching on their datasets. Entity matching identifies records across multiple data sets that refer to the same real-world entity, for example identifying all of a patient's data when they have been treated at different medical offices, or selecting all patients who have been treated with a specific drug. This is a challenging task because of variation in the data, such as spelling mistakes or the use of abbreviations. Magellan fills an important gap in the data science pipeline by providing a step-by-step workflow that lets individuals perform entity matching on their own data without becoming experts in the field. This can potentially save research groups thousands of dollars that would otherwise be spent hiring an expert. The Magellan package will be released in 2016 as a Python package.
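A minimal illustration of why entity matching is hard, and how similarity scoring copes with spelling variation and abbreviations, can be written with Python's standard library. This sketch is not the Magellan API; the record layout and the 0.8 similarity threshold are assumptions made for the example.

```python
from difflib import SequenceMatcher

# Illustrative entity matcher (not the Magellan API): normalize each name,
# score candidate pairs with string similarity, keep pairs above a threshold.

def normalize(name):
    """Lowercase, drop periods, collapse whitespace."""
    return " ".join(name.lower().replace(".", "").split())

def similarity(a, b):
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def match(table_a, table_b, threshold=0.8):
    """Return index pairs (i, j) of records judged to be the same entity."""
    pairs = []
    for ia, ra in enumerate(table_a):
        for ib, rb in enumerate(table_b):
            if similarity(ra["name"], rb["name"]) >= threshold:
                pairs.append((ia, ib))
    return pairs

clinic_a = [{"name": "Dr. John A. Smith"}, {"name": "Mary Jones"}]
clinic_b = [{"name": "John A Smith"}, {"name": "M. Jones"}]
print(match(clinic_a, clinic_b))  # → [(0, 0), (1, 1)]
```

Real entity-matching workflows add blocking (to avoid comparing every pair), multiple attributes, and learned match rules, which is the gap tools like Magellan are designed to fill.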