Columbia University: Master the essentials of machine learning and algorithms to help improve learning from data.
What is the Machine Learning course at Columbia University like?
Machine Learning is the basis for the most exciting careers in data analysis today. You’ll learn the models and methods and apply them to real-world situations ranging from identifying trending news topics to building recommendation engines, ranking sports teams, and plotting the path of movie zombies.
Major perspectives covered include:
- probabilistic versus non-probabilistic modeling
- supervised versus unsupervised learning
Topics include: classification and regression, clustering methods, sequential models, matrix factorization, topic modeling, and model selection.
Methods include: linear and logistic regression, support vector machines, tree classifiers, boosting, maximum likelihood and MAP inference, the EM algorithm, hidden Markov models, Kalman filters, k-means, and Gaussian mixture models, among others.
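To give a concrete flavor of one entry on that list, here is a minimal sketch, not taken from the course materials, of logistic regression fit by maximum likelihood with plain gradient ascent in NumPy. The two-blob synthetic data, the step size, and the iteration count are invented purely for illustration.

```python
import numpy as np

# Synthetic, purely illustrative data: two Gaussian blobs in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, size=(100, 2)),
               rng.normal(+1.0, 1.0, size=(100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Add an intercept column.
Xb = np.hstack([np.ones((X.shape[0], 1)), X])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Maximum-likelihood estimation by gradient ascent on the log-likelihood.
w = np.zeros(Xb.shape[1])
step = 0.1
for _ in range(1000):
    p = sigmoid(Xb @ w)        # predicted P(y = 1 | x) under current weights
    grad = Xb.T @ (y - p)      # gradient of the log-likelihood
    w += step * grad / len(y)

accuracy = np.mean((sigmoid(Xb @ w) > 0.5) == y)
print("training accuracy:", accuracy)
```

The same fitting loop, with a different likelihood and gradient, is the pattern behind several of the supervised methods listed above.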
In the University's own words:
"In the first half of the course we will cover supervised learning techniques for regression and classification. In this framework, we possess an output or response that we wish to predict based on a set of inputs. We will discuss several fundamental methods for performing this task and algorithms for their optimization. Our approach will be more practically motivated, meaning we will fully develop a mathematical understanding of the respective algorithms, but we will only briefly touch on abstract learning theory.
In the second half of the course we shift to unsupervised learning techniques. In these problems the end goal is less clear-cut than predicting an output based on a corresponding input. We will cover three fundamental problems of unsupervised learning: data clustering, matrix factorization, and sequential models for order-dependent data. Some applications of these models include object recommendation and topic modeling."
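As a small illustration of the data-clustering problem mentioned in that description, the sketch below implements plain k-means in NumPy. It is not the course's own implementation, and the two-blob data and the choice of k=2 are invented for the example.

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Minimal k-means: alternate assigning points to the nearest
    centroid and recomputing each centroid as the mean of its points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: index of the nearest centroid for each point.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned points.
        new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    # Final assignment against the converged centroids.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1), centroids

# Invented data: two well-separated 2-D blobs, clustered with k=2.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3.0, 0.5, size=(50, 2)),
               rng.normal(+3.0, 0.5, size=(50, 2))])
labels, centroids = kmeans(X, k=2)
print("centroids:\n", centroids)
```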
What you’ll learn
- Supervised learning techniques for regression and classification
- Unsupervised learning techniques for data modeling and analysis
- Probabilistic versus non-probabilistic viewpoints
- Optimization and inference algorithms for model learning
Meet the instructor
Professor John W. Paisley
Department of Electrical Engineering
Columbia University
John Paisley is an Assistant Professor in the Department of Electrical Engineering at Columbia University. He is also an affiliated member of the Data Science Institute at Columbia.
John received his Ph.D. in Electrical and Computer Engineering from Duke University, where he worked with Lawrence Carin. He then held postdoctoral positions in the Computer Science departments at Princeton University, with David Blei, and at UC Berkeley, with Michael Jordan.
John’s research is in the general area of statistical machine learning. His interests include probabilistic modeling and inference techniques, Bayesian nonparametric methods, dictionary learning and topic modeling.