Bayesian Methods for Machine Learning

  • Rating: 4.5
Approx. 33 hours to complete

Course Summary

Learn how to use Bayesian methods in machine learning to make predictions and quantify the uncertainty in your data and models.

Key Learning Points

  • Understand the basics of probabilistic programming and Bayesian inference
  • Learn how to use PyMC3, a Python library for Bayesian modeling and probabilistic programming (a minimal sketch follows this list)
  • Apply Bayesian methods to real-world problems in machine learning
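
Not part of the official course materials, but as a taste of the PyMC3 workflow the course teaches, here is a minimal sketch of inferring a coin's bias from observed flips. The data, model, and variable names are illustrative assumptions, and the code assumes PyMC3 3.x:

    import numpy as np
    import pymc3 as pm  # assumes PyMC3 3.x

    # Hypothetical toy data: 1 = heads, 0 = tails
    flips = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])

    with pm.Model() as coin_model:
        # Beta(1, 1) prior on the probability of heads (uniform on [0, 1])
        p = pm.Beta("p", alpha=1.0, beta=1.0)
        # Bernoulli likelihood for the observed flips
        pm.Bernoulli("obs", p=p, observed=flips)
        # Draw posterior samples with PyMC3's default NUTS sampler
        trace = pm.sample(2000, tune=1000, chains=2, random_seed=0)

    print(pm.summary(trace))  # posterior mean and credible interval for p

The pattern of "declare priors, declare a likelihood over observed data, sample the posterior" is the core of probabilistic programming, and it is what the course builds on.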

Job Positions & Salaries for People Who Have Taken This Course

  • Machine Learning Engineer
    • USA: $112,000
    • India: ₹1,015,000
    • Spain: €46,000
  • Data Scientist
    • USA: $117,000
    • India: ₹1,580,000
    • Spain: €39,000
  • Quantitative Researcher
    • USA: $122,000
    • India: ₹1,950,000
    • Spain: €62,000

Learning Outcomes

  • Ability to apply Bayesian methods to real-world problems in machine learning
  • Understanding of probabilistic programming and Bayesian inference
  • Proficiency in using PyMC3 for Bayesian modeling and probabilistic programming

Prerequisites and Good-to-Have Knowledge Before Taking This Course

  • Basic knowledge of Python programming
  • Familiarity with machine learning concepts

Course Difficulty Level

Intermediate

Course Format

  • Online
  • Self-paced
  • Video lectures

Similar Courses

  • Probabilistic Graphical Models
  • Bayesian Statistics: From Concept to Data Analysis
  • Applied Machine Learning

Notable People in This Field

  • Andrew Gelman
  • Yann LeCun

Description

People apply Bayesian methods in many areas, from game development to drug discovery. They give superpowers to many machine learning algorithms: handling missing data and extracting much more information from small datasets. Bayesian methods also allow us to estimate uncertainty in predictions, a desirable feature in fields like medicine.
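
The course treats the simplest of these ideas analytically in the "Conjugate priors" lectures. As a small self-contained illustration (not taken from the course; it uses SciPy, which the course does not require), the sketch below shows how a conjugate Beta-Bernoulli posterior quantifies uncertainty about a success rate, and how that uncertainty tightens as data accumulates:

    from scipy import stats

    # Conjugate Beta-Bernoulli update: a Beta(a, b) prior and k successes
    # in n trials give a Beta(a + k, b + n - k) posterior.
    a, b = 1.0, 1.0  # uniform prior
    for n, k in [(10, 7), (100, 70), (1000, 700)]:
        posterior = stats.beta(a + k, b + n - k)
        lo, hi = posterior.interval(0.95)  # central 95% credible interval
        print(f"n={n:4d}: posterior mean {posterior.mean():.3f}, "
              f"95% credible interval ({lo:.3f}, {hi:.3f})")

With the success rate held at 70%, the 95% credible interval shrinks roughly as 1/sqrt(n); this posterior interval is exactly the kind of uncertainty estimate the description refers to.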

Outline

  • Introduction to Bayesian methods & Conjugate priors
  • About the University
  • Think Bayesian & Statistics review
  • Bayesian approach to statistics
  • How to define a model
  • Example: thief & alarm
  • Linear regression
  • Analytical inference
  • Conjugate distributions
  • Example: Normal, precision
  • Example: Bernoulli
  • About the University
  • Rules on academic integrity in the course
  • MLE estimation of Gaussian mean
  • Introduction to Bayesian methods
  • Conjugate priors
  • Expectation-Maximization algorithm
  • Latent Variable Models
  • Probabilistic clustering
  • Gaussian Mixture Model
  • Training GMM
  • Example of GMM training
  • Jensen's inequality & Kullback-Leibler divergence
  • Expectation-Maximization algorithm
  • E-step details
  • M-step details
  • Example: EM for discrete mixture, E-step
  • Example: EM for discrete mixture, M-step
  • Summary of Expectation Maximization
  • General EM for GMM
  • K-means from probabilistic perspective
  • K-means, M-step
  • Probabilistic PCA
  • EM for Probabilistic PCA
  • EM algorithm
  • Latent Variable Models and EM algorithm
  • Variational Inference & Latent Dirichlet Allocation
  • Why approximate inference?
  • Mean field approximation
  • Example: Ising model
  • Variational EM & Review
  • Topic modeling
  • Dirichlet distribution
  • Latent Dirichlet Allocation
  • LDA: E-step, theta
  • LDA: E-step, z
  • LDA: M-step & prediction
  • Extensions of LDA
  • Variational inference
  • Latent Dirichlet Allocation
  • Markov chain Monte Carlo
  • Monte Carlo estimation
  • Sampling from 1-d distributions
  • Markov Chains
  • Gibbs sampling
  • Example of Gibbs sampling
  • Metropolis-Hastings
  • Metropolis-Hastings: choosing the critic
  • Example of Metropolis-Hastings
  • Markov Chain Monte Carlo summary
  • MCMC for LDA
  • Bayesian Neural Networks
  • Markov Chain Monte Carlo
  • Variational Autoencoder
  • Scaling Variational Inference & Unbiased estimates
  • Modeling a distribution of images
  • Using CNNs with a mixture of Gaussians
  • Scaling variational EM
  • Gradient of decoder
  • Log derivative trick
  • Reparameterization trick
  • Learning with priors
  • Dropout as Bayesian procedure
  • Sparse variational dropout
  • VAE paper
  • Relevant papers
  • Categorical Reparameterization with Gumbel-Softmax
  • Variational autoencoders
  • Categorical Reparameterization with Gumbel-Softmax
  • Gaussian processes & Bayesian optimization
  • Nonparametric methods
  • Gaussian processes
  • GP for machine learning
  • Derivation of main formula
  • Nuances of GP
  • Bayesian optimization
  • Applications of Bayesian optimization
  • Gaussian Processes and Bayesian Optimization
  • Final project

Summary of User Reviews

Learn about Bayesian methods in machine learning with this course from Coursera. Students have praised the course for its comprehensive coverage of the topic and its practical applications in real-world scenarios.

Key Aspect Users Liked About This Course

The course is praised for its practical applications in real-world scenarios.

Pros from User Reviews

  • Comprehensive coverage of Bayesian methods in machine learning
  • Practical application of concepts in real-world scenarios
  • Engaging and knowledgeable instructors
  • Well-structured course material
  • Excellent community support

Cons from User Reviews

  • Some students found the course challenging and difficult to follow
  • The pace of the course may be too slow for experienced learners
  • Limited practical exercises and hands-on learning opportunities
  • Some students found the course content to be too theoretical and not applicable to their work
  • The course may be too technical for beginners

Course Details

  • Language: English
  • Availability: Available now
  • Duration: Approx. 33 hours to complete
  • Instructors: Daniil Polykovskiy, Alexander Novikov
  • University: HSE University
  • Platform: Coursera

Instructor

Daniil Polykovskiy

  • Rating: 4.5