Bloomberg ML EDU presents:

Foundations of Machine Learning

Understand the Concepts, Techniques and Mathematical Frameworks Used by Experts in Machine Learning

About This Course

Bloomberg presents "Foundations of Machine Learning," a training course that was initially delivered internally to the company's software engineers as part of its "Machine Learning EDU" initiative. This course covers a wide variety of topics in machine learning and statistical modeling. The primary goal of the class is to help participants gain a deep understanding of the concepts, techniques and mathematical frameworks used by experts in machine learning. It is designed to make valuable machine learning skills more accessible to individuals with a strong math background, including software developers, experimental scientists, engineers and financial professionals.

The 30 lectures in the course are embedded below, but may also be viewed in this YouTube playlist. The course includes a complete set of homework assignments, each containing a theoretical element and an implementation challenge with support code in Python, which is rapidly becoming the prevailing programming language for data science and machine learning in both academia and industry. This course also serves as a foundation on which more specialized courses and further independent study can build.

Please fill out this short online form to register for access to our course's Piazza discussion board. Applications are processed manually, so please be patient. You should receive an email directly from Piazza when you are registered. Common questions from this and previous editions of the course are posted in our FAQ.

The first lecture, Black Box Machine Learning, gives a quick-start introduction to practical machine learning and only requires familiarity with basic programming concepts.

Highlights and Distinctive Features of the Course Lectures, Notes, and Assignments

  • Geometric explanation for what happens with ridge, lasso, and elastic net regression in the case of correlated random variables.
  • Investigation of when the penalty (Tikhonov) and constraint (Ivanov) forms of regularization are equivalent.
  • Concise summary of what we really learn about SVMs from Lagrangian duality.
  • Proof of representer theorem with simple linear algebra, emphasizing it as a way to reparametrize certain objective functions.
  • Guided derivation of the math behind the classic diamond/circle/ellipsoids picture that "explains" why L1 regularization gives sparsity (Homework 2, Problem 5)
  • From-scratch (in numpy) implementations of almost all major ML algorithms we discuss: ridge regression with SGD and GD (Homework 1, Problems 2.5, 2.6, page 4), lasso regression with the shooting algorithm (Homework 2, Problem 3, page 4), kernel ridge regression (Homework 4, Problem 3, page 2), kernelized SVM with Kernelized Pegasos (Homework 4, 6.4, page 9), L2-regularized logistic regression (Homework 5, Problem 3.3, page 4), Bayesian linear regression (Homework 5, Problem 5, page 6), multiclass SVM (Homework 6, Problem 4.2, p. 3), classification and regression trees (without pruning) (Homework 6, Problem 6), gradient boosting with trees for classification and regression (Homework 6, Problem 8), and a multilayer perceptron for regression (Homework 7, Problem 4, page 3); see the first sketch after this list.
  • Repeated use of a simple 1-dimensional regression dataset, so it's easy to visualize the effect of various hypothesis spaces and regularizations that we investigate throughout the course.
  • Investigation of how to derive a conditional probability estimate from a predicted score for various loss functions, and why it's not so straightforward for the hinge loss (i.e. the SVM) (Homework 5, Problem 2, page 1)
  • Discussion of numerical overflow issues and the log-sum-exp trick (Homework 5, Problem 3.2); see the second sketch after this list.
  • Self-contained introduction to the expectation maximization (EM) algorithm for latent variable models.
  • Develop a general computation graph framework from scratch, using numpy, and implement your neural networks in it.
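To give a flavor of the from-scratch exercises above, here is a minimal numpy sketch of ridge regression fit by full-batch gradient descent (the first sketch referenced in the list). It is illustrative only, not the homework's support code; the objective scaling, step size, and synthetic data are arbitrary choices.

    import numpy as np

    def ridge_gd(X, y, lam=0.1, lr=0.01, n_iters=1000):
        """Minimize (1/n) * ||Xw - y||^2 + lam * ||w||^2 by gradient descent."""
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(n_iters):
            grad = (2.0 / n) * X.T @ (X @ w - y) + 2.0 * lam * w
            w -= lr * grad
        return w

    # Usage on synthetic data: recover a known coefficient vector.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = X @ np.array([1.0, -2.0, 0.0, 0.5, 3.0]) + 0.1 * rng.normal(size=100)
    print(ridge_gd(X, y))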
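The log-sum-exp trick from Homework 5 (the second sketch) rests on the identity log sum_i exp(x_i) = m + log sum_i exp(x_i - m) with m = max_i x_i; again, this is an illustration rather than the assignment's solution.

    import numpy as np

    def logsumexp(x):
        """Compute log(sum(exp(x))) stably by shifting by the maximum."""
        m = np.max(x)
        return m + np.log(np.sum(np.exp(x - m)))

    x = np.array([1000.0, 1001.0, 1002.0])
    # The naive np.log(np.sum(np.exp(x))) overflows to inf in float64.
    print(logsumexp(x))  # ~1002.408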

Prerequisites

The quickest way to see if the mathematics level of the course is right for you is to take a look at this mathematics assessment, which is a preview of some of the math concepts that show up in the first part of the course.

  • Solid mathematical background, equivalent to a 1-semester undergraduate course in each of the following: linear algebra, multivariate differential calculus, probability theory, and statistics. The content of NYU's DS-GA-1002: Statistical and Mathematical Methods would be more than sufficient, for example.
  • Python programming required for most homework assignments.
  • Recommended: At least one advanced, proof-based mathematics course
  • Recommended: Computer science background up to a "data structures and algorithms" course
  • (HTF) refers to Hastie, Tibshirani, and Friedman's book The Elements of Statistical Learning
  • (SSBD) refers to Shalev-Shwartz and Ben-David's book Understanding Machine Learning: From Theory to Algorithms
  • (JWHT) refers to James, Witten, Hastie, and Tibshirani's book An Introduction to Statistical Learning

Assignments

GD, SGD, and Ridge Regression

Lasso Regression

SVM and Sentiment Analysis

Kernel Methods

Probabilistic Modeling

Multiclass, Trees, and Gradient Boosting

Computation Graphs, Backpropagation, and Neural Networks


Other tutorials and references

  • Carlos Fernandez-Granda's lecture notes provide a comprehensive review of the prerequisite material in linear algebra, probability, statistics, and optimization.
  • Brian Dalessandro's iPython notebooks from DS-GA-1001: Intro to Data Science
  • The Matrix Cookbook has lots of facts and identities about matrices and certain probability distributions.
  • Stanford CS229: "Review of Probability Theory"
  • Stanford CS229: "Linear Algebra Review and Reference"
  • Math for Machine Learning by Hal Daumé III


David S. Rosenberg

Teaching Assistants


Statistics and probability

  • Unit 1: Analyzing categorical data
  • Unit 2: Displaying and comparing quantitative data
  • Unit 3: Summarizing quantitative data
  • Unit 4: Modeling data distributions
  • Unit 5: Exploring bivariate numerical data
  • Unit 6: Study design
  • Unit 7: Probability
  • Unit 8: Counting, permutations, and combinations
  • Unit 9: Random variables
  • Unit 10: Sampling distributions
  • Unit 11: Confidence intervals
  • Unit 12: Significance tests (hypothesis testing)
  • Unit 13: Two-sample inference for the difference between groups
  • Unit 14: Inference for categorical data (chi-square tests)
  • Unit 15: Advanced regression (inference and transforming)
  • Unit 16: Analysis of variance (ANOVA)

STAT 430: Basics of Statistical Learning

University of Illinois at Urbana-Champaign, Fall 2017, Dalpiaz

Schedule - Homework - Quizzes - Projects - Syllabus - Compass - R4SL

  • First day of class! Course overview and syllabus discussion.
  • Materials : Syllabus Slides , Full Syllabus
  • ISL Videos : Opening Remarks and Examples , Supervised and Unsupervised Learning
  • Quick probability review. Recapping some R basics.
  • Reading : R4SL Chapter 2 , R4SL Chapter 3
  • Slides : Probability Recap , R Introduction
  • Lab : R Basics , R Basics Solutions
  • Introduction to rmarkdown .
  • Slides : rmarkdown Introduction
  • No class! Labor Day
  • More rmarkdown details and practice. What is a model?
  • Reading : R4SL Chapter 3
  • Lab : rmarkdown , rmarkdown Solutions
  • Begin recap of regression basics.
  • Reading : ISL 3.1 - 3.4, R4SL Chapter 4
  • ISL Slides : Linear Regression
  • ISL Videos : Simple Linear Regression , Hypothesis Testing , Interpreting Regression Coefficients , Model Selection and Qualitative Predictors , Interactions and Nonlinearity
  • Deadline : Homework 00 Due
  • Review using lm() for regression models in R .
  • Reading : R4SL Chapter 4
  • Introduce the supervised learning (regression) task. Discuss the test-train split and models that generalize to unseen data (a Python sketch of this experiment appears after this schedule).
  • Reading : ISL 2.1 - 2.2
  • ISL Slides : Statistical Learning
  • ISL Videos : Statistical Learning and Regression , Assessing Model Accuracy and Bias-Variance Trade-off
  • Lab : Test-Train Split , Test-Train Split Solutions
  • Continue discussion of regression in the context of statistical learning.
  • Slides : Linear Models for Statistical Learning, Regression
  • Deadline : Homework 01 Due
  • Introduce KNN. Compare non-parametric methods to parametric methods. Discuss tuning parameters versus model parameters.
  • Reading : R4SL Chapter 7 (Currently very sparse notes.)
  • Continue discussion of KNN. Compare KNN to linear models. Some live coding examples.
  • Finish discussion of KNN
  • Deadline : Homework 02 Due
  • Bias-Variance Tradeoff
  • Reading : R4SL Chapter 8
  • Slides : Bias-Variance Tradeoff
  • Begin classification.
  • Reading : ISL 4.1, R4SL Chapter 9
  • Slides : Classification Introduction
  • More classification. Introduction to logistic regression
  • Reading : ISL 4.2 - 4.3, R4SL Chapter 10
  • ISL Slides : Classification
  • ISL Videos : Introduction to Classification
  • Deadline : Homework 03 Due
  • Continued discussion of logistic regression.
  • Reading : ISL 4.3, R4SL Chapter 10
  • ISL Videos : Logistic Regression , Multiple Logistic Regression
  • KNN for classification.
  • Reading : R4SL Chapter 12
  • Generative methods in R .
  • Reading : ISL 4.4, R4SL Chapter 11
  • Deadline : Homework 04 Due
  • Continued discussion of generative methods. Details of univariate LDA.
  • ISL Videos : Linear Discriminant Analysis and Bayes Theorem , Univariate Linear Discriminant Analysis
  • Continued discussion of generative methods. Multivariate LDA, QDA, Naive Bayes.
  • ISL Videos: Multivariate Linear Discriminant Analysis , Quadratic Discriminant Analysis and Naive Bayes
  • Some final thoughts on generative methods. Some recap of classification methods. Some R details.
  • Deadline : Homework 05 Due
  • Begin discussing Statistical Learning in practice.
  • Deadline : None. No homework during quiz week.
  • Cross-validation and caret .
  • Reading: ISL 5.1, R4SL Chapter 20 , R4SL Chapter 21
  • ISL Slides: Resampling
  • ISL Videos: Validation Set Approach , k-fold Cross-Validation , Cross-Validation: The Right and Wrong Ways
  • More cross-validation and caret .
  • Deadline : Homework 06 Due
  • Some comments on variable selection.
  • Reading: ISL 6.1, R4SL Chapter 22
  • ISL Slides: Model Selection
  • ISL Videos: Best Subset Selection , Forward Stepwise Selection , Backward Stepwise Selection , Estimating Test Error I , Estimating Test Error II
  • Entering the modern age. Introducing regularization.
  • Reading: ISL 6.2, R4SL Chapter 24 - Regularization
  • ISL Videos: Shrinkage Methods and Ridge Regression , The Lasso , Tuning Parameter Selection
  • More on ridge and lasso. Using ridge and lasso in R .
  • Reading: ISL 6.2, R4SL Chapter 24 - Regularization , R4SL Chapter 25 - Elastic Net
  • Deadline : Homework 07 Due
  • Elastic net.
  • R4SL: Elastic Net
  • Overview: Introduction to trees.
  • Reading: ISL 8.1
  • ISL Slides: Trees
  • Additional Slides: Part II: Tree-based Methods
  • ISL Videos: Decision Trees , Pruning a Decision Tree , Classification Trees and Comparison with Linear Models
  • Discussed some finer details of R .
  • Deadline : Homework 08 Due
  • Deadline : Final Project Group Choice
  • Continuation of tree discussion. Introduction to ensemble methods, mostly random forests.
  • Reading: ISL 8.2
  • R4SL: Ensemble Methods
  • Additional Slides: Part I: Pruning, Bagging, Boosting
  • ISL Videos: Bootstrap Aggregation (Bagging) and Random Forests , Boosting and Variable Importance
  • Continuation of tree discussion. Introduction to ensemble methods, mostly boosting.
  • Extensions of random forests and boosting, in R . Some summary of supervised learning.
  • Additional Slides: Supervised Learning Review
  • Reading: Extremely Randomized Trees, Ranger, XGBoost [ rmarkdown ]
  • Reading: Statistical Modeling: The Two Cultures
  • Reading: Do we Need Hundreds of Classifiers to Solve Real World Classification Problems?
  • No class. Fall break.
  • Deadline : Homework 09 Due
  • No class . Consider a group meeting.
  • Unsupervised learning. Clustering.
  • Reading: ISL 10.1 - 10.3
  • R4SL: Unsupervised Learning
  • ISL Slides: Unsupervised Learning
  • Additional Slides: Unsupervised Learning, Part I, Clustering
  • ISL Videos: Unsupervised Learning and Principal Components Analysis , Exploring Principal Components Analysis and Proportion of Variance Explained , K-means Clustering , Hierarchical Clustering
  • Unsupervised learning. Clustering in R .
  • Deadline : Project proposals. No homework is due.
  • Unsupervised learning. PCA. Clustering again.
  • Additional Slides: Unsupervised Learning, Part II, PCA
  • No class. Office hours 8 - 10 at David’s office. Work on projects!
  • Discussion of graduate student project results. Thoughts on keeping up to date with data science and machine learning.
  • Reading: Some Machine Learning and Data Science Resources
  • Deadline : Homework 10 Due
  • No class. Finals!
  • Deadline : Final Project Report
  • Deadline : Final Project Peer Review
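The labs above do this in R; for readers following along in Python, here is a rough scikit-learn sketch of the recurring test-train split and KNN-versus-linear-model comparison. The synthetic dataset, split ratio, and k are arbitrary choices, not taken from the course materials.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error

    # Synthetic nonlinear data: a linear model underfits, KNN can adapt.
    rng = np.random.default_rng(42)
    X = rng.uniform(-3, 3, size=(300, 1))
    y = np.sin(X).ravel() + 0.3 * rng.normal(size=300)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42)

    for name, model in [("linear", LinearRegression()),
                        ("knn, k=10", KNeighborsRegressor(n_neighbors=10))]:
        model.fit(X_train, y_train)
        rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
        print(f"{name}: test RMSE = {rmse:.3f}")

On data like this the KNN fit should come out ahead, illustrating the parametric-versus-non-parametric tradeoff discussed in the schedule.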

Homework 00

  • Due: Friday, September 8
  • Assignment: [ html ] [ pdf ] [ zip ]
  • Solution: [ zip ]

Homework 01

  • Due: Friday, September 15

Homework 02

  • Due: Friday, September 22

Homework 03

Homework 04

  • Due: Friday, October 6

Homework 05

  • Due: Friday, October 13

Homework 06

  • Due: Friday, October 27

Homework 07

  • Due: Friday, November 3

Homework 08

  • Due: Friday, November 10

Homework 09

  • Due: Monday, November 20

Homework 10

  • Due: Wednesday, December 13
  • Date: Wednesday, October 18
  • Review: [ In-Class Practice Problems ]
  • Date: Wednesday, December 6

Group Final Project

  • Group Choice - Friday, November 10, 11:59 PM
  • Analysis Proposal - Friday, December 1, 11:59 PM
  • Report Template
  • Peer Evaluation - Thursday, December 21, 10:00 PM

Graduate Student Project

  • Autograder - Saturday, December 9, 11:59 PM
  • Report - Saturday, December 9, 11:59 PM

Teach yourself statistics

Statistics and Probability

This website provides training and tools to help you solve statistics problems quickly, easily, and accurately - without having to ask anyone for help.

Online Tutorials

Learn at your own pace. Free online tutorials cover statistics, probability, regression, analysis of variance, survey sampling, and matrix algebra - all explained in plain English.

  • Advanced Placement (AP) Statistics. Full coverage of the AP Statistics curriculum.
  • Probability. Fundamentals of probability. Clear explanations with pages of solved problems.
  • Linear Regression. Regression analysis with one or more independent variables.
  • ANOVA. Analysis of variance made easy. How to collect, analyze, and interpret data.
  • Survey Sampling. How to conduct a statistical survey and analyze survey data.
  • Matrix Algebra. Easy-to-understand introduction to matrix algebra.

Practice and review questions reinforce key points. Online calculators take the drudgery out of computation. Perfect for self-study.

AP Statistics

Here is your blueprint for test success on the AP Statistics exam.

  • AP Tutorial : Study our free, AP statistics tutorial to improve your skills in all test areas.
  • Practice exam : Test your understanding of key topics, through sample problems with detailed solutions.

Be prepared. Get the score that you want on the AP Statistics test.

Random Number Generator

Produce a list of random numbers, based on your specifications.

  • Control list size (generate up to 10,000 random numbers).
  • Specify the range of values that appear in your list.
  • Permit or prevent duplicate entries.

Free and easy to use.
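For illustration, the same three knobs (list size, range, duplicate control) take only a few lines of Python; this sketch is not Stat Trek's implementation.

    import random

    def random_numbers(n, low, high, allow_duplicates=True):
        """Generate n random integers in [low, high], inclusive."""
        if allow_duplicates:
            return [random.randint(low, high) for _ in range(n)]
        # Without duplicates: sample from the range without replacement.
        return random.sample(range(low, high + 1), n)

    print(random_numbers(5, 1, 100))
    print(random_numbers(5, 1, 100, allow_duplicates=False))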

Sample Size Calculator

Create powerful, cost-effective survey sampling plans.

  • Find the optimum design (most precision, least cost).
  • See how sample size affects cost and precision.
  • Compare different survey sampling methods.
  • Assess statistical power and Type II errors.

Tailor your sampling plan to your research needs.
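The core computation behind such a calculator is standard: to estimate a proportion to within margin of error E at a confidence level with critical value z under simple random sampling, the minimum sample size is n = z^2 * p * (1 - p) / E^2, with the conservative choice p = 0.5 when no prior estimate exists. A sketch:

    import math

    def sample_size_proportion(margin_of_error, z=1.96, p=0.5):
        """Minimum n so a z-level confidence interval for a proportion
        has half-width at most margin_of_error."""
        return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

    print(sample_size_proportion(0.05))  # 385 for +/-5% at 95% confidence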

Stat Toolbox

Check out our statistical tables and online calculators - fast, accurate, and user-friendly.

Discrete probability distributions

  • Hypergeometric
  • Multinomial
  • Negative binomial
  • Poisson distribution

Continuous probability distributions

  • f-Distribution
  • Normal distribution
  • t-Distribution

Special-purpose calculators

  • Bayes Rule Calculator
  • Combination-Permutation
  • Event Counter
  • Factorial Calculator
  • Bartlett's Test Calculator
  • Statistics Calculator
  • Probability Calculator

Each calculator features clear instructions, answers to frequently asked questions, and one or more problems with solutions to illustrate calculator use.
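As one example of what these calculators compute, Bayes' rule for a binary hypothesis reduces to a few lines (an illustration of the formula, not the site's own calculator):

    def bayes_rule(prior, p_e_given_h, p_e_given_not_h):
        """P(H | E) via Bayes' rule for a binary hypothesis H and evidence E."""
        p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
        return prior * p_e_given_h / p_e

    # Disease testing: 1% prevalence, 95% sensitivity, 10% false-positive rate.
    print(bayes_rule(0.01, 0.95, 0.10))  # ~0.088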

ECE 543: Statistical Learning Theory (Spring 2021)

About | Schedule | Coursework

Announcements

  • Homework 4 is posted, due by the end of the day on Tuesday, April 27.
  • Recordings of all lectures up to April 15 are now available.
  • Homework 3 is posted, due by the end of the day on Tuesday, April 6.
  • Recordings of all lectures up to March 23 are now available.
  • There was a typo in Problem 1 of Homework 2. Revised version is posted.
  • In-class notes and video recordings of Lectures 7-12 are now available.
  • Homework 2 is posted, due by the end of the day on Tuesday, March 16.
  • There will be TA office hours this week on Tuesday, February 16, 9:00-10:00 am.
  • Since Wednesday, February 17, is a no-instruction day, there will be no TA office hours that week. Homework 1 is now due by the end of Wednesday, February 24.
  • In-class notes and video recording of Lectures 4-6 are now available.
  • Information about homework submissions and final project is available on the coursework page .
  • Homework 1 is posted, due by the end of the day on Thursday, February 18.
  • In-class notes and video recording of Lectures 2 and 3 are now available.
  • In-class notes and video recording of Lecture 1 are now available.
  • Zoom link and passcode for the course are posted on Piazza .

About this course

  • Lecture notes by Bruce Hajek and Maxim Raginsky
  • Shai Shalev-Shwartz and Shai Ben-David, Understanding Machine Learning , Cambridge University Press, 2014

Regular weekly schedule

CS 578: Statistical Machine Learning (2021 Spring)

Course Information

When: Mon/Wed 4:30 pm -- 5:45 pm.

Where: Remote learning; synchronous and asynchronous (see below).

Instructor: Yexiang Xue, email: yexiang AT purdue.edu.

Teaching Assistants: Masudur Rahman (rahman64 AT purdue.edu), Shamik Roy (roy98 AT purdue.edu).

Office Hours:

  • Yexiang Xue: Mondays 3:30 pm -- 4:30 pm (by appointment, notified via email at least by 5 pm the previous Sunday; Zoom link in Brightspace).
  • Masudur Rahman: Thursdays 2 pm -- 3 pm (by appointment, notified via email at least by 5 pm the previous day; Zoom link in Brightspace).
  • Shamik Roy: Mondays 11 am -- 12 pm (by appointment, notified via email at least by 5 pm the previous day; Zoom link in Brightspace).

Course website: https://www.cs.purdue.edu/homes/yexiang/courses/21spring-cs578/index.html.

Notifications and slides will be via Brightspace (https://purdue.brightspace.com/).

Online discussion is available at Piazza (piazza.com/purdue/spring2021/cs578). Access code is on Brightspace.

Homework and exam submissions will be via Gradescope (https://www.gradescope.com/courses/221543).

Course project submission at CMT (https://cmt3.research.microsoft.com/CS578SPRING2021).

Course participation at Hotseat (https://www.openhotseat.org).

Course Description

Machine learning offers a new paradigm of computing: computer systems that can learn to perform tasks by finding patterns in data, rather than by running code specifically written to accomplish the task by a human programmer. The most common machine-learning scenario requires a human teacher to annotate data (identify the relevant phenomena that occur in the data) and a machine-learning algorithm to generalize from these examples. Generalization is at the heart of machine learning: how can the machine go beyond the provided set of examples and make predictions about new data? In this class we will look into different machine learning scenarios (supervised and unsupervised), study several algorithms, analyze their performance, and learn the theory behind them.

Content Delivery

This virtual course will be delivered in two ways:

A synchronous format, from 4:30 pm -- 5:45 pm (US Eastern Time) on Mondays and Wednesdays, via Zoom meeting (meeting link on Brightspace). These class times are allocated by the university to avoid as many conflicts as possible. Attending classes synchronously makes it easier to track your own progress and provides opportunities to ask the instructor questions in real time. However, attending the Zoom meetings is not mandatory, especially considering that students may live in different time zones.

An asynchronous format. The Zoom meeting sessions will be recorded and uploaded to Brightspace as videos for students to watch. However, there will be quizzes during the class. The quizzes contribute to the final score and will be given in Hotseat. Each quiz opens before the corresponding synchronous session and closes 24 hours after the class session, so please make sure to watch the videos and complete each quiz within a day of the synchronous class session.

Prerequisites

(1) Undergraduate level training or coursework in linear algebra, calculus and multivariate calculus, basic probability and statistics;

(2) Programming skills: mastery of at least one programming language. Python is highly recommended (self-study of scikit-learn and related packages is expected);

(3) Basic skills in using git for maintaining code development. In addition, an undergraduate level course in Artificial Intelligence may be helpful but is not required.

Textbooks and Reading Materials

  • Tom Mitchell, Machine Learning, [url]
  • Christopher M. Bishop, Pattern Recognition and Machine Learning, [url]
  • Trevor Hastie, Robert Tibshirani, Jerome Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, [url] (online access available at Purdue Library)
  • Daphne Koller and Nir Friedman, Probabilistic Graphical Models: Principles and Techniques, [url]
  • A First Encounter with Machine Learning by Max Welling
  • Introduction to Machine Learning by Alex Smola and S.V.N. Vishwanathan
  • A Course in Machine Learning by Hal Daume III
  • Bayesian reasoning and machine learning by David Barber
  • A tutorial by Andrew Moore
  • The Matrix Cookbook by Kaare Brandt Petersen and Michael Syskind Pedersen
  • Calculus by Gilbert Strang
  • Linear Algebra by Gilbert Strang
  • Introduction to Probability and Statistics by Jeremy Orloff and Jonathan Bloom
  • The official Python Tutorial

Course Activities and Evaluation

Attendance: since almost all lectures will be delivered on a virtual whiteboard, attendance is highly encouraged. The content of the virtual whiteboard will be saved as a PDF and posted on Brightspace. However, you are highly encouraged to follow the proof steps in class, which is the key to success. Attendance scores will be determined mainly by Hotseat quiz participation.

Note-taking: this course will involve heavy virtual whiteboard demonstrations, so note-taking is absolutely necessary. Every student is expected to submit a PDF version of the notes for three lectures, starting in the third week (Feb 1; assigned by the TA). The TA will select the best two sets of notes for each lecture and distribute them as handouts for everybody (posted on Brightspace). The notes are due one week after the lecture. Note-taking assignments and the grading rubrics will be published on Brightspace.

Refreshing-knowledge homework: this homework is intended to check the prerequisites required for success in this class. It contributes 5% to the final score. Everyone is expected to get an almost perfect score. Please reconsider whether you should continue in the class if you have any difficulty completing this homework.

Course project: the MOST important part of this course. Machine learning is a practical field, so the importance of completing a machine learning project yourself cannot be overstated! In addition, because this is a graduate-level course, one important aspect is basic scientific training, including asking the right questions, commenting on others' work, literature review, experimental design, coding, scientific writing, and presentation skills. This can ONLY be trained with a real project. Teamwork: students are encouraged to work in teams of two or three.

We provide a few research thrusts and datasets for your reference (see below). You are encouraged to choose a specific project within the overarching theme of one research thrust in the list, although you are free to choose any project you wish as long as it relates to machine learning. The goal is to nurture GROUND-BREAKING course projects, which have the potential to be developed into innovative research papers in the future. Course projects outside of the suggested thrusts will receive less mentoring from the instructors and the TAs, and are therefore less preferred. We encourage you to combine your domain of expertise with machine learning. To guide you through the project, we split the entire process into five parts: proposal, peer review, mid-term report, final report, and presentation.

Course project proposal: the proposal will be evaluated on intellectual merit, broader impact, and tractability (the same criteria used for NSF proposals). The instructor DOES respect that it is a course project, so the bar is much lower. However, the following three aspects are weighted equally: (i) intellectual merit: how does the project advance machine learning (or your understanding of machine learning)? (ii) Broader impact: how does the course project bring impact to a practical field via machine learning? (iii) Tractability: is the proposal tractable as a one-semester course project? [Grading rubrics will be posted on Brightspace.]

Course project reviews: each student is asked to review at least three proposals from other students, judged on intellectual merit, broader impact, and tractability. Peer reviews are safety belts for other students: unrealistic proposals should be flagged. Gaming does not work; the grading of the original proposal will NOT be affected by how other students review your proposal. [Grading rubrics will be posted on Brightspace.]

Course project mid-term progress report: Each group is expected to submit a progress report by the deadline. This is to ensure that all projects are progressing on the right track. [grading rubrics will be posted on Brightspace.]

Course project final report / presentation: The final report and presentation will be graded in a similar way as conference papers (presentations) by the two TAs and the instructor jointly (although the bar is much lower). [grading rubrics will be posted on Brightspace.]

Mid-term and final exams: the midterm and final exams will be open book due to the special situation of remote learning. Students are allowed to consult any materials; however, they cannot discuss exam questions with anybody. Students will have a full 24 hours to answer all questions and type the answers in Word or LaTeX for grading.

Grading Scale

Tentative Syllabus

Oct 8-9; vacation.

November 21--24; thanksgiving vacation.

December 8; classes end.

Course Project Thrusts

Thrust 1: fast and accurate probabilistic inference

Thrust 1: stochastic optimization: encoding machine learning for decision-making

In data-driven decision-making, we have to reason about the optimal policy of a system given a stochastic model learned from data. For example, one can use a machine learning model to capture the traffic dynamics of a road network. The decision-making problem is: given the traffic dynamics learned from data, what is the most efficient way to travel between a pair of locations? Notice that the solution can change dynamically, depending on shifts in the traffic dynamics. As another example, in physics, machine learning models have been used to predict the band gap of many metal alloy materials. The decision-making problem is: given the machine learning model, what is the best alloy, one that is both cheap to synthesize and has a good band-gap property?

The aforementioned examples are stochastic optimization problems, which seek robust interventions that maximize the ``expectation'' of stochastic functions learned from data. Such problems arise naturally in applications ranging from economics and operations research to artificial intelligence. Stochastic optimization combines two intractable problems: the inner probabilistic inference problem, computing the expectation across exponentially many probabilistic outcomes, and the outer optimization problem, searching for the optimal policy.

Research questions: (i) if the inner machine learning model is a decision tree, can you compute the optimal policy in polynomial time? How? (ii) What if the inner machine learning model is a logistic regression, a linear SVM, a kernelized SVM, a random forest, or a probabilistic graphical model? (iii) What if the machine learning model is temporal, such as a recurrent neural network or an LSTM? (iv) When the inner probabilistic inference problem is intractable, existing approaches to stochastic optimization approximate the intractable inference sub-problems either in variational forms or via the empirical mean of pre-computed, fixed samples. There is also a recent approach that approximates the intractable sub-problems with optimization queries subject to randomized constraints (see the following papers). Question: how do various approximation schemes for the inner machine learning models affect the overall solution quality of the stochastic optimization problem? (v) If we are solving one stochastic optimization problem for a specific application, can we adapt existing approximation schemes to fit the problem instance for better results?
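For concreteness, the sample average approximation mentioned in question (iv) (and in the Kleywegt, Shapiro, and Homem-de-Mello paper below) replaces the intractable expectation with an empirical mean over fixed, pre-drawn scenario samples and optimizes the surrogate. A toy numpy sketch with a made-up objective (choosing a time buffer x against a random delay xi), not taken from any of the cited papers:

    import numpy as np

    def saa_optimize(candidates, objective, sample_xi, n_samples=10000, seed=0):
        """Pick the candidate maximizing the empirical mean of a stochastic objective."""
        rng = np.random.default_rng(seed)
        xi = sample_xi(rng, n_samples)  # fixed scenarios, shared across all candidates
        scores = [np.mean(objective(x, xi)) for x in candidates]
        return candidates[int(np.argmax(scores))]

    # Toy: pick buffer x trading lateness 5 * E[max(0, xi - x)] against cost x.
    # With xi ~ Exp(1), the true optimum is x = ln(5), roughly 1.61.
    best = saa_optimize(
        candidates=np.linspace(0.0, 2.0, 21),
        objective=lambda x, xi: -(5.0 * np.maximum(0.0, xi - x) + x),
        sample_xi=lambda rng, n: rng.exponential(1.0, size=n),
    )
    print(best)  # ~1.6

Sharing one fixed set of scenario samples across candidates (common random numbers) makes the comparison between nearby decisions far more stable than resampling per candidate.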

Yexiang Xue, Zhiyuan Li, Stefano Ermon, Carla P. Gomes, Bart Selman. Solving Marginal MAP Problems with NP Oracles and Parity Constraints In the Proceedings of the 29th Annual Conference on Neural Information Processing Systems (NIPS) , 2016. [pdf] [spotlight video]

Anton J. Kleywegt, Alexander Shapiro, and Tito Homem-de Mello. The sample average approximation method for stochastic discrete optimization. SIAM Journal on Optimization, 2002. [pdf]

Miguel Á. Carreira-Perpiñán and Geoffrey E. Hinton. On contrastive divergence learning. AISTATS , 2005. [pdf]

Martin Dyer and Leen Stougie. Computational complexity of stochastic programming problems. Mathematical Programming, 2006. [springer]

John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML, 2001. [pdf]

Stefano Ermon, Carla Gomes, Ashish Sabharwal, and Bart Selman. Taming the Curse of Dimensionality: Discrete Integration by Hashing and Optimization In Proc. 30th International Conference on Machine Learning (ICML) 2013. [pdf]

Carla P. Gomes, Ashish Sabharwal, Bart Selman. Near-Uniform Sampling of Combinatorial Spaces Using XOR Constraints. NIPS 2006. [pdf]

Carla P. Gomes, Willem Jan van Hoeve, Ashish Sabharwal, Bart Selman. Counting CSP Solutions Using Generalized XOR Constraints. AAAI 2007. [pdf]

Yexiang Xue *, Xiaojian Wu*, Bart Selman, and Carla P. Gomes. XOR-Sampling for Network Design with Correlated Stochastic Events. In Proc. 26th International Joint Conference on Artificial Intelligence (IJCAI) , 2017. [pdf] * indicates equal contribution.

Yexiang Xue , Xiaojian Wu, Dana Morin, Bistra Dilkina, Angela Fuller, J. Andrew Royle, and Carla Gomes. Dynamic Optimization of Landscape Connectivity Embedding Spatial-Capture-Recapture Information. In Proc. 31th AAAI Conference on Artificial Intelligence (AAAI) , 2017. [pdf] [supplementary materials]

Thrust 2: embedding physical constraints into deep neural networks

The emergence of large-scale data-driven machine learning and optimization methodology has led to successful applications in areas as diverse as finance, marketing, retail, and health care. Yet many application domains remain out of reach for these technologies when applied in isolation. In the area of medical robotics, for example, it is crucial to develop systems that can recognize, guide, support, or correct surgical procedures. This is particularly important for next-generation trauma care systems that allow life-saving surgery to be performed remotely in the presence of unreliable communication bandwidth. For such systems, machine learning models have been developed that can recognize certain commands and procedures, but they are unable to learn complex physical or operational constraints. Constraint-based optimization methods, on the other hand, would be able to generate feasible surgical plans, but currently have no mechanism to represent and evaluate such complex environments. To leverage the required capabilities of both technologies, we have to find an integrated method that embeds constraint reasoning in machine learning.

In a seminal paper, the authors proposed an approach, which provides a scalable method for machine learning over structured domains. The core idea is to augment machine learning algorithms with a constraint reasoning module that represents physical or operational requirements. Specifically, the authors propose to embed decision diagrams, a popular constraint reasoning tool, as a fully-differentiable layer in deep neural networks. By enforcing the constraints, the output of generative models can now provide assurances of safety, correctness, and/or fairness. Moreover, this approach enjoys a smaller modeling space than traditional machine learning approaches, allowing machine learning algorithms to learn faster and generalize better.

Research questions: (i) are there ways to enforce physical constraints other than the decision diagrams used in the seminal work? (ii) What if the constraints are too complicated to be fully captured by a decision diagram? (iii) In a specific application domain, is there a better way to encode constraints? (iv) Does enforcing physical constraints make machine learning easier or harder? Can you quantify the difference? (v) Can we apply this idea to natural language processing, computer vision, reinforcement learning, etc.? (vi) Ethics and fairness in machine learning are actively discussed in our community. Can we use this technique to guarantee the ethics and/or fairness of a machine learning model?
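As a point of contrast for question (i), the crudest alternative to a decision-diagram layer is a soft, differentiable penalty on constraint violations added to the training loss; unlike the embedding approach, it gives no hard guarantees. A numpy sketch with an invented budget constraint (everything here is illustrative):

    import numpy as np

    def constraint_penalty(outputs, budget=1.0, weight=10.0):
        """Soft penalty for violating sum(outputs) <= budget; zero when feasible."""
        violation = np.maximum(0.0, outputs.sum(axis=-1) - budget)
        return weight * violation**2

    def penalized_loss(outputs, targets, budget=1.0):
        """Task loss (MSE) plus the mean constraint penalty."""
        mse = np.mean((outputs - targets) ** 2)
        return mse + np.mean(constraint_penalty(outputs, budget))

    outputs = np.array([[0.6, 0.7], [0.2, 0.3]])  # first row violates the budget
    targets = np.array([[0.5, 0.5], [0.2, 0.3]])
    print(penalized_loss(outputs, targets))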

Yexiang Xue, Willem-Jan van Hoeve. Embedding Decision Diagrams into Generative Adversarial Networks. In Proc. of the Sixteenth International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR), 2019. [springer]

Md Masudur Rahman, Natalia Sanchez-Tamayo, Glebys Gonzalez, Mridul Agarwal, Vaneet Aggarwal, Richard M. Voyles, Yexiang Xue, and Juan Wachs. Transferring Dexterous Surgical Skill Knowledge between Robots for Semi-autonomous Teleoperation. In ROMAN , 2019. [pdf]

Naveen Madapana, Md Masudur Rahman, Natalia Sanchez-Tamayo, Mythra V. Balakuntala, Glebys Gonzalez, Jyothsna Padmakumar Bindu, L. N. Vishnunandan Venkatesh, Xingguang Zhang, Juan Barragan Noguera, Thomas Low, Richard M. Voyles, Yexiang Xue, and Juan Wachs DESK: A Robotic Activity Dataset for Dexterous Surgical Skills Transfer to Medical Robots. In IROS, 2019. [pdf]

Matt J. Kusner, Brooks Paige, José Miguel Hernández-Lobato. Grammar Variational Autoencoder. In Proceedings of the 34th International Conference on Machine Learning, ICML, 2017. [pdf]

Chenglong Wang, Kedar Tatwawadi, Marc Brockschmidt, Po-Sen Huang, Yi Mao, Oleksandr Polozov, Rishabh Singh. Robust Text-to-SQL Generation with Execution-Guided Decoding. [pdf]

Kevin Lin, Ben Bogin, Mark Neumann, Jonathan Berant, Matt Gardner. Grammar-based Neural Text-to-SQL Generation. [ArXiv]

Thrust 3: machine learning for scientific discovery and/or social good

Machine learning models have defeated the brightest minds in this world (see the story of AlphaGo). Now, instead of using this technology for game playing, can we harness the tremendous progress in AI and machine learning to make our world a better place? In particular, I am curious about problems that have historically attracted the smartest minds of mankind -- the discovery of new science. Besides scientific discovery, can we use machine learning to create positive social impact?

If you think about it: in AlphaGo, machine learning is used to find a strategy in a highly complex space (all possible moves of Go) that beats all opponents' strategies. The problem is similar for scientific discovery, except that we are now playing Go with nature. For example, in materials discovery, we would like to find the best material in a highly complex space (all possible compositions), the one with the best properties. Should the strategy that proved successful for Go work for scientific discovery (and/or AI for social good)?

I list a few example papers below in which machine learning is used successfully for scientific discovery and for social good. I hope they can motivate you to discover a good application area for machine learning. The key to success is to combine your domain of expertise with machine learning.

Yexiang Xue, Junwen Bai, Ronan Le Bras, Brendan Rappazzo, Richard Bernstein, Johan Bjorck, Liane Longpre, Santosh K. Suram, Robert B. van Dover, John Gregoire, and Carla Gomes. Phase-Mapper: An AI Platform to Accelerate High Throughput Materials Discovery. In Proc. 29th Annual Conference on Innovative Applications of Artificial Intelligence (IAAI) , 2017. [pdf] [video 1] [video 2] [video 3]

Santosh K. Suram, Yexiang Xue, Junwen Bai, Ronan LeBras, Brendan H Rappazzo, Richard Bernstein, Johan Bjorck, Lan Zhou, R. Bruce van Dover, Carla P. Gomes, and John M. Gregoire. Automated Phase Mapping with AgileFD and its Application to Light Absorber Discovery in the V-Mn-Nb Oxide System. In American Chemical Society Combinatorial Science , Dec, 2016. [DOI] [pdf] [video 1] [video 2] [video 3]

Junwen Bai, Yexiang Xue, Johan Bjorck, Ronan Le Bras, Brendan Rappazzo, Richard Bernstein, Santosh K. Suram, Robert Bruce van Dover, John M. Gregoire, Carla P. Gomes. Phase Mapper: Accelerating Materials Discovery with AI. In AI Magazine , Vol. 39, No 1. 2018. [paper]

Yexiang Xue, Ian Davies, Daniel Fink, Christopher Wood, Carla P. Gomes. Avicaching : A Two Stage Game for Bias Reduction in Citizen Science In the Proceedings of the 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS) , 2016. [pdf] [supplementary materials] [video]

Giuseppe Carleo and Matthias Troyer. Solving the quantum many-body problem with artificial neural networks. In Science, 355, 2017. [website]

Ganesh Hegde and R. Chris Bowen. Machine-learned approximation to Density Functional Theory Hamiltonians. In Scientific Reports, 7, 2016. [ArXiv]

Graham Roberts, Simon Y. Haile, Rajat Sainju, Danny J. Edwards, Brian Hutchinson and Yuanyuan Zhu. Deep Learning for Semantic Segmentation of Defects in Advanced STEM Images of Steels. Scientific Reports, volume 9, 2019. [website]

Thrust 6: machine learning for computational sustainability

Academic Policies

Late Policy

Assignments are to be submitted by the due date listed. Each person will be allowed two days of extensions, which can be applied to any combination of assignments (homework/projects only; exams excluded) during the semester without penalty. After that, a late penalty of 15% per day will be assessed; a partial day counts as a full day. Use of extension days must be stated explicitly at the time of the late submission (by an accompanying email to ALL TAs and the instructor); otherwise, late penalties will apply. Extensions cannot be used after the final day of classes (i.e., April 28). Extension days cannot be rearranged after they are applied to a submission. Additional no-penalty late days may be introduced in the later part of the semester, conditioned on the completion of the course evaluations (details to be finalized). Assignments, project reports, etc., will NOT be accepted if they are more than five days late (and will receive zero points). Additional extensions will be granted only for serious and documented medical or family emergencies. Use the late days wisely!

Attendance Policy during COVID-19

Students are expected to attend all classes remotely unless they are ill or otherwise unable to attend class. If they feel ill, have any symptoms associated with COVID-19, or suspect they have been exposed to the virus, students should stay home and contact the Protect Purdue Health Center (496-INFO).

In the current context of COVID-19, in-person attendance cannot be a factor in the final grades. However, timely completion of alternative assessments can certainly be part of the final grade. Students need to inform the instructor of any conflict that can be anticipated and will affect the timely submission of an assignment or the ability to take an exam.

Classroom engagement is extremely important and associated with your overall success in the course. The importance and value of course engagement, and ways in which you can engage with the course content even if you are in quarantine or isolation, will be discussed at the beginning of the semester. Student survey data from Fall 2020 emphasized students' views of in-person course opportunities as critical to their learning, engagement with faculty/TAs, and ability to interact with peers.

Only the instructor can excuse a student from a course requirement or responsibility. When conflicts can be anticipated, such as for many University-sponsored activities and religious observations, the student should inform the instructor of the situation as far in advance as possible. For unanticipated or emergency conflicts, when advance notification to an instructor is not possible, the student should contact the instructor/instructional team as soon as possible by email, through Brightspace, or by phone. In cases of bereavement, quarantine, or isolation, the student or the student’s representative should contact the Office of the Dean of Students via email or phone at 765-494-1747. Our course Brightspace includes a link to the Dean of Students under ‘Campus Resources.’

Academic Guidance in the Event a Student is Quarantined/Isolated

If you must quarantine or isolate at any point in time during the semester, please reach out to me via email so that we can communicate about how you can continue to learn remotely. Work with the Protect Purdue Health Center (PPHC) to get documentation and support, including access to an Academic Case Manager who can provide you with general guidelines/resources around communicating with your instructors, be available for academic support, and offer suggestions for how to be successful when learning remotely. Your Academic Case Manager can be reached at [email protected]. Importantly, if you find yourself too sick to progress in the course, notify your academic case manager and notify me via email or Brightspace. We will make arrangements based on your particular situation.

Academic honesty

  • Unless stated otherwise, each student should write up their own solutions independently. You need to indicate the names of the people you discussed a problem with; ideally you should discuss with no more than two other people.
  • NO PART OF THE STUDENT'S ASSIGNMENT (PROJECT, NOTES, ETC.) SHOULD BE COPIED FROM ANOTHER STUDENT, FROM OTHER RESEARCHERS, OR FROM THE WEB (plagiarism). We encourage you to interact amongst yourselves: you may discuss and obtain help with basic concepts covered in lectures or the textbook, homework specifications (but not solutions), and general ideas of program implementation (but not the code). However, unless otherwise noted, work turned in should reflect your own efforts and knowledge. Sharing or copying solutions is unacceptable and could result in failure of this course. We use copy detection software, so do not copy code and make changes (either from the Web or from other students). You are expected to take reasonable precautions to prevent others from using your work.
  • Any student not following these guidelines is subject to an automatic F (final grade).

Classroom Guidance Regarding Protect Purdue (in case students use common spaces for studying)

The Protect Purdue Plan, which includes the Protect Purdue Pledge, is campus policy and as such all members of the Purdue community must comply with the required health and safety guidelines. Required behaviors in this class include: staying home and contacting the Protect Purdue Health Center (496-INFO) if you feel ill or know you have been exposed to the virus, properly wearing a mask in classrooms and campus building, at all times (e.g., mask covers nose and mouth, no eating/drinking in the classroom), disinfecting desk/workspace before and after use, maintaining appropriate social distancing with peers and instructors (including when entering/exiting classrooms), refraining from moving furniture, avoiding shared use of personal items, maintaining robust hygiene (e.g., handwashing, disposal of tissues) prior to, during and after class, and following all safety directions from the instructor.

Students who are not engaging in these behaviors (e.g., wearing a mask) will be offered the opportunity to comply. If non-compliance continues, possible results include instructors asking the student to leave class and instructors dismissing the whole class. Students who do not comply with the required health behaviors are violating the University Code of Conduct and will be reported to the Dean of Students Office with sanctions ranging from educational requirements to dismissal from the university.

Any student who has substantial reason to believe that another person in a campus room (e.g., classroom) is threatening the safety of others by not complying (e.g., not properly wearing a mask) may leave the room without consequence. The student is encouraged to report the behavior to and discuss the next steps with their instructor. Students also have the option of reporting the behavior to the Office of the Student Rights and Responsibilities. See also Purdue University Bill of Student Rights.

Nondiscrimination Statement

Purdue University is committed to maintaining a community which recognizes and values the inherent worth and dignity of every person; fosters tolerance, sensitivity, understanding, and mutual respect among its members; and encourages each individual to strive to reach his or her potential. In pursuit of its goal of academic excellence, the University seeks to develop and nurture diversity. The University believes that diversity among its many members strengthens the institution, stimulates creativity, promotes the exchange of ideas, and enriches campus life. A hyperlink to Purdue’s full Nondiscrimination Policy Statement is included here .

Accessibility

Purdue University strives to make learning experiences as accessible as possible. If you anticipate or experience physical or academic barriers based on disability, you are welcome to let me know so that we can discuss options. You are also encouraged to contact the Disability Resource Center at: [email protected] or by phone: 765-494-1247.

Mental Health/Wellness Statement

If you find yourself beginning to feel some stress, anxiety and/or feeling slightly overwhelmed, try WellTrack. Sign in and find information and tools at your fingertips, available to you at any time.

If you need support and information about options and resources, please contact or see the Office of the Dean of Students. Call 765-494-1747. Hours of operation are M-F, 8 am- 5 pm.

If you find yourself struggling to find a healthy balance between academics, social life, stress, etc., sign up for free one-on-one virtual or in-person sessions with a Purdue Wellness Coach at RecWell. Student coaches can help you navigate barriers and challenges toward your goals throughout the semester. Sign-up is completely free and can be done on BoilerConnect. If you have any questions, please contact Purdue Wellness at [email protected].

If you’re struggling and need mental health services: Purdue University is committed to advancing the mental health and well-being of its students. If you or someone you know is feeling overwhelmed, depressed, and/or in need of mental health support, services are available. For help, such individuals should contact Counseling and Psychological Services (CAPS) at 765-494-6995 during and after hours, on weekends and holidays, or by going to the CAPS office on the second floor of the Purdue University Student Health Center (PUSH) during business hours.

Emergency Preparation

In the event of a major campus emergency, course requirements, deadlines and grading percentages are subject to changes that may be necessitated by a revised semester calendar or other circumstances beyond the instructor’s control. Relevant changes to this course will be posted onto the course website or can be obtained by contacting the instructors or TAs via email or phone. You are expected to read your @purdue.edu email on a frequent basis.

Other general course policies can be found here .

~~~Coming soon~~~

eBird citizen science dataset.

Synthetic and real datasets for materials discovery.

Dataset for the corridor-design problem and landscape optimization problem.

Remote sensing images (a code repository which contains code to download from Google Earth engine).

JIGSAWS dataset for robot visual perception, gesture and skill assessment.

DESK (Dexterous Surgical Skill) dataset. It comprises a set of surgical robotic skills collected during a surgical training task using three robotic platforms: the Taurus II robot, Taurus II simulated robot, and the YuMi robot.

UCI Machine Learning Dataset.

Statistical Learning Homework

Synopsis

This course is open to graduates and senior undergraduates in applied mathematics, statistics, and engineering who are interested in learning from data. It covers hot topics in statistical learning, also known as machine learning, featuring various in-class projects in computer vision, pattern recognition, computational advertising, bioinformatics, social networks, etc. An emphasis this year is on deep learning with convolutional neural networks. Prerequisites: linear algebra, basic probability and multivariate statistics, convex optimization; familiarity with R, Matlab, and/or Python (with Torch for deep learning, etc.).

Reference

An Introduction to Statistical Learning, with applications in R. By James, Witten, Hastie, and Tibshirani

ISLR-python, By Jordi Warmenhoven .

ISLR-Python: Labs and Applied, by Matt Caudill .

The Elements of Statistical Learning. 2nd Ed. By Hastie, Tibshirani, and Friedman

statlearning-notebooks , by Sujit Pal, Python implementations of the R labs for the StatLearning: Statistical Learning online course from Stanford taught by Profs Trevor Hastie and Rob Tibshirani.

Instructors:

Time and Venue:

TuTh 4:30-5:50pm, Rm 4504 (Lift 25/26), Academic Bldg. Piazza discussion forum: sign-up link.

Homework and Projects:

Weekly homework, monthly mini-projects, and a final major project. No final exam. Under the 3-project plan, homework and the projects are weighted 20-20-20-40 percent in the grading.

Grading scheme: [ description ]

Teaching Assistant:

Mr. ZHU, Weizhi, Email: statml.hw (add "AT gmail DOT com" afterwards)

Tutorial Material

Schedule

  • [ slides in pdf ]
  • Homework 1 [pdf] . Deadline: 09/28/2015, Monday. Mark on the head of your homework: Name - Student ID .
  • Homework 2 [pdf] . Deadline: 10/12/2015, Monday. Mark on the head of your homework: Name - Student ID .
  • Project 1 [pdf] . Deadline: 10/12/2015, Monday. Team work with no more than FIVE (5) collaborators.
  • Jiechao XIONG, A Dynamic Approach to Variable Selection
  • Homework 3 [pdf] . Deadline: 10/19/2015, Monday. Mark on the head of your homework: Name - Student ID .
  • Homework 4 [pdf] . Deadline: 10/26/2015, Monday. Mark on the head of your homework: Name - Student ID .
  • Homework 5 [pdf] . Deadline: 11/02/2015, Monday. Mark on the head of your homework: Name - Student ID .
  • Xuening ZHU, Network Vector Regression
  • Homework 6 [pdf] . Deadline: 11/09/2015, Monday. Mark on the head of your homework: Name - Student ID .

Datasets (to-be-updated)

  • [Animal Sleep Data] Animal species sleeping hours vs. other features
  • [Anzhen Heart Data] Heart Operation Effect Prediction , provided by Dr. Jinwen Wang, Anzhen Hospital
  • [Beer Data] 877 beers dataset , provided by Mr. Richard Sun, Shanghai
  • [Crime Data] Crime rates in 59 US cities during 1970-1992
  • [Real-Time-Bidding Algorithm Competition Data] Contest Website
  • [Character-Event Matrix for A Dream of Red Mansions] a 376-by-475 matrix (374-by-475, updated by WAN, Mengting) for character-event appearance in A Dream of Red Mansions (Xueqin Cao) [374 Characters.txt (for R/read.table)] [HongLouMeng374.csv] [HongLouMeng376.xls] [.mat] [readme.m]
  • [Keywords Pricing] Keywords and profit index in paid search advertising, by Hansheng Wang (Guanghua, PKU). [readme.txt] [data in csv]
  • [Radon Data] Radon measurements of 12,687 houses in US
  • [Wells Data] Switch unsafe wells for arsenic pollution in Bangladesh
  • to-be-done...

Stat 928, Spring 2011

Statistical Learning Theory

Statistical learning theory studies the statistical aspects of machine learning and automated reasoning through the use of (sampled) data. In particular, the focus is on characterizing the generalization ability of learning algorithms in terms of how well they perform on "new" data when trained on some given data set. The focus of the course is on: providing the fundamental tools used in this analysis; understanding the performance of widely used learning algorithms (with a focus on regression and classification); and understanding the "art" of designing good algorithms, in terms of both statistical and computational properties. Potential topics include: concentration of measure; empirical process theory; online learning; stochastic optimization; margin-based algorithms; feature selection; regularization; PCA.
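A prototypical result of the kind this course builds toward: for a finite hypothesis class H and a loss bounded in [0, 1], Hoeffding's inequality combined with a union bound gives, with probability at least 1 - delta over an i.i.d. sample of size n (a standard statement, not quoted from the course notes),

    \sup_{h \in \mathcal{H}} \left| \widehat{R}_n(h) - R(h) \right|
    \le \sqrt{\frac{\log |\mathcal{H}| + \log(2/\delta)}{2n}},

where \widehat{R}_n(h) is the empirical risk of hypothesis h and R(h) its population risk, so the guarantee on "new" data holds uniformly over the class.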

Prerequisites:

Requirements

Instructor

Time and Location

Schedule and Notes

  • lecture notes pdf
  • lecture 9 notes pdf
  • lecture 10 notes pdf
  • lecture 11 notes pdf
  • lecture 12 notes pdf
  • Review: Norms and Dual Norms
  • lecture 14 notes pdf
  • further reading: S. M. Kakade, S. Shalev-Shwartz, A. Tewari. Regularization Techniques for Learning with Matrices. pdf
  • lecture 16 notes pdf
  • lecture 17 notes pdf
  • lecture 18 notes pdf
  • lecture 19 notes pdf
  • lecture 20 notes pdf
  • lecture 21 notes pdf
  • lecture 22 notes pdf
  • lecture 23 notes pdf
  • lecture 24 notes pdf
  • lecture 25 notes pdf

Advanced Topics in Statistical Learning: Spring 2024

Supplementary Notes

Other Resources

36-708 Statistical Methods for Machine Learning

Instructor: Larry Wasserman
Lecture Time: Tuesday and Thursday 1:30 - 2:50
Lecture Location: POS 152
Office Hour: Tuesdays 12:00 - 1:00, Baker Hall 132G
Office: Baker Hall 132G
Email: [email protected]

TA Information

  • Nic Dalmasso, Email: [email protected], Office Hours: Wednesdays 4-5, PH 223B
  • Boyan Duan, Email: [email protected], Office Hours: Thursdays 12-1, Baker Hall 132 Lounge

Syllabus

Click here for syllabus

Course Description

This course is an advanced course focusing on the intersection of Statistics and Machine Learning. The goal is to study modern methods and the underlying theory for those methods. There are two pre-requisites for this course: 36-705 (Intermediate Statistical Theory) and 36-707 (Regression).

Lecture Notes

  • Review
  • Density Estimation
  • Nonparametric Regression
  • Linear Regression
  • Sparsity
  • Nonparametric Sparsity
  • Linear Classifiers
  • Nonparametric Classifiers
  • Random Forests
  • Clustering
  • Graphical Models
  • Directed Graphical Models
  • Causal Inference
  • Minimax Theory
  • Nonparametric Bayesian Inference
  • Conformal Prediction
  • Differential Privacy
  • Optimal Transport and Wasserstein Distance
  • Two Sample Testing
  • Dimension Reduction
  • Boosting
  • Support Vector Machines
  • Online Learning

Additional Notes (Optional: not covered in class)

  • Linear Classifiers
  • Function Spaces
  • Sparsity

Assignments

Assignments are due on Fridays at 3:00 p.m. Upload your assignment in Canvas. No late assignments will be accepted. If you need an extension due to illness, email me BEFORE the deadline.

  • Homework 1 (due Friday Feb 1, 3:00. Submit a pdf on Canvas)
  • Homework 2 (due Friday Feb 22, 3:00. Submit a pdf on Canvas)
  • Homework 3 (due March 29, 3:00. Submit a pdf on Canvas)
  • Homework 4 (due April 19, 3:00. Submit a pdf on Canvas)

Solutions

  • Homework 1 Solutions
  • Homework 2 Solutions
  • Homework 3 Solutions
  • Homework 4 Solutions


11 Surprising Homework Statistics, Facts & Data


The age-old question of whether homework is good or bad for students is unanswerable because there are so many “it depends” factors.

For example, it depends on the age of the child, the type of homework being assigned, and even the child’s needs.

There are also many conflicting reports on whether homework is good or bad. This is a topic that largely relies on data interpretation for the researcher to come to their conclusions.

To cut through some of the fog, below I’ve outlined some great homework statistics that can help us understand the effects of homework on children.

Homework Statistics List

1. 45% of parents think homework is too easy for their children.

A study by the Center for American Progress found that parents are almost twice as likely to believe their children’s homework is too easy as they are to disagree with that statement.

Here are the figures for math homework:

  • 46% of parents think their child’s math homework is too easy.
  • 25% of parents think their child’s math homework is not too easy.
  • 29% of parents offered no opinion.

Here are the figures for language arts homework:

  • 44% of parents think their child’s language arts homework is too easy.
  • 28% of parents think their child’s language arts homework is not too easy.
  • 28% of parents offered no opinion.

These findings are based on online surveys of 372 parents of school-aged children conducted in 2018.

2. 93% of Fourth Grade Children Worldwide are Assigned Homework

The prestigious worldwide math assessment Trends in International Mathematics and Science Study (TIMSS) took a survey of worldwide homework trends in 2007. Their study concluded that 93% of fourth-grade children are regularly assigned homework, while just 7% never or rarely have homework assigned.

3. 17% of Teens Regularly Miss Homework due to Lack of High-Speed Internet Access

A 2018 Pew Research poll of 743 US teens found that 17%, or about 1 in 6 students, regularly struggled to complete homework because they didn’t have reliable access to the internet.

This figure rose to 25% for Black American teens and 24% for teens whose families have an income of less than $30,000 per year.

4. Parents Spend 6.7 Hours Per Week on their Children’s Homework

A 2018 study of 27,500 parents around the world found that the average amount of time parents spend on homework with their child is 6.7 hours per week. Furthermore, 25% of parents spend more than 7 hours per week on their child’s homework.

American parents spend slightly below average at 6.2 hours per week, while Indian parents spend 12 hours per week and Japanese parents spend 2.6 hours per week.

5. Students in High-Performing High Schools Spend on Average 3.1 Hours per night Doing Homework

A study by Galloway, Conner & Pope (2013) surveyed a sample of 4,317 students from 10 high-performing high schools in upper-middle-class California.

Across these high-performing schools, students self-reported that they did 3.1 hours per night of homework.

Graduates from those schools also ended up going on to college 93% of the time.

6. One to Two Hours is the Optimal Duration for Homework

A 2012 peer-reviewed study in the High School Journal found that students who did between one and two hours of homework per night achieved higher results in tests than any other group.

However, the authors were quick to highlight that this “is an oversimplification of a much more complex problem.” I’m inclined to agree. The more important variable is likely the quality of the homework rather than the time spent on it.

Nevertheless, one result was unequivocal: some homework is better than none at all: “students who complete any amount of homework earn higher test scores than their peers who do not complete homework.”

7. 74% of Teens cite Homework as a Source of Stress

A study by the Better Sleep Council found that homework is a source of stress for 74% of students. Only school grades, at 75%, rated higher in the study.

That figure rises for girls, with 80% of girls citing homework as a source of stress.

Similarly, the study by Galloway, Conner & Pope (2013) found that 56% of students cite homework as a “primary stressor” in their lives.

8. US Teens Spend more than 15 Hours per Week on Homework

The same study by the Better Sleep Council also found that US teens spend over 2 hours per school night on homework, and overall this added up to over 15 hours per week.

Surprisingly, 4% of US teens say they do more than 6 hours of homework per night. That’s almost as much homework as there are hours in the school day.

The only activity that teens self-reported as doing more than homework was engaging in electronics, which included using phones, playing video games, and watching TV.

9. The 10-Minute Rule

The National Education Association (USA) endorses the concept of doing 10 minutes of homework per night per grade.

For example, if you are in 3rd grade, you should do 30 minutes of homework per night. If you are in 4th grade, you should do 40 minutes of homework per night.

However, this ‘rule’ appears not to be grounded in sound research. Nevertheless, it does appear that the benefits of homework (no matter its quality) are likely to wane after two hours (120 minutes) per night, which is where the NEA guideline peaks in grade 12.
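
The guideline itself is simple arithmetic, as the minimal Python sketch below shows (illustrative only: the function name and the explicit 120-minute cap are our own framing of the NEA figures and the two-hour finding above):

    def nea_homework_minutes(grade: int) -> int:
        """Illustrative sketch: 10 minutes of homework per grade level per
        night, capped at 120 minutes, past which benefits likely wane."""
        if not 1 <= grade <= 12:
            raise ValueError("grade must be between 1 and 12")
        return min(10 * grade, 120)

    # Example: grade 3 -> 30 minutes, grade 4 -> 40, grade 12 -> 120 (the cap).
    for grade in (3, 4, 12):
        print(grade, nea_homework_minutes(grade))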

10. 21.9% of Parents are Too Busy for their Children’s Homework

An online poll of nearly 300 parents found that 21.9% are too busy to review their children’s homework. On top of this, 31.6% of parents do not look at their children’s homework because their children do not want their help. For these parents, their children’s unwillingness to accept their support is a key source of frustration.

11. 46.5% of Parents find Homework too Hard

The same online poll of parents of children from grades 1 to 12 also found that many parents struggle to help their children with homework because they find it confusing themselves. Unfortunately, the study did not ask the age of the students, so more data is required here to get a full picture of the issue.


Interpreting the Data

Unfortunately, homework is one of those topics that can be interpreted by different people pursuing differing agendas. All studies of homework have a wide range of variables, such as:

  • What age were the children in the study?
  • What was the homework they were assigned?
  • What tools were available to them?
  • What were the cultural attitudes to homework and how did they impact the study?
  • Is the study replicable?

The more questions we ask about the data, the more we realize that it’s hard to come to firm conclusions about the pros and cons of homework .

Furthermore, questions about the opportunity cost of homework remain. Even if homework is good for children’s test scores, is it worthwhile if the children consequently do less exercise or experience more stress?

Thus, this ends up becoming a largely qualitative exercise. If parents and teachers zoom in on an individual child’s needs, they’ll be able to more effectively understand how much homework a child needs as well as the type of homework they should be assigned.


The debate over whether homework should be banned will not be resolved with these homework statistics. But these facts and figures can help you to pursue a position in a school debate on the topic – and with that, I hope your debate goes well and you develop some great debating skills!

Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.

COMMENTS

  1. An Introduction to Statistical Learning

    An Introduction to Statistical Learning provides a broad and less technical treatment of key topics in statistical learning. This book is appropriate for anyone who wishes to use contemporary tools for data analysis. The first edition of this book, with applications in R (ISLR), was released in 2013. A 2nd Edition of ISLR was published in 2021.

  2. Statistical Learning

    The syllabus includes: linear and polynomial regression, logistic regression and linear discriminant analysis; cross-validation and the bootstrap, model selection and regularization methods (ridge and lasso); nonlinear models, splines and generalized additive models; tree-based methods, random forests and boosting; support-vector machines.

  3. Foundations of Machine Learning

    Bloomberg presents "Foundations of Machine Learning," a training course that was initially delivered internally to the company's software engineers as part of its "Machine Learning EDU" initiative. This course covers a wide variety of topics in machine learning and statistical modeling. The primary goal of the class is to help participants gain ...

  4. PDF CS229T/STAT231: Statistical Learning Theory (Winter 2016)

    methods, and online learning. We will move from very strong assumptions (assuming the data are Gaussian, in asymptotics) to very weak assumptions (assuming the data can be generated by an adversary, in online learning). Kernel methods is a bit of an outlier in this regard; it is more about representational power rather than statistical learning.

  5. Statistics and Probability

    Unit 1 Analyzing categorical data Unit 2 Displaying and comparing quantitative data Unit 3 Summarizing quantitative data Unit 4 Modeling data distributions Unit 5 Exploring bivariate numerical data Unit 6 Study design Unit 7 Probability Unit 8 Counting, permutations, and combinations Unit 9 Random variables Unit 10 Sampling distributions

  6. STAT 430: Basics of Statistical Learning

    STAT 430: Basics of Statistical Learning. Schedule - Homework - Quizzes - Projects - Syllabus - Compass - R4SL. Schedule: Week 1. Monday | 2017.8.28: First day of class! Course overview and syllabus discussion. Materials: Syllabus Slides, Full Syllabus. ISL Videos: Opening Remarks and Examples; Supervised and Unsupervised Learning. Wednesday | 2017.8.30

  7. Statistics and Probability

    This website provides training and tools to help you solve statistics problems quickly, easily, and accurately - without having to ask anyone for help. Online Tutorials Learn at your own pace. Free online tutorials cover statistics, probability, regression, analysis of variance, survey sampling, and matrix algebra - all explained in plain English.

  8. PDF STAT 542: Statistical Learning

    ... Computational Constrained Optimization; R packages; Real world problems! Overview: basic course information, textbook, course website, homework, project, topics and objectives. Textbook: ESL, The Elements of Statistical Learning: Data Mining, Inference, and Prediction by Hastie, T., Tibshirani, R. and Friedman, J. • Required [free PDF]

  9. ECE 543: Statistical Learning Theory (Spring 2021)

    Homework 4 is posted, due by the end of the day on Tuesday, April 27. Recordings of all lectures up to April 15 are now available. March 25. ... Statistical learning theory is a burgeoning research field at the intersection of probability, statistics, computer science, and optimization that studies the performance of computer algorithms for ...

  10. CS578: Statistical Machine Learning

    CS 578: Statistical Machine Learning (2021 Spring). Course Information. When: Mon/Wed 4:30 pm -- 5:45 pm. ... Refreshing knowledge homework: The refreshing knowledge homework is intended to check the prerequisites which are required for the success of this class. This homework contributes 5% to the final score.

  11. Math4432: Statistical Learning

    Homework and Projects: Weekly homework, monthly mini-projects, and a final major project. No final exam. For the 3-project plan, homework and projects will be weighted 20-20-20-40 percent in grading. Grading scheme: [ description ] Teaching Assistant: Mr. ZHU, Weizhi, Email: statml.hw (add "AT gmail DOT com" afterwards). Tutorial Material

  12. Introduction to Statistics I Stanford Online

    Click "ENROLL NOW" to visit Coursera and get more information on course details and enrollment. Stanford's "Introduction to Statistics" teaches you statistical thinking concepts that are essential for learning from data and communicating insights. By the end of the course, you will be able to perform exploratory data analysis, understand ...

  13. Statistical Learning

    Stat 241B / CS 281B. Instructor: Ryan Tibshirani (ryantibs at berkeley dot edu) GSI: Seunghoon Paik (shpaik at berkeley dot edu) Class times: Tuesdays and Thursdays, 3:30-5pm, Tan 180. Office hours: RT: Wednesdays, 3-4pm, Evans 417. SP: Thursdays, 5-6pm, Evans 444.

  14. Stat 928: Statistical Learning Theory, Spring 2011

    Statistical learning theory studies the statistical aspects of machine learning and automated reasoning, through the use of (sampled) data. In particular, the focus is on characterizing the generalization ability of learning algorithms in terms of how well they perform on ``new'' data when trained on some given data set. ... Homework sets ...

  15. An Introduction To Statistical Learning 2nd Edition Textbook ...

    The given question deals with whether each of the following experiments should use a flexible statistical learning method or not. a. The given experiment has an extremely large sample size of , and the number of predictors is small; hence flexible statistical learning can be used, since the large number of parameters present in the model can then be estimated, due to the large number ...

  16. Intro to Statistical Learning Course I Stanford Online

    Prerequisites. A conferred Bachelor's degree with an undergraduate GPA of 3.3 or better. Introductory courses in statistics or probability (STATS60), linear algebra (MATH51), and computer programming (CS105) or equivalents. Students will be required to use R and R Studio (preferred) in this course.

  17. An Introduction to Statistical Learning Solutions Manual

    Understanding An Introduction to Statistical Learning homework has never been easier than with Chegg Study. ... Unlike static PDF An Introduction to Statistical Learning solution manuals or printed answer keys, our experts show you how to solve each problem step-by-step. No need to wait for office hours or assignments to be graded to find out ...

  18. Advanced Topics in Statistical Learning: Spring 2024

    Here is the estimated class schedule. It is subject to change, depending on time and class interests. Homework: Homework 1 (pdf, source); Homework 2; Homework 3; Homework 4. Project. Supplementary notes: Here are some supplementary notes, on some topics adjacent to those from lectures. B-splines: pdf, source

  19. 36-708 Statistical Machine Learning, Spring 2018

    36-708 Statistical Methods for Machine Learning. Click here for syllabus. This course is an advanced course focusing on the intersection of Statistics and Machine Learning. The goal is to study modern methods and the underlying theory for those methods. There are two prerequisites for this course: 36-705 (Intermediate Statistical Theory)

  20. PDF STA 4241 Spring 2022 Statistical Learning in R

    Homework: There will be a homework assignment roughly every two weeks and it will be due via Canvas submission. Exams: You will have one take-home exam that is to be assigned roughly in the middle of the semester. Project: Students will be expected to complete a written project at the end of the semester and present their findings to the class.

  21. 11 Surprising Homework Statistics, Facts & Data (2024)

    A 2018 study of 27,500 parents around the world found that the average amount of time parents spend on homework with their child is 6.7 hours per week. Furthermore, 25% of parents spend more than 7 hours per week on their child's homework.

  22. An Introduction To Statistical Learning 1st Edition Textbook ...

    Step-by-step solution. Step 1 of 4. The given question deals with whether each of the following experiments should use a flexible statistical learning method or not. a. The given experiment has an extremely large sample size of , and the number of predictors is small; hence flexible statistical learning can be used, since the large ...

  23. The Elements Of Statistical Learning 2nd Edition Textbook ...

    The Elements of Statistical Learning, 2nd edition. We have solutions for your book! This problem has been solved: Problem 1E, Chapter CH2. Step-by-step solution. Step 1 of 4. Let , we have to show that classifying a pattern to class i, such that , is the same as classifying the pattern to class i such that the i-th index minimizes . That is,

  24. Statistics for AI, Machine Learning, and Data Science

    The inference portion will introduce common statistical concepts that allow us to understand a population and test hypotheses (such as performing A/B tests and calculating and interpreting p-values). The prediction section will begin with the simplest of algorithms (linear regression) and gradually touch upon more advanced topics, such as ...