Day 5 - Supervised Learning - Logistic Regression

Posted on June 2, 2017 by Govind Gopakumar

Lecture slides in PDF form

Prelude

Announcements

Recap

Our first regression model

Matrix factorization as regression
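A one-line version of this recap idea (my paraphrase, not the slide's exact notation): matrix factorization looks for low-rank factors of a data matrix, and once one factor is held fixed, reconstructing each row is an ordinary least-squares regression.

$$ \min_{U, V} \; \big\| X - U V^\top \big\|_F^2 $$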

Probabilistic Classification

Logistic Regression - I

Why do we need this?

Idea from linear regression
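To summarize the idea in standard notation (an assumption about the slide's content, based on these titles): linear regression produces an unbounded score $w^\top x \in \mathbb{R}$, while for binary classification we want something we can read as a probability in $[0, 1]$. The sigmoid does exactly this squashing:

$$ \sigma(z) = \frac{1}{1 + e^{-z}}, \qquad \sigma : \mathbb{R} \to (0, 1) $$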

Logistic Regression - II

Model overview

Interpretation?
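The model, in the standard form these titles point to:

$$ p(y = 1 \mid x, w) = \sigma(w^\top x) = \frac{1}{1 + \exp(-w^\top x)} $$

The output is interpreted as the probability of the positive class, and $w^\top x = 0$ gives the decision boundary: a hyperplane, just as in linear regression.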

Logistic Regression - III

Learning

Geometry of the solution

Logistic Regression - IV

Learning the parameter

Problems with the squared loss
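The standard objection this title refers to: plugging the sigmoid into the squared loss,

$$ L_{\text{sq}}(w) = \sum_{n=1}^{N} \big( y_n - \sigma(w^\top x_n) \big)^2, $$

gives an objective that is non-convex in $w$, and its gradients shrink toward zero exactly when the model is confidently wrong, so learning stalls.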

Logistic Regression - V

Constructing a loss

Two-way loss
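Written out case by case (the "two-way" construction, in standard notation):

$$ \ell(w; x_n, y_n) = \begin{cases} -\log \sigma(w^\top x_n) & \text{if } y_n = 1 \\ -\log\big(1 - \sigma(w^\top x_n)\big) & \text{if } y_n = 0 \end{cases} $$

Each branch is small when the predicted probability agrees with the label, and blows up as it approaches the wrong extreme.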

Logistic Regression - VI

Final cross-entropy loss

Loss function
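The two branches combine into a single expression, the cross-entropy loss (standard form; writing $\mu_n = \sigma(w^\top x_n)$):

$$ L(w) = -\sum_{n=1}^{N} \Big[ y_n \log \mu_n + (1 - y_n) \log (1 - \mu_n) \Big] $$

When $y_n = 1$ only the first term survives, and when $y_n = 0$ only the second, recovering the two-way loss above.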

Logistic Regression - VII

Optimizing this loss

Final expression
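The gradient comes out remarkably clean (a standard derivation, using $\sigma'(z) = \sigma(z)(1 - \sigma(z))$):

$$ \nabla_w L(w) = \sum_{n=1}^{N} \big( \mu_n - y_n \big)\, x_n $$

Setting this to zero has no closed-form solution, because $\mu_n$ depends on $w$ through the sigmoid; that is why an iterative method is needed.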

Logistic Regression - VIII

Gradient descent

Analyzing the update step
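A minimal NumPy sketch of the update $w \leftarrow w - \eta \nabla_w L(w)$ on the cross-entropy loss; the learning rate `eta` and iteration count are illustrative choices, not values from the lecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_gd(X, y, eta=0.1, n_iters=1000):
    """Batch gradient descent on the cross-entropy loss.

    X: (N, D) feature matrix, y: (N,) labels in {0, 1}.
    """
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        mu = sigmoid(X @ w)             # predicted probabilities
        grad = X.T @ (mu - y) / len(y)  # averaged cross-entropy gradient
        w -= eta * grad                 # step against the gradient
    return w
```

Note the shape of each step: every example pulls $w$ toward or away from its feature vector $x_n$, weighted by how wrong the current prediction $\mu_n$ is.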

Gradient Descent - I

Improving gradient descent

Speeding up gradient descent

Gradient Descent - II

Mini-batch Gradient Descent

Stochastic Gradient Descent
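A hedged sketch of the stochastic variants: rather than the full-data gradient, each step uses a small random batch (mini-batch) or a single example (`batch_size=1`, plain SGD). The hyperparameters are again illustrative:

```python
import numpy as np

def fit_logistic_sgd(X, y, eta=0.1, n_epochs=20, batch_size=32, seed=0):
    """Mini-batch SGD on the cross-entropy loss; batch_size=1 gives plain SGD."""
    rng = np.random.default_rng(seed)
    N = len(y)
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        order = rng.permutation(N)                    # reshuffle every epoch
        for start in range(0, N, batch_size):
            b = order[start:start + batch_size]
            mu = 1.0 / (1.0 + np.exp(-X[b] @ w))
            w -= eta * X[b].T @ (mu - y[b]) / len(b)  # noisy gradient step
    return w
```

Each step is cheap and noisy; averaged over many steps the noise washes out, which is what makes this practical on large datasets.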

Logistic Regression via Probability - I

Choosing a likelihood

Doing “Maximum” probability
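In the probabilistic view, each label is a Bernoulli draw with mean $\mu_n = \sigma(w^\top x_n)$:

$$ p(y_n \mid x_n, w) = \mu_n^{y_n} (1 - \mu_n)^{1 - y_n} $$

so the log-likelihood of the data is

$$ \log p(\mathbf{y} \mid \mathbf{X}, w) = \sum_{n=1}^{N} \Big[ y_n \log \mu_n + (1 - y_n) \log (1 - \mu_n) \Big], $$

and maximizing it is exactly minimizing the cross-entropy loss: the two derivations land on the same objective.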

Logistic Regression - IX

Multiclass

Comments
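For $K$ classes the sigmoid generalizes to the softmax, with one weight vector per class (standard multiclass logistic regression):

$$ p(y = k \mid x, W) = \frac{\exp(w_k^\top x)}{\sum_{j=1}^{K} \exp(w_j^\top x)} $$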

Yet another classifier

Perceptron - I

Extending the Logistic model

Analyzing the new update
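The perceptron update these titles point to, in its standard form with labels $y_n \in \{-1, +1\}$: whenever an example is misclassified, i.e. $y_n \, w^\top x_n \le 0$, update

$$ w \leftarrow w + y_n x_n, $$

and leave $w$ untouched otherwise. Compare with the logistic gradient step: there every example contributes a fraction $(\mu_n - y_n)$ of its feature vector, here a mistake contributes the full vector and a correct example contributes nothing.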

Perceptron - II

Mistake-driven learning

Geometry of the classifier
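A minimal sketch of mistake-driven learning, assuming labels in $\{-1, +1\}$ and a fixed number of passes over the data:

```python
import numpy as np

def fit_perceptron(X, y, n_epochs=10):
    """Classic perceptron; X: (N, D), y: (N,) labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for x_n, y_n in zip(X, y):
            if y_n * (w @ x_n) <= 0:  # mistake: on (or past) the wrong side of the hyperplane
                w += y_n * x_n        # nudge w toward the misclassified point
    return w
```

If the data are linearly separable this converges after finitely many mistakes; otherwise it keeps cycling, which is the practical caveat.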

Perceptron - III

Significance of Perceptrons

Usage of perceptrons

Halfway round-up

General techniques

Loss functions

Probability method

Methods discussed

Classification

Regression

Agenda for next week

Unsupervised Learning and Advanced methods

Greater focus on programming

Conclusion

Concluding Remarks

Takeaways

Announcements

References