Day 2 - Mathematical Background

Posted on May 30, 2017 by Govind Gopakumar

Lecture slides in PDF form

Announcements

Recap

Machine Learning

Divisions in Machine Learning

Overview

Notations

Dealing with data:

Dealing with model:

Mathematics in Machine Learning

Probability

Basics

Definitions

Terms

Random Variables

What are they?

How do we use them?

Distributions - I

Continuous

Discrete

These can be combined together (joint, marginal)
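
As a quick reference: a joint distribution over two variables gives back a marginal by summing out (discrete) or integrating out (continuous) the other variable.

```latex
p(x) = \sum_{y} p(x, y) \quad \text{(discrete)}
\qquad
p(x) = \int p(x, y) \, dy \quad \text{(continuous)}
```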

Distributions - II

Gaussian distribution:

Multivariate Gaussian:
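
For reference, the standard densities (mean mu and variance sigma^2 in one dimension; mean vector mu and covariance matrix Sigma in D dimensions):

```latex
\mathcal{N}(x \mid \mu, \sigma^2)
  = \frac{1}{\sqrt{2\pi\sigma^2}}
    \exp\!\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)

\mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}, \Sigma)
  = \frac{1}{(2\pi)^{D/2} \, |\Sigma|^{1/2}}
    \exp\!\left( -\tfrac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^\top
                 \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right)
```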

Distributions - III

Multiple variables:

Examples in terms of Gaussians:
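
For instance, if x = (x_1, x_2) is jointly Gaussian, both the marginal and the conditional are again Gaussian:

```latex
% Partition the mean into (\mu_1, \mu_2) and the covariance into
% blocks \Sigma_{11}, \Sigma_{12}, \Sigma_{21}, \Sigma_{22}.
p(x_1) = \mathcal{N}(x_1 \mid \mu_1, \Sigma_{11})

p(x_1 \mid x_2) = \mathcal{N}\!\left( x_1 \,\middle|\,
    \mu_1 + \Sigma_{12} \Sigma_{22}^{-1} (x_2 - \mu_2),\;
    \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21} \right)
```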

Bayes theorem - I

Invert the event!

Terms in this expression
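
For reference, the expression and the usual names for its pieces:

```latex
\underbrace{P(A \mid B)}_{\text{posterior}}
  = \frac{\overbrace{P(B \mid A)}^{\text{likelihood}} \;
          \overbrace{P(A)}^{\text{prior}}}
         {\underbrace{P(B)}_{\text{evidence}}}
```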

Bayes theorem - II

Setting

Inverting the event
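
A standard illustration (the numbers here are made up): a disease affects 1% of a population, a test detects it with probability 0.99, and it gives a false positive on healthy people with probability 0.05. Inverting with Bayes theorem:

```latex
P(\text{disease} \mid \text{positive})
  = \frac{0.99 \times 0.01}{0.99 \times 0.01 + 0.05 \times 0.99}
  \approx 0.167
```

Even after a positive test the disease is still unlikely, because the prior is small.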

Statistics

Statistics of a sample - I

Mean of sample

Variances and covariances

Generally, we do not come across other “moments” of the data (skew, kurtosis, etc.) in Machine Learning.
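
For reference, with data points x_1, ..., x_N the first two sample statistics are (the unbiased variant divides by N - 1 instead of N):

```latex
\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i
\qquad
\widehat{\Sigma} = \frac{1}{N} \sum_{i=1}^{N}
    (x_i - \bar{x})(x_i - \bar{x})^\top
```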

Statistics of a sample - II

Of standard distributions

Of a sample
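
A minimal NumPy sketch of these sample statistics, on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))    # 100 samples, 3 features

mean = X.mean(axis=0)            # sample mean, one entry per feature
var = X.var(axis=0, ddof=1)      # unbiased sample variance
cov = np.cov(X, rowvar=False)    # 3 x 3 sample covariance matrix

print(mean, var, cov, sep="\n")
```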

Linear Algebra

Spaces

Constituents:

Utility:

Matrix Algebra

Basics

Properties

Other terms

Eigenvalues

Measures of vectors
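
A small NumPy sketch covering both items, eigenvalues and vector norms:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # symmetric, so eigenvalues are real

# Eigendecomposition: A v = lambda v
eigvals, eigvecs = np.linalg.eigh(A)
print(eigvals)                   # [1. 3.]

v = np.array([3.0, -4.0])
print(np.linalg.norm(v, 1))      # L1 norm: 7.0
print(np.linalg.norm(v))         # L2 norm: 5.0
```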

Functions and Optimization

Function shapes

Convexity

Smoothness and differentiability
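
For reference, a function is convex when it never lies above its chords; for twice-differentiable functions this is equivalent to a positive semidefinite Hessian.

```latex
f(\lambda x + (1 - \lambda) y) \le \lambda f(x) + (1 - \lambda) f(y)
\qquad \text{for all } x, y \text{ and } \lambda \in [0, 1]
```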

Optimization theory

Basics:

Examples of gradients:
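
Two gradients that appear constantly in ML derivations (a is a vector, A a matrix):

```latex
\nabla_x \left( a^\top x \right) = a
\qquad
\nabla_x \left( x^\top A x \right) = (A + A^\top) x
```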

Example of gradient descent

Let’s do gradient descent on this!
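
As a concrete sketch, take the simple quadratic f(x) = (x - 3)^2 (a stand-in choice), whose minimum is at x = 3:

```python
def f(x):
    return (x - 3.0) ** 2

def grad_f(x):
    return 2.0 * (x - 3.0)       # derivative of f

x = 0.0                          # starting point
lr = 0.1                         # learning rate (step size)
for step in range(50):
    x -= lr * grad_f(x)          # move against the gradient

print(x)                         # approximately 3.0
```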

Modelling

Probabilistic modelling

Coin tossing: model

MLE modelling
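
A sketch of the usual derivation, assuming a Bernoulli model with head-probability theta and h heads out of n independent tosses:

```latex
L(\theta) = \theta^{h} (1 - \theta)^{n - h}
\qquad
\log L(\theta) = h \log\theta + (n - h) \log(1 - \theta)

\frac{d}{d\theta} \log L(\theta)
  = \frac{h}{\theta} - \frac{n - h}{1 - \theta} = 0
\;\Rightarrow\;
\hat{\theta}_{\text{MLE}} = \frac{h}{n}
```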

Conclusion

Takeaways

References

Next Lecture overview

Our first classifier

Naive method of doing classification?

Formal “names”

Distance from means - I

Overview of model

Drawbacks and strengths?

Distance from means - II

Coming up with our “decision function”

Geometry of the decision function
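
A minimal sketch of the idea: store one mean per class, and assign a test point to the class whose mean is nearest (function names here are illustrative):

```python
import numpy as np

def fit_class_means(X, y):
    """One mean vector per class, from the training data."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(means, x):
    """Assign x to the class with the nearest mean (Euclidean)."""
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

# Toy data: two well-separated 2-D classes
X = np.array([[0.0, 0.0], [0.5, 0.2], [5.0, 5.0], [5.2, 4.8]])
y = np.array([0, 0, 1, 1])

means = fit_class_means(X, y)
print(predict(means, np.array([4.5, 5.1])))   # -> 1
```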

Distance from means - III

As similarity to training data

What does this mean?
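
One way to see this: expanding the squared distance to a class mean shows the decision depends on the average inner-product similarity between the test point and that class's training points,

```latex
\|x - \mu_c\|^2 = \|x\|^2 - 2 \, x^\top \mu_c + \|\mu_c\|^2
\qquad \text{where} \qquad
x^\top \mu_c = \frac{1}{N_c} \sum_{i : y_i = c} x^\top x_i
```

so, up to class-dependent constants, picking the nearest mean amounts to picking the class whose training points look most similar to x on average.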

KNN - I

Overview of model

Drawbacks and strengths?

KNN - II

Geometry of the decision function

Things to consider for this model
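
A minimal brute-force sketch of KNN, with majority voting over the K closest training points (names illustrative):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Label x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]               # indices of k closest
    return Counter(y_train[nearest]).most_common(1)[0][0]

X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
y = np.array([0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([4.8, 5.0]), k=3))   # -> 1
```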

KNN - III

What is the optimal K?

Extensions to KNN
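
There is no single optimal K; it is usually tuned on held-out data. A short sketch with scikit-learn, assuming it is available (the iris dataset is a stand-in):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Score a few candidate values of K with 5-fold cross-validation
for k in (1, 3, 5, 7, 9):
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5)
    print(k, scores.mean())
```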