You may have a good fundamental understanding of machine learning, but are concepts like batch size in gradient descent clear to you?
What is the difference between boosting and bagging?
What are the different assumptions in different algorithms?
Why use regularisation?
Any pointers from real-life use cases?
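To give a flavour of the bagging-versus-boosting question above, here is a minimal NumPy-only sketch (my own illustrative example, not taken from the notes): both ensembles are built from simple regression stumps, but bagging trains them independently on bootstrap resamples and averages, while boosting fits each stump to the residual left by the previous ones.

```python
import numpy as np

def fit_stump(x, y):
    # Regression stump: best single threshold minimizing squared error,
    # predicting the mean of y on each side of the split.
    best = None
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, left_mean, right_mean = best
    return lambda q: np.where(q <= t, left_mean, right_mean)

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, 100)

# Bagging: independent stumps on bootstrap resamples, predictions averaged.
stumps = []
for _ in range(50):
    idx = rng.integers(0, len(x), len(x))
    stumps.append(fit_stump(x[idx], y[idx]))
bag_pred = np.mean([s(x) for s in stumps], axis=0)

# Boosting: each stump fits the residual of the ensemble so far.
boost_pred = np.zeros_like(y)
for _ in range(50):
    s = fit_stump(x, y - boost_pred)
    boost_pred += 0.3 * s(x)  # 0.3 is an illustrative learning rate

print("bagging MSE:", np.mean((y - bag_pred) ** 2))
print("boosting MSE:", np.mean((y - boost_pred) ** 2))
```

The intuition the example shows: bagging mainly reduces variance, so averaging many high-bias stumps still cannot track the sine curve, whereas boosting reduces bias by letting each new stump correct the mistakes of the ensemble so far.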
The notes are a single large PDF file, broken into the following sections:
- Basic motivation and gradient descent
- Modelling and data representation
- Regularisation and the need for it
- Different algorithms, including logistic regression, bagging, boosting, and a touch of neural networks
- Real-life rules of thumb
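As a taste of the gradient descent material, and of the batch-size question raised at the top, here is a minimal mini-batch gradient descent sketch (my own illustrative example, with made-up data and hyperparameters, not code from the notes): it fits a one-variable linear model by shuffling the data each epoch and updating on batches of 32 points.

```python
import numpy as np

# Synthetic data from y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * X[:, 0] + 1.0 + rng.normal(0, 0.05, size=200)

w, b = 0.0, 0.0
lr, batch_size = 0.1, 32  # illustrative choices

for epoch in range(200):
    idx = rng.permutation(len(X))  # reshuffle each epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = X[batch, 0], y[batch]
        err = w * xb + b - yb
        # Gradient of mean squared error on this batch only.
        w -= lr * 2 * np.mean(err * xb)
        b -= lr * 2 * np.mean(err)

print(w, b)  # should land near 2 and 1
```

Shrinking `batch_size` makes each update noisier but cheaper; growing it toward the full dataset recovers plain (full-batch) gradient descent.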
The images are high quality. The notes are best read by downloading them, though they can also be read with Gumroad's built-in reader.
Algorithms, Regularisation, Data Modelling, Gradient Descent, Real Life Rules of Thumb