CS1156x: Caltech Machine Learning Course
This is an introductory course in machine learning (ML) that covers the basic theory, algorithms, and applications. ML is a key technology in Big Data and in many financial, medical, commercial, and scientific applications. It enables computational systems to automatically learn how to perform a desired task based on information extracted from the data.
ML has become one of the hottest fields of study today, taken up by undergraduate and graduate students from 15 different majors at Caltech.
This course balances theory and practice, and covers the mathematical as well as the heuristic aspects. The lectures follow each other in a story-like fashion:
- What is learning?
- Can a machine learn?
- How to do it?
- How to do it well?
- Take-home lessons.
The topics in the story line are covered by 18 lectures of about 60 minutes each plus Q&A.
- CS1156x: Caltech Machine Learning Course
- Course Prerequisites
- Lecture 1: The Learning Problem
- Lecture 2: Is Learning Feasible?
- Lecture 3: The Linear Model I
- Lecture 4: Error and Noise
- Lecture 5: Training versus Testing
- Lecture 6: Theory of Generalization
- Lecture 7: The VC Dimension
- Lecture 8: Bias-Variance Tradeoff
- Lecture 9: The Linear Model II
- Lecture 10: Neural Networks
- Lecture 11: Overfitting
- Lecture 12: Regularization
- Lecture 13: Validation
- Lecture 14: Support Vector Machines (SVM)
- Lecture 15: Kernel Methods
- Lecture 16: Radial Basis Functions
- Lecture 17: Three Learning Principles
- Lecture 18: Epilogue
- References
Course Prerequisites
- Basic probability
- Matrices (basic linear algebra)
- Calculus
- Familiarity with a programming language or platform will help with the homework.
Lecture 1: The Learning Problem
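This lecture introduces the perceptron learning algorithm (PLA) as a first concrete learning model. A minimal NumPy sketch, assuming inputs carry a leading bias coordinate of 1 and labels in {-1, +1}; the function name `pla` and its interface are illustrative, not taken from the course materials:

```python
import numpy as np

def pla(X, y, max_iters=1000):
    """Perceptron Learning Algorithm: find weights w with sign(X @ w) == y.

    X is an (N, d) array whose first column is all 1s (bias term);
    y is an (N,) array of +/-1 labels. PLA is only guaranteed to
    converge when the data are linearly separable.
    """
    w = np.zeros(X.shape[1])
    for _ in range(max_iters):
        preds = np.sign(X @ w)
        misclassified = np.where(preds != y)[0]
        if misclassified.size == 0:
            return w  # every point is classified correctly
        i = misclassified[0]
        w = w + y[i] * X[i]  # nudge w toward the misclassified point
    return w
```

On linearly separable data the loop terminates with a separating weight vector; on non-separable data it simply stops after `max_iters` updates.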
Lecture 2: Is Learning Feasible?
Lecture 3: The Linear Model I
Lecture 4: Error and Noise
Lecture 5: Training versus Testing
Lecture 6: Theory of Generalization
Lecture 7: The VC Dimension
Lecture 8: Bias-Variance Tradeoff
Lecture 9: The Linear Model II
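This lecture extends the linear model to logistic regression, trained by gradient descent on the cross-entropy error. A minimal sketch under the course's ±1 label convention; the name `logistic_gd` and the learning-rate/iteration defaults are illustrative:

```python
import numpy as np

def logistic_gd(X, y, eta=0.1, iters=1000):
    """Logistic regression via batch gradient descent.

    Minimizes the cross-entropy error (1/N) * sum_n ln(1 + exp(-y_n w.x_n))
    for labels y_n in {-1, +1}; the gradient of that error is
    -(1/N) * sum_n y_n x_n / (1 + exp(y_n w.x_n)).
    """
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        s = y * (X @ w)  # signed margins y_n * (w . x_n)
        grad = -(y[:, None] * X / (1 + np.exp(s))[:, None]).mean(axis=0)
        w -= eta * grad  # step against the gradient
    return w
```

Unlike PLA, this produces a probabilistic output via the logistic function theta(s) = 1 / (1 + e^{-s}); thresholding sign(X @ w) recovers hard classifications.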
Lecture 10: Neural Networks
Lecture 11: Overfitting
Lecture 12: Regularization
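This lecture covers regularization via weight decay, which for linear regression has the closed-form solution w_reg = (X^T X + lambda I)^{-1} X^T y. A minimal sketch; the name `ridge_fit` and the default `lam` are illustrative:

```python
import numpy as np

def ridge_fit(X, y, lam=0.1):
    """Linear regression with weight decay (ridge regression).

    Solves (X^T X + lam * I) w = X^T y; lam = 0 recovers
    ordinary least squares, larger lam shrinks the weights.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

Increasing `lam` trades a worse fit on the training data for smaller weights, which combats overfitting (the subject of Lecture 11).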
Lecture 13: Validation
Lecture 14: Support Vector Machines (SVM)
Lecture 15: Kernel Methods
Lecture 16: Radial Basis Functions
Lecture 17: Three Learning Principles
Lecture 18: Epilogue
References
- Caltech Machine Learning Library