Learnability in Machine Learning
- Input : $\mathbf{x}$ (customer application)
- Output : $y$ (good/bad customer)
- Target function : $f:\mathcal{X}\rightarrow\mathcal{Y}$ (ideal credit approval formula)
- Data : $(\mathbf{x}_1, y_1),(\mathbf{x}_2, y_2),\cdots,(\mathbf{x}_N, y_N)$ (historical records of credit customers)
- Hypothesis set : $\mathcal{H}=\{h\}$. It plays a pivotal role. It can be a set of linear models, a neural network, a support vector machine, etc.
- Hypothesis : $g:\mathcal{X}\rightarrow\mathcal{Y}$, $g \in \mathcal{H}$. We hope that $g$ approximates $f$ well; that is the goal of learning.
- Learning algorithm : $\mathcal{A}$ (e.g. back-propagation for a neural network). It searches $\mathcal{H}$ and produces $g$.
- The hypothesis set and the learning algorithm $(\mathcal{H}, \mathcal{A})$ together are known as the learning model.
- We don't know $f$; we can only infer it from the data. The learning algorithm picks $g\simeq f$ from the hypothesis set $\mathcal{H}$ (see the sketch below).
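
A minimal sketch of these components, assuming a hypothesis set of linear classifiers with the perceptron learning algorithm as $\mathcal{A}$; the "credit" data here is synthetic and purely illustrative, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data: N historical records (x_n, y_n). Here x is a 2-feature "application"
# and y = +1 (good customer) / -1 (bad customer), generated by a hidden target f
# that we would not have access to in practice.
N = 200
X = rng.uniform(-1, 1, size=(N, 2))
w_target = np.array([0.5, 1.0, -0.7])      # the unknown target f (illustrative)
X_aug = np.hstack([np.ones((N, 1)), X])    # prepend a bias coordinate
y = np.sign(X_aug @ w_target)

# Hypothesis set H: all linear classifiers h(x) = sign(w . x).
# Learning algorithm A: the perceptron learning algorithm, which searches H
# and returns a final hypothesis g (represented by its weight vector).
def perceptron(X_aug, y, max_iters=10_000):
    w = np.zeros(X_aug.shape[1])
    for _ in range(max_iters):
        preds = np.sign(X_aug @ w)
        misclassified = np.flatnonzero(preds != y)
        if misclassified.size == 0:          # g agrees with the data; stop
            break
        i = rng.choice(misclassified)        # pick one misclassified point
        w = w + y[i] * X_aug[i]              # perceptron update rule
    return w

w_g = perceptron(X_aug, y)                   # g, the algorithm's approximation of f
print("in-sample accuracy of g:", np.mean(np.sign(X_aug @ w_g) == y))
```

The hypothesis set here (linear classifiers) and the algorithm (PLA) together form one possible learning model $(\mathcal{H}, \mathcal{A})$; swapping in a different $\mathcal{H}$ or $\mathcal{A}$ changes the model but not the overall setup described above.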
Online Resources
- Machine learning course by Yaser Abu-Mostafa. Especially useful for its introductory discussion on learnability and VC dimension.