
CS229 Course Notes

Intro

This course provides a broad introduction to machine learning and statistical pattern recognition. Topics include: supervised learning (generative/discriminative learning, parametric/non-parametric learning, neural networks, support vector machines); unsupervised learning (clustering, dimensionality reduction, kernel methods); learning theory (bias/variance tradeoffs, practical advice); and reinforcement learning and adaptive control. The course also discusses recent applications of machine learning, such as robotic control, data mining, autonomous navigation, bioinformatics, speech recognition, and text and web data processing.

Resources (Fall 2018)

Syllabus and materials: Machine Learning

YouTube videos: https://www.youtube.com/watch?v=jGwO_UgTS7I&list=PLoROMvodv4rMiGQp3WXShtMGgzqpfVfbU

Bilibili videos: [Stanford] CS229 Machine Learning · 2018 (complete, with machine-translated Chinese/English subtitles), on bilibili

Course Notes

The notes are based on the 2018 lecture videos and the 2020 syllabus. They are organized into six parts: review of the basics, supervised learning, neural networks, unsupervised learning, reinforcement learning, and learning theory.

(Links to the notes will be updated continuously.)

3.0 Review of the Basics

  • CS229 Machine Learning - Basics - Linear Algebra and Calculus
  • CS229 Machine Learning - Basics - Probability Theory
  • CS229 Machine Learning - Basics - Python and NumPy
  • CS229 Machine Learning - Basics - Evaluation Metrics

3.1 Supervised Learning

Supervised learning: linear regression and gradient descent, logistic regression, generalized linear models, Gaussian discriminant analysis, naive Bayes, support vector machines, kernel methods, decision trees and ensemble methods

3.2 Neural Networks

Neural networks (in the course these properly belong to supervised learning; we split them out here because deep learning is very popular right now and serves as a general-purpose tool):

  • CS229 Machine Learning - Neural Networks - Forward Propagation
  • CS229 Machine Learning - Neural Networks - Backpropagation
  • CS229 Machine Learning - Neural Networks - Deep Learning

3.3 Unsupervised Learning

Unsupervised learning: k-means, Gaussian mixture models, the expectation-maximization (EM) algorithm, factor analysis, principal component analysis, independent component analysis

  • CS229 Machine Learning - Unsupervised Learning - k-Means
  • CS229 Machine Learning - Unsupervised Learning - Gaussian Mixture Models and their EM Algorithm
  • CS229 Machine Learning - Unsupervised Learning - The EM Algorithm
  • CS229 Machine Learning - Unsupervised Learning - Factor Analysis
  • CS229 Machine Learning - Unsupervised Learning - Principal Component Analysis
  • CS229 Machine Learning - Unsupervised Learning - Independent Component Analysis

3.4 Reinforcement Learning

Reinforcement learning: Markov decision processes, value/policy iteration, policy gradients

  • CS229 Machine Learning - Reinforcement Learning - Markov Decision Processes
  • CS229 Machine Learning - Reinforcement Learning - Value Iteration and Policy Iteration
  • CS229 Machine Learning - Reinforcement Learning - Linear Control Systems
  • CS229 Machine Learning - Reinforcement Learning - Policy Gradients

3.5 Learning Theory

Learning theory: the bias/variance tradeoff, practical advice

  • CS229 Machine Learning - Learning Theory - Bias/Variance Tradeoff
  • CS229 Machine Learning - Learning Theory - Practical Advice for Machine Learning

References

Official CS229 lecture notes: CS229: Machine Learning

Chinese translation of the CS229 lecture notes: https://github.com/Kivy-CN/Stanford-CS-229-CN/ (CycleUser: Stanford CS229 machine learning lecture notes, translation complete)

The CS229 part of 口仆's study notes: 口仆的学习笔记

Supervised Learning Terms

  • Regression
  • Classification
  • Terms:
    • Input features: $x^{(i)}$
    • Output target: $y^{(i)}$
    • Training example: a pair $(x^{(i)}, y^{(i)})$
    • Training set: a list of $n$ training examples $\{(x^{(i)}, y^{(i)});\ i = 1, \dots, n\}$
    • $\mathcal{X}$ denotes the space of input values and $\mathcal{Y}$ the space of output values. In this example, $\mathcal{X} = \mathcal{Y} = \mathbb{R}$.
    • Hypothesis: $h: \mathcal{X} \mapsto \mathcal{Y}$, so that $h(x)$ is a “good” predictor for the corresponding value of $y$
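
To make the notation concrete, here is a tiny NumPy sketch (our own illustration, not from the lecture notes; the numbers are invented) of a training set stored as parallel arrays of inputs $x^{(i)}$ and targets $y^{(i)}$:

```python
import numpy as np

# A made-up training set: living area (square feet) -> price ($1000s).
x = np.array([2104.0, 1600.0, 2400.0, 1416.0, 3000.0])  # inputs x^(i)
y = np.array([400.0, 330.0, 369.0, 232.0, 540.0])       # targets y^(i)

n = len(x)  # number of training examples
for i in range(n):
    print(f"example {i + 1}: (x, y) = ({x[i]}, {y[i]})")
```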

Housing Example Jump Start

First, approximate $y$ as a linear function of $x$:
$$
h_\theta(x)=\theta_0+\theta_1 x_1+\theta_2 x_2
$$

  • $\theta_i$'s: parameters/weights, parameterizing the space of linear functions mapping from $\mathcal{X}$ to $\mathcal{Y}$

  • $h(x)$: when there is no risk of confusion, we drop the $\theta$ subscript in $h_\theta(x)$ and write simply $h(x)$

Let $x_0 = 1$ (the intercept term), so that $$h(x)=\sum_{i=0}^{d}\theta_i x_i = \theta^T x,$$

  • where on the right-hand side above we are viewing $\theta$ and $x$ both as vectors, and here $d$ is the number of input variables (not counting $x_0$).

How, then, do we pick the parameters $\theta$?

  • Make $h(x)$ close to $y$, at least for the training examples we have.

  • To formalize this, we define a function that measures, for each value of the $\theta$'s, how close the $h(x^{(i)})$'s are to the corresponding $y^{(i)}$'s. We define the cost function: $J(\theta)=\tfrac{1}{2}\sum_{i=1}^{n}\big(h_\theta(x^{(i)})-y^{(i)}\big)^2$

  • This gives rise to the ordinary least squares regression model.
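
As a quick illustration, here is a minimal NumPy sketch of the hypothesis and this cost function. The design-matrix convention (an $(n, d+1)$ array `X` whose first column of ones supplies the intercept term $x_0 = 1$) is our own choice here, and the numbers are made up:

```python
import numpy as np

def h(theta, X):
    """Hypothesis h_theta(x) = theta^T x, applied to every row of X."""
    return X @ theta

def J(theta, X, y):
    """Cost function J(theta) = 1/2 * sum_i (h_theta(x^(i)) - y^(i))^2."""
    residuals = h(theta, X) - y
    return 0.5 * np.sum(residuals ** 2)

# Made-up data: the first column of X is the intercept term x_0 = 1.
X = np.array([[1.0, 2104.0, 3.0],
              [1.0, 1600.0, 3.0],
              [1.0, 2400.0, 3.0]])
y = np.array([400.0, 330.0, 369.0])
print(J(np.zeros(X.shape[1]), X, y))  # cost at the initial guess theta = 0
```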

1. LMS Algorithm

We want to choose $\theta$ so as to minimize $J(\theta)$. Consider a search algorithm that starts with some “initial guess” for $\theta$ and repeatedly changes $\theta$ to make $J(\theta)$ smaller, until we hopefully converge to a value of $\theta$ that minimizes $J(\theta)$.

Specifically, let’s consider the gradient descent algorithm, which starts with some initial $θ$, and repeatedly performs the update:
$$
\theta_j := \theta_j-\alpha\tfrac{\partial}{\partial\theta_j}J(\theta).
$$

This update is performed simultaneously for all values of $j = 0, \dots, d$.

  • $\alpha$: Learning rate

To implement this algorithm, we have to work out the partial derivative term on the right-hand side.

  • Let's first work it out for the case where we have only one training example $(x, y)$, so that we can neglect the sum in the definition of $J$. We have:

$$
\begin{aligned}
\tfrac{\partial}{\partial\theta_j}J(\theta) &= \tfrac{\partial}{\partial\theta_j}\tfrac{1}{2}(h_\theta(x)-y)^2 \\
&= 2\cdot\tfrac{1}{2}(h_\theta(x)-y)\cdot\tfrac{\partial}{\partial\theta_j}(h_\theta(x)-y) \\
&= (h_\theta(x)-y)\cdot\tfrac{\partial}{\partial\theta_j}\Big(\sum_{i=0}^{d}\theta_i x_i-y\Big) \\
&= (h_\theta(x)-y)\,x_j
\end{aligned}
$$

For a single training example, this gives the update rule:
$$
\theta_j := \theta_j+\alpha(y^{(i)}-h_\theta(x^{(i)}))x^{(i)}_{j}
$$
This rule is called the LMS update rule (LMS stands for “least mean squares”), and is also known as the Widrow-Hoff learning rule.

  • This rule has several properties that seem natural and intuitive.
  • For instance, the magnitude of the update is proportional to the error term $(y^{(i)} - h_\theta(x^{(i)}))$; thus, if we encounter a training example on which our prediction nearly matches the actual value of $y^{(i)}$, there is little need to change the parameters;
  • in contrast, a larger change to the parameters will be made if our prediction $h_\theta(x^{(i)})$ has a large error (i.e., if it is very far from $y^{(i)}$).
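
In code, one LMS step is a one-liner. The sketch below is our own (with a made-up example, not from the notes) and assumes `x_i` already includes the intercept entry $x_0 = 1$:

```python
import numpy as np

def lms_step(theta, x_i, y_i, alpha):
    """One LMS (Widrow-Hoff) update on a single example (x^(i), y^(i)):

        theta_j := theta_j + alpha * (y^(i) - h_theta(x^(i))) * x_j^(i)

    performed for all j at once via vector arithmetic.
    """
    error = y_i - x_i @ theta            # error term y^(i) - h_theta(x^(i))
    return theta + alpha * error * x_i   # update is proportional to the error

# One step from theta = 0 on a single made-up example.
theta = lms_step(np.zeros(3), np.array([1.0, 2.0, 3.0]), y_i=5.0, alpha=0.01)
print(theta)  # [0.05 0.1  0.15]
```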

We derived the LMS rule for a single training example. There are two ways to modify this method for a training set of more than one example.

The first is batch gradient descent.

  • We replace the single-example update with the following algorithm:

$$
\begin{aligned}
&\text{Repeat until convergence } \{ \\
&\quad \theta_j := \theta_j+\alpha\sum_{i=1}^{n}\big(y^{(i)}-h_\theta(x^{(i)})\big)x_j^{(i)} \quad \text{(for every } j\text{)} \\
&\}
\end{aligned}
$$

  • By grouping the updates of the coordinates into an update of the vector $\theta$, we can rewrite this as:

$$
\theta := \theta+\alpha\sum_{i=1}^{n}(y^{(i)}-h_\theta(x^{(i)}))x^{(i)}
$$

  • One can easily verify that the quantity in the summation in the update rule above is just $-\tfrac{\partial J(\theta)}{\partial\theta_j}$ (for the original definition of $J$), so this is simply gradient descent on the original cost function $J$. This method looks at every example in the entire training set on every step, and is called batch gradient descent. Note that, while gradient descent can be susceptible to local minima in general, the optimization problem posed here for linear regression has only one global optimum and no other local optima, because $J$ is a convex quadratic function; gradient descent therefore converges to the global minimum (assuming the learning rate $\alpha$ is not too large).
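
Here is a minimal NumPy sketch of batch gradient descent, under the same design-matrix convention as above; `alpha` and `num_iters` are placeholder hyperparameters of ours (in practice, inputs are usually rescaled so that a fixed learning rate behaves well):

```python
import numpy as np

def batch_gradient_descent(X, y, alpha=1e-2, num_iters=1000):
    """Batch gradient descent for the least-squares cost J(theta).

    Every iteration uses the entire training set:
        theta := theta + alpha * sum_i (y^(i) - h_theta(x^(i))) * x^(i)
    X is (n, d+1) with a leading column of ones; y has shape (n,).
    """
    theta = np.zeros(X.shape[1])                # initial guess theta = 0
    for _ in range(num_iters):
        errors = y - X @ theta                  # all n error terms at once
        theta = theta + alpha * (X.T @ errors)  # vectorized sum over i
    return theta
```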

The second is stochastic gradient descent, also called incremental gradient descent:
$$
\begin{aligned}
&\text{Loop } \{ \\
&\quad \text{for } i=1 \text{ to } n \text{ } \{ \\
&\qquad \theta_j := \theta_j+\alpha\big(y^{(i)}-h_\theta(x^{(i)})\big)x_j^{(i)} \quad \text{(for every } j\text{)} \\
&\quad \} \\
&\}
\end{aligned}
$$
By grouping the updates of the coordinates into an update of the vector $\theta$, we can rewrite this update in a slightly more succinct way:
$$
\theta := \theta+\alpha(y^{(i)}-h_\theta(x^{(i)}))x^{(i)}
$$
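
A corresponding sketch of stochastic gradient descent; since each update touches only one example, it is often preferred when $n$ is large. Shuffling the visit order each pass is a common heuristic we add here, not part of the pseudocode above:

```python
import numpy as np

def stochastic_gradient_descent(X, y, alpha=1e-2, num_epochs=10, seed=0):
    """Stochastic (incremental) gradient descent for least squares:

        theta := theta + alpha * (y^(i) - h_theta(x^(i))) * x^(i)

    updating theta after each single training example.
    """
    rng = np.random.default_rng(seed)
    theta = np.zeros(X.shape[1])
    for _ in range(num_epochs):
        for i in rng.permutation(X.shape[0]):    # shuffled visit order
            error = y[i] - X[i] @ theta
            theta = theta + alpha * error * X[i]
    return theta
```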

2. The Normal Equations