The Perceptron predicts the output in each iteration by following this equation:
$$y_j = f[\mathbf{w}^T \mathbf{x}] = f[\vec{w} \cdot \vec{x}] = f[w_0 + w_1 x_1 + w_2 x_2 + \dots + w_n x_n]$$
Note that the weight vector $\vec{w}$ contains the bias term $w_0$, so a constant 1 has to be prepended to the input to keep the dimensions of the dot product consistent: the weights are stored as an $n \times 1$ column vector, which is transposed to a $1 \times n$ row vector and multiplied with the $n \times 1$ input vector to obtain a scalar.
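For concreteness, here is a minimal sketch of that prediction step in NumPy, assuming $f$ is a unit step function (the helper names step and predict are just illustrative, not part of any library):

import numpy as np

def step(z):
    # unit step activation: 1 if z >= 0, else 0
    return 1 if z >= 0 else 0

def predict(w, x):
    # w is a vector [w0, w1, ..., wn] whose first entry is the bias w0;
    # prepend a constant 1 to x so the dot product includes the bias
    x_aug = np.insert(x, 0, 1.0)
    return step(np.dot(w, x_aug))

w = np.array([-3.0, 1.0, 0.5])            # [w0, w1, w2]
print(predict(w, np.array([2.0, 1.0])))   # -3 + 2*1 + 1*0.5 = -0.5 -> 0
print(predict(w, np.array([4.0, 2.0])))   # -3 + 4*1 + 2*0.5 =  2.0 -> 1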
Remember that this prediction is done for every input in the training set. After that, the weight vector is updated to correct the error between the predicted output and the actual output.
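As a sketch of that update, using the standard perceptron learning rule $w \leftarrow w + \eta\,(y - \hat{y})\,x$ (the learning rate eta, the epoch count, and the function name are assumptions for illustration):

import numpy as np

def train_perceptron(X, Y, eta=1.0, epochs=100):
    # prepend a column of ones so that w[0] acts as the bias w0
    X_aug = np.c_[np.ones(len(X)), X]
    w = np.zeros(X_aug.shape[1])
    for _ in range(epochs):
        for x, y in zip(X_aug, Y):
            y_hat = 1 if np.dot(w, x) >= 0 else 0
            # the weights move only when the prediction is wrong
            w += eta * (y - y_hat) * x
    return w

X = np.array([[2, 1], [3, 4], [4, 2], [3, 1]])
Y = np.array([0, 0, 1, 1])
print(train_perceptron(X, Y))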
As for the decision boundary, here is a modification of the scikit-learn code I found here:
import numpy as np
from sklearn.linear_model import Perceptron
import matplotlib.pyplot as plt
X = np.array([[2,1],[3,4],[4,2],[3,1]])
Y = np.array([0,0,1,1])
h = .02 # step size in the mesh
# create a Perceptron instance and fit our data; the data is left
# unscaled to keep the example simple
clf = Perceptron(max_iter=100).fit(X, Y)
# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
fig, ax = plt.subplots()
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
ax.contourf(xx, yy, Z, cmap=plt.cm.Paired)
ax.axis('off')
# Plot also the training points
ax.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)
ax.set_title('Perceptron')
plt.show()
which produces the following plot:
Basically, the idea is to predict a value for every point in a mesh that covers the training points, and then plot each prediction with an appropriate color using contourf.