[Question Title]: Plotting 3D Decision Boundary From Linear SVM
[Posted]: 2016-07-13 22:35:11
[Question]:

I have fitted a dataset with 3 features using sklearn.svm.SVC(). I can plot the point for each observation using matplotlib and Axes3D. I want to plot the decision boundary to see whether the fit is good. I have tried adapting the 2D examples for plotting the decision boundary, to no avail. I understand that clf.coef_ is a vector normal to the decision boundary. How can I plot this to see where it divides the points?
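
For context, a minimal sketch of the setup described in the question (the data here is random and purely illustrative; the real X, y and clf are whatever was fitted):

    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # registers the 3D projection
    from sklearn.svm import SVC

    # X: (n_samples, 3) feature matrix, y: binary labels (illustrative only).
    X = np.random.randn(100, 3)
    y = (X.sum(axis=1) > 0).astype(int)

    clf = SVC(kernel='linear').fit(X, y)
    print(clf.coef_)       # shape (1, 3): normal vector of the separating plane
    print(clf.intercept_)  # shape (1,): offset of the plane

    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=y)
    plt.show()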

[Question Comments]:

    Tags: matplotlib scikit-learn svm


    [Solution 1]:

    Here is an example on a toy dataset. Note that matplotlib's 3D plotting is quirky: sometimes points that lie behind the plane can appear to be in front of it, so you may have to fiddle with rotating the plot to work out what is going on.

    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D
    from sklearn.svm import SVC
    
    rs = np.random.RandomState(1234)
    
    # Generate some fake data.
    n_samples = 200
    # X is the input features by row.
    X = np.zeros((n_samples, 3))
    X[:n_samples // 2] = rs.multivariate_normal( np.ones(3), np.eye(3), size=n_samples // 2)
    X[n_samples // 2:] = rs.multivariate_normal(-np.ones(3), np.eye(3), size=n_samples // 2)
    # Y is the class labels for each row of X.
    Y = np.zeros(n_samples); Y[n_samples // 2:] = 1
    
    # Fit the data with an svm
    svc = SVC(kernel='linear')
    svc.fit(X,Y)
    
    # The equation of the separating plane is given by all x in R^3 such that:
    # np.dot(svc.coef_[0], x) + b = 0. We should solve for the last coordinate
    # to plot the plane in terms of x and y.
    
    z = lambda x,y: (-svc.intercept_[0]-svc.coef_[0][0]*x-svc.coef_[0][1]*y) / svc.coef_[0][2]
    
    tmp = np.linspace(-2,2,51)
    x,y = np.meshgrid(tmp,tmp)
    
    # Plot stuff.
    fig = plt.figure()
    ax  = fig.add_subplot(111, projection='3d')
    ax.plot_surface(x, y, z(x,y))
    ax.plot3D(X[Y==0,0], X[Y==0,1], X[Y==0,2],'ob')
    ax.plot3D(X[Y==1,0], X[Y==1,1], X[Y==1,2],'sr')
    plt.show()
    

    Output: (3D scatter of the two classes with the fitted separating plane)

    Edit (the key linear-algebra statement, pulled out from the comments below):

    # The equation of the separating plane is given by all x in R^3 such that:
    # np.dot(coefficients, x_vector) + intercept_value = 0. 
    # We should solve for the last coordinate: x_vector[2] == z
    # to plot the plane in terms of x and y.
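
    To make that statement concrete, here is a small sanity check (my addition, assuming the svc, x, y and z variables from the code above): every grid point returned by z(x, y) should satisfy the plane equation up to floating-point error.

    # Each grid point (xi, yi, z(xi, yi)) should lie on the separating plane,
    # i.e. np.dot(svc.coef_[0], point) + svc.intercept_[0] should be ~0.
    pts = np.stack([x.ravel(), y.ravel(), z(x, y).ravel()], axis=1)
    residuals = pts @ svc.coef_[0] + svc.intercept_[0]
    print(np.abs(residuals).max())  # expect a value on the order of machine precision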
    

    [Comments]:

    • Thanks for the excellent answer and the very clear explanation! I had misunderstood what the intercept means!
    • Thanks a lot, Chester. Just one small bug/typo: (-svc.intercept_[0]-svc.coef_[0][0]*x-svc.coef_[0][1]*y) / svc.coef_[0][2]
    • For Python 3, the index divisions should use the '//' operator so that you get integer (floor) division.
    • Amazingly clear answer. The comment about the linear algebra isn't written down anywhere else. I've set it apart at the end and tried to make it more general.
    [Solution 2]:

    You cannot visualize the decision surface for many features, because the dimensionality would be too high to visualize an N-dimensional surface.

    However, you can use 2 features and plot a nice decision surface, as shown below.

    I have also written an article about this here: https://towardsdatascience.com/support-vector-machines-svm-clearly-explained-a-python-tutorial-for-classification-problems-29c539f3ad8?source=friends_link&sk=80f72ab272550d76a0cc3730d7c8af35

    Case 1: 2D plot for 2 features, using the iris dataset

    from sklearn.svm import SVC
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn import svm, datasets
    
    iris = datasets.load_iris()
    X = iris.data[:, :2]  # we only take the first two features.
    y = iris.target
    
    def make_meshgrid(x, y, h=.02):
        # Build a dense grid with step h covering the data range (plus a 1-unit border).
        x_min, x_max = x.min() - 1, x.max() + 1
        y_min, y_max = y.min() - 1, y.max() + 1
        xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
        return xx, yy
    
    def plot_contours(ax, clf, xx, yy, **params):
        # Predict the class of every grid point and draw filled contours.
        Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
        Z = Z.reshape(xx.shape)
        out = ax.contourf(xx, yy, Z, **params)
        return out
    
    model = svm.SVC(kernel='linear')
    clf = model.fit(X, y)
    
    fig, ax = plt.subplots()
    # title for the plots
    title = ('Decision surface of linear SVC ')
    # Set-up grid for plotting.
    X0, X1 = X[:, 0], X[:, 1]
    xx, yy = make_meshgrid(X0, X1)
    
    plot_contours(ax, clf, xx, yy, cmap=plt.cm.coolwarm, alpha=0.8)
    ax.scatter(X0, X1, c=y, cmap=plt.cm.coolwarm, s=20, edgecolors='k')
    ax.set_ylabel('y label here')
    ax.set_xlabel('x label here')
    ax.set_xticks(())
    ax.set_yticks(())
    ax.set_title(title)
    ax.legend()
    plt.show()
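
    As an optional side note (my addition, not part of the original answer): for a binary problem you can also overlay the decision boundary and the margins using clf.decision_function. A sketch, reusing X, y, make_meshgrid and the imports from the code above and restricting iris to classes 0 and 1:

    # Keep only classes 0 and 1 so decision_function returns a single column.
    mask = y < 2
    Xb, yb = X[mask], y[mask]
    clf_b = svm.SVC(kernel='linear').fit(Xb, yb)

    xx_b, yy_b = make_meshgrid(Xb[:, 0], Xb[:, 1])
    Z = clf_b.decision_function(np.c_[xx_b.ravel(), yy_b.ravel()]).reshape(xx_b.shape)

    fig, ax = plt.subplots()
    # Level 0 is the decision boundary; levels -1 and +1 are the margins.
    ax.contour(xx_b, yy_b, Z, levels=[-1, 0, 1], colors='k', linestyles=['--', '-', '--'])
    ax.scatter(Xb[:, 0], Xb[:, 1], c=yb, cmap=plt.cm.coolwarm, s=20, edgecolors='k')
    plt.show()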
    

    Case 2: 3D plot for 3 features, using the iris dataset

    from sklearn.svm import SVC
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn import svm, datasets
    from mpl_toolkits.mplot3d import Axes3D
    
    iris = datasets.load_iris()
    X = iris.data[:, :3]  # we only take the first three features.
    Y = iris.target
    
    #make it binary classification problem
    X = X[np.logical_or(Y==0,Y==1)]
    Y = Y[np.logical_or(Y==0,Y==1)]
    
    model = svm.SVC(kernel='linear')
    clf = model.fit(X, Y)
    
    # The equation of the separating plane is given by all x so that np.dot(svc.coef_[0], x) + b = 0.
    # Solve for w3 (z)
    z = lambda x,y: (-clf.intercept_[0]-clf.coef_[0][0]*x -clf.coef_[0][1]*y) / clf.coef_[0][2]
    
    tmp = np.linspace(-5,5,30)
    x,y = np.meshgrid(tmp,tmp)
    
    fig = plt.figure()
    ax  = fig.add_subplot(111, projection='3d')
    ax.plot3D(X[Y==0,0], X[Y==0,1], X[Y==0,2],'ob')
    ax.plot3D(X[Y==1,0], X[Y==1,1], X[Y==1,2],'sr')
    ax.plot_surface(x, y, z(x,y))
    ax.view_init(30, 60)
    plt.show()
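
    A small caveat (my addition): the hardcoded np.linspace(-5, 5, 30) does not match the range of the iris features, so part of the plotted plane lies away from the data. A grid derived from the data extent, replacing the tmp/meshgrid lines above, keeps the surface where the points are:

    # Build the (x, y) grid from the observed range of the first two features
    # rather than a fixed interval.
    x_rng = np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 30)
    y_rng = np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 30)
    x, y = np.meshgrid(x_rng, y_rng)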
    

    [Comments]:
