【Title】: Visualization (2D) of SVM in Python
【Posted】: 2021-01-22 04:35:42
【Question】:

I have an assignment, described below. I have completed the first 5 steps, but I am stuck on the last one: plotting the result. Please advise on how to do it. Thank you in advance.

*(Please note that I only started learning SVM and ML a few days ago.)

**(Since I assume the plotting procedure should be the same for all kernel types, it would be great if you could show it for just one of them; I will try to adapt your code for the other kernels.)

Procedure to follow:

  1. Randomly draw samples (#100) from this map and use them for SVC in Python. The dataset includes Easting, Northing and rock information.

  2. Using these 100 randomly selected samples, randomly split them again into training and test datasets.

  3. Try running SVC with linear, polynomial, radial basis function, and tangent (sigmoid) kernels.

  4. For example, if you are using the radial basis function, tune 'C' and 'gamma' to be optimal according to the accuracy you get from the accuracy score.

  5. Once you have the fitted model and have computed the accuracy score (obtained from the test dataset), feed the entire dataset into the obtained fitted model and predict the output for all 90,000 sample points we have in reference.csv.

  6. Show me the resulting map along with the accuracy score you obtained from each fitted model.

The dataset looks like this:

(screenshot: a table with Easting, Northing and Rock columns)

All 90,000 points follow the same format.

The code is as follows:

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

### Importing Info

df = pd.read_csv("C:/Users/Admin/Desktop/RA/step 1/reference.csv", header=0)
df_model = df.sample(n = 100)
df_model.shape

## X-y split

X = df_model.loc[:,df_model.columns!="Rock"]
y = df_model["Rock"]
y_initial = df["Rock"]

### for whole dataset

X_wd = df.loc[:, df_model.columns!="Rock"]
y_wd = df["Rock"]

## Test-train split

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)

## Standardizing the Data

from sklearn.preprocessing import StandardScaler

sc = StandardScaler().fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)

## Linear
### Grid Search

from sklearn.model_selection import GridSearchCV
from sklearn import svm
from sklearn.metrics import accuracy_score, confusion_matrix

params_linear = {'C' : (0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100, 500,1000)}
clf_svm_l = svm.SVC(kernel = 'linear')
svm_grid_linear = GridSearchCV(clf_svm_l, params_linear, n_jobs=-1,
                              cv = 3, verbose = 1, scoring = 'accuracy')

svm_grid_linear.fit(X_train_std, y_train)
svm_grid_linear.best_params_
linsvm_clf = svm_grid_linear.best_estimator_
accuracy_score(y_test, linsvm_clf.predict(X_test_std))

### training svm

clf_svm_l = svm.SVC(kernel = 'linear', C = 0.1)
clf_svm_l.fit(X_train_std, y_train)

### predicting model

y_train_pred_linear = clf_svm_l.predict(X_train_std)
y_test_pred_linear = clf_svm_l.predict(X_test_std)
y_test_pred_linear
clf_svm_l.n_support_

### whole dataset

y_pred_linear_wd = clf_svm_l.predict(X_wd)

### map
        


## Poly
### grid search for poly

params_poly = {'C' : (0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100, 500,1000),
         'degree' : (1,2,3,4,5,6)}
clf_svm_poly = svm.SVC(kernel = 'poly')
svm_grid_poly = GridSearchCV(clf_svm_poly, params_poly, n_jobs = -1,
                            cv = 3, verbose = 1, scoring = 'accuracy')
svm_grid_poly.fit(X_train_std, y_train)
svm_grid_poly.best_params_
polysvm_clf = svm_grid_poly.best_estimator_
accuracy_score(y_test, polysvm_clf.predict(X_test_std))

### training svm

clf_svm_poly = svm.SVC(kernel = 'poly', C = 50, degree = 2)
clf_svm_poly.fit(X_train_std, y_train)

### predicting model

y_train_pred_poly = clf_svm_poly.predict(X_train_std)
y_test_pred_poly = clf_svm_poly.predict(X_test_std)

clf_svm_poly.n_support_

### whole dataset

y_pred_poly_wd = clf_svm_poly.predict(X_wd)

### map            


## RBF

### grid search rbf

params_rbf = {'C' : (0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100, 500,1000),
         'gamma' : (0.001, 0.01, 0.1, 0.5, 1)}
clf_svm_r = svm.SVC(kernel = 'rbf')
svm_grid_r = GridSearchCV(clf_svm_r, params_rbf, n_jobs = -1,
                         cv = 10, verbose = 1, scoring = 'accuracy')
svm_grid_r.fit(X_train_std, y_train)
svm_grid_r.best_params_
rsvm_clf = svm_grid_r.best_estimator_
accuracy_score(y_test, rsvm_clf.predict(X_test_std))

### training svm

clf_svm_r = svm.SVC(kernel = 'rbf', C = 500, gamma = 0.5)
clf_svm_r.fit(X_train_std, y_train)

### predicting model

y_train_pred_r = clf_svm_r.predict(X_train_std)
y_test_pred_r = clf_svm_r.predict(X_test_std)

### whole dataset

y_pred_r_wd = clf_svm_r.predict(X_wd)

### map            


## Tangent

### grid search

params_tangent = {'C' : (0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50),
         'gamma' : (0.001, 0.01, 0.1, 0.5, 1)}
clf_svm_tangent = svm.SVC(kernel = 'sigmoid')
svm_grid_tangent = GridSearchCV(clf_svm_tangent, params_tangent, n_jobs = -1,
                            cv = 10, verbose = 1, scoring = 'accuracy')
svm_grid_tangent.fit(X_train_std, y_train)
svm_grid_tangent.best_params_
tangentsvm_clf = svm_grid_tangent.best_estimator_
accuracy_score(y_test, tangentsvm_clf.predict(X_test_std))

### training svm

clf_svm_tangent = svm.SVC(kernel = 'sigmoid', C = 1, gamma = 0.1)
clf_svm_tangent.fit(X_train_std, y_train)

### predicting model

y_train_pred_tangent = clf_svm_tangent.predict(X_train_std)
y_test_pred_tangent = clf_svm_tangent.predict(X_test_std)

### whole dataset

y_pred_tangent_wd = clf_svm_tangent.predict(X_wd)

### map

【Comments】:

  • How do you want to plot the data? As an image, i.e. with Easting as the columns, Northing as the rows, and Rock as the values?
  • Yes, your picture is correct: the x-axis represents Easting and the y-axis Northing. Thanks for clarifying the output.

Tags: python plot scikit-learn svm svc


【Solution 1】:

Judging from your sample data, you appear to be working with regularly spaced data whose rows/columns are iterated in monotonically increasing order. Here is one way to reshape this dataset into a 2D array (one row per Northing value) and plot it accordingly:

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

# create sample data
data = {
    'Easting': [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3],
    'Northing': [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
    'Rocks': [0, 0, 1, 0, 0, 2, 0, 0, 0, 1, 0, 0],
}
df = pd.DataFrame(data)

# reshape data into a 2d matrix: one row per distinct Northing value
# (assumes the samples are ordered by Northing, then Easting)
n_rows = df['Northing'].nunique()
img_data = np.reshape(df['Rocks'].to_numpy(), (n_rows, -1))

# plot as image
plt.imshow(img_data)
plt.show()
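The reshape above relies on the rows of reference.csv arriving in a fixed order. If that is not guaranteed, a pivot keys each prediction by its own coordinates instead of relying on row order. A minimal sketch under that assumption (the column names and the `y_pred` stand-in for the question's `y_pred_linear_wd` are illustrative):

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs anywhere
import matplotlib.pyplot as plt

# hypothetical stand-in for the reference grid and the model's predictions
df = pd.DataFrame({
    "Easting":  [0, 1, 2, 3] * 3,
    "Northing": [0] * 4 + [1] * 4 + [2] * 4,
})
y_pred = np.array([0, 0, 1, 0, 0, 2, 0, 0, 0, 1, 0, 0])  # e.g. y_pred_linear_wd

# pivot keys each value by its own (Northing, Easting) pair,
# so the row order of the CSV no longer matters
grid = df.assign(pred=y_pred).pivot(index="Northing", columns="Easting",
                                    values="pred")

# origin="lower" puts Northing = 0 at the bottom, matching map convention
plt.imshow(grid.values, origin="lower",
           extent=[grid.columns.min(), grid.columns.max(),
                   grid.index.min(), grid.index.max()])
plt.xlabel("Easting")
plt.ylabel("Northing")
plt.colorbar(label="Rock")
plt.show()
```

For the real data you would replace the small DataFrame with `df[["Easting", "Northing"]]` from reference.csv and `y_pred` with the predictions of the fitted model.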

If you are working with irregularly spaced data, i.e. not every Easting/Northing combination has a value, have a look at plotting irregular spaced data.
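For that irregular case, the simplest option needs no gridding at all: color each sample directly in a scatter plot. A minimal sketch, assuming the same Easting/Northing/Rocks columns (the sample values are made up):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs anywhere
import matplotlib.pyplot as plt
import pandas as pd

# irregularly spaced samples: not every Easting/Northing pair is present
df = pd.DataFrame({
    "Easting":  [0.0, 1.3, 2.1, 0.4, 2.9],
    "Northing": [0.2, 0.1, 1.7, 2.5, 2.2],
    "Rocks":    [0, 1, 0, 2, 1],
})

# each point is plotted at its own coordinates and colored by its class
sc = plt.scatter(df["Easting"], df["Northing"], c=df["Rocks"])
plt.xlabel("Easting")
plt.ylabel("Northing")
plt.colorbar(sc, label="Rock")
plt.show()
```

For a filled map from irregular samples you would interpolate onto a grid first (e.g. `scipy.interpolate.griddata`), but for class labels a scatter is usually the honest choice.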

【Discussion】:

    【Solution 2】:

    Here is the answer for plotting the linear visualization, for anyone who runs into the same problem I did. Adapting this code to the other kernels should be straightforward.

    # Visualising the Training set results
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.colors import ListedColormap
    X_set, y_set = X_train_std, y_train
    X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
                         np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
    plt.contourf(X1, X2, clf_svm_l.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
                 alpha = 0.75, cmap = ListedColormap(('darkblue', 'yellow')))
    plt.xlim(X1.min(), X1.max())
    plt.ylim(X2.min(), X2.max())
    for i, j in enumerate(np.unique(y_set)):
        plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                    c = ListedColormap(('blue', 'gold'))(i), label = j)
    plt.title('SVM (Training set)')
    plt.xlabel('Easting')
    plt.ylabel('Northing')
    plt.legend()
    plt.show()
    

    【Discussion】:
