
Why can't scikit-learn's SVM solve two concentric circles?

  • June 17, 2020

Consider the following dataset (the code that generates it is at the bottom of the post):

[Scatter plot of the two concentric circles, one class per circle]

Running the following code:

from sklearn.svm import SVC
# `metric` below is presumably sklearn.metrics.accuracy_score
model_2 = SVC(kernel='rbf', degree=2, gamma='auto', C=100)
model_2.fit(X_train, y_train)
print('accuracy (train): %5.2f'%(metric(y_train, model_2.predict(X_train))))
print('accuracy (test): %5.2f'%(metric(y_test, model_2.predict(X_test))))
print('Number of support vectors:', sum(model_2.n_support_))

I get the following output:

accuracy (train):  0.64
accuracy (test):  0.26
Number of support vectors: 55

I also tried polynomial kernels of various degrees and got more or less the same results.

So why does it do so badly? I have just learned about SVMs, and I thought a second-degree polynomial kernel would project these points onto a paraboloid, so the result would be linearly separable. Where am I going wrong?
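(As an aside, that intuition is sound: lifting each point onto the paraboloid z = x1² + x2² does make concentric circles linearly separable. A minimal sketch with an explicit feature map; the LinearSVC usage is my illustration, not part of the question:

import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import LinearSVC

X, y = make_circles(random_state=0)
# Lift each point (x1, x2) onto the paraboloid z = x1**2 + x2**2.
Z = np.column_stack([X, (X ** 2).sum(axis=1)])
# The inner and outer circles now sit at different heights (z = 0.64 vs z = 1.0),
# so a plane separates them perfectly; training accuracy is 1.0.
print(LinearSVC(C=100).fit(Z, y).score(Z, y))
)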

Reference: the starter code for the snippets in this post comes from this course.

The code that generates the data:

import numpy as np
import sklearn.datasets
import sklearn.preprocessing
import matplotlib.pyplot as plt

np.random.seed(0)
data, labels = sklearn.datasets.make_circles()
idx = np.arange(len(labels))
np.random.shuffle(idx)
# train on a random 2/3 and test on the remaining 1/3
idx_train = idx[:2*len(idx)//3]
idx_test = idx[2*len(idx)//3:]
X_train = data[idx_train]
X_test = data[idx_test]

y_train = 2 * labels[idx_train] - 1  # binary -> spin
y_test = 2 * labels[idx_test] - 1

scaler = sklearn.preprocessing.StandardScaler()
normalizer = sklearn.preprocessing.Normalizer()

X_train = scaler.fit_transform(X_train)
X_train = normalizer.fit_transform(X_train)

X_test = scaler.fit_transform(X_test)
X_test = normalizer.fit_transform(X_test)
plt.figure(figsize=(6, 6))
plt.subplot(111)
plt.scatter(data[labels == 0, 0], data[labels == 0, 1], color='navy')
plt.scatter(data[labels == 1, 0], data[labels == 1, 1], color='c')

Let's start with the warnings:

  1. All preprocessing should be done with values fitted on the training set only:
X_test = scaler.transform(X_test)
X_test = normalizer.transform(X_test)
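A convenient way to avoid this leak entirely is a Pipeline, which fits every step on the training data only. A minimal sketch, assuming X_train/X_test are the raw splits from the question:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# fit() fits the scaler and the SVM on the training data only;
# score() applies the already-fitted scaler to the test data.
pipe = make_pipeline(StandardScaler(), SVC(kernel='rbf', gamma='auto', C=100))
pipe.fit(X_train, y_train)
print('accuracy (test): %5.2f' % pipe.score(X_test, y_test))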

  2. degree is a hyperparameter of the polynomial kernel and is ignored when the kernel is not poly:
model_2 = SVC(kernel='poly', degree=2, gamma='auto', C=100)

or

model_2 = SVC(kernel='rbf', gamma='auto', C=100)
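You can check that degree has no effect with a non-poly kernel by fitting two RBF models that differ only in degree; a quick sketch, reusing the question's X_train/y_train:

import numpy as np
from sklearn.svm import SVC

# `degree` only enters the polynomial kernel (gamma*<x, x'> + coef0)**degree;
# with kernel='rbf' it is silently ignored, so these two models are identical.
m1 = SVC(kernel='rbf', degree=2, gamma='auto', C=100).fit(X_train, y_train)
m2 = SVC(kernel='rbf', degree=7, gamma='auto', C=100).fit(X_train, y_train)
print(np.array_equal(m1.predict(X_test), m2.predict(X_test)))  # True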

  3. When debugging, plot the final dataset after all the preprocessing to see whether you have destroyed it:

[Plot of the dataset after preprocessing]
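That diagnostic plot can be reproduced with a few lines, using the question's preprocessed X_train: the Normalizer rescales every sample to unit L2 norm, so both circles collapse onto the same unit circle and the classes overlap.

import matplotlib.pyplot as plt

# Plot the training data *after* StandardScaler + Normalizer:
# every point now lies on the unit circle, so the two classes coincide.
plt.figure(figsize=(6, 6))
plt.scatter(X_train[y_train == -1, 0], X_train[y_train == -1, 1], color='navy')
plt.scatter(X_train[y_train == 1, 0], X_train[y_train == 1, 1], color='c')
plt.title('Training set after StandardScaler + Normalizer')
plt.show()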

Don't preprocess blindly. Remove the normalization step: the Normalizer rescales every sample to unit norm, which maps both concentric circles onto the same unit circle and destroys the dataset. Drop it and you will get 100% accuracy.
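Putting the fixes together, a corrected end-to-end version (my consolidation of the answer's points, not verbatim from it):

import numpy as np
import sklearn.datasets
import sklearn.preprocessing
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

np.random.seed(0)
data, labels = sklearn.datasets.make_circles()
idx = np.arange(len(labels))
np.random.shuffle(idx)
idx_train = idx[:2 * len(idx) // 3]
idx_test = idx[2 * len(idx) // 3:]
X_train, X_test = data[idx_train], data[idx_test]
y_train, y_test = 2 * labels[idx_train] - 1, 2 * labels[idx_test] - 1

# Scale with statistics fitted on the training set only, and drop the Normalizer.
scaler = sklearn.preprocessing.StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

model = SVC(kernel='rbf', gamma='auto', C=100).fit(X_train, y_train)
print('accuracy (test): %5.2f' % accuracy_score(y_test, model.predict(X_test)))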

Quoted from: https://stats.stackexchange.com/questions/472666
