Lab_SVM_2_RM
Asmi Ariv
2022-10-05
SVM with multi-class
In Lab 1, we used a simulated data set with a two-class response variable to build an SVM classifier. In this lab, we will use another data set, with a multi-class response variable, to build an SVM model.
We will use the Khan data set from the ISLR package. We used this data set in Lab 2 of Logistic Regression; kindly go through that lab to learn more about it. Basically, this data set comprises tissue samples related to four distinct types of small round blue cell tumors (cancer types).
Let’s load the packages for this lab:
library(ISLR)
library(e1071)

Loading data
Let’s load the data
names(Khan)

## [1] "xtrain" "xtest"  "ytrain" "ytest"

dim(Khan$xtrain)

## [1]   63 2308

dim(Khan$xtest)

## [1]   20 2308

length(Khan$ytrain)

## [1] 63

length(Khan$ytest)

## [1] 20

So, the data set is already split into train and test sets.
The 2,308 features are expression measurements for that many genes.
As we can see, the number of features is much larger than the number of records. In such cases, the classes are usually easily separable, so a simple linear hyperplane (the “linear” kernel) is often sufficient.
train = data.frame(X = Khan$xtrain, y=as.factor(Khan$ytrain))
test = data.frame(X = Khan$xtest, y = as.factor(Khan$ytest))

Training the SVM model
svm.model1 = svm(y ~ ., data = train, kernel = "linear", cost = 5)
summary(svm.model1)

##
## Call:
## svm(formula = y ~ ., data = train, kernel = "linear", cost = 5)
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: linear
## cost: 5
##
## Number of Support Vectors: 58
##
## ( 20 20 11 7 )
##
##
## Number of Classes: 4
##
## Levels:
## 1 2 3 4

Plotting support vectors
Let’s plot the 58 support vectors used by SVM to build the model.
(Note: We are not plotting the model itself as done in lab1. We are just plotting support vectors)
plot(train$X.1, train$X.2, col = as.integer(train$y))
points(train[svm.model1$index, ], pch = 5, cex = 2) # $index returns the indices of the support vectors

So, the points enclosed in boxes are the support vectors the SVM used to train the model.
Predictions
Train Accuracy
pred.train = fitted(svm.model1)
table(train$y, pred.train)

## pred.train
## 1 2 3 4
## 1 8 0 0 0
## 2 0 23 0 0
## 3 0 0 12 0
## 4 0 0 0 20

cat('\n Accuracy on train data set:\n', mean(pred.train == train$y)*100, '\n')

##
## Accuracy on train data set:
## 100

The model has performed extremely well on the training data, with 100% accuracy.
Test Accuracy
pred.test = predict(svm.model1, newdata=test, type="class")
table(test$y, pred.test)

## pred.test
## 1 2 3 4
## 1 3 0 0 0
## 2 0 6 0 0
## 3 0 2 4 0
## 4 0 0 0 5

cat('\n Accuracy on test data set:\n', mean(pred.test == test$y)*100, '\n')

##
## Accuracy on test data set:
## 90

As we can see, the model has done well on the test data, with 90% accuracy.
The e1071 package comes with a function tune(), which performs 10-fold cross-validation on a set of models built with different values of the parameters supplied by the user. For example, for the SVM classifier on this data set, we can provide different values of cost.
tune() will then build a model for each parameter value and give us the best model.
Following are some of the parameters we use in the tune() function:
- cost: acts like the regularization term we have used in other machine learning algorithms
- gamma: a tuning parameter for non-linear kernels such as polynomial, radial basis, and sigmoid
- degree: the degree of the polynomial kernel
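To see how these three parameters fit together, here is a minimal sketch of supplying all of them in the ranges argument when tuning a non-linear (polynomial) kernel. The parameter grids below are illustrative assumptions, not values used in this lab:

```r
library(ISLR)   # for the Khan data set
library(e1071)  # for svm() and tune()

train = data.frame(X = Khan$xtrain, y = as.factor(Khan$ytrain))

# Tune cost, gamma, and degree together for a polynomial kernel.
# The grids below are illustrative choices, not values from this lab.
tune.poly = tune(svm, y ~ ., data = train, kernel = "polynomial",
                 ranges = list(cost   = c(0.1, 1, 10),
                               gamma  = c(0.001, 0.01),
                               degree = c(2, 3)))
summary(tune.poly)
```

tune() evaluates every combination in the grid, so the cross-validation here fits 3 x 2 x 2 = 12 candidate settings.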
Let’s provide different values of cost = (0.001, 0.01, 0.1, 1, 10, 100):
tune.mod = tune(svm, y ~ ., data = train, kernel = "linear",
                ranges = list(cost = c(0.001, 0.01, 0.1, 1, 10, 100)))

Summary of tune.mod

summary(tune.mod)

##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 0.001
##
## - best performance: 0.01666667
##
## - Detailed performance results:
## cost error dispersion
## 1 1e-03 0.01666667 0.05270463
## 2 1e-02 0.01666667 0.05270463
## 3 1e-01 0.01666667 0.05270463
## 4 1e+00 0.01666667 0.05270463
## 5 1e+01 0.01666667 0.05270463
## 6 1e+02 0.01666667 0.05270463

If you look at the error column, all the models have performed equally well. Let’s see which model has been selected as the best model by the tune() function.
svm.best = tune.mod$best.model
summary(svm.best)

##
## Call:
## best.tune(method = svm, train.x = y ~ ., data = train, ranges = list(cost = c(0.001,
## 0.01, 0.1, 1, 10, 100)), kernel = "linear")
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: linear
## cost: 0.001
##
## Number of Support Vectors: 58
##
## ( 20 20 11 7 )
##
##
## Number of Classes: 4
##
## Levels:
## 1 2 3 4

As we can see, tune() has selected the model with the lowest cost, i.e. 0.001, as the best model.
Let’s see the performance of this model on the test data:
pred.test = predict(svm.best, newdata=test, type="class")
table(test$y, pred.test)

## pred.test
## 1 2 3 4
## 1 3 0 0 0
## 2 0 6 0 0
## 3 0 2 4 0
## 4 0 0 0 5

cat('\n Accuracy on test data set:\n', mean(pred.test == test$y)*100, '\n')

##
## Accuracy on test data set:
## 90

It is the same as before, which is expected: all the models trained by tune() had the same cross-validation error and therefore similar performance on the training and test data.
Exercise: Try using a different kernel, such as “radial”, “polynomial”, or “sigmoid”, and use the tune() function with different values of cost, gamma (for all kernels except linear), and degree (polynomial only) to see whether the performance on the test data improves.
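As a starting point for the exercise, a radial-basis run might look like the sketch below. The parameter grids are illustrative assumptions, and your results will depend on the values you try:

```r
library(ISLR)
library(e1071)

train = data.frame(X = Khan$xtrain, y = as.factor(Khan$ytrain))
test  = data.frame(X = Khan$xtest,  y = as.factor(Khan$ytest))

# Tune cost and gamma for a radial-basis kernel; the grids are illustrative.
tune.rad = tune(svm, y ~ ., data = train, kernel = "radial",
                ranges = list(cost  = c(0.1, 1, 10, 100),
                              gamma = c(1e-4, 1e-3, 1e-2)))

# Evaluate the best cross-validated model on the held-out test set.
pred.rad = predict(tune.rad$best.model, newdata = test)
mean(pred.rad == test$y) * 100  # test accuracy in percent
```

The same pattern works for the “polynomial” and “sigmoid” kernels; just swap the kernel argument and adjust the ranges list accordingly.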