Orange: SVM
Source: https://docs.biolab.si//3/visual-programming/widgets/model/svm.html
Support Vector Machines map inputs to a higher-dimensional feature space.
Input
- Data: input dataset
- Preprocessor: preprocessing method(s)
Output
- Learner: SVM learning algorithm
- Model: trained model
- Support Vectors: instances used as support vectors
Support Vector Machine (SVM) is a machine learning technique that separates the attribute space with a hyperplane, maximizing the margin between instances of different classes or class values. The technique often yields top predictive performance. Orange embeds a popular implementation of SVM from the LIBSVM package, and this widget is its graphical user interface.
For regression tasks, SVM performs linear regression in a high-dimensional feature space using an ε-insensitive loss. Its estimation accuracy depends on a good setting of the C, ε and kernel parameters. The widget outputs class predictions based on SVM Regression.
The widget works for both classification and regression tasks.
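As a rough illustration (not part of the widget itself), the ε-insensitive loss can be written as a small Python function: deviations smaller than ε cost nothing, while larger errors are penalized linearly by the amount they exceed ε.

```python
import numpy as np

def epsilon_insensitive_loss(y_true, y_pred, epsilon=0.1):
    """ε-insensitive loss: deviations within the ε tube incur no penalty."""
    return np.maximum(0.0, np.abs(y_true - y_pred) - epsilon)

# A prediction within ε of the true value costs nothing; larger errors
# are penalized by the amount they exceed ε.
print(epsilon_insensitive_loss(np.array([1.0, 1.0]), np.array([1.05, 1.5])))
# -> [0.  0.4]
```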
- The learner can be given a name under which it will appear in other widgets. The default name is “SVM”.
- SVM type with test error settings. SVM and ν-SVM are based on different minimizations of the error function. On the right side, you can set the test error bounds (both variants are illustrated in the code sketch after this list):
  - SVM:
    - Cost: penalty term for loss; applies to classification and regression tasks.
    - ε: a parameter to the epsilon-SVR model; applies to regression tasks. Defines the distance from true values within which no penalty is associated with predicted values.
  - ν-SVM:
    - Cost: penalty term for loss; applies only to regression tasks.
    - ν: a parameter to the ν-SVR model; applies to classification and regression tasks. An upper bound on the fraction of training errors and a lower bound on the fraction of support vectors.
- Kernel is a function that transforms attribute space to a new feature space to fit the maximum-margin hyperplane, thus allowing the algorithm to create the model with Linear, Polynomial, RBF and Sigmoid kernels. Functions that specify the kernel are presented upon selecting them, and the constants involved are:
- g for the gamma constant in the kernel function (the recommended value is 1/k, where k is the number of attributes, but since there may be no training set given to the widget, the default is 0 and the user has to set this option manually),
- c for the constant c0 in the kernel function (default 0), and
- d for the degree of the kernel (default 3).
- Set permitted deviation from the expected value in Numerical Tolerance. Tick the box next to Iteration Limit to set the maximum number of iterations permitted.
- Produce a report.
- Click Apply to commit changes. If you tick the box on the left side of the Apply button, changes will be communicated automatically.
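Since the widget wraps LIBSVM, its options map fairly directly onto other LIBSVM-based APIs. The sketch below uses scikit-learn's SVC and NuSVC purely as an illustrative stand-in (an assumption; the widget relies on Orange's own wrappers internally). The parameters C, nu, gamma, coef0, degree, tol and max_iter correspond to the Cost, ν, g, c, d, Numerical Tolerance and Iteration Limit settings described above.

```python
from sklearn.svm import SVC, NuSVC

# "SVM" variant: Cost (C) plus the kernel constants g, c and d from above.
svm = SVC(
    C=1.0,          # Cost: penalty term for loss
    kernel="rbf",   # Linear, Polynomial, RBF or Sigmoid in the widget
    gamma="scale",  # g; the widget recommends roughly 1/k for k attributes
    coef0=0.0,      # c; used only by the polynomial and sigmoid kernels
    degree=3,       # d; used only by the polynomial kernel
    tol=1e-3,       # Numerical Tolerance
    max_iter=-1,    # Iteration Limit (-1 means no explicit limit)
)

# "ν-SVM" variant: ν bounds the fraction of errors / support vectors.
nu_svm = NuSVC(nu=0.5, kernel="rbf", gamma="scale", tol=1e-3, max_iter=-1)
```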
Examples
In the first (regression) example, we have used the housing dataset and split the data into two subsets (Data Sample and Remaining Data) with Data Sampler. The sample was sent to SVM, which produced a Model that was then used in Predictions to predict the values in Remaining Data. A similar schema can be used if the data is already in two separate files; in this case, two File widgets would be used instead of the File - Data Sampler combination.
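Outside of Orange, the same Data Sampler / SVM / Predictions pipeline corresponds roughly to splitting the data, fitting an ε-SVR and predicting on the held-out rows. The sketch below is a minimal stand-in that uses scikit-learn's LIBSVM-based SVR and a synthetic dataset instead of housing (both assumptions, not the widget's exact internals).

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Synthetic stand-in for the housing data used in the workflow above.
X, y = make_regression(n_samples=500, n_features=13, noise=10.0, random_state=0)

# Data Sampler: split into a Data Sample and the Remaining Data.
X_sample, X_remaining, y_sample, y_remaining = train_test_split(
    X, y, train_size=0.7, random_state=0
)

# SVM widget (regression): fit an epsilon-SVR on the sample ...
model = SVR(C=1.0, epsilon=0.1, kernel="rbf").fit(X_sample, y_sample)

# ... Predictions widget: predict values for the remaining data.
predictions = model.predict(X_remaining)
print(predictions[:5])
```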
The second example shows how to use SVM in combination with Scatter Plot. The following workflow trains an SVM model on the iris data and outputs support vectors, i.e. the data instances that were used as support vectors in the learning phase. We can observe which data instances these are in a scatter plot visualization. Note that for the workflow to work correctly, you must set the links between widgets as demonstrated in the screenshot below.
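The Support Vectors output can also be reproduced in plain code. Below is a minimal sketch, assuming scikit-learn's LIBSVM-based SVC as a stand-in for the widget's internals.

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
model = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

# Indices of the training instances used as support vectors; these are the
# rows the widget would send to its Support Vectors output.
print(model.support_)
print(f"{len(model.support_)} of {len(X)} instances are support vectors")
```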