Orange: AdaBoost

From OnnoWiki
Revision as of 10:00, 23 January 2020

Source: https://docs.biolab.si//3/visual-programming/widgets/model/adaboost.html


An ensemble meta-algorithm that combines weak learners and adapts to the ‘hardness’ of each training sample.

Inputs

   Data: input dataset
   Preprocessor: preprocessing method(s)
   Learner: learning algorithm

Outputs

   Learner: AdaBoost learning algorithm
   Model: trained model

The AdaBoost widget (short for “Adaptive Boosting”) implements an ensemble machine-learning meta-algorithm formulated by Yoav Freund and Robert Schapire. It can be used with other learning algorithms to boost their performance; it does so by adaptively reweighting the training samples so that each successive weak learner concentrates on the examples its predecessors misclassified.

AdaBoost works for both classification and regression.
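The adaptive reweighting at the heart of AdaBoost can be sketched in a few lines. The following is an illustrative binary-classification sketch (not Orange’s implementation), using scikit-learn decision stumps as the weak learners:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)
y = np.where(y == 0, -1, 1)  # discrete AdaBoost uses labels in {-1, +1}

n_estimators = 20
w = np.full(len(X), 1 / len(X))  # start with uniform sample weights
stumps, alphas = [], []
for _ in range(n_estimators):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = w[pred != y].sum()                           # weighted error of this weak learner
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))  # this learner's vote weight
    w *= np.exp(-alpha * y * pred)                     # up-weight misclassified samples
    w /= w.sum()
    stumps.append(stump)
    alphas.append(alpha)

# Ensemble prediction: the sign of the weighted vote over all stumps
agg = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
ensemble_acc = (np.sign(agg) == y).mean()
```

Each round, samples the current stump gets wrong receive more weight, so the next stump is forced to attend to the “hard” examples; the final model is a weighted vote of all stumps.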

[[File:AdaBoost-stamped.png|center|200px|thumb]]
   The learner can be given a name under which it will appear in other widgets. The default name is “AdaBoost”.
   Set the parameters. The base estimator is a tree and you can set:
       Number of estimators
        Learning rate: determines how strongly each successive estimator’s contribution is weighted. Smaller values shrink each estimator’s contribution and typically require more estimators; there is a trade-off between the learning rate and the number of estimators.
       Fixed seed for random generator: set a fixed seed to enable reproducing the results.
   Boosting method.
        Classification algorithm (if classification on input): SAMME (updates the base estimator’s weights with classification results) or SAMME.R (updates the base estimator’s weights with probability estimates).
        Regression loss function (if regression on input): Linear, Square, or Exponential.
   Produce a report.
   Click Apply after changing the settings. This outputs the new learner and, if training examples are given, also constructs a new model and outputs it as well. To communicate changes automatically, tick Apply Automatically.
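Orange’s model widgets are built on scikit-learn; assuming that, the widget’s settings correspond roughly to the constructor arguments below. This is a sketch of the mapping, not Orange’s internals; the argument names for the base tree and the SAMME/SAMME.R choice vary across scikit-learn releases, so those are noted only in comments:

```python
from sklearn.ensemble import AdaBoostClassifier, AdaBoostRegressor

# "Number of estimators", "Learning rate", "Fixed seed for random generator"
clf = AdaBoostClassifier(
    n_estimators=50,    # number of weak learners to boost
    learning_rate=1.0,  # contribution weight of each successive estimator
    random_state=42,    # fixed seed -> reproducible results
)
# The "Classification algorithm" choice (SAMME vs SAMME.R) is the `algorithm`
# argument in older scikit-learn releases; newer releases keep only SAMME.

reg = AdaBoostRegressor(
    n_estimators=50,
    learning_rate=1.0,
    loss="linear",      # "Regression loss function": linear / square / exponential
    random_state=42,
)
```

With a fixed `random_state`, repeated fits on the same data produce identical models, which is what the widget’s fixed-seed option enables.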

Examples

For classification, we loaded the iris dataset. We used AdaBoost, Tree and Logistic Regression and evaluated the models’ performance in Test & Score.

[[File:AdaBoost-classification.png|center|200px|thumb]]
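Assuming scikit-learn as a stand-in for the Orange canvas, the comparison that Test & Score performs can be reproduced in code like this (a sketch; Orange’s widget defaults may differ from scikit-learn’s):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# The three learners compared in the example workflow
models = {
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "Tree": DecisionTreeClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}

# 10-fold cross-validated accuracy, as Test & Score reports by default
scores = {name: cross_val_score(m, X, y, cv=10).mean() for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

All three learners score well on iris; cross-validation gives a fairer comparison than training accuracy because each fold is evaluated on held-out data.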

For regression, we loaded the housing dataset, sent the data instances to two different models (AdaBoost and Tree) and output them to the Predictions widget.

[[File:AdaBoost-regression.png|center|200px|thumb]]
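The regression workflow can be sketched the same way. The housing dataset ships with Orange (and the old Boston housing data is no longer distributed with scikit-learn), so a synthetic regression problem is used here as a stand-in; everything else mirrors the workflow, ending with the per-instance predictions that the Predictions widget would display:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the housing data used in the example workflow
X, y = make_regression(n_samples=300, n_features=8, n_informative=5,
                       noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The two models compared in the example workflow
models = {
    "AdaBoost": AdaBoostRegressor(random_state=0),
    "Tree": DecisionTreeRegressor(random_state=0),
}

predictions = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    predictions[name] = model.predict(X_test)  # what Predictions would show per instance
    print(name, "R^2 =", round(model.score(X_test, y_test), 3))
```

Sending the same held-out instances to both fitted models makes their predictions directly comparable row by row, which is exactly what viewing them together in Predictions achieves.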



References

Interesting Links