Orange: Model Evaluation

Revision as of 06:27, 30 January 2020

Source: https://orange.biolab.si/widget-catalog/time-series/model_evaluation/


Evaluate different time series models.

Inputs

  • Time series: Time series as output by the As Timeseries widget.
  • Time series model(s): The time series model(s) to evaluate (e.g. VAR or ARIMA).

Evaluate different time series models by comparing the errors they make in terms of: root mean squared error (RMSE), median absolute error (MAE), mean absolute percent error (MAPE), prediction of change in direction (POCID), coefficient of determination (R²), Akaike information criterion (AIC), and Bayesian information criterion (BIC).
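As a rough illustration of what these measures compute, here is a minimal sketch in NumPy. The function name and the exact definitions (e.g. POCID as the share of steps where the predicted change has the same sign as the actual change, and MAE as the median of absolute errors, as named above) are assumptions for illustration, not Orange's implementation; AIC and BIC are omitted because they also depend on the model's likelihood and parameter count.

```python
import numpy as np

def evaluation_metrics(actual, predicted):
    """Sketch of the error measures named above, for one forecast.

    Hypothetical helper, not part of Orange."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = actual - predicted
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.median(np.abs(err))           # median absolute error, per the text
    mape = np.mean(np.abs(err / actual))   # assumes no zeros in `actual`
    # POCID: fraction of steps where predicted and actual changes agree in sign
    pocid = np.mean(np.sign(np.diff(actual)) == np.sign(np.diff(predicted)))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((actual - actual.mean()) ** 2)
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape, "POCID": pocid, "R2": r2}

metrics = evaluation_metrics([1.0, 2.0, 3.0, 2.5], [1.1, 1.9, 3.2, 2.4])
```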

[Figure: Model-evaluation-stamped.png — annotated screenshot of the Model Evaluation widget]
  • Number of folds for time series cross-validation.
  • Number of forecast steps to produce in each fold.
  • Results for various error measures and information criteria on cross-validated and in-sample data.

This slide (source) shows how cross-validation on time series is performed. In this case, the number of folds (1) is 10 and the number of forecast steps in each fold (2) is 1.

In-sample errors are the errors calculated on the training data itself. A stable model is one where in-sample errors and out-of-sample errors don’t differ significantly.
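To make the in-sample vs. out-of-sample comparison concrete, here is a toy sketch using a naive persistence forecaster (predict the previous value) as a stand-in model; the data, the forecaster, and the RMSE comparison are all illustrative assumptions, not anything Orange provides.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic series: a slow sine wave plus noise (illustrative data)
series = np.sin(np.arange(60) / 5) + rng.normal(0.0, 0.1, 60)
train, test = series[:48], series[48:]

# Persistence "model": forecast every point with the last seen value.
# In-sample RMSE: one-step errors on the training data itself.
in_sample_rmse = np.sqrt(np.mean((train[1:] - train[:-1]) ** 2))
# Out-of-sample RMSE: errors on the held-out tail of the series.
out_sample_rmse = np.sqrt(np.mean((test - train[-1]) ** 2))
```

If the two RMSEs are close, the model is stable in the sense described above; a much larger out-of-sample error suggests overfitting to the training data.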

See also

ARIMA Model, VAR Model
