class Aws::ForecastService::Types::EvaluationParameters

Parameters that define how to split a dataset into training data and testing data, and the number of iterations to perform. These parameters are specified in the predefined algorithms, but you can override them in the CreatePredictor request.

@note When making an API call, you may pass EvaluationParameters data as a hash:

    {
      number_of_backtest_windows: 1,
      back_test_window_offset: 1,
    }
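A minimal sketch of how this hash might be supplied as part of a `CreatePredictor` request with the AWS SDK for Ruby. The surrounding keys (`predictor_name`, `forecast_horizon`) are real CreatePredictor parameters, but the specific names and values here are illustrative placeholders, and the client call itself is shown only as a comment:

```ruby
# Sketch: building CreatePredictor request params that override the
# predictor's default evaluation parameters. Names below are hypothetical.
params = {
  predictor_name: "my_predictor",     # hypothetical predictor name
  forecast_horizon: 10,
  evaluation_parameters: {
    number_of_backtest_windows: 3,    # split the data 3 times (valid: 1-5)
    back_test_window_offset: 10       # must be >= forecast_horizon
  }
}

# With the aws-sdk-forecastservice gem installed and credentials configured,
# the request would be sent like this (not executed in this sketch):
# client = Aws::ForecastService::Client.new(region: "us-east-1")
# client.create_predictor(params)
```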

@!attribute [rw] number_of_backtest_windows

The number of times to split the input data. The default is 1. Valid
values are 1 through 5.
@return [Integer]

@!attribute [rw] back_test_window_offset

The point from the end of the dataset where you want to split the
data for model training and testing (evaluation). Specify the value
as the number of data points. The default is the value of the
forecast horizon. `BackTestWindowOffset` can be used to mimic a past
virtual forecast start date. This value must be greater than or
equal to the forecast horizon and less than half of the
TARGET_TIME_SERIES dataset length.

`ForecastHorizon` <= `BackTestWindowOffset` < 1/2 * `TARGET_TIME_SERIES` dataset length
@return [Integer]
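The constraint above can be checked client-side before submitting a request. A small sketch (the `valid_offset?` helper is ours, not part of the SDK):

```ruby
# Validates BackTestWindowOffset against the documented constraint:
#   ForecastHorizon <= BackTestWindowOffset < 1/2 * TARGET_TIME_SERIES length
# `valid_offset?` is a hypothetical helper, not an SDK method.
def valid_offset?(forecast_horizon:, back_test_window_offset:, dataset_length:)
  back_test_window_offset >= forecast_horizon &&
    back_test_window_offset < dataset_length / 2.0
end
```

For example, with a forecast horizon of 10 and 100 data points, an offset of 10 satisfies the constraint, while an offset of 60 does not (it exceeds half the dataset length).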

@see https://docs.aws.amazon.com/goto/WebAPI/forecast-2018-06-26/EvaluationParameters AWS API Documentation

Constants

SENSITIVE