class Aws::LookoutEquipment::Types::CreateInferenceSchedulerRequest

@note When making an API call, you may pass CreateInferenceSchedulerRequest

data as a hash:

    {
      model_name: "ModelName", # required
      inference_scheduler_name: "InferenceSchedulerName", # required
      data_delay_offset_in_minutes: 1,
      data_upload_frequency: "PT5M", # required, accepts PT5M, PT10M, PT15M, PT30M, PT1H
      data_input_configuration: { # required
        s3_input_configuration: {
          bucket: "S3Bucket", # required
          prefix: "S3Prefix",
        },
        input_time_zone_offset: "TimeZoneOffset",
        inference_input_name_configuration: {
          timestamp_format: "FileNameTimestampFormat",
          component_timestamp_delimiter: "ComponentTimestampDelimiter",
        },
      },
      data_output_configuration: { # required
        s3_output_configuration: { # required
          bucket: "S3Bucket", # required
          prefix: "S3Prefix",
        },
        kms_key_id: "NameOrArn",
      },
      role_arn: "IamRoleArn", # required
      server_side_kms_key_id: "NameOrArn",
      client_token: "IdempotenceToken", # required
      tags: [
        {
          key: "TagKey", # required
          value: "TagValue", # required
        },
      ],
    }
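
For illustration only, a minimal sketch of passing such a hash to
Aws::LookoutEquipment::Client#create_inference_scheduler. The model
name, scheduler name, bucket names, and role ARN below are
placeholders, not values from the API documentation:

    require "aws-sdk-lookoutequipment"

    client = Aws::LookoutEquipment::Client.new(region: "us-east-1")

    resp = client.create_inference_scheduler(
      model_name: "my-model",                    # placeholder
      inference_scheduler_name: "my-scheduler",  # placeholder
      data_upload_frequency: "PT5M",
      data_input_configuration: {
        s3_input_configuration: { bucket: "my-input-bucket" },   # placeholder
      },
      data_output_configuration: {
        s3_output_configuration: { bucket: "my-output-bucket" }, # placeholder
      },
      role_arn: "arn:aws:iam::123456789012:role/LookoutRole",    # placeholder
      # client_token is omitted; the SDK auto-generates one.
    )

    resp.inference_scheduler_name # name of the scheduler that was created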

@!attribute [rw] model_name

The name of the previously trained ML model being used to create the
inference scheduler.
@return [String]

@!attribute [rw] inference_scheduler_name

The name of the inference scheduler being created.
@return [String]

@!attribute [rw] data_delay_offset_in_minutes

A period of time (in minutes) by which inference on the data is
delayed after the data starts. For instance, with an offset delay of
five minutes, inference does not begin until five minutes after the
data starts: the scheduler still wakes up at the configured upload
frequency, but waits an additional five minutes before checking the
customer S3 bucket for new data. This lets you keep uploading data at
the same frequency without stopping and restarting the scheduler when
new data arrives.
@return [Integer]

@!attribute [rw] data_upload_frequency

How often data is uploaded to the source S3 bucket for the input
data. The value chosen is the length of time between data uploads.
For instance, if you select 5 minutes, Amazon Lookout for Equipment
will upload the real-time data to the source bucket once every 5
minutes. This frequency also determines how often Amazon Lookout for
Equipment starts a scheduled inference on your data. In this
example, it starts once every 5 minutes.
@return [String]
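
As a rough illustration (the specific values here are assumptions,
not taken from the API documentation), the two scheduling attributes
combine like this:

    # Assumed example values: the scheduler wakes every 5 minutes and
    # then waits an extra 2 minutes before reading the input prefix,
    # so uploads landing just after each 5-minute boundary are still
    # included in that run.
    scheduling = {
      data_upload_frequency: "PT5M",
      data_delay_offset_in_minutes: 2,
    }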

@!attribute [rw] data_input_configuration

Specifies configuration information for the input data for the
inference scheduler, including delimiter, format, and dataset
location.
@return [Types::InferenceInputConfiguration]

@!attribute [rw] data_output_configuration

Specifies configuration information for the output results for the
inference scheduler, including the S3 location for the output.
@return [Types::InferenceOutputConfiguration]

@!attribute [rw] role_arn

The Amazon Resource Name (ARN) of a role with permission to access
the data source being used for the inference.
@return [String]

@!attribute [rw] server_side_kms_key_id

Provides the identifier of the KMS key that Amazon Lookout for
Equipment uses to encrypt inference scheduler data.
@return [String]

@!attribute [rw] client_token

A unique identifier for the request. If you do not set the client
request token, Amazon Lookout for Equipment generates one.

**A suitable default value is auto-generated.** You should normally
not need to pass this option.
@return [String]
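
If you do want retries under your own control, one possible sketch of
supplying an explicit token (not required; the SDK normally fills this
in for you):

    require "securerandom"

    # Reusing the same token on a retried call lets the service treat
    # both attempts as the same creation request.
    client_token = SecureRandom.uuid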

@!attribute [rw] tags

Any tags associated with the inference scheduler.
@return [Array<Types::Tag>]

@see https://docs.aws.amazon.com/goto/WebAPI/lookoutequipment-2020-12-15/CreateInferenceSchedulerRequest AWS API Documentation

Constants

SENSITIVE