class Google::Apis::MlV1::GoogleCloudMlV1TrainingInput

Represents input parameters for a training job. When using the gcloud command to submit your training job, you can specify the input parameters as command-line arguments and/or in a YAML configuration file referenced from the `--config` command-line argument. For details, see the guide to [submitting a training job](/ai-platform/training/docs/training-jobs).
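
A sketch of what the `trainingInput` section of such a YAML configuration file might look like; all field values below are illustrative examples, not defaults:

```yaml
# Illustrative config file passed via: gcloud ai-platform jobs submit training --config=config.yaml
trainingInput:
  scaleTier: CUSTOM
  masterType: n1-standard-4
  workerType: n1-standard-4
  workerCount: 2
  region: us-central1
  runtimeVersion: '1.15'
  pythonVersion: '3.7'
```

The camelCase keys match the JSON property names listed for each attribute below.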

Attributes

args[RW]

Optional. Command-line arguments passed to the training application when it starts. If your job uses a custom container, then the arguments are passed to the container's `ENTRYPOINT` command. Corresponds to the JSON property `args` @return [Array<String>]

enable_web_access[RW]

Optional. Whether you want AI Platform Training to enable [interactive shell access](cloud.google.com/ai-platform/training/docs/monitor-debug-interactive-shell) to training containers. If set to `true`, you can access interactive shells at the URIs given by TrainingOutput.web_access_uris or HyperparameterOutput.web_access_uris (within TrainingOutput.trials). Corresponds to the JSON property `enableWebAccess` @return [Boolean]

enable_web_access?[RW]

Optional. Whether you want AI Platform Training to enable [interactive shell access](cloud.google.com/ai-platform/training/docs/monitor-debug-interactive-shell) to training containers. If set to `true`, you can access interactive shells at the URIs given by TrainingOutput.web_access_uris or HyperparameterOutput.web_access_uris (within TrainingOutput.trials). Corresponds to the JSON property `enableWebAccess` @return [Boolean]

encryption_config[RW]

Represents a custom encryption key configuration that can be applied to a resource. Corresponds to the JSON property `encryptionConfig` @return [Google::Apis::MlV1::GoogleCloudMlV1EncryptionConfig]

evaluator_config[RW]

Represents the configuration for a replica in a cluster. Corresponds to the JSON property `evaluatorConfig` @return [Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig]

evaluator_count[RW]

Optional. The number of evaluator replicas to use for the training job. Each replica in the cluster will be of the type specified in `evaluator_type`. This value can only be used when `scale_tier` is set to `CUSTOM`. If you set this value, you must also set `evaluator_type`. The default value is zero. Corresponds to the JSON property `evaluatorCount` @return [Fixnum]

evaluator_type[RW]

Optional. Specifies the type of virtual machine to use for your training job's evaluator nodes. The supported values are the same as those described in the entry for `masterType`. This value must be consistent with the category of machine type that `masterType` uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when `scaleTier` is set to `CUSTOM` and `evaluatorCount` is greater than zero. Corresponds to the JSON property `evaluatorType` @return [String]

hyperparameters[RW]

Represents a set of hyperparameters to optimize. Corresponds to the JSON property `hyperparameters` @return [Google::Apis::MlV1::GoogleCloudMlV1HyperparameterSpec]

job_dir[RW]

Optional. A Google Cloud Storage path in which to store training outputs and other data needed for training. This path is passed to your TensorFlow program as the `--job-dir` command-line argument. The benefit of specifying this field is that Cloud ML validates the path for use in training. Corresponds to the JSON property `jobDir` @return [String]

master_config[RW]

Represents the configuration for a replica in a cluster. Corresponds to the JSON property `masterConfig` @return [Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig]

master_type[RW]

Optional. Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when `scaleTier` is set to `CUSTOM`. You can use certain Compute Engine machine types directly in this field. See the [list of compatible Compute Engine machine types](/ai-platform/training/docs/machine-types#compute-engine-machine-types). Alternatively, you can use certain legacy machine types in this field. See the [list of legacy machine types](/ai-platform/training/docs/machine-types#legacy-machine-types). Finally, if you want to use a TPU for training, specify `cloud_tpu` in this field. Learn more about the [special configuration options for training with TPUs](/ai-platform/training/docs/using-tpus#configuring_a_custom_tpu_machine). Corresponds to the JSON property `masterType` @return [String]

network[RW]

Optional. The full name of the [Compute Engine network](/vpc/docs/vpc) to which the Job is peered. For example, `projects/12345/global/networks/myVPC`. The format of this field is `projects/{project}/global/networks/{network}`, where `{project}` is a project number (like `12345`) and `{network}` is a network name. Private services access must already be configured for the network. If left unspecified, the Job is not peered with any network. [Learn about using VPC Network Peering.](/ai-platform/training/docs/vpc-peering) Corresponds to the JSON property `network` @return [String]

package_uris[RW]

Required. The Google Cloud Storage location of the packages with the training program and any additional dependencies. The maximum number of package URIs is 100. Corresponds to the JSON property `packageUris` @return [Array<String>]

parameter_server_config[RW]

Represents the configuration for a replica in a cluster. Corresponds to the JSON property `parameterServerConfig` @return [Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig]

parameter_server_count[RW]

Optional. The number of parameter server replicas to use for the training job. Each replica in the cluster will be of the type specified in `parameter_server_type`. This value can only be used when `scale_tier` is set to `CUSTOM`. If you set this value, you must also set `parameter_server_type`. The default value is zero. Corresponds to the JSON property `parameterServerCount` @return [Fixnum]

parameter_server_type[RW]

Optional. Specifies the type of virtual machine to use for your training job's parameter server. The supported values are the same as those described in the entry for `masterType`. This value must be consistent with the category of machine type that `masterType` uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when `scaleTier` is set to `CUSTOM` and `parameterServerCount` is greater than zero. Corresponds to the JSON property `parameterServerType` @return [String]

python_module[RW]

Required. The Python module name to run after installing the packages. Corresponds to the JSON property `pythonModule` @return [String]

python_version[RW]

Optional. The version of Python used in training. You must either specify this field or specify `masterConfig.imageUri`. The following Python versions are available: * Python '3.7' is available when `runtime_version` is set to '1.15' or later. * Python '3.5' is available when `runtime_version` is set to a version from '1.4' to '1.14'. * Python '2.7' is available when `runtime_version` is set to '1.15' or earlier. Read more about the Python versions available for [each runtime version](/ml-engine/docs/runtime-version-list). Corresponds to the JSON property `pythonVersion` @return [String]

region[RW]

Required. The region to run the training job in. See the [available regions](/ai-platform/training/docs/regions) for AI Platform Training. Corresponds to the JSON property `region` @return [String]

runtime_version[RW]

Optional. The AI Platform runtime version to use for training. You must either specify this field or specify `masterConfig.imageUri`. For more information, see the [runtime version list](/ai-platform/training/docs/runtime-version-list) and learn [how to manage runtime versions](/ai-platform/training/docs/versioning). Corresponds to the JSON property `runtimeVersion` @return [String]

scale_tier[RW]

Required. Specifies the machine types and the number of replicas for workers and parameter servers. Corresponds to the JSON property `scaleTier` @return [String]

scheduling[RW]

All parameters related to scheduling of training jobs. Corresponds to the JSON property `scheduling` @return [Google::Apis::MlV1::GoogleCloudMlV1Scheduling]

service_account[RW]

Optional. The email address of a service account to use when running the training application. You must have the `iam.serviceAccounts.actAs` permission for the specified service account. In addition, the AI Platform Training Google-managed service account must have the `roles/iam.serviceAccountAdmin` role for the specified service account. [Learn more about configuring a service account.](/ai-platform/training/docs/custom-service-account) If not specified, the AI Platform Training Google-managed service account is used by default. Corresponds to the JSON property `serviceAccount` @return [String]

use_chief_in_tf_config[RW]

Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment variable when training with a custom container. Defaults to `false`. [Learn more about this field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master) This field has no effect for training jobs that don't use a custom container. Corresponds to the JSON property `useChiefInTfConfig` @return [Boolean]

use_chief_in_tf_config?[RW]

Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment variable when training with a custom container. Defaults to `false`. [Learn more about this field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master) This field has no effect for training jobs that don't use a custom container. Corresponds to the JSON property `useChiefInTfConfig` @return [Boolean]

worker_config[RW]

Represents the configuration for a replica in a cluster. Corresponds to the JSON property `workerConfig` @return [Google::Apis::MlV1::GoogleCloudMlV1ReplicaConfig]

worker_count[RW]

Optional. The number of worker replicas to use for the training job. Each replica in the cluster will be of the type specified in `worker_type`. This value can only be used when `scale_tier` is set to `CUSTOM`. If you set this value, you must also set `worker_type`. The default value is zero. Corresponds to the JSON property `workerCount` @return [Fixnum]

worker_type[RW]

Optional. Specifies the type of virtual machine to use for your training job's worker nodes. The supported values are the same as those described in the entry for `masterType`. This value must be consistent with the category of machine type that `masterType` uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. If you use `cloud_tpu` for this value, see special instructions for [configuring a custom TPU machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine). This value must be present when `scaleTier` is set to `CUSTOM` and `workerCount` is greater than zero. Corresponds to the JSON property `workerType` @return [String]
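
The `CUSTOM` scale-tier constraints repeated across the replica attributes above can be summarized as a small validation helper. This is a hypothetical illustration, not part of the `google-apis-ml_v1` gem; the API performs its own validation server-side:

```ruby
# Hypothetical helper illustrating the CUSTOM scale-tier rules described
# in the attribute entries above. Not part of the google-apis-ml_v1 gem.
def validate_custom_scale_tier(scale_tier:, master_type: nil,
                               worker_count: 0, worker_type: nil,
                               parameter_server_count: 0, parameter_server_type: nil,
                               evaluator_count: 0, evaluator_type: nil)
  errors = []
  if scale_tier == 'CUSTOM'
    # masterType is always required with CUSTOM; each replica type is
    # required only when its count is greater than zero.
    errors << 'masterType is required when scaleTier is CUSTOM' if master_type.nil?
    errors << 'workerType is required when workerCount > 0' if worker_count > 0 && worker_type.nil?
    errors << 'parameterServerType is required when parameterServerCount > 0' if parameter_server_count > 0 && parameter_server_type.nil?
    errors << 'evaluatorType is required when evaluatorCount > 0' if evaluator_count > 0 && evaluator_type.nil?
  elsif worker_count > 0 || parameter_server_count > 0 || evaluator_count > 0
    # Replica counts may only be set when scaleTier is CUSTOM.
    errors << 'replica counts can only be used when scaleTier is CUSTOM'
  end
  errors
end
```

For example, `validate_custom_scale_tier(scale_tier: 'CUSTOM', worker_count: 2)` reports both the missing `masterType` and the missing `workerType`.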

Public Class Methods

new(**args)
# File lib/google/apis/ml_v1/classes.rb, line 2835
def initialize(**args)
  update!(**args)
end

Public Instance Methods

update!(**args)

Update properties of this object

# File lib/google/apis/ml_v1/classes.rb, line 2840
def update!(**args)
  @args = args[:args] if args.key?(:args)
  @enable_web_access = args[:enable_web_access] if args.key?(:enable_web_access)
  @encryption_config = args[:encryption_config] if args.key?(:encryption_config)
  @evaluator_config = args[:evaluator_config] if args.key?(:evaluator_config)
  @evaluator_count = args[:evaluator_count] if args.key?(:evaluator_count)
  @evaluator_type = args[:evaluator_type] if args.key?(:evaluator_type)
  @hyperparameters = args[:hyperparameters] if args.key?(:hyperparameters)
  @job_dir = args[:job_dir] if args.key?(:job_dir)
  @master_config = args[:master_config] if args.key?(:master_config)
  @master_type = args[:master_type] if args.key?(:master_type)
  @network = args[:network] if args.key?(:network)
  @package_uris = args[:package_uris] if args.key?(:package_uris)
  @parameter_server_config = args[:parameter_server_config] if args.key?(:parameter_server_config)
  @parameter_server_count = args[:parameter_server_count] if args.key?(:parameter_server_count)
  @parameter_server_type = args[:parameter_server_type] if args.key?(:parameter_server_type)
  @python_module = args[:python_module] if args.key?(:python_module)
  @python_version = args[:python_version] if args.key?(:python_version)
  @region = args[:region] if args.key?(:region)
  @runtime_version = args[:runtime_version] if args.key?(:runtime_version)
  @scale_tier = args[:scale_tier] if args.key?(:scale_tier)
  @scheduling = args[:scheduling] if args.key?(:scheduling)
  @service_account = args[:service_account] if args.key?(:service_account)
  @use_chief_in_tf_config = args[:use_chief_in_tf_config] if args.key?(:use_chief_in_tf_config)
  @worker_config = args[:worker_config] if args.key?(:worker_config)
  @worker_count = args[:worker_count] if args.key?(:worker_count)
  @worker_type = args[:worker_type] if args.key?(:worker_type)
end
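
The `initialize`/`update!` pattern above assigns each property only when its key is present in `args`, so `update!` applies partial updates without clobbering fields that were not passed. A minimal self-contained sketch of the same pattern (the `TrainingInputSketch` class name and its two fields are hypothetical, chosen only to keep the example short):

```ruby
# Minimal sketch of the keyword-args pattern used by initialize/update!
# above: args.key?(:field) guards each assignment, so omitted keys leave
# the existing value untouched.
class TrainingInputSketch
  attr_accessor :region, :scale_tier

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @region = args[:region] if args.key?(:region)
    @scale_tier = args[:scale_tier] if args.key?(:scale_tier)
  end
end
```

With this guard, `input.update!(scale_tier: 'CUSTOM')` changes only `scale_tier`; a previously set `region` survives the call.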