class Google::Apis::MlV1::GoogleCloudMlV1Version
Represents a version of the model. Each version is a trained model deployed in the cloud, ready to handle prediction requests. A model can have multiple versions. You can get information about all of the versions of a given model by calling projects.models.versions.list.
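A minimal sketch of retrieving those versions through this Ruby client, assuming application-default credentials; the project and model IDs are placeholders:

require 'google/apis/ml_v1'
require 'googleauth'

# Build the service client and authenticate with application-default
# credentials (placeholder project/model names throughout).
ml = Google::Apis::MlV1::CloudMachineLearningEngineService.new
ml.authorization = Google::Auth.get_application_default(
  ['https://www.googleapis.com/auth/cloud-platform']
)

# projects.models.versions.list maps to list_project_model_versions.
response = ml.list_project_model_versions('projects/my-project/models/my_model')
response.versions.each do |version|
  puts "#{version.name} (default: #{version.is_default})"
end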
Attributes
Represents a hardware accelerator request config. Note that the AcceleratorConfig can be used in both Jobs and Versions. Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and [accelerators for online prediction](/ml-engine/docs/machine-types-online-prediction#gpus). Corresponds to the JSON property `acceleratorConfig` @return [Google::Apis::MlV1::GoogleCloudMlV1AcceleratorConfig]
Options for automatically scaling a model. Corresponds to the JSON property `autoScaling` @return [Google::Apis::MlV1::GoogleCloudMlV1AutoScaling]
Specification of a custom container for serving predictions. This message is a subset of the [Kubernetes Container v1 core specification](kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#container-v1-core). Corresponds to the JSON property `container` @return [Google::Apis::MlV1::GoogleCloudMlV1ContainerSpec]
Output only. The time the version was created. Corresponds to the JSON property `createTime` @return [String]
The Cloud Storage URI of a directory containing trained model artifacts to be used to create the model version. See the [guide to deploying models](/ai-platform/prediction/docs/deploying-models) for more information. The total number of files under this directory must not exceed 1000. During projects.models.versions.create, AI Platform Prediction copies all files from the specified directory to a location managed by the service. From then on, AI Platform Prediction uses these copies of the model artifacts to serve predictions, not the original files in Cloud Storage, so this location is useful only as a historical record. If you specify container, then this field is optional. Otherwise, it is required. Learn [how to use this field with a custom container](/ai-platform/prediction/docs/custom-container-requirements#artifacts). Corresponds to the JSON property `deploymentUri` @return [String]
Optional. The description specified for the version when it was created. Corresponds to the JSON property `description` @return [String]
Output only. The details of a failure or a cancellation. Corresponds to the JSON property `errorMessage` @return [String]
`etag` is used for optimistic concurrency control as a way to help prevent simultaneous updates of a model from overwriting each other. It is strongly suggested that systems make use of the `etag` in the read-modify-write cycle to perform model updates in order to avoid race conditions: An `etag` is returned in the response to `GetVersion`, and systems are expected to put that etag in the request to `UpdateVersion` to ensure that their change will be applied to the model as intended. Corresponds to the JSON property `etag` NOTE: Values are automatically base64 encoded/decoded in the client library. @return [String]
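A sketch of that read-modify-write cycle, reusing the `ml` client from the earlier example; the resource name is a placeholder:

# Fetch the current version, then echo its etag back in the patch so a
# concurrent update makes this request fail instead of silently winning.
name = 'projects/my-project/models/my_model/versions/v1'
current = ml.get_project_model_version(name)

patch = Google::Apis::MlV1::GoogleCloudMlV1Version.new(
  description: 'updated description',
  etag: current.etag
)
ml.patch_project_model_version(name, patch, update_mask: 'description')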
Message holding configuration options for explaining model predictions. There are three feature attribution methods supported for TensorFlow models: integrated gradients, sampled Shapley, and XRAI. [Learn more about feature attributions.](/ai-platform/prediction/docs/ai-explanations/overview) Corresponds to the JSON property `explanationConfig` @return [Google::Apis::MlV1::GoogleCloudMlV1ExplanationConfig]
Optional. The machine learning framework AI Platform uses to train this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`, `XGBOOST`. If you do not specify a framework, AI Platform will analyze files in the deployment_uri to determine a framework. If you choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version of the model to 1.4 or greater. Do not specify a framework if you're deploying a [custom prediction routine](/ai-platform/prediction/docs/custom-prediction-routines) or if you're using a [custom container](/ai-platform/prediction/docs/use-custom-container). Corresponds to the JSON property `framework` @return [String]
Output only. If true, this version will be used to handle prediction requests that do not specify a version. You can change the default version by calling projects.models.versions.setDefault. Corresponds to the JSON property `isDefault` @return [Boolean]
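A sketch of changing the default version; the generated method and request-object names below are assumptions based on this library's naming conventions, and the version name is a placeholder:

# projects.models.versions.setDefault maps to
# set_project_model_version_default; the request body is an empty message.
ml.set_project_model_version_default(
  'projects/my-project/models/my_model/versions/v2',
  Google::Apis::MlV1::GoogleCloudMlV1SetDefaultVersionRequest.new
)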
Optional. One or more labels that you can add to organize your model versions. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels. Corresponds to the JSON property `labels` @return [Hash<String,String>]
Output only. The [AI Platform (Unified) `Model`](cloud.google.com/ai-platform-unified/docs/reference/rest/v1beta1/projects.locations.models) ID for the last [model migration](cloud.google.com/ai-platform-unified/docs/start/migrating-to-ai-platform-unified). Corresponds to the JSON property `lastMigrationModelId` @return [String]
Output only. The last time this version was successfully [migrated to AI Platform (Unified)](cloud.google.com/ai-platform-unified/docs/start/migrating-to-ai-platform-unified). Corresponds to the JSON property `lastMigrationTime` @return [String]
Output only. The time the version was last used for prediction. Corresponds to the JSON property `lastUseTime` @return [String]
Optional. The type of machine on which to serve the model. Currently only applies to online prediction service. To learn about valid values for this field, read [Choosing a machine type for online prediction](/ai-platform/prediction/docs/machine-types-online-prediction). If this field is not specified and you are using a [regional endpoint](/ai-platform/prediction/docs/regional-endpoints), then the machine type defaults to `n1-standard-2`. If this field is not specified and you are using the global endpoint (`ml.googleapis.com`), then the machine type defaults to `mls1-c1-m2`. Corresponds to the JSON property `machineType` @return [String]
Options for manually scaling a model. Corresponds to the JSON property `manualScaling` @return [Google::Apis::MlV1::GoogleCloudMlV1ManualScaling]
Required. The name specified for the version when it was created. The version name must be unique within the model it is created in. Corresponds to the JSON property `name` @return [String]
Optional. Cloud Storage paths (`gs://…`) of packages for [custom prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines) or [scikit-learn pipelines with custom code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code). For a custom prediction routine, one of these packages must contain your Predictor class (see [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally, include any dependencies your Predictor or scikit-learn pipeline uses that are not already included in your selected [runtime version](/ml-engine/docs/tensorflow/runtime-version-list). If you specify this field, you must also set [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater. Corresponds to the JSON property `packageUris` @return [Array<String>]
Optional. The fully qualified name (module_name.class_name) of a class that implements the Predictor interface described in this reference field. The module containing this class should be included in a package provided to the [`packageUris` field](#Version.FIELDS.package_uris). Specify this field if and only if you are deploying a [custom prediction routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines). If you specify this field, you must set [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and you must set `machineType` to a [legacy (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction). The following code sample provides the Predictor interface:

class Predictor(object):
  """Interface for constructing custom predictors."""

  def predict(self, instances, **kwargs):
    """Performs custom prediction.

    Instances are the decoded values from the request. They have already
    been deserialized from JSON.

    Args:
      instances: A list of prediction input instances.
      **kwargs: A dictionary of keyword args provided as additional fields
        on the predict request body.

    Returns:
      A list of outputs containing the prediction results. This list must
      be JSON serializable.
    """
    raise NotImplementedError()

  @classmethod
  def from_path(cls, model_dir):
    """Creates an instance of Predictor using the given path.

    Loading of the predictor should be done in this method.

    Args:
      model_dir: The local directory that contains the exported model file
        along with any additional files uploaded when creating the version
        resource.

    Returns:
      An instance implementing this Predictor class.
    """
    raise NotImplementedError()

Learn more about [the Predictor interface and custom prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines). Corresponds to the JSON property `predictionClass` @return [String]
Required. The version of Python used in prediction. The following Python versions are available:
* Python '3.7' is available when `runtime_version` is set to '1.15' or later.
* Python '3.5' is available when `runtime_version` is set to a version from '1.4' to '1.14'.
* Python '2.7' is available when `runtime_version` is set to '1.15' or earlier.
Read more about the Python versions available for [each runtime version](/ml-engine/docs/runtime-version-list). Corresponds to the JSON property `pythonVersion` @return [String]
Configuration for logging request-response pairs to a BigQuery table. Online prediction requests to a model version and the responses to these requests are converted to raw strings and saved to the specified BigQuery table. Logging is constrained by [BigQuery quotas and limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits, AI Platform Prediction does not log request-response pairs, but it continues to serve predictions. If you are using [continuous evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to specify this configuration manually. Setting up continuous evaluation automatically enables logging of request-response pairs. Corresponds to the JSON property `requestLoggingConfig` @return [Google::Apis::MlV1::GoogleCloudMlV1RequestLoggingConfig]
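A sketch of building such a config; the BigQuery table name is a placeholder, and `sampling_percentage` is assumed here to throttle what fraction of pairs get logged:

# Log roughly 10% of request-response pairs to a placeholder BigQuery table.
logging = Google::Apis::MlV1::GoogleCloudMlV1RequestLoggingConfig.new(
  bigquery_table_name: 'my_project.my_dataset.prediction_log',
  sampling_percentage: 0.1
)
# Pass this as request_logging_config: when constructing the Version.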
Specifies HTTP paths served by a custom container. AI Platform Prediction sends requests to these paths on the container; the custom container must run an HTTP server that responds to these requests with appropriate responses. Read [Custom container requirements](/ai-platform/prediction/docs/custom-container-requirements) for details on how to create your container image to meet these requirements. Corresponds to the JSON property `routes` @return [Google::Apis::MlV1::GoogleCloudMlV1RouteMap]
Required. The AI Platform runtime version to use for this deployment. For more information, see the [runtime version list](/ml-engine/docs/runtime-version-list) and [how to manage runtime versions](/ml-engine/docs/versioning). Corresponds to the JSON property `runtimeVersion` @return [String]
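Putting the required fields together, a hedged sketch of creating a version with this client; all IDs, URIs, and the chosen runtime/machine type are placeholders:

version = Google::Apis::MlV1::GoogleCloudMlV1Version.new(
  name: 'v1',
  deployment_uri: 'gs://my-bucket/model-artifacts/',
  runtime_version: '2.1',
  python_version: '3.7',
  framework: 'TENSORFLOW',
  machine_type: 'mls1-c1-m2'
)

# projects.models.versions.create maps to create_project_model_version and
# returns a long-running operation that completes once the version deploys.
operation = ml.create_project_model_version(
  'projects/my-project/models/my_model', version
)
puts operation.name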
Optional. Specifies the service account for resource access control. If you specify this field, then you must also specify either the `containerSpec` or the `predictionClass` field. Learn more about [using a custom service account](/ai-platform/prediction/docs/custom-service-account). Corresponds to the JSON property `serviceAccount` @return [String]
Output only. The state of a version. Corresponds to the JSON property `state` @return [String]
Public Class Methods
# File lib/google/apis/ml_v1/classes.rb, line 3266
def initialize(**args)
  update!(**args)
end
Public Instance Methods
Update properties of this object
# File lib/google/apis/ml_v1/classes.rb, line 3271
def update!(**args)
  @accelerator_config = args[:accelerator_config] if args.key?(:accelerator_config)
  @auto_scaling = args[:auto_scaling] if args.key?(:auto_scaling)
  @container = args[:container] if args.key?(:container)
  @create_time = args[:create_time] if args.key?(:create_time)
  @deployment_uri = args[:deployment_uri] if args.key?(:deployment_uri)
  @description = args[:description] if args.key?(:description)
  @error_message = args[:error_message] if args.key?(:error_message)
  @etag = args[:etag] if args.key?(:etag)
  @explanation_config = args[:explanation_config] if args.key?(:explanation_config)
  @framework = args[:framework] if args.key?(:framework)
  @is_default = args[:is_default] if args.key?(:is_default)
  @labels = args[:labels] if args.key?(:labels)
  @last_migration_model_id = args[:last_migration_model_id] if args.key?(:last_migration_model_id)
  @last_migration_time = args[:last_migration_time] if args.key?(:last_migration_time)
  @last_use_time = args[:last_use_time] if args.key?(:last_use_time)
  @machine_type = args[:machine_type] if args.key?(:machine_type)
  @manual_scaling = args[:manual_scaling] if args.key?(:manual_scaling)
  @name = args[:name] if args.key?(:name)
  @package_uris = args[:package_uris] if args.key?(:package_uris)
  @prediction_class = args[:prediction_class] if args.key?(:prediction_class)
  @python_version = args[:python_version] if args.key?(:python_version)
  @request_logging_config = args[:request_logging_config] if args.key?(:request_logging_config)
  @routes = args[:routes] if args.key?(:routes)
  @runtime_version = args[:runtime_version] if args.key?(:runtime_version)
  @service_account = args[:service_account] if args.key?(:service_account)
  @state = args[:state] if args.key?(:state)
end
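A short usage sketch: `update!` only touches the keys present in `args`, so it can be used for partial, in-place updates of a constructed object:

version = Google::Apis::MlV1::GoogleCloudMlV1Version.new(name: 'v1')
version.update!(description: 'first deployment')
version.name         # => "v1" (unchanged; :name was not in args)
version.description  # => "first deployment"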