class Azure::CognitiveServices::ComputerVision::V1_0::ComputerVisionClient

A service client - single point of access to the REST API.

Attributes

accept_language[RW]

@return [String] The preferred language for the response.

azure_region[RW]

@return [AzureRegions] Supported Azure regions for Cognitive Services endpoints. Possible values include: 'westus', 'westeurope', 'southeastasia', 'eastus2', 'westcentralus', 'westus2', 'eastus', 'southcentralus', 'northeurope', 'eastasia', 'australiaeast', 'brazilsouth', 'canadacentral', 'centralindia', 'uksouth', 'japaneast'

base_url[R]

@return [String] the base URI of the service.

credentials[R]

@return Credentials needed for the client to connect to Azure.

generate_client_request_id[RW]

@return [Boolean] Whether a unique x-ms-client-request-id should be generated. When set to true a unique x-ms-client-request-id value is generated and included in each request. Default is true.

long_running_operation_retry_timeout[RW]

@return [Integer] The retry timeout in seconds for Long Running Operations. Default value is 30.

Public Class Methods

new(credentials = nil, options = nil)

Creates and initializes a new instance of the ComputerVisionClient class.

@param credentials [MsRest::ServiceClientCredentials] credentials to authorize HTTP requests made by the service client.
@param options [Array] filters to be applied to the HTTP requests.

Calls superclass method
# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 44
def initialize(credentials = nil, options = nil)
  super(credentials, options)
  @base_url = 'https://{AzureRegion}.api.cognitive.microsoft.com/vision/v1.0'

  fail ArgumentError, 'invalid type of credentials input parameter' if !credentials.nil? && !credentials.is_a?(MsRest::ServiceClientCredentials)
  @credentials = credentials

  @accept_language = 'en-US'
  @long_running_operation_retry_timeout = 30
  @generate_client_request_id = true
  add_telemetry
end
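A minimal construction sketch, assuming the azure_cognitiveservices_computervision gem is installed and key-based authentication via MsRestAzure::CognitiveServicesCredentials (any MsRest::ServiceClientCredentials implementation is accepted); the subscription key and region are placeholders:

require 'azure_cognitiveservices_computervision'

# Placeholder subscription key for your Cognitive Services resource.
credentials = MsRestAzure::CognitiveServicesCredentials.new('YOUR_SUBSCRIPTION_KEY')

client = Azure::CognitiveServices::ComputerVision::V1_0::ComputerVisionClient.new(credentials)
client.azure_region = 'westus'  # must match the region the key was issued for

The usage sketches below assume a client configured this way.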

Public Instance Methods

analyze_image(url, visual_features:nil, details:nil, language:nil, custom_headers:nil)

This operation extracts a rich set of visual features based on the image content. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. Within your request, there is an optional parameter to allow you to choose which features to return. By default, image categories are returned in the response.

@param url [String] Publicly reachable URL of an image.
@param visual_features [Array<VisualFeatureTypes>] A string indicating what visual feature types to return. Multiple values should be comma-separated. Valid visual feature types include: Categories - categorizes image content according to a taxonomy defined in documentation. Tags - tags the image with a detailed list of words related to the image content. Description - describes the image content with a complete English sentence. Faces - detects if faces are present. If present, generates coordinates, gender and age. ImageType - detects if the image is clip art or a line drawing. Color - determines the accent color, dominant color, and whether an image is black & white. Adult - detects if the image is pornographic in nature (depicts nudity or a sex act). Sexually suggestive content is also detected.
@param details [Array<Details>] A string indicating which domain-specific details to return. Multiple values should be comma-separated. Valid domain-specific details include: Celebrities - identifies celebrities if detected in the image.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [ImageAnalysis] operation results.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 244
def analyze_image(url, visual_features:nil, details:nil, language:nil, custom_headers:nil)
  response = analyze_image_async(url, visual_features:visual_features, details:details, language:language, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
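A hedged usage sketch (placeholder URL; 'client' is a configured ComputerVisionClient as shown above, and the attribute names follow the generated ImageAnalysis model):

# Analyze a public image, requesting a caption, tags and color information.
analysis = client.analyze_image(
  'https://example.com/photo.jpg',                   # placeholder URL
  visual_features: ['Description', 'Tags', 'Color']  # string values of VisualFeatureTypes
)

caption = analysis.description && analysis.description.captions && analysis.description.captions.first
puts "Caption: #{caption.text} (#{caption.confidence})" if caption
(analysis.tags || []).each { |tag| puts "Tag: #{tag.name} #{tag.confidence}" }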
analyze_image_async(url, visual_features:nil, details:nil, language:nil, custom_headers:nil)

This operation extracts a rich set of visual features based on the image content. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. Within your request, there is an optional parameter to allow you to choose which features to return. By default, image categories are returned in the response.

@param url [String] Publicly reachable URL of an image.
@param visual_features [Array<VisualFeatureTypes>] A string indicating what visual feature types to return. Multiple values should be comma-separated. Valid visual feature types include: Categories - categorizes image content according to a taxonomy defined in documentation. Tags - tags the image with a detailed list of words related to the image content. Description - describes the image content with a complete English sentence. Faces - detects if faces are present. If present, generates coordinates, gender and age. ImageType - detects if the image is clip art or a line drawing. Color - determines the accent color, dominant color, and whether an image is black & white. Adult - detects if the image is pornographic in nature (depicts nudity or a sex act). Sexually suggestive content is also detected.
@param details [Array<Details>] A string indicating which domain-specific details to return. Multiple values should be comma-separated. Valid domain-specific details include: Celebrities - identifies celebrities if detected in the image.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [Concurrent::Promise] Promise object which holds the HTTP response.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 319
def analyze_image_async(url, visual_features:nil, details:nil, language:nil, custom_headers:nil)
  fail ArgumentError, 'azure_region is nil' if azure_region.nil?
  fail ArgumentError, 'url is nil' if url.nil?

  image_url = ImageUrl.new
  unless url.nil?
    image_url.url = url
  end

  request_headers = {}
  request_headers['Content-Type'] = 'application/json; charset=utf-8'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::ImageUrl.mapper()
  request_content = self.serialize(request_mapper,  image_url)
  request_content = request_content != nil ? JSON.generate(request_content, quirks_mode: true) : nil

  path_template = 'analyze'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{AzureRegion}', azure_region)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'visualFeatures' => visual_features.nil? ? nil : visual_features.join(','),'details' => details.nil? ? nil : details.join(','),'language' => language},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::ImageAnalysis.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
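Because the async variant returns a Concurrent::Promise, the request can be started and joined later; a hedged sketch (placeholder URL):

promise = client.analyze_image_async('https://example.com/photo.jpg',
                                     visual_features: ['Categories'])

# ... do other work while the request is in flight ...

operation = promise.value!  # raises MsRest::HttpOperationError on non-200 responses
analysis  = operation.body  # deserialized ImageAnalysis, as returned by the synchronous wrapper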
analyze_image_by_domain(model, url, language:nil, custom_headers:nil)

This operation recognizes content within an image by applying a domain-specific model. The list of domain-specific models that are supported by the Computer Vision API can be retrieved using the /models GET request. Currently, the API only provides a single domain-specific model: celebrities. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.

@param model [String] The domain-specific content to recognize.
@param url [String] Publicly reachable URL of an image.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [DomainModelResults] operation results.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 969
def analyze_image_by_domain(model, url, language:nil, custom_headers:nil)
  response = analyze_image_by_domain_async(model, url, language:language, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
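A hedged usage sketch using the celebrities model mentioned in the description above (placeholder URL; the layout of the result payload depends on the chosen model):

celebrities = client.analyze_image_by_domain('celebrities', 'https://example.com/person.jpg')
puts celebrities.inspect  # deserialized DomainModelResults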
analyze_image_by_domain_async(model, url, language:nil, custom_headers:nil)

This operation recognizes content within an image by applying a domain-specific model. The list of domain-specific models that are supported by the Computer Vision API can be retrieved using the /models GET request. Currently, the API only provides a single domain-specific model: celebrities. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.

@param model [String] The domain-specific content to recognize.
@param url [String] Publicly reachable URL of an image.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [Concurrent::Promise] Promise object which holds the HTTP response.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1022
def analyze_image_by_domain_async(model, url, language:nil, custom_headers:nil)
  fail ArgumentError, 'azure_region is nil' if azure_region.nil?
  fail ArgumentError, 'model is nil' if model.nil?
  fail ArgumentError, 'url is nil' if url.nil?

  image_url = ImageUrl.new
  unless url.nil?
    image_url.url = url
  end

  request_headers = {}
  request_headers['Content-Type'] = 'application/json; charset=utf-8'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::ImageUrl.mapper()
  request_content = self.serialize(request_mapper,  image_url)
  request_content = request_content != nil ? JSON.generate(request_content, quirks_mode: true) : nil

  path_template = 'models/{model}/analyze'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{AzureRegion}', azure_region)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      path_params: {'model' => model},
      query_params: {'language' => language},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::DomainModelResults.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
analyze_image_by_domain_in_stream(model, image, language:nil, custom_headers:nil)

This operation recognizes content within an image by applying a domain-specific model. The list of domain-specific models that are supported by the Computer Vision API can be retrieved using the /models GET request. Currently, the API only provides a single domain-specific model: celebrities. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.

@param model [String] The domain-specific content to recognize.
@param image An image stream.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [DomainModelResults] operation results.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2052
def analyze_image_by_domain_in_stream(model, image, language:nil, custom_headers:nil)
  response = analyze_image_by_domain_in_stream_async(model, image, language:language, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
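The in-stream variant posts raw image bytes (application/octet-stream) instead of a JSON body with a URL; a hedged sketch reading a local file (placeholder filename):

File.open('portrait.jpg', 'rb') do |image|
  result = client.analyze_image_by_domain_in_stream('celebrities', image)
  puts result.inspect  # deserialized DomainModelResults
end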
analyze_image_by_domain_in_stream_async(model, image, language:nil, custom_headers:nil)

This operation recognizes content within an image by applying a domain-specific model. The list of domain-specific models that are supported by the Computer Vision API can be retrieved using the /models GET request. Currently, the API only provides a single domain-specific model: celebrities. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.

@param model [String] The domain-specific content to recognize.
@param image An image stream.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [Concurrent::Promise] Promise object which holds the HTTP response.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2105
def analyze_image_by_domain_in_stream_async(model, image, language:nil, custom_headers:nil)
  fail ArgumentError, 'azure_region is nil' if azure_region.nil?
  fail ArgumentError, 'model is nil' if model.nil?
  fail ArgumentError, 'image is nil' if image.nil?


  request_headers = {}
  request_headers['Content-Type'] = 'application/octet-stream'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = {
    client_side_validation: true,
    required: true,
    serialized_name: 'Image',
    type: {
      name: 'Stream'
    }
  }
  request_content = self.serialize(request_mapper,  image)

  path_template = 'models/{model}/analyze'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{AzureRegion}', azure_region)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      path_params: {'model' => model},
      query_params: {'language' => language},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::DomainModelResults.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
analyze_image_by_domain_in_stream_with_http_info(model, image, language:nil, custom_headers:nil)

This operation recognizes content within an image by applying a domain-specific model. The list of domain-specific models that are supported by the Computer Vision API can be retrieved using the /models GET request. Currently, the API only provides a single domain-specific model: celebrities. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.

@param model [String] The domain-specific content to recognize.
@param image An image stream.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [MsRestAzure::AzureOperationResponse] HTTP response information.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2079
def analyze_image_by_domain_in_stream_with_http_info(model, image, language:nil, custom_headers:nil)
  analyze_image_by_domain_in_stream_async(model, image, language:language, custom_headers:custom_headers).value!
end
analyze_image_by_domain_with_http_info(model, url, language:nil, custom_headers:nil)

This operation recognizes content within an image by applying a domain-specific model. The list of domain-specific models that are supported by the Computer Vision API can be retrieved using the /models GET request. Currently, the API only provides a single domain-specific model: celebrities. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.

@param model [String] The domain-specific content to recognize.
@param url [String] Publicly reachable URL of an image.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [MsRestAzure::AzureOperationResponse] HTTP response information.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 996
def analyze_image_by_domain_with_http_info(model, url, language:nil, custom_headers:nil)
  analyze_image_by_domain_async(model, url, language:language, custom_headers:custom_headers).value!
end
analyze_image_in_stream(image, visual_features:nil, details:nil, language:nil, custom_headers:nil)

This operation extracts a rich set of visual features based on the image content.

@param image An image stream.
@param visual_features [Array<VisualFeatureTypes>] A string indicating what visual feature types to return. Multiple values should be comma-separated. Valid visual feature types include: Categories - categorizes image content according to a taxonomy defined in documentation. Tags - tags the image with a detailed list of words related to the image content. Description - describes the image content with a complete English sentence. Faces - detects if faces are present. If present, generates coordinates, gender and age. ImageType - detects if the image is clip art or a line drawing. Color - determines the accent color, dominant color, and whether an image is black & white. Adult - detects if the image is pornographic in nature (depicts nudity or a sex act). Sexually suggestive content is also detected.
@param details [Enum] A string indicating which domain-specific details to return. Multiple values should be comma-separated. Valid domain-specific details include: Celebrities - identifies celebrities if detected in the image. Possible values include: 'Celebrities', 'Landmarks'.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [ImageAnalysis] operation results.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1323
def analyze_image_in_stream(image, visual_features:nil, details:nil, language:nil, custom_headers:nil)
  response = analyze_image_in_stream_async(image, visual_features:visual_features, details:details, language:language, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
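A hedged sketch uploading local bytes and reading the adult and face results from the returned ImageAnalysis (placeholder filename; attribute names assume the generated snake_case models):

File.open('photo.jpg', 'rb') do |image|
  analysis = client.analyze_image_in_stream(image, visual_features: ['Adult', 'Faces'])
  puts "adult score: #{analysis.adult.adult_score}" if analysis.adult
  puts "faces found: #{(analysis.faces || []).length}"
end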
analyze_image_in_stream_async(image, visual_features:nil, details:nil, language:nil, custom_headers:nil)

This operation extracts a rich set of visual features based on the image content.

@param image An image stream.
@param visual_features [Array<VisualFeatureTypes>] A string indicating what visual feature types to return. Multiple values should be comma-separated. Valid visual feature types include: Categories - categorizes image content according to a taxonomy defined in documentation. Tags - tags the image with a detailed list of words related to the image content. Description - describes the image content with a complete English sentence. Faces - detects if faces are present. If present, generates coordinates, gender and age. ImageType - detects if the image is clip art or a line drawing. Color - determines the accent color, dominant color, and whether an image is black & white. Adult - detects if the image is pornographic in nature (depicts nudity or a sex act). Sexually suggestive content is also detected.
@param details [Enum] A string indicating which domain-specific details to return. Multiple values should be comma-separated. Valid domain-specific details include: Celebrities - identifies celebrities if detected in the image. Possible values include: 'Celebrities', 'Landmarks'.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [Concurrent::Promise] Promise object which holds the HTTP response.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1392
def analyze_image_in_stream_async(image, visual_features:nil, details:nil, language:nil, custom_headers:nil)
  fail ArgumentError, 'azure_region is nil' if azure_region.nil?
  fail ArgumentError, 'image is nil' if image.nil?


  request_headers = {}
  request_headers['Content-Type'] = 'application/octet-stream'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = {
    client_side_validation: true,
    required: true,
    serialized_name: 'Image',
    type: {
      name: 'Stream'
    }
  }
  request_content = self.serialize(request_mapper,  image)

  path_template = 'analyze'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{AzureRegion}', azure_region)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'visualFeatures' => visual_features.nil? ? nil : visual_features.join(','),'details' => details,'language' => language},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::ImageAnalysis.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
analyze_image_in_stream_with_http_info(image, visual_features:nil, details:nil, language:nil, custom_headers:nil)

This operation extracts a rich set of visual features based on the image content.

@param image An image stream.
@param visual_features [Array<VisualFeatureTypes>] A string indicating what visual feature types to return. Multiple values should be comma-separated. Valid visual feature types include: Categories - categorizes image content according to a taxonomy defined in documentation. Tags - tags the image with a detailed list of words related to the image content. Description - describes the image content with a complete English sentence. Faces - detects if faces are present. If present, generates coordinates, gender and age. ImageType - detects if the image is clip art or a line drawing. Color - determines the accent color, dominant color, and whether an image is black & white. Adult - detects if the image is pornographic in nature (depicts nudity or a sex act). Sexually suggestive content is also detected.
@param details [Enum] A string indicating which domain-specific details to return. Multiple values should be comma-separated. Valid domain-specific details include: Celebrities - identifies celebrities if detected in the image. Possible values include: 'Celebrities', 'Landmarks'.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [MsRestAzure::AzureOperationResponse] HTTP response information.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1358
def analyze_image_in_stream_with_http_info(image, visual_features:nil, details:nil, language:nil, custom_headers:nil)
  analyze_image_in_stream_async(image, visual_features:visual_features, details:details, language:language, custom_headers:custom_headers).value!
end
analyze_image_with_http_info(url, visual_features:nil, details:nil, language:nil, custom_headers:nil)

This operation extracts a rich set of visual features based on the image content. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. Within your request, there is an optional parameter to allow you to choose which features to return. By default, image categories are returned in the response.

@param url [String] Publicly reachable URL of an image.
@param visual_features [Array<VisualFeatureTypes>] A string indicating what visual feature types to return. Multiple values should be comma-separated. Valid visual feature types include: Categories - categorizes image content according to a taxonomy defined in documentation. Tags - tags the image with a detailed list of words related to the image content. Description - describes the image content with a complete English sentence. Faces - detects if faces are present. If present, generates coordinates, gender and age. ImageType - detects if the image is clip art or a line drawing. Color - determines the accent color, dominant color, and whether an image is black & white. Adult - detects if the image is pornographic in nature (depicts nudity or a sex act). Sexually suggestive content is also detected.
@param details [Array<Details>] A string indicating which domain-specific details to return. Multiple values should be comma-separated. Valid domain-specific details include: Celebrities - identifies celebrities if detected in the image.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [MsRestAzure::AzureOperationResponse] HTTP response information.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 282
def analyze_image_with_http_info(url, visual_features:nil, details:nil, language:nil, custom_headers:nil)
  analyze_image_async(url, visual_features:visual_features, details:details, language:language, custom_headers:custom_headers).value!
end
describe_image(url, max_candidates:'1', language:nil, custom_headers:nil)

This operation generates a description of an image in human-readable language with complete sentences. The description is based on a collection of content tags, which are also returned by the operation. More than one description can be generated for each image. Descriptions are ordered by their confidence score. All descriptions are in English. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.

@param url [String] Publicly reachable URL of an image.
@param max_candidates [String] Maximum number of candidate descriptions to be returned. The default is 1.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [ImageDescription] operation results.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 695
def describe_image(url, max_candidates:'1', language:nil, custom_headers:nil)
  response = describe_image_async(url, max_candidates:max_candidates, language:language, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
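A hedged sketch requesting several caption candidates (placeholder URL; note that max_candidates is passed as a String, matching the '1' default):

description = client.describe_image('https://example.com/photo.jpg', max_candidates: '3')

(description.captions || []).each do |caption|
  puts "#{caption.text} (confidence #{caption.confidence})"
end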
describe_image_async(url, max_candidates:'1', language:nil, custom_headers:nil)

This operation generates a description of an image in human-readable language with complete sentences. The description is based on a collection of content tags, which are also returned by the operation. More than one description can be generated for each image. Descriptions are ordered by their confidence score. All descriptions are in English. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.

@param url [String] Publicly reachable URL of an image.
@param max_candidates [String] Maximum number of candidate descriptions to be returned. The default is 1.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [Concurrent::Promise] Promise object which holds the HTTP response.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 750
def describe_image_async(url, max_candidates:'1', language:nil, custom_headers:nil)
  fail ArgumentError, 'azure_region is nil' if azure_region.nil?
  fail ArgumentError, 'url is nil' if url.nil?

  image_url = ImageUrl.new
  unless url.nil?
    image_url.url = url
  end

  request_headers = {}
  request_headers['Content-Type'] = 'application/json; charset=utf-8'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::ImageUrl.mapper()
  request_content = self.serialize(request_mapper,  image_url)
  request_content = request_content != nil ? JSON.generate(request_content, quirks_mode: true) : nil

  path_template = 'describe'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{AzureRegion}', azure_region)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'maxCandidates' => max_candidates,'language' => language},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::ImageDescription.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
describe_image_in_stream(image, max_candidates:'1', language:nil, custom_headers:nil)

This operation generates a description of an image in human-readable language with complete sentences. The description is based on a collection of content tags, which are also returned by the operation. More than one description can be generated for each image. Descriptions are ordered by their confidence score. All descriptions are in English. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.

@param image An image stream.
@param max_candidates [String] Maximum number of candidate descriptions to be returned. The default is 1.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [ImageDescription] operation results.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1774
def describe_image_in_stream(image, max_candidates:'1', language:nil, custom_headers:nil)
  response = describe_image_in_stream_async(image, max_candidates:max_candidates, language:language, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
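A hedged in-stream sketch (placeholder filename):

File.open('photo.jpg', 'rb') do |image|
  description = client.describe_image_in_stream(image)
  caption = description.captions && description.captions.first
  puts caption.text if caption
end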
describe_image_in_stream_async(image, max_candidates:'1', language:nil, custom_headers:nil)

This operation generates a description of an image in human-readable language with complete sentences. The description is based on a collection of content tags, which are also returned by the operation. More than one description can be generated for each image. Descriptions are ordered by their confidence score. All descriptions are in English. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.

@param image An image stream.
@param max_candidates [String] Maximum number of candidate descriptions to be returned. The default is 1.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [Concurrent::Promise] Promise object which holds the HTTP response.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1829
def describe_image_in_stream_async(image, max_candidates:'1', language:nil, custom_headers:nil)
  fail ArgumentError, 'azure_region is nil' if azure_region.nil?
  fail ArgumentError, 'image is nil' if image.nil?


  request_headers = {}
  request_headers['Content-Type'] = 'application/octet-stream'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = {
    client_side_validation: true,
    required: true,
    serialized_name: 'Image',
    type: {
      name: 'Stream'
    }
  }
  request_content = self.serialize(request_mapper,  image)

  path_template = 'describe'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{AzureRegion}', azure_region)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'maxCandidates' => max_candidates,'language' => language},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::ImageDescription.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
describe_image_in_stream_with_http_info(image, max_candidates:'1', language:nil, custom_headers:nil)

This operation generates a description of an image in human-readable language with complete sentences. The description is based on a collection of content tags, which are also returned by the operation. More than one description can be generated for each image. Descriptions are ordered by their confidence score. All descriptions are in English. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.

@param image An image stream.
@param max_candidates [String] Maximum number of candidate descriptions to be returned. The default is 1.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [MsRestAzure::AzureOperationResponse] HTTP response information.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1802
def describe_image_in_stream_with_http_info(image, max_candidates:'1', language:nil, custom_headers:nil)
  describe_image_in_stream_async(image, max_candidates:max_candidates, language:language, custom_headers:custom_headers).value!
end
describe_image_with_http_info(url, max_candidates:'1', language:nil, custom_headers:nil)

This operation generates a description of an image in human-readable language with complete sentences. The description is based on a collection of content tags, which are also returned by the operation. More than one description can be generated for each image. Descriptions are ordered by their confidence score. All descriptions are in English. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.

@param url [String] Publicly reachable URL of an image.
@param max_candidates [String] Maximum number of candidate descriptions to be returned. The default is 1.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [MsRestAzure::AzureOperationResponse] HTTP response information.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 723
def describe_image_with_http_info(url, max_candidates:'1', language:nil, custom_headers:nil)
  describe_image_async(url, max_candidates:max_candidates, language:language, custom_headers:custom_headers).value!
end
generate_thumbnail(width, height, url, smart_cropping:false, custom_headers:nil)

This operation generates a thumbnail image with the user-specified width and height. By default, the service analyzes the image, identifies the region of interest (ROI), and generates smart cropping coordinates based on the ROI. Smart cropping helps when you specify an aspect ratio that differs from that of the input image. A successful response contains the thumbnail image binary. If the request failed, the response contains an error code and a message to help determine what went wrong.

@param width [Integer] Width of the thumbnail. It must be between 1 and 1024. Recommended minimum of 50.
@param height [Integer] Height of the thumbnail. It must be between 1 and 1024. Recommended minimum of 50.
@param url [String] Publicly reachable URL of an image.
@param smart_cropping [Boolean] Boolean flag for enabling smart cropping.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [NOT_IMPLEMENTED] operation results.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 403
def generate_thumbnail(width, height, url, smart_cropping:false, custom_headers:nil)
  response = generate_thumbnail_async(width, height, url, smart_cropping:smart_cropping, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
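A hedged sketch (placeholder URL and output path); per the description above, a successful call returns the thumbnail image binary, which is written to disk unmodified:

# 100x100 smart-cropped thumbnail; width and height must stay within the
# 1-1023 range enforced client-side in generate_thumbnail_async below.
thumbnail = client.generate_thumbnail(100, 100, 'https://example.com/photo.jpg',
                                      smart_cropping: true)
File.binwrite('thumbnail.jpg', thumbnail) unless thumbnail.nil?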
generate_thumbnail_async(width, height, url, smart_cropping:false, custom_headers:nil)

This operation generates a thumbnail image with the user-specified width and height. By default, the service analyzes the image, identifies the region of interest (ROI), and generates smart cropping coordinates based on the ROI. Smart cropping helps when you specify an aspect ratio that differs from that of the input image. A successful response contains the thumbnail image binary. If the request failed, the response contains an error code and a message to help determine what went wrong.

@param width [Integer] Width of the thumbnail. It must be between 1 and 1024. Recommended minimum of 50.
@param height [Integer] Height of the thumbnail. It must be between 1 and 1024. Recommended minimum of 50.
@param url [String] Publicly reachable URL of an image.
@param smart_cropping [Boolean] Boolean flag for enabling smart cropping.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [Concurrent::Promise] Promise object which holds the HTTP response.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 452
def generate_thumbnail_async(width, height, url, smart_cropping:false, custom_headers:nil)
  fail ArgumentError, 'azure_region is nil' if azure_region.nil?
  fail ArgumentError, 'width is nil' if width.nil?
  fail ArgumentError, "'width' should satisfy the constraint - 'InclusiveMaximum': '1023'" if !width.nil? && width > 1023
  fail ArgumentError, "'width' should satisfy the constraint - 'InclusiveMinimum': '1'" if !width.nil? && width < 1
  fail ArgumentError, 'height is nil' if height.nil?
  fail ArgumentError, "'height' should satisfy the constraint - 'InclusiveMaximum': '1023'" if !height.nil? && height > 1023
  fail ArgumentError, "'height' should satisfy the constraint - 'InclusiveMinimum': '1'" if !height.nil? && height < 1
  fail ArgumentError, 'url is nil' if url.nil?

  image_url = ImageUrl.new
  unless url.nil?
    image_url.url = url
  end

  request_headers = {}
  request_headers['Content-Type'] = 'application/json; charset=utf-8'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::ImageUrl.mapper()
  request_content = self.serialize(request_mapper,  image_url)
  request_content = request_content != nil ? JSON.generate(request_content, quirks_mode: true) : nil

  path_template = 'generateThumbnail'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{AzureRegion}', azure_region)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'width' => width,'height' => height,'smartCropping' => smart_cropping},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRestAzure::AzureOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = {
          client_side_validation: true,
          required: false,
          serialized_name: 'parsed_response',
          type: {
            name: 'Stream'
          }
        }
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
generate_thumbnail_in_stream(width, height, image, smart_cropping:false, custom_headers:nil)

This operation generates a thumbnail image with the user-specified width and height. By default, the service analyzes the image, identifies the region of interest (ROI), and generates smart cropping coordinates based on the ROI. Smart cropping helps when you specify an aspect ratio that differs from that of the input image. A successful response contains the thumbnail image binary. If the request failed, the response contains an error code and a message to help determine what went wrong.

@param width [Integer] Width of the thumbnail. It must be between 1 and 1024. Recommended minimum of 50.
@param height [Integer] Height of the thumbnail. It must be between 1 and 1024. Recommended minimum of 50.
@param image An image stream.
@param smart_cropping [Boolean] Boolean flag for enabling smart cropping.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [NOT_IMPLEMENTED] operation results.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1478
def generate_thumbnail_in_stream(width, height, image, smart_cropping:false, custom_headers:nil)
  response = generate_thumbnail_in_stream_async(width, height, image, smart_cropping:smart_cropping, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
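A hedged in-stream sketch (placeholder filenames):

File.open('photo.jpg', 'rb') do |image|
  thumbnail = client.generate_thumbnail_in_stream(60, 60, image, smart_cropping: false)
  File.binwrite('thumbnail_60x60.jpg', thumbnail) unless thumbnail.nil?
end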
generate_thumbnail_in_stream_async(width, height, image, smart_cropping:false, custom_headers:nil)

This operation generates a thumbnail image with the user-specified width and height. By default, the service analyzes the image, identifies the region of interest (ROI), and generates smart cropping coordinates based on the ROI. Smart cropping helps when you specify an aspect ratio that differs from that of the input image. A successful response contains the thumbnail image binary. If the request failed, the response contains an error code and a message to help determine what went wrong.

@param width [Integer] Width of the thumbnail. It must be between 1 and 1024. Recommended minimum of 50. @param height [Integer] Height of the thumbnail. It must be between 1 and 1024. Recommended minimum of 50.

@param image An image stream. @param smart_cropping [Boolean] Boolean flag for enabling smart cropping. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [Concurrent::Promise] Promise object which holds the HTTP response.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1527
def generate_thumbnail_in_stream_async(width, height, image, smart_cropping:false, custom_headers:nil)
  fail ArgumentError, 'azure_region is nil' if azure_region.nil?
  fail ArgumentError, 'width is nil' if width.nil?
  fail ArgumentError, "'width' should satisfy the constraint - 'InclusiveMaximum': '1023'" if !width.nil? && width > 1023
  fail ArgumentError, "'width' should satisfy the constraint - 'InclusiveMinimum': '1'" if !width.nil? && width < 1
  fail ArgumentError, 'height is nil' if height.nil?
  fail ArgumentError, "'height' should satisfy the constraint - 'InclusiveMaximum': '1023'" if !height.nil? && height > 1023
  fail ArgumentError, "'height' should satisfy the constraint - 'InclusiveMinimum': '1'" if !height.nil? && height < 1
  fail ArgumentError, 'image is nil' if image.nil?


  request_headers = {}
  request_headers['Content-Type'] = 'application/octet-stream'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = {
    client_side_validation: true,
    required: true,
    serialized_name: 'Image',
    type: {
      name: 'Stream'
    }
  }
  request_content = self.serialize(request_mapper,  image)

  path_template = 'generateThumbnail'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{AzureRegion}', azure_region)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'width' => width,'height' => height,'smartCropping' => smart_cropping},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRestAzure::AzureOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = {
          client_side_validation: true,
          required: false,
          serialized_name: 'parsed_response',
          type: {
            name: 'Stream'
          }
        }
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
generate_thumbnail_in_stream_with_http_info(width, height, image, smart_cropping:false, custom_headers:nil) click to toggle source

This operation generates a thumbnail image with the user-specified width and height. By default, the service analyzes the image, identifies the region of interest (ROI), and generates smart cropping coordinates based on the ROI. Smart cropping helps when you specify an aspect ratio that differs from that of the input image. A successful response contains the thumbnail image binary. If the request failed, the response contains an error code and a message to help determine what went wrong.

@param width [Integer] Width of the thumbnail. It must be between 1 and 1024. Recommended minimum of 50. @param height [Integer] Height of the thumbnail. It must be between 1 and 1024. Recommended minimum of 50.

@param image An image stream. @param smart_cropping [Boolean] Boolean flag for enabling smart cropping. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [MsRestAzure::AzureOperationResponse] HTTP response information.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1503
def generate_thumbnail_in_stream_with_http_info(width, height, image, smart_cropping:false, custom_headers:nil)
  generate_thumbnail_in_stream_async(width, height, image, smart_cropping:smart_cropping, custom_headers:custom_headers).value!
end
generate_thumbnail_with_http_info(width, height, url, smart_cropping:false, custom_headers:nil) click to toggle source

This operation generates a thumbnail image with the user-specified width and height. By default, the service analyzes the image, identifies the region of interest (ROI), and generates smart cropping coordinates based on the ROI. Smart cropping helps when you specify an aspect ratio that differs from that of the input image. A successful response contains the thumbnail image binary. If the request failed, the response contains an error code and a message to help determine what went wrong.

@param width [Integer] Width of the thumbnail. It must be between 1 and 1024. Recommended minimum of 50. @param height [Integer] Height of the thumbnail. It must be between 1 and 1024. Recommended minimum of 50.

@param url [String] Publicly reachable URL of an image @param smart_cropping [Boolean] Boolean flag for enabling smart cropping. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [MsRestAzure::AzureOperationResponse] HTTP response information.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 428
def generate_thumbnail_with_http_info(width, height, url, smart_cropping:false, custom_headers:nil)
  generate_thumbnail_async(width, height, url, smart_cropping:smart_cropping, custom_headers:custom_headers).value!
end
get_text_operation_result(operation_id, custom_headers:nil) click to toggle source

This interface is used for getting a text operation result. The URL for this interface should be retrieved from the 'Operation-Location' field returned by the Recognize Text interface.

@param operation_id [String] Id of the text operation, as returned in the response of the 'Recognize Handwritten Text' interface. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [TextOperationResult] operation results.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1207
def get_text_operation_result(operation_id, custom_headers:nil)
  response = get_text_operation_result_async(operation_id, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
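
Example (illustrative, not generated): polling for a completed text operation, reusing the client from the thumbnail sketch above. The operation_id is assumed to come from the last path segment of a previous 'Operation-Location' header (see recognize_text below); the status attribute and its 'Succeeded'/'Failed' values are assumptions based on the TextOperationResult model.

# operation_id is assumed to be the last segment of an 'Operation-Location' URL.
result = client.get_text_operation_result(operation_id)
until result.status.to_s =~ /Succeeded|Failed/
  sleep 1
  result = client.get_text_operation_result(operation_id)
end
puts result.status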
get_text_operation_result_async(operation_id, custom_headers:nil) click to toggle source

This interface is used for getting a text operation result. The URL for this interface should be retrieved from the 'Operation-Location' field returned by the Recognize Text interface.

@param operation_id [String] Id of the text operation, as returned in the response of the 'Recognize Handwritten Text' interface. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [Concurrent::Promise] Promise object which holds the HTTP response.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1240
def get_text_operation_result_async(operation_id, custom_headers:nil)
  fail ArgumentError, 'azure_region is nil' if azure_region.nil?
  fail ArgumentError, 'operation_id is nil' if operation_id.nil?


  request_headers = {}
  request_headers['Content-Type'] = 'application/json; charset=utf-8'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?
  path_template = 'textOperations/{operationId}'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{AzureRegion}', azure_region)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      path_params: {'operationId' => operation_id},
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:get, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::TextOperationResult.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
get_text_operation_result_with_http_info(operation_id, custom_headers:nil) click to toggle source

This interface is used for getting a text operation result. The URL for this interface should be retrieved from the 'Operation-Location' field returned by the Recognize Text interface.

@param operation_id [String] Id of the text operation, as returned in the response of the 'Recognize Handwritten Text' interface. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [MsRestAzure::AzureOperationResponse] HTTP response information.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1224
def get_text_operation_result_with_http_info(operation_id, custom_headers:nil)
  get_text_operation_result_async(operation_id, custom_headers:custom_headers).value!
end
list_models(custom_headers:nil) click to toggle source

This operation returns the list of domain-specific models that are supported by the Computer Vision API. Currently, the API only supports one domain-specific model: a celebrity recognizer. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.

@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [ListModelsResult] operation results.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 127
def list_models(custom_headers:nil)
  response = list_models_async(custom_headers:custom_headers).value!
  response.body unless response.nil?
end
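
Example (illustrative): listing the supported domain-specific models with the client built in the earlier sketch. The result is simply inspected here rather than assuming specific attribute names on ListModelsResult.

models = client.list_models
# ListModelsResult wraps the JSON returned by the service; inspect it to see the
# available domain-specific models (currently the celebrity recognizer).
puts models.inspect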
list_models_async(custom_headers:nil) click to toggle source

This operation returns the list of domain-specific models that are supported by the Computer Vision API. Currently, the API only supports one domain-specific model: a celebrity recognizer. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.

@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [Concurrent::Promise] Promise object which holds the HTTP response.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 160
def list_models_async(custom_headers:nil)
  fail ArgumentError, 'azure_region is nil' if azure_region.nil?


  request_headers = {}
  request_headers['Content-Type'] = 'application/json; charset=utf-8'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?
  path_template = 'models'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{AzureRegion}', azure_region)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:get, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::ListModelsResult.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
list_models_with_http_info(custom_headers:nil) click to toggle source

This operation returns the list of domain-specific models that are supported by the Computer Vision API. Currently, the API only supports one domain-specific model: a celebrity recognizer. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.

@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [MsRestAzure::AzureOperationResponse] HTTP response information.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 144
def list_models_with_http_info(custom_headers:nil)
  list_models_async(custom_headers:custom_headers).value!
end
make_request(method, path, options = {}) click to toggle source

Makes a request and returns the body of the response. @param method [Symbol] with any of the following values :get, :put, :post, :patch, :delete. @param path [String] the path, relative to {base_url}. @param options [Hash{String=>String}] specifying any request options like :body. @return [Hash{String=>String}] containing the body of the response. Example:

request_content = "{'location':'westus','tags':{'tag1':'val1','tag2':'val2'}}"
path = "/path"
options = {
  body: request_content,
  query_params: {'api-version' => '2016-02-01'}
}
result = @client.make_request(:put, path, options)
# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 73
def make_request(method, path, options = {})
  result = make_request_with_http_info(method, path, options)
  result.body unless result.nil?
end
make_request_async(method, path, options = {}) click to toggle source

Makes a request asynchronously. @param method [Symbol] with any of the following values :get, :put, :post, :patch, :delete. @param path [String] the path, relative to {base_url}. @param options [Hash{String=>String}] specifying any request options like :body. @return [Concurrent::Promise] Promise object which holds the HTTP response.

Calls superclass method
# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 98
def make_request_async(method, path, options = {})
  fail ArgumentError, 'method is nil' if method.nil?
  fail ArgumentError, 'path is nil' if path.nil?

  request_url = options[:base_url] || @base_url
  if(!options[:headers].nil? && !options[:headers]['Content-Type'].nil?)
    @request_headers['Content-Type'] = options[:headers]['Content-Type']
  end

  request_headers = @request_headers
  request_headers.merge!({'accept-language' => @accept_language}) unless @accept_language.nil?
  options.merge!({headers: request_headers.merge(options[:headers] || {})})
  options.merge!({credentials: @credentials}) unless @credentials.nil?

  super(request_url, method, path, options)
end
make_request_with_http_info(method, path, options = {}) click to toggle source

Makes a request and returns the operation response. @param method [Symbol] with any of the following values :get, :put, :post, :patch, :delete. @param path [String] the path, relative to {base_url}. @param options [Hash{String=>String}] specifying any request options like :body. @return [MsRestAzure::AzureOperationResponse] Operation response containing the request, response and status.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 85
def make_request_with_http_info(method, path, options = {})
  result = make_request_async(method, path, options).value!
  result.body = result.response.body.to_s.empty? ? nil : JSON.load(result.response.body)
  result
end
recognize_printed_text(detect_orientation, url, language:nil, custom_headers:nil) click to toggle source

Optical Character Recognition (OCR) detects printed text in an image and extracts the recognized characters into a machine-usable character stream. Upon success, the OCR results will be returned. Upon failure, the error code together with an error message will be returned. The error code can be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, NotSupportedImage, NotSupportedLanguage, or InternalServerError.

@param detect_orientation [Boolean] Whether to detect the text orientation in the image. With detectOrientation=true the OCR service tries to detect the image orientation and correct it before further processing (e.g. if it's upside-down). @param url [String] Publicly reachable URL of an image @param language [OcrLanguages] The BCP-47 language code of the text to be detected in the image. The default value is 'unk'. Possible values include: 'unk', 'zh-Hans', 'zh-Hant', 'cs', 'da', 'nl', 'en', 'fi', 'fr', 'de', 'el', 'hu', 'it', 'ja', 'ko', 'nb', 'pl', 'pt', 'ru', 'es', 'sv', 'tr', 'ar', 'ro', 'sr-Cyrl', 'sr-Latn', 'sk' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [OcrResult] operation results.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 552
def recognize_printed_text(detect_orientation, url, language:nil, custom_headers:nil)
  response = recognize_printed_text_async(detect_orientation, url, language:language, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
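
Example (illustrative): running OCR on a publicly reachable image URL with the client built earlier and printing the recognized text line by line. The regions -> lines -> words -> text traversal is an assumption based on the generated OcrResult models; the URL is a placeholder.

ocr = client.recognize_printed_text(true, 'https://example.com/receipt.png', language: 'en')
ocr.regions.each do |region|
  region.lines.each do |line|
    # Join the recognized words of each OCR line into a readable string.
    puts line.words.map(&:text).join(' ')
  end
end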
recognize_printed_text_async(detect_orientation, url, language:nil, custom_headers:nil) click to toggle source

Optical Character Recognition (OCR) detects printed text in an image and extracts the recognized characters into a machine-usable character stream. Upon success, the OCR results will be returned. Upon failure, the error code together with an error message will be returned. The error code can be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, NotSupportedImage, NotSupportedLanguage, or InternalServerError.

@param detect_orientation [Boolean] Whether to detect the text orientation in the image. With detectOrientation=true the OCR service tries to detect the image orientation and correct it before further processing (e.g. if it's upside-down). @param url [String] Publicly reachable URL of an image @param language [OcrLanguages] The BCP-47 language code of the text to be detected in the image. The default value is 'unk'. Possible values include: 'unk', 'zh-Hans', 'zh-Hant', 'cs', 'da', 'nl', 'en', 'fi', 'fr', 'de', 'el', 'hu', 'it', 'ja', 'ko', 'nb', 'pl', 'pt', 'ru', 'es', 'sv', 'tr', 'ar', 'ro', 'sr-Cyrl', 'sr-Latn', 'sk' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [Concurrent::Promise] Promise object which holds the HTTP response.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 607
def recognize_printed_text_async(detect_orientation, url, language:nil, custom_headers:nil)
  fail ArgumentError, 'azure_region is nil' if azure_region.nil?
  fail ArgumentError, 'detect_orientation is nil' if detect_orientation.nil?
  fail ArgumentError, 'url is nil' if url.nil?

  image_url = ImageUrl.new
  unless url.nil?
    image_url.url = url
  end

  request_headers = {}
  request_headers['Content-Type'] = 'application/json; charset=utf-8'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::ImageUrl.mapper()
  request_content = self.serialize(request_mapper,  image_url)
  request_content = request_content != nil ? JSON.generate(request_content, quirks_mode: true) : nil

  path_template = 'ocr'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{AzureRegion}', azure_region)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'detectOrientation' => detect_orientation,'language' => language},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::OcrResult.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
recognize_printed_text_in_stream(detect_orientation, image, language:nil, custom_headers:nil) click to toggle source

Optical Character Recognition (OCR) detects printed text in an image and extracts the recognized characters into a machine-usable character stream. Upon success, the OCR results will be returned. Upon failure, the error code together with an error message will be returned. The error code can be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, NotSupportedImage, NotSupportedLanguage, or InternalServerError.

@param detect_orientation [Boolean] Whether to detect the text orientation in the image. With detectOrientation=true the OCR service tries to detect the image orientation and correct it before further processing (e.g. if it's upside-down). @param image An image stream. @param language [OcrLanguages] The BCP-47 language code of the text to be detected in the image. The default value is 'unk'. Possible values include: 'unk', 'zh-Hans', 'zh-Hant', 'cs', 'da', 'nl', 'en', 'fi', 'fr', 'de', 'el', 'hu', 'it', 'ja', 'ko', 'nb', 'pl', 'pt', 'ru', 'es', 'sv', 'tr', 'ar', 'ro', 'sr-Cyrl', 'sr-Latn', 'sk' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [OcrResult] operation results.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1629
def recognize_printed_text_in_stream(detect_orientation, image, language:nil, custom_headers:nil)
  response = recognize_printed_text_in_stream_async(detect_orientation, image, language:language, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
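
Example (illustrative): the same OCR call against a local file. The image argument is passed as an IO, which the 'Stream' request mapper shown below sends as application/octet-stream; the file name is a placeholder.

File.open('scanned_page.png', 'rb') do |image|
  ocr = client.recognize_printed_text_in_stream(true, image, language: 'en')
  # Flatten regions -> lines -> words into plain text, one string per OCR line.
  ocr.regions.each do |region|
    region.lines.each { |line| puts line.words.map(&:text).join(' ') }
  end
end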
recognize_printed_text_in_stream_async(detect_orientation, image, language:nil, custom_headers:nil) click to toggle source

Optical Character Recognition (OCR) detects printed text in an image and extracts the recognized characters into a machine-usable character stream. Upon success, the OCR results will be returned. Upon failure, the error code together with an error message will be returned. The error code can be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, NotSupportedImage, NotSupportedLanguage, or InternalServerError.

@param detect_orientation [Boolean] Whether to detect the text orientation in the image. With detectOrientation=true the OCR service tries to detect the image orientation and correct it before further processing (e.g. if it's upside-down). @param image An image stream. @param language [OcrLanguages] The BCP-47 language code of the text to be detected in the image. The default value is 'unk'. Possible values include: 'unk', 'zh-Hans', 'zh-Hant', 'cs', 'da', 'nl', 'en', 'fi', 'fr', 'de', 'el', 'hu', 'it', 'ja', 'ko', 'nb', 'pl', 'pt', 'ru', 'es', 'sv', 'tr', 'ar', 'ro', 'sr-Cyrl', 'sr-Latn', 'sk' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [Concurrent::Promise] Promise object which holds the HTTP response.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1684
def recognize_printed_text_in_stream_async(detect_orientation, image, language:nil, custom_headers:nil)
  fail ArgumentError, 'azure_region is nil' if azure_region.nil?
  fail ArgumentError, 'detect_orientation is nil' if detect_orientation.nil?
  fail ArgumentError, 'image is nil' if image.nil?


  request_headers = {}
  request_headers['Content-Type'] = 'application/octet-stream'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = {
    client_side_validation: true,
    required: true,
    serialized_name: 'Image',
    type: {
      name: 'Stream'
    }
  }
  request_content = self.serialize(request_mapper,  image)

  path_template = 'ocr'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{AzureRegion}', azure_region)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'language' => language,'detectOrientation' => detect_orientation},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::OcrResult.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
recognize_printed_text_in_stream_with_http_info(detect_orientation, image, language:nil, custom_headers:nil) click to toggle source

Optical Character Recognition (OCR) detects printed text in an image and extracts the recognized characters into a machine-usable character stream. Upon success, the OCR results will be returned. Upon failure, the error code together with an error message will be returned. The error code can be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, NotSupportedImage, NotSupportedLanguage, or InternalServerError.

@param detect_orientation [Boolean] Whether to detect the text orientation in the image. With detectOrientation=true the OCR service tries to detect the image orientation and correct it before further processing (e.g. if it's upside-down). @param image An image stream. @param language [OcrLanguages] The BCP-47 language code of the text to be detected in the image. The default value is 'unk'. Possible values include: 'unk', 'zh-Hans', 'zh-Hant', 'cs', 'da', 'nl', 'en', 'fi', 'fr', 'de', 'el', 'hu', 'it', 'ja', 'ko', 'nb', 'pl', 'pt', 'ru', 'es', 'sv', 'tr', 'ar', 'ro', 'sr-Cyrl', 'sr-Latn', 'sk' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [MsRestAzure::AzureOperationResponse] HTTP response information.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1657
def recognize_printed_text_in_stream_with_http_info(detect_orientation, image, language:nil, custom_headers:nil)
  recognize_printed_text_in_stream_async(detect_orientation, image, language:language, custom_headers:custom_headers).value!
end
recognize_printed_text_with_http_info(detect_orientation, url, language:nil, custom_headers:nil) click to toggle source

Optical Character Recognition (OCR) detects printed text in an image and extracts the recognized characters into a machine-usable character stream. Upon success, the OCR results will be returned. Upon failure, the error code together with an error message will be returned. The error code can be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, NotSupportedImage, NotSupportedLanguage, or InternalServerError.

@param detect_orientation [Boolean] Whether to detect the text orientation in the image. With detectOrientation=true the OCR service tries to detect the image orientation and correct it before further processing (e.g. if it's upside-down). @param url [String] Publicly reachable URL of an image @param language [OcrLanguages] The BCP-47 language code of the text to be detected in the image. The default value is 'unk'. Possible values include: 'unk', 'zh-Hans', 'zh-Hant', 'cs', 'da', 'nl', 'en', 'fi', 'fr', 'de', 'el', 'hu', 'it', 'ja', 'ko', 'nb', 'pl', 'pt', 'ru', 'es', 'sv', 'tr', 'ar', 'ro', 'sr-Cyrl', 'sr-Latn', 'sk' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [MsRestAzure::AzureOperationResponse] HTTP response information.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 580
def recognize_printed_text_with_http_info(detect_orientation, url, language:nil, custom_headers:nil)
  recognize_printed_text_async(detect_orientation, url, language:language, custom_headers:custom_headers).value!
end
recognize_text(url, detect_handwriting:false, custom_headers:nil) click to toggle source

Recognize Text operation. When you use the Recognize Text interface, the response contains a field called 'Operation-Location'. The 'Operation-Location' field contains the URL that you must use for your Get Handwritten Text Operation Result operation.

@param url [String] Publicly reachable URL of an image @param detect_handwriting [Boolean] If 'true' is specified, handwriting recognition is performed. If this parameter is set to 'false' or is not specified, printed text recognition is performed. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1102
def recognize_text(url, detect_handwriting:false, custom_headers:nil)
  response = recognize_text_async(url, detect_handwriting:detect_handwriting, custom_headers:custom_headers).value!
  nil
end
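
Example (illustrative): the full Recognize Text workflow. Because recognize_text returns nil, the _with_http_info variant is used so that the 'Operation-Location' header can be read; headers are accessed with the same [] pattern the generated code applies to http_response, and taking the last URL segment as the operation id is an assumption. The URL is a placeholder.

op = client.recognize_text_with_http_info('https://example.com/handwriting.jpg',
                                          detect_handwriting: true)
operation_location = op.response['Operation-Location']
operation_id = operation_location.split('/').last
# Poll until the status reports success or failure, as sketched under
# get_text_operation_result above.
result = client.get_text_operation_result(operation_id)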
recognize_text_async(url, detect_handwriting:false, custom_headers:nil) click to toggle source

Recognize Text operation. When you use the Recognize Text interface, the response contains a field called 'Operation-Location'. The 'Operation-Location' field contains the URL that you must use for your Get Handwritten Text Operation Result operation.

@param url [String] Publicly reachable URL of an image @param detect_handwriting [Boolean] If 'true' is specified, handwriting recognition is performed. If this parameter is set to 'false' or is not specified, printed text recognition is performed. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [Concurrent::Promise] Promise object which holds the HTTP response.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1141
def recognize_text_async(url, detect_handwriting:false, custom_headers:nil)
  fail ArgumentError, 'azure_region is nil' if azure_region.nil?
  fail ArgumentError, 'url is nil' if url.nil?

  image_url = ImageUrl.new
  unless url.nil?
    image_url.url = url
  end

  request_headers = {}
  request_headers['Content-Type'] = 'application/json; charset=utf-8'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::ImageUrl.mapper()
  request_content = self.serialize(request_mapper,  image_url)
  request_content = request_content != nil ? JSON.generate(request_content, quirks_mode: true) : nil

  path_template = 'recognizeText'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{AzureRegion}', azure_region)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'detectHandwriting' => detect_handwriting},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 202
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?

    result
  end

  promise.execute
end
recognize_text_in_stream(image, detect_handwriting:false, custom_headers:nil) click to toggle source

Recognize Text operation. When you use the Recognize Text interface, the response contains a field called 'Operation-Location'. The 'Operation-Location' field contains the URL that you must use for your Get Handwritten Text Operation Result operation.

@param image An image stream. @param detect_handwriting [Boolean] If 'true' is specified, handwriting recognition is performed. If this parameter is set to 'false' or is not specified, printed text recognition is performed. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2187
def recognize_text_in_stream(image, detect_handwriting:false, custom_headers:nil)
  response = recognize_text_in_stream_async(image, detect_handwriting:detect_handwriting, custom_headers:custom_headers).value!
  nil
end
recognize_text_in_stream_async(image, detect_handwriting:false, custom_headers:nil) click to toggle source

Recognize Text operation. When you use the Recognize Text interface, the response contains a field called 'Operation-Location'. The 'Operation-Location' field contains the URL that you must use for your Get Handwritten Text Operation Result operation.

@param image An image stream. @param detect_handwriting [Boolean] If 'true' is specified, handwriting recognition is performed. If this parameter is set to 'false' or is not specified, printed text recognition is performed. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [Concurrent::Promise] Promise object which holds the HTTP response.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2226
def recognize_text_in_stream_async(image, detect_handwriting:false, custom_headers:nil)
  fail ArgumentError, 'azure_region is nil' if azure_region.nil?
  fail ArgumentError, 'image is nil' if image.nil?


  request_headers = {}
  request_headers['Content-Type'] = 'application/octet-stream'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = {
    client_side_validation: true,
    required: true,
    serialized_name: 'Image',
    type: {
      name: 'Stream'
    }
  }
  request_content = self.serialize(request_mapper,  image)

  path_template = 'recognizeText'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{AzureRegion}', azure_region)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'detectHandwriting' => detect_handwriting},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 202
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?

    result
  end

  promise.execute
end
recognize_text_in_stream_with_http_info(image, detect_handwriting:false, custom_headers:nil) click to toggle source

Recognize Text operation. When you use the Recognize Text interface, the response contains a field called 'Operation-Location'. The 'Operation-Location' field contains the URL that you must use for your Get Handwritten Text Operation Result operation.

@param image An image stream. @param detect_handwriting [Boolean] If 'true' is specified, handwriting recognition is performed. If this parameter is set to 'false' or is not specified, printed text recognition is performed. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [MsRestAzure::AzureOperationResponse] HTTP response information.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2207
def recognize_text_in_stream_with_http_info(image, detect_handwriting:false, custom_headers:nil)
  recognize_text_in_stream_async(image, detect_handwriting:detect_handwriting, custom_headers:custom_headers).value!
end
recognize_text_with_http_info(url, detect_handwriting:false, custom_headers:nil) click to toggle source

Recognize Text operation. When you use the Recognize Text interface, the response contains a field called 'Operation-Location'. The 'Operation-Location' field contains the URL that you must use for your Get Handwritten Text Operation Result operation.

@param url [String] Publicly reachable URL of an image @param detect_handwriting [Boolean] If 'true' is specified, handwriting recognition is performed. If this parameter is set to 'false' or is not specified, printed text recognition is performed. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [MsRestAzure::AzureOperationResponse] HTTP response information.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1122
def recognize_text_with_http_info(url, detect_handwriting:false, custom_headers:nil)
  recognize_text_async(url, detect_handwriting:detect_handwriting, custom_headers:custom_headers).value!
end
tag_image(url, language:nil, custom_headers:nil) click to toggle source

This operation generates a list of words, or tags, that are relevant to the content of the supplied image. The Computer Vision API can return tags based on objects, living beings, scenery or actions found in images. Unlike categories, tags are not organized according to a hierarchical classification system, but correspond to image content. Tags may contain hints to avoid ambiguity or provide context, for example the tag 'cello' may be accompanied by the hint 'musical instrument'. All tags are in English.

@param url [String] Publicly reachable URL of an image @param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is 'en'. Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [TagResult] operation results.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 834
def tag_image(url, language:nil, custom_headers:nil)
  response = tag_image_async(url, language:language, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
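
Example (illustrative): tagging an image by URL with the client built earlier and printing each tag with its confidence. That TagResult exposes #tags whose items respond to #name and #confidence is an assumption based on the generated models; the URL is a placeholder.

result = client.tag_image('https://example.com/cello.jpg', language: 'en')
result.tags.each do |tag|
  # Each entry is assumed to carry a name and a confidence score.
  puts format('%-20s %.3f', tag.name, tag.confidence)
end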
tag_image_async(url, language:nil, custom_headers:nil) click to toggle source

This operation generates a list of words, or tags, that are relevant to the content of the supplied image. The Computer Vision API can return tags based on objects, living beings, scenery or actions found in images. Unlike categories, tags are not organized according to a hierarchical classification system, but correspond to image content. Tags may contain hints to avoid ambiguity or provide context, for example the tag 'cello' may be accompanied by the hint 'musical instrument'. All tags are in English.

@param url [String] Publicly reachable URL of an image @param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is 'en'. Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [Concurrent::Promise] Promise object which holds the HTTP response.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 883
def tag_image_async(url, language:nil, custom_headers:nil)
  fail ArgumentError, 'azure_region is nil' if azure_region.nil?
  fail ArgumentError, 'url is nil' if url.nil?

  image_url = ImageUrl.new
  unless url.nil?
    image_url.url = url
  end

  request_headers = {}
  request_headers['Content-Type'] = 'application/json; charset=utf-8'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::ImageUrl.mapper()
  request_content = self.serialize(request_mapper,  image_url)
  request_content = request_content != nil ? JSON.generate(request_content, quirks_mode: true) : nil

  path_template = 'tag'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{AzureRegion}', azure_region)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'language' => language},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::TagResult.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
tag_image_in_stream(image, language:nil, custom_headers:nil) click to toggle source

This operation generates a list of words, or tags, that are relevant to the content of the supplied image. The Computer Vision API can return tags based on objects, living beings, scenery or actions found in images. Unlike categories, tags are not organized according to a hierarchical classification system, but correspond to image content. Tags may contain hints to avoid ambiguity or provide context, for example the tag 'cello' may be accompanied by the hint 'musical instrument'. All tags are in English.

@param image An image stream. @param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is 'en'. Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [TagResult] operation results.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1915
def tag_image_in_stream(image, language:nil, custom_headers:nil)
  response = tag_image_in_stream_async(image, language:language, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
tag_image_in_stream_async(image, language:nil, custom_headers:nil) click to toggle source

This operation generates a list of words, or tags, that are relevant to the content of the supplied image. The Computer Vision API can return tags based on objects, living beings, scenery or actions found in images. Unlike categories, tags are not organized according to a hierarchical classification system, but correspond to image content. Tags may contain hints to avoid ambiguity or provide context, for example the tag 'cello' may be accompanied by the hint 'musical instrument'. All tags are in English.

@param image An image stream. @param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is 'en'. Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [Concurrent::Promise] Promise object which holds the HTTP response.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1964
def tag_image_in_stream_async(image, language:nil, custom_headers:nil)
  fail ArgumentError, 'azure_region is nil' if azure_region.nil?
  fail ArgumentError, 'image is nil' if image.nil?


  request_headers = {}
  request_headers['Content-Type'] = 'application/octet-stream'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = {
    client_side_validation: true,
    required: true,
    serialized_name: 'Image',
    type: {
      name: 'Stream'
    }
  }
  request_content = self.serialize(request_mapper,  image)

  path_template = 'tag'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{AzureRegion}', azure_region)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'language' => language},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V1_0::Models::TagResult.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
tag_image_in_stream_with_http_info(image, language:nil, custom_headers:nil) click to toggle source

This operation generates a list of words, or tags, that are relevant to the content of the supplied image. The Computer Vision API can return tags based on objects, living beings, scenery or actions found in images. Unlike categories, tags are not organized according to a hierarchical classification system, but correspond to image content. Tags may contain hints to avoid ambiguity or provide context, for example the tag 'cello' may be accompanied by the hint 'musical instrument'. All tags are in English.

@param image An image stream. @param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is 'en'. Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [MsRestAzure::AzureOperationResponse] HTTP response information.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1940
def tag_image_in_stream_with_http_info(image, language:nil, custom_headers:nil)
  tag_image_in_stream_async(image, language:language, custom_headers:custom_headers).value!
end
tag_image_with_http_info(url, language:nil, custom_headers:nil) click to toggle source

This operation generates a list of words, or tags, that are relevant to the content of the supplied image. The Computer Vision API can return tags based on objects, living beings, scenery or actions found in images. Unlike categories, tags are not organized according to a hierarchical classification system, but correspond to image content. Tags may contain hints to avoid ambiguity or provide context, for example the tag 'cello' may be accompanied by the hint 'musical instrument'. All tags are in English.

@param url [String] Publicly reachable URL of an image @param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is 'en'. Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.

@return [MsRestAzure::AzureOperationResponse] HTTP response information.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 859
def tag_image_with_http_info(url, language:nil, custom_headers:nil)
  tag_image_async(url, language:language, custom_headers:custom_headers).value!
end

Private Instance Methods

add_telemetry() click to toggle source

Adds telemetry information.

# File lib/1.0/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2287
def add_telemetry
  sdk_information = 'azure_cognitiveservices_computervision'
  sdk_information = "#{sdk_information}/0.20.2"
  add_user_agent_information(sdk_information)
end