class Azure::CognitiveServices::ComputerVision::V2_1::ComputerVisionClient
A service client - single point of access to the REST API.
Attributes
@return [String] The preferred language for the response.
@return [String] the base URI of the service.
@return Subscription credentials which uniquely identify client subscription.
@return Credentials needed for the client to connect to Azure.
@return [String] Supported Cognitive Services endpoint (protocol and hostname, for example: https://westus.api.cognitive.microsoft.com).
@return [Boolean] Whether a unique x-ms-client-request-id should be generated. When set to true, a unique x-ms-client-request-id value is generated and included in each request. Default is true.
@return [Integer] The retry timeout in seconds for Long Running Operations. Default value is 30.
Public Class Methods
Creates and initializes a new instance of the ComputerVisionClient class. @param credentials [MsRest::ServiceClientCredentials] credentials to authorize HTTP requests made by the service client. @param options [Array] filters to be applied to the HTTP requests.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 44
def initialize(credentials = nil, options = nil)
  super(credentials, options)
  @base_url = '{Endpoint}/vision/v2.1'

  fail ArgumentError, 'invalid type of credentials input parameter' unless credentials.is_a?(MsRest::ServiceClientCredentials) unless credentials.nil?
  @credentials = credentials

  @accept_language = 'en-US'
  @long_running_operation_retry_timeout = 30
  @generate_client_request_id = true
  add_telemetry
end
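A minimal construction sketch (the subscription key, the endpoint URL, and the use of MsRestAzure::CognitiveServicesCredentials are placeholder assumptions, not part of the generated code above):

  require 'azure_cognitiveservices_computervision'

  # Hypothetical key and region; substitute the values for your own Cognitive Services resource.
  credentials = MsRestAzure::CognitiveServicesCredentials.new('YOUR_SUBSCRIPTION_KEY')
  client = Azure::CognitiveServices::ComputerVision::V2_1::ComputerVisionClient.new(credentials)

  # The endpoint attribute fills the '{Endpoint}' placeholder in @base_url.
  client.endpoint = 'https://westus.api.cognitive.microsoft.com'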
Public Instance Methods
This operation extracts a rich set of visual features based on the image content. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. Within your request, there is an optional parameter to allow you to choose which features to return. By default, image categories are returned in the response. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param url [String] Publicly reachable URL of an image.
@param visual_features [Array<VisualFeatureTypes>] A string indicating what visual feature types to return. Multiple values should be comma-separated. Valid visual feature types include: Categories - categorizes image content according to a taxonomy defined in documentation. Tags - tags the image with a detailed list of words related to the image content. Description - describes the image content with a complete English sentence. Faces - detects if faces are present. If present, generate coordinates, gender and age. ImageType - detects if image is clipart or a line drawing. Color - determines the accent color, dominant color, and whether an image is black&white. Adult - detects if the image is pornographic in nature (depicts nudity or a sex act), or is gory (depicts extreme violence or blood). Sexually suggestive content (aka racy content) is also detected. Objects - detects various objects within an image, including the approximate location. The Objects argument is only available in English. Brands - detects various brands within an image, including the approximate location. The Brands argument is only available in English.
@param details [Array<Details>] A string indicating which domain-specific details to return. Multiple values should be comma-separated. Valid visual feature types include: Celebrities - identifies celebrities if detected in the image, Landmarks - identifies notable landmarks in the image.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'
@param description_exclude [Array<DescriptionExclude>] Turn off specified domain models when generating the description.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [ImageAnalysis] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 159
def analyze_image(url, visual_features:nil, details:nil, language:nil, description_exclude:nil, custom_headers:nil)
  response = analyze_image_async(url, visual_features:visual_features, details:details, language:language, description_exclude:description_exclude, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
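A usage sketch, assuming a client configured as shown earlier; the image URL and the chosen features are illustrative:

  features = [
    Azure::CognitiveServices::ComputerVision::V2_1::Models::VisualFeatureTypes::Categories,
    Azure::CognitiveServices::ComputerVision::V2_1::Models::VisualFeatureTypes::Tags,
    Azure::CognitiveServices::ComputerVision::V2_1::Models::VisualFeatureTypes::Description
  ]

  analysis = client.analyze_image('https://example.com/sample.jpg', visual_features: features)

  # ImageAnalysis exposes one section per requested feature group.
  puts analysis.description.captions.first.text unless analysis.description.nil?
  analysis.tags.each { |tag| puts "#{tag.name}: #{tag.confidence}" } unless analysis.tags.nil?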
This operation extracts a rich set of visual features based on the image content. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. Within your request, there is an optional parameter to allow you to choose which features to return. By default, image categories are returned in the response. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param url [String] Publicly reachable URL of an image.
@param visual_features [Array<VisualFeatureTypes>] A string indicating what visual feature types to return. Multiple values should be comma-separated. Valid visual feature types include: Categories - categorizes image content according to a taxonomy defined in documentation. Tags - tags the image with a detailed list of words related to the image content. Description - describes the image content with a complete English sentence. Faces - detects if faces are present. If present, generate coordinates, gender and age. ImageType - detects if image is clipart or a line drawing. Color - determines the accent color, dominant color, and whether an image is black&white. Adult - detects if the image is pornographic in nature (depicts nudity or a sex act), or is gory (depicts extreme violence or blood). Sexually suggestive content (aka racy content) is also detected. Objects - detects various objects within an image, including the approximate location. The Objects argument is only available in English. Brands - detects various brands within an image, including the approximate location. The Brands argument is only available in English.
@param details [Array<Details>] A string indicating which domain-specific details to return. Multiple values should be comma-separated. Valid visual feature types include: Celebrities - identifies celebrities if detected in the image, Landmarks - identifies notable landmarks in the image.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'
@param description_exclude [Array<DescriptionExclude>] Turn off specified domain models when generating the description.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 256
def analyze_image_async(url, visual_features:nil, details:nil, language:nil, description_exclude:nil, custom_headers:nil)
  fail ArgumentError, 'endpoint is nil' if endpoint.nil?
  fail ArgumentError, 'url is nil' if url.nil?

  image_url = ImageUrl.new
  unless url.nil?
    image_url.url = url
  end

  request_headers = {}
  request_headers['Content-Type'] = 'application/json; charset=utf-8'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::ImageUrl.mapper()
  request_content = self.serialize(request_mapper, image_url)
  request_content = request_content != nil ? JSON.generate(request_content, quirks_mode: true) : nil

  path_template = 'analyze'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{Endpoint}', endpoint)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'visualFeatures' => visual_features.nil? ? nil : visual_features.join(','),'details' => details.nil? ? nil : details.join(','),'language' => language,'descriptionExclude' => description_exclude.nil? ? nil : description_exclude.join(',')},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::ImageAnalysis.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
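Because the _async variant returns a Concurrent::Promise, a caller can issue the request and keep working until the result is needed; a sketch (URL illustrative, feature names passed as plain strings matching the documented enum values):

  promise = client.analyze_image_async('https://example.com/sample.jpg',
                                        visual_features: ['Categories', 'Tags'])

  # ... other work while the request is in flight ...

  result = promise.value!   # blocks until completion; raises MsRest::HttpOperationError on failure
  analysis = result.body    # deserialized ImageAnalysis
  puts analysis.tags.map(&:name).inspect unless analysis.tags.nil?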
This operation recognizes content within an image by applying a domain-specific model. The list of domain-specific models that are supported by the Computer Vision API can be retrieved using the /models GET request. Currently, the API provides the following domain-specific models: celebrities, landmarks. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param model [String] The domain-specific content to recognize.
@param url [String] Publicly reachable URL of an image.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [DomainModelResults] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 714
def analyze_image_by_domain(model, url, language:nil, custom_headers:nil)
  response = analyze_image_by_domain_async(model, url, language:language, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
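A sketch using the landmarks model (URL illustrative; DomainModelResults#result is assumed here to hold the raw, model-specific payload):

  result = client.analyze_image_by_domain('landmarks', 'https://example.com/eiffel-tower.jpg')
  puts result.result.inspect unless result.nil?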
This operation recognizes content within an image by applying a domain-specific model. The list of domain-specific models that are supported by the Computer Vision API can be retrieved using the /models GET request. Currently, the API provides the following domain-specific models: celebrities, landmarks. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param model [String] The domain-specific content to recognize.
@param url [String] Publicly reachable URL of an image.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 771
def analyze_image_by_domain_async(model, url, language:nil, custom_headers:nil)
  fail ArgumentError, 'endpoint is nil' if endpoint.nil?
  fail ArgumentError, 'model is nil' if model.nil?
  fail ArgumentError, 'url is nil' if url.nil?

  image_url = ImageUrl.new
  unless url.nil?
    image_url.url = url
  end

  request_headers = {}
  request_headers['Content-Type'] = 'application/json; charset=utf-8'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::ImageUrl.mapper()
  request_content = self.serialize(request_mapper, image_url)
  request_content = request_content != nil ? JSON.generate(request_content, quirks_mode: true) : nil

  path_template = 'models/{model}/analyze'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{Endpoint}', endpoint)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      path_params: {'model' => model},
      query_params: {'language' => language},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::DomainModelResults.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
This operation recognizes content within an image by applying a domain-specific model. The list of domain-specific models that are supported by the Computer Vision API can be retrieved using the /models GET request. Currently, the API provides the following domain-specific models: celebrities, landmarks. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param model [String] The domain-specific content to recognize.
@param image An image stream.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [DomainModelResults] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2607
def analyze_image_by_domain_in_stream(model, image, language:nil, custom_headers:nil)
  response = analyze_image_by_domain_in_stream_async(model, image, language:language, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
This operation recognizes content within an image by applying a domain-specific model. The list of domain-specific models that are supported by the Computer Vision API can be retrieved using the /models GET request. Currently, the API provides the following domain-specific models: celebrities, landmarks. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param model [String] The domain-specific content to recognize.
@param image An image stream.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2664
def analyze_image_by_domain_in_stream_async(model, image, language:nil, custom_headers:nil)
  fail ArgumentError, 'endpoint is nil' if endpoint.nil?
  fail ArgumentError, 'model is nil' if model.nil?
  fail ArgumentError, 'image is nil' if image.nil?

  request_headers = {}
  request_headers['Content-Type'] = 'application/octet-stream'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = {
    client_side_validation: true,
    required: true,
    serialized_name: 'Image',
    type: {
      name: 'Stream'
    }
  }
  request_content = self.serialize(request_mapper, image)

  path_template = 'models/{model}/analyze'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{Endpoint}', endpoint)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      path_params: {'model' => model},
      query_params: {'language' => language},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::DomainModelResults.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
This operation recognizes content within an image by applying a domain-specific model. The list of domain-specific models that are supported by the Computer Vision API can be retrieved using the /models GET request. Currently, the API provides the following domain-specific models: celebrities, landmarks. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param model [String] The domain-specific content to recognize.
@param image An image stream.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2636
def analyze_image_by_domain_in_stream_with_http_info(model, image, language:nil, custom_headers:nil)
  analyze_image_by_domain_in_stream_async(model, image, language:language, custom_headers:custom_headers).value!
end
This operation recognizes content within an image by applying a domain-specific model. The list of domain-specific models that are supported by the Computer Vision API can be retrieved using the /models GET request. Currently, the API provides the following domain-specific models: celebrities, landmarks. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param model [String] The domain-specific content to recognize.
@param url [String] Publicly reachable URL of an image.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 743
def analyze_image_by_domain_with_http_info(model, url, language:nil, custom_headers:nil)
  analyze_image_by_domain_async(model, url, language:language, custom_headers:custom_headers).value!
end
This operation extracts a rich set of visual features based on the image content. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. Within your request, there is an optional parameter to allow you to choose which features to return. By default, image categories are returned in the response. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param image An image stream.
@param visual_features [Array<VisualFeatureTypes>] A string indicating what visual feature types to return. Multiple values should be comma-separated. Valid visual feature types include: Categories - categorizes image content according to a taxonomy defined in documentation. Tags - tags the image with a detailed list of words related to the image content. Description - describes the image content with a complete English sentence. Faces - detects if faces are present. If present, generate coordinates, gender and age. ImageType - detects if image is clipart or a line drawing. Color - determines the accent color, dominant color, and whether an image is black&white. Adult - detects if the image is pornographic in nature (depicts nudity or a sex act), or is gory (depicts extreme violence or blood). Sexually suggestive content (aka racy content) is also detected. Objects - detects various objects within an image, including the approximate location. The Objects argument is only available in English. Brands - detects various brands within an image, including the approximate location. The Brands argument is only available in English.
@param details [Array<Details>] A string indicating which domain-specific details to return. Multiple values should be comma-separated. Valid visual feature types include: Celebrities - identifies celebrities if detected in the image, Landmarks - identifies notable landmarks in the image.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'
@param description_exclude [Array<DescriptionExclude>] Turn off specified domain models when generating the description.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [ImageAnalysis] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1860
def analyze_image_in_stream(image, visual_features:nil, details:nil, language:nil, description_exclude:nil, custom_headers:nil)
  response = analyze_image_in_stream_async(image, visual_features:visual_features, details:details, language:language, description_exclude:description_exclude, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
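A sketch that uploads a local file instead of passing a URL (the path is illustrative; the stream is sent as application/octet-stream):

  analysis = File.open('photo.jpg', 'rb') do |image|
    client.analyze_image_in_stream(image, visual_features: ['Description', 'Adult'])
  end

  puts analysis.description.captions.first.text unless analysis.description.nil?
  puts "adult score: #{analysis.adult.adult_score}" unless analysis.adult.nil?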
This operation extracts a rich set of visual features based on the image content. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. Within your request, there is an optional parameter to allow you to choose which features to return. By default, image categories are returned in the response. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param image An image stream.
@param visual_features [Array<VisualFeatureTypes>] A string indicating what visual feature types to return. Multiple values should be comma-separated. Valid visual feature types include: Categories - categorizes image content according to a taxonomy defined in documentation. Tags - tags the image with a detailed list of words related to the image content. Description - describes the image content with a complete English sentence. Faces - detects if faces are present. If present, generate coordinates, gender and age. ImageType - detects if image is clipart or a line drawing. Color - determines the accent color, dominant color, and whether an image is black&white. Adult - detects if the image is pornographic in nature (depicts nudity or a sex act), or is gory (depicts extreme violence or blood). Sexually suggestive content (aka racy content) is also detected. Objects - detects various objects within an image, including the approximate location. The Objects argument is only available in English. Brands - detects various brands within an image, including the approximate location. The Brands argument is only available in English.
@param details [Array<Details>] A string indicating which domain-specific details to return. Multiple values should be comma-separated. Valid visual feature types include: Celebrities - identifies celebrities if detected in the image, Landmarks - identifies notable landmarks in the image.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'
@param description_exclude [Array<DescriptionExclude>] Turn off specified domain models when generating the description.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1957
def analyze_image_in_stream_async(image, visual_features:nil, details:nil, language:nil, description_exclude:nil, custom_headers:nil)
  fail ArgumentError, 'endpoint is nil' if endpoint.nil?
  fail ArgumentError, 'image is nil' if image.nil?

  request_headers = {}
  request_headers['Content-Type'] = 'application/octet-stream'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = {
    client_side_validation: true,
    required: true,
    serialized_name: 'Image',
    type: {
      name: 'Stream'
    }
  }
  request_content = self.serialize(request_mapper, image)

  path_template = 'analyze'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{Endpoint}', endpoint)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'visualFeatures' => visual_features.nil? ? nil : visual_features.join(','),'details' => details.nil? ? nil : details.join(','),'language' => language,'descriptionExclude' => description_exclude.nil? ? nil : description_exclude.join(',')},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::ImageAnalysis.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
This operation extracts a rich set of visual features based on the image content. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. Within your request, there is an optional parameter to allow you to choose which features to return. By default, image categories are returned in the response. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param image An image stream.
@param visual_features [Array<VisualFeatureTypes>] A string indicating what visual feature types to return. Multiple values should be comma-separated. Valid visual feature types include: Categories - categorizes image content according to a taxonomy defined in documentation. Tags - tags the image with a detailed list of words related to the image content. Description - describes the image content with a complete English sentence. Faces - detects if faces are present. If present, generate coordinates, gender and age. ImageType - detects if image is clipart or a line drawing. Color - determines the accent color, dominant color, and whether an image is black&white. Adult - detects if the image is pornographic in nature (depicts nudity or a sex act), or is gory (depicts extreme violence or blood). Sexually suggestive content (aka racy content) is also detected. Objects - detects various objects within an image, including the approximate location. The Objects argument is only available in English. Brands - detects various brands within an image, including the approximate location. The Brands argument is only available in English.
@param details [Array<Details>] A string indicating which domain-specific details to return. Multiple values should be comma-separated. Valid visual feature types include: Celebrities - identifies celebrities if detected in the image, Landmarks - identifies notable landmarks in the image.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'
@param description_exclude [Array<DescriptionExclude>] Turn off specified domain models when generating the description.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1909
def analyze_image_in_stream_with_http_info(image, visual_features:nil, details:nil, language:nil, description_exclude:nil, custom_headers:nil)
  analyze_image_in_stream_async(image, visual_features:visual_features, details:details, language:language, description_exclude:description_exclude, custom_headers:custom_headers).value!
end
This operation extracts a rich set of visual features based on the image content. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. Within your request, there is an optional parameter to allow you to choose which features to return. By default, image categories are returned in the response. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param url [String] Publicly reachable URL of an image.
@param visual_features [Array<VisualFeatureTypes>] A string indicating what visual feature types to return. Multiple values should be comma-separated. Valid visual feature types include: Categories - categorizes image content according to a taxonomy defined in documentation. Tags - tags the image with a detailed list of words related to the image content. Description - describes the image content with a complete English sentence. Faces - detects if faces are present. If present, generate coordinates, gender and age. ImageType - detects if image is clipart or a line drawing. Color - determines the accent color, dominant color, and whether an image is black&white. Adult - detects if the image is pornographic in nature (depicts nudity or a sex act), or is gory (depicts extreme violence or blood). Sexually suggestive content (aka racy content) is also detected. Objects - detects various objects within an image, including the approximate location. The Objects argument is only available in English. Brands - detects various brands within an image, including the approximate location. The Brands argument is only available in English.
@param details [Array<Details>] A string indicating which domain-specific details to return. Multiple values should be comma-separated. Valid visual feature types include: Celebrities - identifies celebrities if detected in the image, Landmarks - identifies notable landmarks in the image.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'
@param description_exclude [Array<DescriptionExclude>] Turn off specified domain models when generating the description.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 208
def analyze_image_with_http_info(url, visual_features:nil, details:nil, language:nil, description_exclude:nil, custom_headers:nil)
  analyze_image_async(url, visual_features:visual_features, details:details, language:language, description_exclude:description_exclude, custom_headers:custom_headers).value!
end
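The _with_http_info variants are useful when the caller needs the status code or service headers in addition to the deserialized body; a sketch (URL illustrative):

  op = client.analyze_image_with_http_info('https://example.com/sample.jpg', visual_features: ['Tags'])

  puts op.response.status   # raw HTTP status (200 on success)
  puts op.request_id        # x-ms-request-id header, when the service returns one
  tags = op.body.tags       # the deserialized ImageAnalysis is still available on #body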
Use this interface to get the result of a Read operation, employing the state-of-the-art Optical Character Recognition (OCR) algorithms optimized for text-heavy documents. When you use the Read File interface, the response contains a field called 'Operation-Location'. The 'Operation-Location' field contains the URL that you must use for your 'GetReadOperationResult' operation to access OCR results.
@param url [String] Publicly reachable URL of an image. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1628
def batch_read_file(url, custom_headers:nil)
  response = batch_read_file_async(url, custom_headers:custom_headers).value!
  nil
end
Use this interface to get the result of a Read operation, employing the state-of-the-art Optical Character Recognition (OCR) algorithms optimized for text-heavy documents. When you use the Read File interface, the response contains a field called 'Operation-Location'. The 'Operation-Location' field contains the URL that you must use for your 'GetReadOperationResult' operation to access OCR results.
@param url [String] Publicly reachable URL of an image. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1665
def batch_read_file_async(url, custom_headers:nil)
  fail ArgumentError, 'endpoint is nil' if endpoint.nil?
  fail ArgumentError, 'url is nil' if url.nil?

  image_url = ImageUrl.new
  unless url.nil?
    image_url.url = url
  end

  request_headers = {}
  request_headers['Content-Type'] = 'application/json; charset=utf-8'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::ImageUrl.mapper()
  request_content = self.serialize(request_mapper, image_url)
  request_content = request_content != nil ? JSON.generate(request_content, quirks_mode: true) : nil

  path_template = 'read/core/asyncBatchAnalyze'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{Endpoint}', endpoint)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 202
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?

    result
  end

  promise.execute
end
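Since batch_read_file itself returns nil, the polling URL has to be read from the Operation-Location response header; a sketch using the _with_http_info variant (the header lookup on the response object is an assumption about the underlying HTTP response API):

  op = client.batch_read_file_with_http_info('https://example.com/scanned-page.png')

  # The service answers 202 Accepted; poll this URL (the 'GetReadOperationResult' operation
  # mentioned above) to retrieve the OCR results.
  operation_location = op.response['Operation-Location']
  puts operation_location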
Use this interface to get the result of a Read Document operation, employing the state-of-the-art Optical Character Recognition (OCR) algorithms optimized for text-heavy documents. When you use the Read Document interface, the response contains a field called 'Operation-Location'. The 'Operation-Location' field contains the URL that you must use for your 'Get Read Result operation' to access OCR results.
@param image An image stream. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 3150
def batch_read_file_in_stream(image, custom_headers:nil)
  response = batch_read_file_in_stream_async(image, custom_headers:custom_headers).value!
  nil
end
Use this interface to get the result of a Read Document operation, employing the state-of-the-art Optical Character Recognition (OCR) algorithms optimized for text-heavy documents. When you use the Read Document interface, the response contains a field called 'Operation-Location'. The 'Operation-Location' field contains the URL that you must use for your 'Get Read Result operation' to access OCR results.
@param image An image stream. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 3187
def batch_read_file_in_stream_async(image, custom_headers:nil)
  fail ArgumentError, 'endpoint is nil' if endpoint.nil?
  fail ArgumentError, 'image is nil' if image.nil?

  request_headers = {}
  request_headers['Content-Type'] = 'application/octet-stream'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = {
    client_side_validation: true,
    required: true,
    serialized_name: 'Image',
    type: {
      name: 'Stream'
    }
  }
  request_content = self.serialize(request_mapper, image)

  path_template = 'read/core/asyncBatchAnalyze'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{Endpoint}', endpoint)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 202
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?

    result
  end

  promise.execute
end
Use this interface to get the result of a Read Document operation, employing the state-of-the-art Optical Character Recognition (OCR) algorithms optimized for text-heavy documents. When you use the Read Document interface, the response contains a field called 'Operation-Location'. The 'Operation-Location' field contains the URL that you must use for your 'Get Read Result operation' to access OCR results.
@param image An image stream. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 3169
def batch_read_file_in_stream_with_http_info(image, custom_headers:nil)
  batch_read_file_in_stream_async(image, custom_headers:custom_headers).value!
end
Use this interface to get the result of a Read operation, employing the state-of-the-art Optical Character Recognition (OCR) algorithms optimized for text-heavy documents. When you use the Read File interface, the response contains a field called 'Operation-Location'. The 'Operation-Location' field contains the URL that you must use for your 'GetReadOperationResult' operation to access OCR results.
@param url [String] Publicly reachable URL of an image. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1647
def batch_read_file_with_http_info(url, custom_headers:nil)
  batch_read_file_async(url, custom_headers:custom_headers).value!
end
This operation generates a description of an image in human readable language with complete sentences. The description is based on a collection of content tags, which are also returned by the operation. More than one description can be generated for each image. Descriptions are ordered by their confidence score. Descriptions may include results from celebrity and landmark domain models, if applicable. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param url [String] Publicly reachable URL of an image.
@param max_candidates [Integer] Maximum number of candidate descriptions to be returned. The default is 1.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'
@param description_exclude [Array<DescriptionExclude>] Turn off specified domain models when generating the description.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [ImageDescription] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 348
def describe_image(url, max_candidates:1, language:nil, description_exclude:nil, custom_headers:nil)
  response = describe_image_async(url, max_candidates:max_candidates, language:language, description_exclude:description_exclude, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
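A sketch requesting several candidate captions (URL illustrative):

  description = client.describe_image('https://example.com/sample.jpg', max_candidates: 3)

  # Candidates are ordered by confidence score.
  description.captions.each do |caption|
    puts format('%.3f  %s', caption.confidence, caption.text)
  end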
This operation generates a description of an image in human readable language with complete sentences. The description is based on a collection of content tags, which are also returned by the operation. More than one description can be generated for each image. Descriptions are ordered by their confidence score. Descriptions may include results from celebrity and landmark domain models, if applicable. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param url [String] Publicly reachable URL of an image.
@param max_candidates [Integer] Maximum number of candidate descriptions to be returned. The default is 1.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'
@param description_exclude [Array<DescriptionExclude>] Turn off specified domain models when generating the description.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 413
def describe_image_async(url, max_candidates:1, language:nil, description_exclude:nil, custom_headers:nil)
  fail ArgumentError, 'endpoint is nil' if endpoint.nil?
  fail ArgumentError, 'url is nil' if url.nil?

  image_url = ImageUrl.new
  unless url.nil?
    image_url.url = url
  end

  request_headers = {}
  request_headers['Content-Type'] = 'application/json; charset=utf-8'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::ImageUrl.mapper()
  request_content = self.serialize(request_mapper, image_url)
  request_content = request_content != nil ? JSON.generate(request_content, quirks_mode: true) : nil

  path_template = 'describe'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{Endpoint}', endpoint)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'maxCandidates' => max_candidates,'language' => language,'descriptionExclude' => description_exclude.nil? ? nil : description_exclude.join(',')},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::ImageDescription.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
This operation generates a description of an image in human readable language with complete sentences. The description is based on a collection of content tags, which are also returned by the operation. More than one description can be generated for each image. Descriptions are ordered by their confidence score. Descriptions may include results from celebrity and landmark domain models, if applicable. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param image An image stream.
@param max_candidates [Integer] Maximum number of candidate descriptions to be returned. The default is 1.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'
@param description_exclude [Array<DescriptionExclude>] Turn off specified domain models when generating the description.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [ImageDescription] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2173
def describe_image_in_stream(image, max_candidates:1, language:nil, description_exclude:nil, custom_headers:nil)
  response = describe_image_in_stream_async(image, max_candidates:max_candidates, language:language, description_exclude:description_exclude, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
This operation generates a description of an image in human readable language with complete sentences. The description is based on a collection of content tags, which are also returned by the operation. More than one description can be generated for each image. Descriptions are ordered by their confidence score. Descriptions may include results from celebrity and landmark domain models, if applicable. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param image An image stream.
@param max_candidates [Integer] Maximum number of candidate descriptions to be returned. The default is 1.
@param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English, Default. es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh'
@param description_exclude [Array<DescriptionExclude>] Turn off specified domain models when generating the description.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2238
def describe_image_in_stream_async(image, max_candidates:1, language:nil, description_exclude:nil, custom_headers:nil)
  fail ArgumentError, 'endpoint is nil' if endpoint.nil?
  fail ArgumentError, 'image is nil' if image.nil?

  request_headers = {}
  request_headers['Content-Type'] = 'application/octet-stream'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = {
    client_side_validation: true,
    required: true,
    serialized_name: 'Image',
    type: {
      name: 'Stream'
    }
  }
  request_content = self.serialize(request_mapper, image)

  path_template = 'describe'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{Endpoint}', endpoint)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'maxCandidates' => max_candidates,'language' => language,'descriptionExclude' => description_exclude.nil? ? nil : description_exclude.join(',')},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::ImageDescription.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
This operation generates a description of an image in human readable language with complete sentences. The description is based on a collection of content tags, which are also returned by the operation. More than one description can be generated for each image. Descriptions are ordered by their confidence score. Descriptions may include results from celebrity and landmark domain models, if applicable. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param image An image stream. @param max_candidates [Integer] Maximum number of candidate descriptions to be returned. The default is 1. @param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh' @param description_exclude [Array<DescriptionExclude>] Turn off specified domain models when generating the description. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2206 def describe_image_in_stream_with_http_info(image, max_candidates:1, language:nil, description_exclude:nil, custom_headers:nil) describe_image_in_stream_async(image, max_candidates:max_candidates, language:language, description_exclude:description_exclude, custom_headers:custom_headers).value! end
This operation generates a description of an image in human readable language with complete sentences. The description is based on a collection of content tags, which are also returned by the operation. More than one description can be generated for each image. Descriptions are ordered by their confidence score. Descriptions may include results from celebrity and landmark domain models, if applicable. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param url [String] Publicly reachable URL of an image. @param max_candidates [Integer] Maximum number of candidate descriptions to be returned. The default is 1. @param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh' @param description_exclude [Array<DescriptionExclude>] Turn off specified domain models when generating the description. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 381 def describe_image_with_http_info(url, max_candidates:1, language:nil, description_exclude:nil, custom_headers:nil) describe_image_async(url, max_candidates:max_candidates, language:language, description_exclude:description_exclude, custom_headers:custom_headers).value! end
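When transport-level details matter as well as the parsed body, the _with_http_info variant returns the full operation response. A brief sketch, assuming the configured client from the earlier sketch and a placeholder image URL:
image_url = 'https://example.com/photo.jpg'           # placeholder publicly reachable image
op = client.describe_image_with_http_info(image_url, max_candidates: 1)
puts op.response.status                                # raw HTTP status code
puts op.body.captions.first.text unless op.body.nil?  # first candidate caption (assumed ImageDescription fields)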
Performs object detection on the specified image. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param url [String] Publicly reachable URL of an image. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [DetectResult] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 491 def detect_objects(url, custom_headers:nil) response = detect_objects_async(url, custom_headers:custom_headers).value! response.body unless response.nil? end
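A short sketch of the URL-based call, again assuming a configured client and a placeholder URL; each detected object carries a bounding rectangle and a confidence score (field names assumed from the v2.1 models):
detections = client.detect_objects('https://example.com/street.jpg')
detections.objects.each do |obj|
  rect = obj.rectangle   # bounding box in pixels: x, y, w, h (assumed accessors)
  puts "object at #{rect.x},#{rect.y} size #{rect.w}x#{rect.h} confidence #{obj.confidence}"
end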
Performs object detection on the specified image. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param url [String] Publicly reachable URL of an image. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 528 def detect_objects_async(url, custom_headers:nil) fail ArgumentError, 'endpoint is nil' if endpoint.nil? fail ArgumentError, 'url is nil' if url.nil? image_url = ImageUrl.new unless url.nil? image_url.url = url end request_headers = {} request_headers['Content-Type'] = 'application/json; charset=utf-8' # Set Headers request_headers['x-ms-client-request-id'] = SecureRandom.uuid request_headers['accept-language'] = accept_language unless accept_language.nil? # Serialize Request request_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::ImageUrl.mapper() request_content = self.serialize(request_mapper, image_url) request_content = request_content != nil ? JSON.generate(request_content, quirks_mode: true) : nil path_template = 'detect' request_url = @base_url || self.base_url request_url = request_url.gsub('{Endpoint}', endpoint) options = { middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]], body: request_content, headers: request_headers.merge(custom_headers || {}), base_url: request_url } promise = self.make_request_async(:post, path_template, options) promise = promise.then do |result| http_response = result.response status_code = http_response.status response_content = http_response.body unless status_code == 200 error_model = JSON.load(response_content) fail MsRest::HttpOperationError.new(result.request, http_response, error_model) end result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil? result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil? result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil? # Deserialize Response if status_code == 200 begin parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content) result_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::DetectResult.mapper() result.body = self.deserialize(result_mapper, parsed_response) rescue Exception => e fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result) end end result end promise.execute end
Performs object detection on the specified image. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param image An image stream. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [DetectResult] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2318 def detect_objects_in_stream(image, custom_headers:nil) response = detect_objects_in_stream_async(image, custom_headers:custom_headers).value! response.body unless response.nil? end
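The stream variant takes the raw image bytes instead of a URL. A minimal sketch with a placeholder local file and a configured client:
File.open('street.jpg', 'rb') do |image|
  detections = client.detect_objects_in_stream(image)
  puts "#{detections.objects.length} object(s) detected"  # objects accessor assumed from DetectResult
end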
Performs object detection on the specified image. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param image An image stream. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2355 def detect_objects_in_stream_async(image, custom_headers:nil) fail ArgumentError, 'endpoint is nil' if endpoint.nil? fail ArgumentError, 'image is nil' if image.nil? request_headers = {} request_headers['Content-Type'] = 'application/octet-stream' # Set Headers request_headers['x-ms-client-request-id'] = SecureRandom.uuid request_headers['accept-language'] = accept_language unless accept_language.nil? # Serialize Request request_mapper = { client_side_validation: true, required: true, serialized_name: 'Image', type: { name: 'Stream' } } request_content = self.serialize(request_mapper, image) path_template = 'detect' request_url = @base_url || self.base_url request_url = request_url.gsub('{Endpoint}', endpoint) options = { middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]], body: request_content, headers: request_headers.merge(custom_headers || {}), base_url: request_url } promise = self.make_request_async(:post, path_template, options) promise = promise.then do |result| http_response = result.response status_code = http_response.status response_content = http_response.body unless status_code == 200 error_model = JSON.load(response_content) fail MsRest::HttpOperationError.new(result.request, http_response, error_model) end result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil? result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil? result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil? # Deserialize Response if status_code == 200 begin parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content) result_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::DetectResult.mapper() result.body = self.deserialize(result_mapper, parsed_response) rescue Exception => e fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result) end end result end promise.execute end
Performs object detection on the specified image. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param image An image stream. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2337 def detect_objects_in_stream_with_http_info(image, custom_headers:nil) detect_objects_in_stream_async(image, custom_headers:custom_headers).value! end
Performs object detection on the specified image. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param url [String] Publicly reachable URL of an image. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 510 def detect_objects_with_http_info(url, custom_headers:nil) detect_objects_async(url, custom_headers:custom_headers).value! end
This operation generates a thumbnail image with the user-specified width and height. By default, the service analyzes the image, identifies the region of interest (ROI), and generates smart cropping coordinates based on the ROI. Smart cropping helps when you specify an aspect ratio that differs from that of the input image. A successful response contains the thumbnail image binary. If the request failed, the response contains an error code and a message to help determine what went wrong. Upon failure, the error code and an error message are returned. The error code could be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, InvalidThumbnailSize, NotSupportedImage, FailedToProcess, Timeout, or InternalServerError.
@param width [Integer] Width of the thumbnail, in pixels. It must be between 1 and 1024. Recommended minimum of 50. @param height [Integer] Height of the thumbnail, in pixels. It must be between 1 and 1024. Recommended minimum of 50. @param url [String] Publicly reachable URL of an image. @param smart_cropping [Boolean] Boolean flag for enabling smart cropping. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [NOT_IMPLEMENTED] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1156 def generate_thumbnail(width, height, url, smart_cropping:false, custom_headers:nil) response = generate_thumbnail_async(width, height, url, smart_cropping:smart_cropping, custom_headers:custom_headers).value! response.body unless response.nil? end
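Because a successful response is the thumbnail binary rather than a JSON model, writing the returned body straight to disk is a simple way to inspect it. A sketch under the same assumptions as the earlier examples (configured client, placeholder URL and output path), and assuming the deserialized stream body comes back as raw bytes:
thumbnail = client.generate_thumbnail(100, 100, 'https://example.com/photo.jpg', smart_cropping: true)
# Persist the raw bytes for inspection (assumes the body is the thumbnail binary).
File.binwrite('thumbnail.jpg', thumbnail) unless thumbnail.nil?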
This operation generates a thumbnail image with the user-specified width and height. By default, the service analyzes the image, identifies the region of interest (ROI), and generates smart cropping coordinates based on the ROI. Smart cropping helps when you specify an aspect ratio that differs from that of the input image. A successful response contains the thumbnail image binary. If the request failed, the response contains an error code and a message to help determine what went wrong. Upon failure, the error code and an error message are returned. The error code could be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, InvalidThumbnailSize, NotSupportedImage, FailedToProcess, Timeout, or InternalServerError.
@param width [Integer] Width of the thumbnail, in pixels. It must be between 1 and 1024. Recommended minimum of 50. @param height [Integer] Height of the thumbnail, in pixels. It must be between 1 and 1024. Recommended minimum of 50. @param url [String] Publicly reachable URL of an image. @param smart_cropping [Boolean] Boolean flag for enabling smart cropping. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1215 def generate_thumbnail_async(width, height, url, smart_cropping:false, custom_headers:nil) fail ArgumentError, 'endpoint is nil' if endpoint.nil? fail ArgumentError, 'width is nil' if width.nil? fail ArgumentError, "'width' should satisfy the constraint - 'InclusiveMaximum': '1024'" if !width.nil? && width > 1024 fail ArgumentError, "'width' should satisfy the constraint - 'InclusiveMinimum': '1'" if !width.nil? && width < 1 fail ArgumentError, 'height is nil' if height.nil? fail ArgumentError, "'height' should satisfy the constraint - 'InclusiveMaximum': '1024'" if !height.nil? && height > 1024 fail ArgumentError, "'height' should satisfy the constraint - 'InclusiveMinimum': '1'" if !height.nil? && height < 1 fail ArgumentError, 'url is nil' if url.nil? image_url = ImageUrl.new unless url.nil? image_url.url = url end request_headers = {} request_headers['Content-Type'] = 'application/json; charset=utf-8' # Set Headers request_headers['x-ms-client-request-id'] = SecureRandom.uuid request_headers['accept-language'] = accept_language unless accept_language.nil? # Serialize Request request_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::ImageUrl.mapper() request_content = self.serialize(request_mapper, image_url) request_content = request_content != nil ? JSON.generate(request_content, quirks_mode: true) : nil path_template = 'generateThumbnail' request_url = @base_url || self.base_url request_url = request_url.gsub('{Endpoint}', endpoint) options = { middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]], query_params: {'width' => width,'height' => height,'smartCropping' => smart_cropping}, body: request_content, headers: request_headers.merge(custom_headers || {}), base_url: request_url } promise = self.make_request_async(:post, path_template, options) promise = promise.then do |result| http_response = result.response status_code = http_response.status response_content = http_response.body unless status_code == 200 error_model = JSON.load(response_content) fail MsRestAzure::AzureOperationError.new(result.request, http_response, error_model) end result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil? result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil? result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil? # Deserialize Response if status_code == 200 begin parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content) result_mapper = { client_side_validation: true, required: false, serialized_name: 'parsed_response', type: { name: 'Stream' } } result.body = self.deserialize(result_mapper, parsed_response) rescue Exception => e fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result) end end result end promise.execute end
This operation generates a thumbnail image with the user-specified width and height. By default, the service analyzes the image, identifies the region of interest (ROI), and generates smart cropping coordinates based on the ROI. Smart cropping helps when you specify an aspect ratio that differs from that of the input image. A successful response contains the thumbnail image binary. If the request failed, the response contains an error code and a message to help determine what went wrong. Upon failure, the error code and an error message are returned. The error code could be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, InvalidThumbnailSize, NotSupportedImage, FailedToProcess, Timeout, or InternalServerError.
@param width [Integer] Width of the thumbnail, in pixels. It must be between 1 and 1024. Recommended minimum of 50. @param height [Integer] Height of the thumbnail, in pixels. It must be between 1 and 1024. Recommended minimum of 50. @param image An image stream. @param smart_cropping [Boolean] Boolean flag for enabling smart cropping. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [NOT_IMPLEMENTED] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2445 def generate_thumbnail_in_stream(width, height, image, smart_cropping:false, custom_headers:nil) response = generate_thumbnail_in_stream_async(width, height, image, smart_cropping:smart_cropping, custom_headers:custom_headers).value! response.body unless response.nil? end
This operation generates a thumbnail image with the user-specified width and height. By default, the service analyzes the image, identifies the region of interest (ROI), and generates smart cropping coordinates based on the ROI. Smart cropping helps when you specify an aspect ratio that differs from that of the input image. A successful response contains the thumbnail image binary. If the request failed, the response contains an error code and a message to help determine what went wrong. Upon failure, the error code and an error message are returned. The error code could be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, InvalidThumbnailSize, NotSupportedImage, FailedToProcess, Timeout, or InternalServerError.
@param width [Integer] Width of the thumbnail, in pixels. It must be between 1 and 1024. Recommended minimum of 50. @param height [Integer] Height of the thumbnail, in pixels. It must be between 1 and 1024. Recommended minimum of 50. @param image An image stream. @param smart_cropping [Boolean] Boolean flag for enabling smart cropping. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2504 def generate_thumbnail_in_stream_async(width, height, image, smart_cropping:false, custom_headers:nil) fail ArgumentError, 'endpoint is nil' if endpoint.nil? fail ArgumentError, 'width is nil' if width.nil? fail ArgumentError, "'width' should satisfy the constraint - 'InclusiveMaximum': '1024'" if !width.nil? && width > 1024 fail ArgumentError, "'width' should satisfy the constraint - 'InclusiveMinimum': '1'" if !width.nil? && width < 1 fail ArgumentError, 'height is nil' if height.nil? fail ArgumentError, "'height' should satisfy the constraint - 'InclusiveMaximum': '1024'" if !height.nil? && height > 1024 fail ArgumentError, "'height' should satisfy the constraint - 'InclusiveMinimum': '1'" if !height.nil? && height < 1 fail ArgumentError, 'image is nil' if image.nil? request_headers = {} request_headers['Content-Type'] = 'application/octet-stream' # Set Headers request_headers['x-ms-client-request-id'] = SecureRandom.uuid request_headers['accept-language'] = accept_language unless accept_language.nil? # Serialize Request request_mapper = { client_side_validation: true, required: true, serialized_name: 'Image', type: { name: 'Stream' } } request_content = self.serialize(request_mapper, image) path_template = 'generateThumbnail' request_url = @base_url || self.base_url request_url = request_url.gsub('{Endpoint}', endpoint) options = { middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]], query_params: {'width' => width,'height' => height,'smartCropping' => smart_cropping}, body: request_content, headers: request_headers.merge(custom_headers || {}), base_url: request_url } promise = self.make_request_async(:post, path_template, options) promise = promise.then do |result| http_response = result.response status_code = http_response.status response_content = http_response.body unless status_code == 200 error_model = JSON.load(response_content) fail MsRestAzure::AzureOperationError.new(result.request, http_response, error_model) end result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil? result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil? result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil? # Deserialize Response if status_code == 200 begin parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content) result_mapper = { client_side_validation: true, required: false, serialized_name: 'parsed_response', type: { name: 'Stream' } } result.body = self.deserialize(result_mapper, parsed_response) rescue Exception => e fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result) end end result end promise.execute end
This operation generates a thumbnail image with the user-specified width and height. By default, the service analyzes the image, identifies the region of interest (ROI), and generates smart cropping coordinates based on the ROI. Smart cropping helps when you specify an aspect ratio that differs from that of the input image. A successful response contains the thumbnail image binary. If the request failed, the response contains an error code and a message to help determine what went wrong. Upon failure, the error code and an error message are returned. The error code could be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, InvalidThumbnailSize, NotSupportedImage, FailedToProcess, Timeout, or InternalServerError.
@param width [Integer] Width of the thumbnail, in pixels. It must be between 1 and 1024. Recommended minimum of 50. @param height [Integer] Height of the thumbnail, in pixels. It must be between 1 and 1024. Recommended minimum of 50. @param image An image stream. @param smart_cropping [Boolean] Boolean flag for enabling smart cropping. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2475 def generate_thumbnail_in_stream_with_http_info(width, height, image, smart_cropping:false, custom_headers:nil) generate_thumbnail_in_stream_async(width, height, image, smart_cropping:smart_cropping, custom_headers:custom_headers).value! end
This operation generates a thumbnail image with the user-specified width and height. By default, the service analyzes the image, identifies the region of interest (ROI), and generates smart cropping coordinates based on the ROI. Smart cropping helps when you specify an aspect ratio that differs from that of the input image. A successful response contains the thumbnail image binary. If the request failed, the response contains an error code and a message to help determine what went wrong. Upon failure, the error code and an error message are returned. The error code could be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, InvalidThumbnailSize, NotSupportedImage, FailedToProcess, Timeout, or InternalServerError.
@param width [Integer] Width of the thumbnail, in pixels. It must be between 1 and 1024. Recommended minimum of 50. @param height [Integer] Height of the thumbnail, in pixels. It must be between 1 and 1024. Recommended minimum of 50. @param url [String] Publicly reachable URL of an image. @param smart_cropping [Boolean] Boolean flag for enabling smart cropping. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1186 def generate_thumbnail_with_http_info(width, height, url, smart_cropping:false, custom_headers:nil) generate_thumbnail_async(width, height, url, smart_cropping:smart_cropping, custom_headers:custom_headers).value! end
This operation returns a bounding box around the most important area of the image. A successful response will be returned in JSON. If the request failed, the response contains an error code and a message to help determine what went wrong. Upon failure, the error code and an error message are returned. The error code could be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, NotSupportedImage, FailedToProcess, Timeout, or InternalServerError.
@param url [String] Publicly reachable URL of an image. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [AreaOfInterestResult] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1308 def get_area_of_interest(url, custom_headers:nil) response = get_area_of_interest_async(url, custom_headers:custom_headers).value! response.body unless response.nil? end
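A brief sketch with a configured client and a placeholder URL; the result exposes the suggested crop rectangle (the area_of_interest accessor is assumed from the v2.1 models):
aoi = client.get_area_of_interest('https://example.com/photo.jpg')
box = aoi.area_of_interest   # BoundingRect: x, y, w, h in pixels (assumed accessor)
puts "most important region: #{box.w}x#{box.h} at #{box.x},#{box.y}"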
This operation returns a bounding box around the most important area of the image. A successful response will be returned in JSON. If the request failed, the response contains an error code and a message to help determine what went wrong. Upon failure, the error code and an error message are returned. The error code could be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, NotSupportedImage, FailedToProcess, Timeout, or InternalServerError.
@param url [String] Publicly reachable URL of an image. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1349 def get_area_of_interest_async(url, custom_headers:nil) fail ArgumentError, 'endpoint is nil' if endpoint.nil? fail ArgumentError, 'url is nil' if url.nil? image_url = ImageUrl.new unless url.nil? image_url.url = url end request_headers = {} request_headers['Content-Type'] = 'application/json; charset=utf-8' # Set Headers request_headers['x-ms-client-request-id'] = SecureRandom.uuid request_headers['accept-language'] = accept_language unless accept_language.nil? # Serialize Request request_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::ImageUrl.mapper() request_content = self.serialize(request_mapper, image_url) request_content = request_content != nil ? JSON.generate(request_content, quirks_mode: true) : nil path_template = 'areaOfInterest' request_url = @base_url || self.base_url request_url = request_url.gsub('{Endpoint}', endpoint) options = { middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]], body: request_content, headers: request_headers.merge(custom_headers || {}), base_url: request_url } promise = self.make_request_async(:post, path_template, options) promise = promise.then do |result| http_response = result.response status_code = http_response.status response_content = http_response.body unless status_code == 200 error_model = JSON.load(response_content) fail MsRest::HttpOperationError.new(result.request, http_response, error_model) end result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil? result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil? result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil? # Deserialize Response if status_code == 200 begin parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content) result_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::AreaOfInterestResult.mapper() result.body = self.deserialize(result_mapper, parsed_response) rescue Exception => e fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result) end end result end promise.execute end
This operation returns a bounding box around the most important area of the image. A successful response will be returned in JSON. If the request failed, the response contains an error code and a message to help determine what went wrong. Upon failure, the error code and an error message are returned. The error code could be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, NotSupportedImage, FailedToProcess, Timeout, or InternalServerError.
@param image An image stream. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [AreaOfInterestResult] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2039 def get_area_of_interest_in_stream(image, custom_headers:nil) response = get_area_of_interest_in_stream_async(image, custom_headers:custom_headers).value! response.body unless response.nil? end
This operation returns a bounding box around the most important area of the image. A successful response will be returned in JSON. If the request failed, the response contains an error code and a message to help determine what went wrong. Upon failure, the error code and an error message are returned. The error code could be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, NotSupportedImage, FailedToProcess, Timeout, or InternalServerError.
@param image An image stream. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2080 def get_area_of_interest_in_stream_async(image, custom_headers:nil) fail ArgumentError, 'endpoint is nil' if endpoint.nil? fail ArgumentError, 'image is nil' if image.nil? request_headers = {} request_headers['Content-Type'] = 'application/octet-stream' # Set Headers request_headers['x-ms-client-request-id'] = SecureRandom.uuid request_headers['accept-language'] = accept_language unless accept_language.nil? # Serialize Request request_mapper = { client_side_validation: true, required: true, serialized_name: 'Image', type: { name: 'Stream' } } request_content = self.serialize(request_mapper, image) path_template = 'areaOfInterest' request_url = @base_url || self.base_url request_url = request_url.gsub('{Endpoint}', endpoint) options = { middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]], body: request_content, headers: request_headers.merge(custom_headers || {}), base_url: request_url } promise = self.make_request_async(:post, path_template, options) promise = promise.then do |result| http_response = result.response status_code = http_response.status response_content = http_response.body unless status_code == 200 error_model = JSON.load(response_content) fail MsRest::HttpOperationError.new(result.request, http_response, error_model) end result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil? result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil? result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil? # Deserialize Response if status_code == 200 begin parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content) result_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::AreaOfInterestResult.mapper() result.body = self.deserialize(result_mapper, parsed_response) rescue Exception => e fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result) end end result end promise.execute end
This operation returns a bounding box around the most important area of the image. A successful response will be returned in JSON. If the request failed, the response contains an error code and a message to help determine what went wrong. Upon failure, the error code and an error message are returned. The error code could be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, NotSupportedImage, FailedToProcess, Timeout, or InternalServerError.
@param image An image stream. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2060 def get_area_of_interest_in_stream_with_http_info(image, custom_headers:nil) get_area_of_interest_in_stream_async(image, custom_headers:custom_headers).value! end
This operation returns a bounding box around the most important area of the image. A successful response will be returned in JSON. If the request failed, the response contains an error code and a message to help determine what went wrong. Upon failure, the error code and an error message are returned. The error code could be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, NotSupportedImage, FailedToProcess, Timeout, or InternalServerError.
@param url [String] Publicly reachable URL of an image. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1329 def get_area_of_interest_with_http_info(url, custom_headers:nil) get_area_of_interest_async(url, custom_headers:custom_headers).value! end
This interface is used for getting the OCR results of a Read operation. The URL for this interface should be retrieved from the 'Operation-Location' field returned by the Batch Read File interface.
@param operation_id [String] Id of the read operation returned in the response of the 'Batch Read File' interface. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [ReadOperationResult] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1730 def get_read_operation_result(operation_id, custom_headers:nil) response = get_read_operation_result_async(operation_id, custom_headers:custom_headers).value! response.body unless response.nil? end
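Because the Read operation runs asynchronously on the service side, this endpoint is typically polled until the status leaves 'NotStarted'/'Running'. A sketch assuming a configured client and a placeholder operation id already parsed from the 'Operation-Location' header of an earlier Batch Read File call; the status values and the recognition_results accessor are assumed from the v2.1 models:
operation_id = 'd3e9b8f0-0000-0000-0000-000000000000'   # placeholder id from Operation-Location
result = client.get_read_operation_result(operation_id)
while ['NotStarted', 'Running'].include?(result.status)  # assumed TextOperationStatusCodes strings
  sleep 1
  result = client.get_read_operation_result(operation_id)
end
if result.status == 'Succeeded'
  result.recognition_results.each do |page|              # assumed accessor on ReadOperationResult
    page.lines.each { |line| puts line.text }
  end
end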
This interface is used for getting the OCR results of a Read operation. The URL for this interface should be retrieved from the 'Operation-Location' field returned by the Batch Read File interface.
@param operation_id [String] Id of the read operation returned in the response of the 'Batch Read File' interface. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1763 def get_read_operation_result_async(operation_id, custom_headers:nil) fail ArgumentError, 'endpoint is nil' if endpoint.nil? fail ArgumentError, 'operation_id is nil' if operation_id.nil? request_headers = {} request_headers['Content-Type'] = 'application/json; charset=utf-8' # Set Headers request_headers['x-ms-client-request-id'] = SecureRandom.uuid request_headers['accept-language'] = accept_language unless accept_language.nil? path_template = 'read/operations/{operationId}' request_url = @base_url || self.base_url request_url = request_url.gsub('{Endpoint}', endpoint) options = { middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]], path_params: {'operationId' => operation_id}, headers: request_headers.merge(custom_headers || {}), base_url: request_url } promise = self.make_request_async(:get, path_template, options) promise = promise.then do |result| http_response = result.response status_code = http_response.status response_content = http_response.body unless status_code == 200 error_model = JSON.load(response_content) fail MsRest::HttpOperationError.new(result.request, http_response, error_model) end result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil? result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil? result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil? # Deserialize Response if status_code == 200 begin parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content) result_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::ReadOperationResult.mapper() result.body = self.deserialize(result_mapper, parsed_response) rescue Exception => e fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result) end end result end promise.execute end
This interface is used for getting the OCR results of a Read operation. The URL for this interface should be retrieved from the 'Operation-Location' field returned by the Batch Read File interface.
@param operation_id [String] Id of the read operation returned in the response of the 'Batch Read File' interface. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1747 def get_read_operation_result_with_http_info(operation_id, custom_headers:nil) get_read_operation_result_async(operation_id, custom_headers:custom_headers).value! end
This interface is used for getting the result of a text operation. The URL for this interface should be retrieved from the 'Operation-Location' field returned by the Recognize Text interface.
@param operation_id [String] Id of the text operation returned in the response of the 'Recognize Text' interface. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [TextOperationResult] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1529 def get_text_operation_result(operation_id, custom_headers:nil) response = get_text_operation_result_async(operation_id, custom_headers:custom_headers).value! response.body unless response.nil? end
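The Recognize Text flow is polled in the same way; the main difference is that the body carries a single recognition result rather than a list of pages. A sketch with a placeholder operation id, assuming the recognition_result accessor from the v2.1 models:
operation_id = 'a1b2c3d4-0000-0000-0000-000000000000'   # placeholder id from Operation-Location
result = client.get_text_operation_result(operation_id)
if result.status == 'Succeeded' && !result.recognition_result.nil?  # assumed accessor on TextOperationResult
  result.recognition_result.lines.each { |line| puts line.text }
end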
This interface is used for getting the result of a text operation. The URL for this interface should be retrieved from the 'Operation-Location' field returned by the Recognize Text interface.
@param operation_id [String] Id of the text operation returned in the response of the 'Recognize Text' interface. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1562 def get_text_operation_result_async(operation_id, custom_headers:nil) fail ArgumentError, 'endpoint is nil' if endpoint.nil? fail ArgumentError, 'operation_id is nil' if operation_id.nil? request_headers = {} request_headers['Content-Type'] = 'application/json; charset=utf-8' # Set Headers request_headers['x-ms-client-request-id'] = SecureRandom.uuid request_headers['accept-language'] = accept_language unless accept_language.nil? path_template = 'textOperations/{operationId}' request_url = @base_url || self.base_url request_url = request_url.gsub('{Endpoint}', endpoint) options = { middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]], path_params: {'operationId' => operation_id}, headers: request_headers.merge(custom_headers || {}), base_url: request_url } promise = self.make_request_async(:get, path_template, options) promise = promise.then do |result| http_response = result.response status_code = http_response.status response_content = http_response.body unless status_code == 200 error_model = JSON.load(response_content) fail MsRest::HttpOperationError.new(result.request, http_response, error_model) end result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil? result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil? result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil? # Deserialize Response if status_code == 200 begin parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content) result_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::TextOperationResult.mapper() result.body = self.deserialize(result_mapper, parsed_response) rescue Exception => e fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result) end end result end promise.execute end
This interface is used for getting the result of a text operation. The URL for this interface should be retrieved from the 'Operation-Location' field returned by the Recognize Text interface.
@param operation_id [String] Id of the text operation returned in the response of the 'Recognize Text' interface. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1546 def get_text_operation_result_with_http_info(operation_id, custom_headers:nil) get_text_operation_result_async(operation_id, custom_headers:custom_headers).value! end
This operation returns the list of domain-specific models that are supported by the Computer Vision API. Currently, the API supports the following domain-specific models: celebrity recognizer, landmark recognizer. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [ListModelsResult] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 604 def list_models(custom_headers:nil) response = list_models_async(custom_headers:custom_headers).value! response.body unless response.nil? end
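A quick sketch; the call takes no required arguments, so the example simply fetches and inspects the parsed body with a configured client:
models = client.list_models
# ListModelsResult describing the supported domain-specific models
# (celebrity and landmark recognizers in v2.1); inspect it to see its fields.
p models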
This operation returns the list of domain-specific models that are supported by the Computer Vision API. Currently, the API supports the following domain-specific models: celebrity recognizer, landmark recognizer. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 639 def list_models_async(custom_headers:nil) fail ArgumentError, 'endpoint is nil' if endpoint.nil? request_headers = {} request_headers['Content-Type'] = 'application/json; charset=utf-8' # Set Headers request_headers['x-ms-client-request-id'] = SecureRandom.uuid request_headers['accept-language'] = accept_language unless accept_language.nil? path_template = 'models' request_url = @base_url || self.base_url request_url = request_url.gsub('{Endpoint}', endpoint) options = { middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]], headers: request_headers.merge(custom_headers || {}), base_url: request_url } promise = self.make_request_async(:get, path_template, options) promise = promise.then do |result| http_response = result.response status_code = http_response.status response_content = http_response.body unless status_code == 200 error_model = JSON.load(response_content) fail MsRest::HttpOperationError.new(result.request, http_response, error_model) end result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil? result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil? result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil? # Deserialize Response if status_code == 200 begin parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content) result_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::ListModelsResult.mapper() result.body = self.deserialize(result_mapper, parsed_response) rescue Exception => e fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result) end end result end promise.execute end
This operation returns the list of domain-specific models that are supported by the Computer Vision API. Currently, the API supports the following domain-specific models: celebrity recognizer, landmark recognizer. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 622 def list_models_with_http_info(custom_headers:nil) list_models_async(custom_headers:custom_headers).value! end
Makes a request and returns the body of the response. @param method [Symbol] with any of the following values :get, :put, :post, :patch, :delete. @param path [String] the path, relative to {base_url}. @param options [Hash{String=>String}] specifying any request options like :body. @return [Hash{String=>String}] containing the body of the response. Example:
request_content = "{'location':'westus','tags':{'tag1':'val1','tag2':'val2'}}" path = "/path" options = { body: request_content, query_params: {'api-version' => '2016-02-01'} } result = @client.make_request(:put, path, options)
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 73 def make_request(method, path, options = {}) result = make_request_with_http_info(method, path, options) result.body unless result.nil? end
Makes a request asynchronously. @param method [Symbol] with any of the following values :get, :put, :post, :patch, :delete. @param path [String] the path, relative to {base_url}. @param options [Hash{String=>String}] specifying any request options like :body. @return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 98 def make_request_async(method, path, options = {}) fail ArgumentError, 'method is nil' if method.nil? fail ArgumentError, 'path is nil' if path.nil? request_url = options[:base_url] || @base_url if(!options[:headers].nil? && !options[:headers]['Content-Type'].nil?) @request_headers['Content-Type'] = options[:headers]['Content-Type'] end request_headers = @request_headers request_headers.merge!({'accept-language' => @accept_language}) unless @accept_language.nil? options.merge!({headers: request_headers.merge(options[:headers] || {})}) options.merge!({credentials: @credentials}) unless @credentials.nil? super(request_url, method, path, options) end
Makes a request and returns the operation response. @param method [Symbol] with any of the following values :get, :put, :post, :patch, :delete. @param path [String] the path, relative to {base_url}. @param options [Hash{String=>String}] specifying any request options like :body. @return [MsRestAzure::AzureOperationResponse] Operation response containing the request, response and status.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 85 def make_request_with_http_info(method, path, options = {}) result = make_request_async(method, path, options).value! result.body = result.response.body.to_s.empty? ? nil : JSON.load(result.response.body) result end
Optical Character Recognition (OCR) detects text in an image and extracts the recognized characters into a machine-usable character stream. Upon success, the OCR results will be returned. Upon failure, the error code together with an error message will be returned. The error code can be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, NotSupportedImage, NotSupportedLanguage, or InternalServerError.
@param detect_orientation [Boolean] Whether to detect the text orientation in the image. With detectOrientation=true the OCR service tries to detect the image orientation and correct it before further processing (e.g. if it's upside-down). @param url [String] Publicly reachable URL of an image. @param language [OcrLanguages] The BCP-47 language code of the text to be detected in the image. The default value is 'unk'. Possible values include: 'unk', 'zh-Hans', 'zh-Hant', 'cs', 'da', 'nl', 'en', 'fi', 'fr', 'de', 'el', 'hu', 'it', 'ja', 'ko', 'nb', 'pl', 'pt', 'ru', 'es', 'sv', 'tr', 'ar', 'ro', 'sr-Cyrl', 'sr-Latn', 'sk' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [OcrResult] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 861 def recognize_printed_text(detect_orientation, url, language:nil, custom_headers:nil) response = recognize_printed_text_async(detect_orientation, url, language:language, custom_headers:custom_headers).value! response.body unless response.nil? end
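A sketch of reading back the recognized text with a configured client and a placeholder URL; an OcrResult is organized as regions, each containing lines, each containing words, as in the v2.1 models:
ocr = client.recognize_printed_text(true, 'https://example.com/sign.jpg', language: 'en')
ocr.regions.each do |region|
  region.lines.each do |line|
    puts line.words.map(&:text).join(' ')   # reassemble each detected line of text
  end
end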
Optical Character Recognition (OCR) detects text in an image and extracts the recognized characters into a machine-usable character stream. Upon success, the OCR results will be returned. Upon failure, the error code together with an error message will be returned. The error code can be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, NotSupportedImage, NotSupportedLanguage, or InternalServerError.
@param detect_orientation [Boolean] Whether to detect the text orientation in the image. With detectOrientation=true the OCR service tries to detect the image orientation and correct it before further processing (e.g. if it's upside-down). @param url [String] Publicly reachable URL of an image. @param language [OcrLanguages] The BCP-47 language code of the text to be detected in the image. The default value is 'unk'. Possible values include: 'unk', 'zh-Hans', 'zh-Hant', 'cs', 'da', 'nl', 'en', 'fi', 'fr', 'de', 'el', 'hu', 'it', 'ja', 'ko', 'nb', 'pl', 'pt', 'ru', 'es', 'sv', 'tr', 'ar', 'ro', 'sr-Cyrl', 'sr-Latn', 'sk' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 918 def recognize_printed_text_async(detect_orientation, url, language:nil, custom_headers:nil) fail ArgumentError, 'endpoint is nil' if endpoint.nil? fail ArgumentError, 'detect_orientation is nil' if detect_orientation.nil? fail ArgumentError, 'url is nil' if url.nil? image_url = ImageUrl.new unless url.nil? image_url.url = url end request_headers = {} request_headers['Content-Type'] = 'application/json; charset=utf-8' # Set Headers request_headers['x-ms-client-request-id'] = SecureRandom.uuid request_headers['accept-language'] = accept_language unless accept_language.nil? # Serialize Request request_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::ImageUrl.mapper() request_content = self.serialize(request_mapper, image_url) request_content = request_content != nil ? JSON.generate(request_content, quirks_mode: true) : nil path_template = 'ocr' request_url = @base_url || self.base_url request_url = request_url.gsub('{Endpoint}', endpoint) options = { middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]], query_params: {'detectOrientation' => detect_orientation,'language' => language}, body: request_content, headers: request_headers.merge(custom_headers || {}), base_url: request_url } promise = self.make_request_async(:post, path_template, options) promise = promise.then do |result| http_response = result.response status_code = http_response.status response_content = http_response.body unless status_code == 200 error_model = JSON.load(response_content) fail MsRest::HttpOperationError.new(result.request, http_response, error_model) end result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil? result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil? result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil? # Deserialize Response if status_code == 200 begin parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content) result_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::OcrResult.mapper() result.body = self.deserialize(result_mapper, parsed_response) rescue Exception => e fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result) end end result end promise.execute end
Optical Character Recognition (OCR) detects text in an image and extracts the recognized characters into a machine-usable character stream. Upon success, the OCR results will be returned. Upon failure, the error code together with an error message will be returned. The error code can be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, NotSupportedImage, NotSupportedLanguage, or InternalServerError.
@param detect_orientation [Boolean] Whether to detect the text orientation in the image. With detectOrientation=true the OCR service tries to detect the image orientation and correct it before further processing (e.g. if it's upside-down). @param image An image stream. @param language [OcrLanguages] The BCP-47 language code of the text to be detected in the image. The default value is 'unk'. Possible values include: 'unk', 'zh-Hans', 'zh-Hant', 'cs', 'da', 'nl', 'en', 'fi', 'fr', 'de', 'el', 'hu', 'it', 'ja', 'ko', 'nb', 'pl', 'pt', 'ru', 'es', 'sv', 'tr', 'ar', 'ro', 'sr-Cyrl', 'sr-Latn', 'sk' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [OcrResult] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2756
def recognize_printed_text_in_stream(detect_orientation, image, language:nil, custom_headers:nil)
  response = recognize_printed_text_in_stream_async(detect_orientation, image, language:language, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
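A hedged sketch of the stream variant, assuming a local file path and an initialized `client`; the image is opened in binary mode and passed directly as the stream.

# Hypothetical sketch: OCR a local image file via the stream overload.
ocr = File.open('invoice.jpg', 'rb') do |image|   # path is a placeholder
  client.recognize_printed_text_in_stream(true, image, language: 'en')
end
puts ocr.language
puts ocr.regions.flat_map(&:lines).length         # number of recognized text lines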
Optical Character Recognition (OCR) detects text in an image and extracts the recognized characters into a machine-usable character stream. Upon success, the OCR results will be returned. Upon failure, the error code together with an error message will be returned. The error code can be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, NotSupportedImage, NotSupportedLanguage, or InternalServerError.
@param detect_orientation [Boolean] Whether to detect the text orientation in the image. With detectOrientation=true the OCR service tries to detect the image orientation and correct it before further processing (e.g. if it's upside-down). @param image An image stream. @param language [OcrLanguages] The BCP-47 language code of the text to be detected in the image. The default value is 'unk'. Possible values include: 'unk', 'zh-Hans', 'zh-Hant', 'cs', 'da', 'nl', 'en', 'fi', 'fr', 'de', 'el', 'hu', 'it', 'ja', 'ko', 'nb', 'pl', 'pt', 'ru', 'es', 'sv', 'tr', 'ar', 'ro', 'sr-Cyrl', 'sr-Latn', 'sk' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2813
def recognize_printed_text_in_stream_async(detect_orientation, image, language:nil, custom_headers:nil)
  fail ArgumentError, 'endpoint is nil' if endpoint.nil?
  fail ArgumentError, 'detect_orientation is nil' if detect_orientation.nil?
  fail ArgumentError, 'image is nil' if image.nil?

  request_headers = {}
  request_headers['Content-Type'] = 'application/octet-stream'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = {
    client_side_validation: true,
    required: true,
    serialized_name: 'Image',
    type: {
      name: 'Stream'
    }
  }
  request_content = self.serialize(request_mapper, image)

  path_template = 'ocr'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{Endpoint}', endpoint)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'detectOrientation' => detect_orientation, 'language' => language},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::OcrResult.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
Optical Character Recognition (OCR) detects text in an image and extracts the recognized characters into a machine-usable character stream. Upon success, the OCR results will be returned. Upon failure, the error code together with an error message will be returned. The error code can be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, NotSupportedImage, NotSupportedLanguage, or InternalServerError.
@param detect_orientation [Boolean] Whether to detect the text orientation in the image. With detectOrientation=true the OCR service tries to detect the image orientation and correct it before further processing (e.g. if it's upside-down). @param image An image stream. @param language [OcrLanguages] The BCP-47 language code of the text to be detected in the image. The default value is 'unk'. Possible values include: 'unk', 'zh-Hans', 'zh-Hant', 'cs', 'da', 'nl', 'en', 'fi', 'fr', 'de', 'el', 'hu', 'it', 'ja', 'ko', 'nb', 'pl', 'pt', 'ru', 'es', 'sv', 'tr', 'ar', 'ro', 'sr-Cyrl', 'sr-Latn', 'sk' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2785
def recognize_printed_text_in_stream_with_http_info(detect_orientation, image, language:nil, custom_headers:nil)
  recognize_printed_text_in_stream_async(detect_orientation, image, language:language, custom_headers:custom_headers).value!
end
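The _with_http_info variants return the full MsRestAzure::AzureOperationResponse rather than just the body, which is useful when you need the status code or the request ids that the promise handler copies from the response headers. A minimal sketch, assuming `client` and `image` (an IO opened in binary mode) are already set up:

# Hypothetical sketch: inspect the raw operation response alongside the body.
op = client.recognize_printed_text_in_stream_with_http_info(true, image)
puts op.response.status      # 200 on success
puts op.request_id           # x-ms-request-id copied from the response headers
ocr = op.body                # deserialized OcrResult, same as the plain variant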
Optical Character Recognition (OCR) detects text in an image and extracts the recognized characters into a machine-usable character stream. Upon success, the OCR results will be returned. Upon failure, the error code together with an error message will be returned. The error code can be one of InvalidImageUrl, InvalidImageFormat, InvalidImageSize, NotSupportedImage, NotSupportedLanguage, or InternalServerError.
@param detect_orientation [Boolean] Whether to detect the text orientation in the image. With detectOrientation=true the OCR service tries to detect the image orientation and correct it before further processing (e.g. if it's upside-down). @param url [String] Publicly reachable URL of an image. @param language [OcrLanguages] The BCP-47 language code of the text to be detected in the image. The default value is 'unk'. Possible values include: 'unk', 'zh-Hans', 'zh-Hant', 'cs', 'da', 'nl', 'en', 'fi', 'fr', 'de', 'el', 'hu', 'it', 'ja', 'ko', 'nb', 'pl', 'pt', 'ru', 'es', 'sv', 'tr', 'ar', 'ro', 'sr-Cyrl', 'sr-Latn', 'sk' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 890
def recognize_printed_text_with_http_info(detect_orientation, url, language:nil, custom_headers:nil)
  recognize_printed_text_async(detect_orientation, url, language:language, custom_headers:custom_headers).value!
end
Recognize Text operation. When you use the Recognize Text interface, the response contains a field called 'Operation-Location'. The 'Operation-Location' field contains the URL that you must use for your Get Recognize Text Operation Result operation.
@param mode [TextRecognitionMode] Type of text to recognize. Possible values include: 'Handwritten', 'Printed' @param url [String] Publicly reachable URL of an image. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1425
def recognize_text(url, mode, custom_headers:nil)
  response = recognize_text_async(url, mode, custom_headers:custom_headers).value!
  nil
end
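Note that recognize_text itself returns nil; the useful output of this asynchronous operation is the 'Operation-Location' response header, which is only visible through the _with_http_info or _async variants. The sketch below is a hedged illustration of the documented flow: submit, read 'Operation-Location', then poll the result operation. The polling method name (get_text_operation_result) and the status strings are assumptions about the surrounding v2.1 client surface and may differ in your SDK version.

# Hypothetical sketch: submit a Recognize Text request and poll its result.
op = client.recognize_text_with_http_info('https://example.com/note.jpg', 'Handwritten')
operation_location = op.response['Operation-Location']    # URL of the result operation
operation_id = operation_location.split('/').last

result = nil
10.times do
  result = client.get_text_operation_result(operation_id)   # assumed helper on this client
  break if ['Succeeded', 'Failed'].include?(result.status)  # assumed terminal status strings
  sleep 1
end
result.recognition_result.lines.each { |l| puts l.text } if result.status == 'Succeeded'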
Recognize Text operation. When you use the Recognize Text interface, the response contains a field called 'Operation-Location'. The 'Operation-Location' field contains the URL that you must use for your Get Recognize Text Operation Result operation.
@param mode [TextRecognitionMode] Type of text to recognize. Possible values include: 'Handwritten', 'Printed' @param url [String] Publicly reachable URL of an image. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1462
def recognize_text_async(url, mode, custom_headers:nil)
  fail ArgumentError, 'endpoint is nil' if endpoint.nil?
  fail ArgumentError, 'mode is nil' if mode.nil?
  fail ArgumentError, 'url is nil' if url.nil?

  image_url = ImageUrl.new
  unless url.nil?
    image_url.url = url
  end

  request_headers = {}
  request_headers['Content-Type'] = 'application/json; charset=utf-8'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::ImageUrl.mapper()
  request_content = self.serialize(request_mapper, image_url)
  request_content = request_content != nil ? JSON.generate(request_content, quirks_mode: true) : nil

  path_template = 'recognizeText'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{Endpoint}', endpoint)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'mode' => mode},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 202
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?

    result
  end

  promise.execute
end
Recognize Text operation. When you use the Recognize Text interface, the response contains a field called 'Operation-Location'. The 'Operation-Location' field contains the URL that you must use for your Get Recognize Text Operation Result operation.
@param image An image stream. @param mode [TextRecognitionMode] Type of text to recognize. Possible values include: 'Handwritten', 'Printed' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 3043
def recognize_text_in_stream(image, mode, custom_headers:nil)
  response = recognize_text_in_stream_async(image, mode, custom_headers:custom_headers).value!
  nil
end
Recognize Text operation. When you use the Recognize Text interface, the response contains a field called 'Operation-Location'. The 'Operation-Location' field contains the URL that you must use for your Get Recognize Text Operation Result operation.
@param image An image stream. @param mode [TextRecognitionMode] Type of text to recognize. Possible values include: 'Handwritten', 'Printed' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 3080
def recognize_text_in_stream_async(image, mode, custom_headers:nil)
  fail ArgumentError, 'endpoint is nil' if endpoint.nil?
  fail ArgumentError, 'image is nil' if image.nil?
  fail ArgumentError, 'mode is nil' if mode.nil?

  request_headers = {}
  request_headers['Content-Type'] = 'application/octet-stream'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = {
    client_side_validation: true,
    required: true,
    serialized_name: 'Image',
    type: {
      name: 'Stream'
    }
  }
  request_content = self.serialize(request_mapper, image)

  path_template = 'recognizeText'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{Endpoint}', endpoint)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'mode' => mode},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 202
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?

    result
  end

  promise.execute
end
Recognize Text operation. When you use the Recognize Text interface, the response contains a field called 'Operation-Location'. The 'Operation-Location' field contains the URL that you must use for your Get Recognize Text Operation Result operation.
@param image An image stream. @param mode [TextRecognitionMode] Type of text to recognize. Possible values include: 'Handwritten', 'Printed' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 3062
def recognize_text_in_stream_with_http_info(image, mode, custom_headers:nil)
  recognize_text_in_stream_async(image, mode, custom_headers:custom_headers).value!
end
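As with the URL variant, the stream call returns a 202 with no body; the 'Operation-Location' header is the piece to keep. A hedged sketch, assuming a local file path and an initialized `client`:

# Hypothetical sketch: submit a local image for text recognition and capture
# the Operation-Location header for later polling.
op = File.open('whiteboard.png', 'rb') do |image|    # path is a placeholder
  client.recognize_text_in_stream_with_http_info(image, 'Printed')
end
puts op.response.status                  # expected 202 Accepted
puts op.response['Operation-Location']   # poll this URL (or its trailing id) for the result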
Recognize Text operation. When you use the Recognize Text interface, the response contains a field called 'Operation-Location'. The 'Operation-Location' field contains the URL that you must use for your Get Recognize Text Operation Result operation.
@param mode [TextRecognitionMode] Type of text to recognize. Possible values include: 'Handwritten', 'Printed' @param url [String] Publicly reachable URL of an image. @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1444
def recognize_text_with_http_info(url, mode, custom_headers:nil)
  recognize_text_async(url, mode, custom_headers:custom_headers).value!
end
This operation generates a list of words, or tags, that are relevant to the content of the supplied image. The Computer Vision API can return tags based on objects, living beings, scenery or actions found in images. Unlike categories, tags are not organized according to a hierarchical classification system, but correspond to image content. Tags may contain hints to avoid ambiguity or provide context, for example the tag “ascomycete” may be accompanied by the hint “fungus”. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param url [String] Publicly reachable URL of an image. @param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [TagResult] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1008
def tag_image(url, language:nil, custom_headers:nil)
  response = tag_image_async(url, language:language, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
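A minimal usage sketch for tagging by URL, assuming an initialized `client`; the name/confidence/hint attributes reflect the TagResult and ImageTag models returned by this operation, and the URL is a placeholder.

# Hypothetical sketch: tag an image by URL and print each tag with its confidence.
tag_result = client.tag_image('https://example.com/garden.jpg', language: 'en')
tag_result.tags.each do |tag|
  label = tag.hint ? "#{tag.name} (#{tag.hint})" : tag.name
  puts format('%-24s %.2f', label, tag.confidence)
end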
This operation generates a list of words, or tags, that are relevant to the content of the supplied image. The Computer Vision API can return tags based on objects, living beings, scenery or actions found in images. Unlike categories, tags are not organized according to a hierarchical classification system, but correspond to image content. Tags may contain hints to avoid ambiguity or provide context, for example the tag “ascomycete” may be accompanied by the hint “fungus”. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param url [String] Publicly reachable URL of an image. @param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1067
def tag_image_async(url, language:nil, custom_headers:nil)
  fail ArgumentError, 'endpoint is nil' if endpoint.nil?
  fail ArgumentError, 'url is nil' if url.nil?

  image_url = ImageUrl.new
  unless url.nil?
    image_url.url = url
  end

  request_headers = {}
  request_headers['Content-Type'] = 'application/json; charset=utf-8'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::ImageUrl.mapper()
  request_content = self.serialize(request_mapper, image_url)
  request_content = request_content != nil ? JSON.generate(request_content, quirks_mode: true) : nil

  path_template = 'tag'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{Endpoint}', endpoint)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'language' => language},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::TagResult.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
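Because the promise handler raises MsRest::HttpOperationError for any non-200 status (as shown above), callers that resolve the promise themselves typically wrap the call in a rescue. A hedged sketch, assuming `client` is initialized and the URL is a placeholder:

# Hypothetical sketch: resolve the tagging promise and handle service errors.
begin
  op = client.tag_image_async('https://example.com/photo.jpg').value!
  op.body.tags.each { |t| puts t.name }
rescue MsRest::HttpOperationError => e
  # e.body carries the error payload the handler parsed from the response
  warn "Tagging failed: HTTP #{e.response.status} - #{e.body}"
end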
This operation generates a list of words, or tags, that are relevant to the content of the supplied image. The Computer Vision API can return tags based on objects, living beings, scenery or actions found in images. Unlike categories, tags are not organized according to a hierarchical classification system, but correspond to image content. Tags may contain hints to avoid ambiguity or provide context, for example the tag “ascomycete” may be accompanied by the hint “fungus”. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param image An image stream. @param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [TagResult] operation results.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2905
def tag_image_in_stream(image, language:nil, custom_headers:nil)
  response = tag_image_in_stream_async(image, language:language, custom_headers:custom_headers).value!
  response.body unless response.nil?
end
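The stream overload works the same way with a local file; a hedged sketch assuming the path exists and `client` is initialized:

# Hypothetical sketch: tag a local image read as a binary stream.
tag_result = File.open('holiday.jpg', 'rb') do |image|   # path is a placeholder
  client.tag_image_in_stream(image, language: 'en')
end
puts tag_result.tags.map(&:name).join(', ')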
This operation generates a list of words, or tags, that are relevant to the content of the supplied image. The Computer Vision API can return tags based on objects, living beings, scenery or actions found in images. Unlike categories, tags are not organized according to a hierarchical classification system, but correspond to image content. Tags may contain hints to avoid ambiguity or provide context, for example the tag “ascomycete” may be accompanied by the hint “fungus”. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param image An image stream. @param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [Concurrent::Promise] Promise object which holds the HTTP response.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2964
def tag_image_in_stream_async(image, language:nil, custom_headers:nil)
  fail ArgumentError, 'endpoint is nil' if endpoint.nil?
  fail ArgumentError, 'image is nil' if image.nil?

  request_headers = {}
  request_headers['Content-Type'] = 'application/octet-stream'

  # Set Headers
  request_headers['x-ms-client-request-id'] = SecureRandom.uuid
  request_headers['accept-language'] = accept_language unless accept_language.nil?

  # Serialize Request
  request_mapper = {
    client_side_validation: true,
    required: true,
    serialized_name: 'Image',
    type: {
      name: 'Stream'
    }
  }
  request_content = self.serialize(request_mapper, image)

  path_template = 'tag'

  request_url = @base_url || self.base_url
  request_url = request_url.gsub('{Endpoint}', endpoint)

  options = {
      middlewares: [[MsRest::RetryPolicyMiddleware, times: 3, retry: 0.02], [:cookie_jar]],
      query_params: {'language' => language},
      body: request_content,
      headers: request_headers.merge(custom_headers || {}),
      base_url: request_url
  }
  promise = self.make_request_async(:post, path_template, options)

  promise = promise.then do |result|
    http_response = result.response
    status_code = http_response.status
    response_content = http_response.body
    unless status_code == 200
      error_model = JSON.load(response_content)
      fail MsRest::HttpOperationError.new(result.request, http_response, error_model)
    end

    result.request_id = http_response['x-ms-request-id'] unless http_response['x-ms-request-id'].nil?
    result.correlation_request_id = http_response['x-ms-correlation-request-id'] unless http_response['x-ms-correlation-request-id'].nil?
    result.client_request_id = http_response['x-ms-client-request-id'] unless http_response['x-ms-client-request-id'].nil?
    # Deserialize Response
    if status_code == 200
      begin
        parsed_response = response_content.to_s.empty? ? nil : JSON.load(response_content)
        result_mapper = Azure::CognitiveServices::ComputerVision::V2_1::Models::TagResult.mapper()
        result.body = self.deserialize(result_mapper, parsed_response)
      rescue Exception => e
        fail MsRest::DeserializationError.new('Error occurred in deserializing the response', e.message, e.backtrace, result)
      end
    end

    result
  end

  promise.execute
end
This operation generates a list of words, or tags, that are relevant to the content of the supplied image. The Computer Vision API can return tags based on objects, living beings, scenery or actions found in images. Unlike categories, tags are not organized according to a hierarchical classification system, but correspond to image content. Tags may contain hints to avoid ambiguity or provide context, for example the tag “ascomycete” may be accompanied by the hint “fungus”. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param image An image stream. @param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 2935
def tag_image_in_stream_with_http_info(image, language:nil, custom_headers:nil)
  tag_image_in_stream_async(image, language:language, custom_headers:custom_headers).value!
end
This operation generates a list of words, or tags, that are relevant to the content of the supplied image. The Computer Vision API can return tags based on objects, living beings, scenery or actions found in images. Unlike categories, tags are not organized according to a hierarchical classification system, but correspond to image content. Tags may contain hints to avoid ambiguity or provide context, for example the tag “ascomycete” may be accompanied by the hint “fungus”. Two input methods are supported – (1) Uploading an image or (2) specifying an image URL. A successful response will be returned in JSON. If the request failed, the response will contain an error code and a message to help understand what went wrong.
@param url [String] Publicly reachable URL of an image. @param language [Enum] The desired language for output generation. If this parameter is not specified, the default value is "en". Supported languages: en - English (default), es - Spanish, ja - Japanese, pt - Portuguese, zh - Simplified Chinese. Possible values include: 'en', 'es', 'ja', 'pt', 'zh' @param custom_headers [Hash{String => String}] A hash of custom headers that will be added to the HTTP request.
@return [MsRestAzure::AzureOperationResponse] HTTP response information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 1038
def tag_image_with_http_info(url, language:nil, custom_headers:nil)
  tag_image_async(url, language:language, custom_headers:custom_headers).value!
end
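One practical use of the header plumbing shown above: because the per-call headers are built with request_headers.merge(custom_headers || {}), a caller-supplied header of the same name takes precedence, so you can inject your own x-ms-client-request-id for end-to-end tracing. A hedged sketch; whether the service echoes the header back is not guaranteed here.

# Hypothetical sketch: supply a caller-generated client request id and read it back.
my_id = SecureRandom.uuid
op = client.tag_image_with_http_info(
  'https://example.com/skyline.jpg',                        # placeholder URL
  custom_headers: { 'x-ms-client-request-id' => my_id }
)
puts op.client_request_id   # populated when the service returns the header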
Private Instance Methods
Adds telemetry information.
# File lib/2.1/generated/azure_cognitiveservices_computervision/computer_vision_client.rb, line 3247
def add_telemetry
  sdk_information = 'azure_cognitiveservices_computervision'
  sdk_information = "#{sdk_information}/0.20.2"
  add_user_agent_information(sdk_information)
end