class Google::Apis::DialogflowV3::GoogleCloudDialogflowV2beta1QueryResult
Represents the result of a conversational query or event processing.
Attributes
The action name from the matched intent. Corresponds to the JSON property `action` @return [String]
This field is set to: - `false` if the matched intent has required parameters and not all of the required parameter values have been collected. - `true` if all required parameter values have been collected, or if the matched intent doesn't contain any required parameters. Corresponds to the JSON property `allRequiredParamsPresent` @return [Boolean]
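A minimal sketch of how this flag might be consumed, assuming `query_result` holds an instance of this class obtained from a detect-intent style response (the variable name is illustrative, not part of this API):

  # Hypothetical query_result: a GoogleCloudDialogflowV2beta1QueryResult.
  if query_result.all_required_params_present
    # All required parameters of the matched intent have values.
    puts "Ready to act on: #{query_result.action}"
  else
    # Slot filling is still in progress; the agent's prompt for the
    # missing parameter is usually carried in fulfillment_text.
    puts "Still collecting parameters: #{query_result.fulfillment_text}"
  end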
Indicates whether the conversational query triggers a cancellation for slot filling. Corresponds to the JSON property `cancelsSlotFilling` @return [Boolean]
Free-form diagnostic information for the associated detect intent request. The fields of this data can change without notice, so you should not write code that depends on its structure. The data may contain: - webhook call latency - webhook errors Corresponds to the JSON property `diagnosticInfo` @return [Hash<String,Object>]
The collection of rich messages to present to the user. Corresponds to the JSON property `fulfillmentMessages` @return [Array<Google::Apis::DialogflowV3::GoogleCloudDialogflowV2beta1IntentMessage>]
The text to be pronounced to the user or shown on the screen. Note: This is a legacy field, `fulfillment_messages` should be preferred. Corresponds to the JSON property `fulfillmentText` @return [String]
An intent categorizes an end-user's intention for one conversation turn. For each agent, you define many intents, where your combined intents can handle a complete conversation. When an end-user writes or says something, referred to as an end-user expression or end-user input, Dialogflow matches the end-user input to the best intent in your agent. Matching an intent is also known as intent classification. For more information, see the [intent guide](https://cloud.google.com/dialogflow/docs/intents-overview). Corresponds to the JSON property `intent` @return [Google::Apis::DialogflowV3::GoogleCloudDialogflowV2beta1Intent]
The intent detection confidence. Values range from 0.0 (completely uncertain) to 1.0 (completely certain). This value is for informational purposes only and is only used to help match the best intent within the classification threshold. This value may change for the same end-user expression at any time due to a model retraining or change in implementation. If there are multiple `knowledge_answers` messages, this value is set to the greatest `knowledgeAnswers.match_confidence` value in the list. Corresponds to the JSON property `intentDetectionConfidence` @return [Float]
Represents the result of querying a Knowledge base. Corresponds to the JSON property `knowledgeAnswers` @return [Google::Apis::DialogflowV3::GoogleCloudDialogflowV2beta1KnowledgeAnswers]
The language that was triggered during intent detection. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Corresponds to the JSON property `languageCode` @return [String]
The collection of output contexts. If applicable, `output_contexts.parameters` contains entries with name `<parameter name>.original` containing the original parameter values before the query. Corresponds to the JSON property `outputContexts` @return [Array<Google::Apis::DialogflowV3::GoogleCloudDialogflowV2beta1Context>]
The collection of extracted parameters. Depending on your protocol or client library language, this is a map, associative array, symbol table, dictionary, or JSON object composed of a collection of (MapKey, MapValue) pairs:
- MapKey type: string
- MapKey value: parameter name
- MapValue type:
  - If parameter's entity type is a composite entity: map
  - Else: depending on parameter value type, could be one of string, number, boolean, null, list or map
- MapValue value:
  - If parameter's entity type is a composite entity: map from composite entity property names to property values
  - Else: parameter value
Corresponds to the JSON property `parameters` @return [Hash<String,Object>]
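As a rough, unofficial illustration of the shape described above (again assuming `query_result` is an instance of this class), composite-entity parameters arrive as nested hashes while other parameters are plain values:

  # Walk the extracted parameters; a Hash value indicates a composite
  # entity (property name => property value), anything else is the
  # parameter value itself (string, number, boolean, nil, or list).
  (query_result.parameters || {}).each do |name, value|
    if value.is_a?(Hash)
      value.each { |prop, prop_value| puts "#{name}.#{prop} = #{prop_value.inspect}" }
    else
      puts "#{name} = #{value.inspect}"
    end
  end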
The original conversational query text: - If natural language text was provided as input, `query_text` contains a copy of the input. - If natural language speech audio was provided as input, `query_text` contains the speech recognition result. If the speech recognizer produced multiple alternatives, a particular one is picked. - If automatic spell correction is enabled, `query_text` will contain the corrected user input. Corresponds to the JSON property `queryText` @return [String]
The result of sentiment analysis. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral. For Participants.DetectIntent, it needs to be configured in DetectIntentRequest.query_params. For Participants.StreamingDetectIntent, it needs to be configured in StreamingDetectIntentRequest.query_params. And for Participants.AnalyzeContent and Participants.StreamingAnalyzeContent, it needs to be configured in ConversationProfile.human_agent_assistant_config. Corresponds to the JSON property `sentimentAnalysisResult` @return [Google::Apis::DialogflowV3::GoogleCloudDialogflowV2beta1SentimentAnalysisResult]
The Speech recognition confidence between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is not guaranteed to be accurate or set. In particular this field isn't set for StreamingDetectIntent since the streaming endpoint has separate confidence estimates per portion of the audio in StreamingRecognitionResult. Corresponds to the JSON property `speechRecognitionConfidence` @return [Float]
If the query was fulfilled by a webhook call, this field is set to the value of the `payload` field returned in the webhook response. Corresponds to the JSON property `webhookPayload` @return [Hash<String,Object>]
If the query was fulfilled by a webhook call, this field is set to the value of the `source` field returned in the webhook response. Corresponds to the JSON property `webhookSource` @return [String]
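A small sketch of reading the webhook-related fields; both are unset when no webhook fulfilled the query, so the nil checks below are the important part (`query_result` is again an assumed instance of this class):

  # webhook_source and webhook_payload are only populated when a webhook
  # fulfilled the query.
  if query_result.webhook_source
    puts "Fulfilled by webhook: #{query_result.webhook_source}"
    (query_result.webhook_payload || {}).each do |key, value|
      puts "  payload[#{key}] = #{value.inspect}"
    end
  end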
Public Class Methods
# File lib/google/apis/dialogflow_v3/classes.rb, line 14173
def initialize(**args)
  update!(**args)
end
Public Instance Methods
Update properties of this object
# File lib/google/apis/dialogflow_v3/classes.rb, line 14178
def update!(**args)
  @action = args[:action] if args.key?(:action)
  @all_required_params_present = args[:all_required_params_present] if args.key?(:all_required_params_present)
  @cancels_slot_filling = args[:cancels_slot_filling] if args.key?(:cancels_slot_filling)
  @diagnostic_info = args[:diagnostic_info] if args.key?(:diagnostic_info)
  @fulfillment_messages = args[:fulfillment_messages] if args.key?(:fulfillment_messages)
  @fulfillment_text = args[:fulfillment_text] if args.key?(:fulfillment_text)
  @intent = args[:intent] if args.key?(:intent)
  @intent_detection_confidence = args[:intent_detection_confidence] if args.key?(:intent_detection_confidence)
  @knowledge_answers = args[:knowledge_answers] if args.key?(:knowledge_answers)
  @language_code = args[:language_code] if args.key?(:language_code)
  @output_contexts = args[:output_contexts] if args.key?(:output_contexts)
  @parameters = args[:parameters] if args.key?(:parameters)
  @query_text = args[:query_text] if args.key?(:query_text)
  @sentiment_analysis_result = args[:sentiment_analysis_result] if args.key?(:sentiment_analysis_result)
  @speech_recognition_confidence = args[:speech_recognition_confidence] if args.key?(:speech_recognition_confidence)
  @webhook_payload = args[:webhook_payload] if args.key?(:webhook_payload)
  @webhook_source = args[:webhook_source] if args.key?(:webhook_source)
end
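Like other generated classes in this gem, instances are built from keyword arguments and can be updated in place; a minimal, illustrative example with made-up values:

  require "google/apis/dialogflow_v3"

  result = Google::Apis::DialogflowV3::GoogleCloudDialogflowV2beta1QueryResult.new(
    action: "book-room",                 # hypothetical action name
    all_required_params_present: false,
    fulfillment_text: "For which date?"
  )

  # update! only touches the keys that are passed in; other attributes
  # keep their current values.
  result.update!(all_required_params_present: true)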