Bot Analytics glossary
Use the Bot Analytics glossary to familiarize yourself with common Bot Analytics terms, metrics, and phrases.
Cost per Automated Chat ($/AC)
The total cost spent on a single, fully automated chat (that is, one handled without a live agent).
Bot Experience Score (BES)
A Key Performance Indicator used to measure the user experience of the bot (chat/voice). The following signals within a conversation are considered.
- Bot repetition - The bot repeats itself for any reason during a conversation.
- Customer paraphrase - The customer uses a similar query twice or more in a conversation.
- Abandonment - The customer leaves mid-conversation without reaching a legitimate end response configured on the bot.
- Negative sentiment - Negative sentiment detected by an AI-based sentiment model of the conversation.
- Negative feedback - Explicit negative feedback received in the conversation.
- Profanity - Profanity present in the conversation.
- Request to escalate multiple times - The customer used the word "agent" (or similar) more than once in a conversation. Note that using "agent" once and being directly escalated is not generally a bad experience.
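As a rough illustration, two of the signals above (bot repetition and repeated escalation requests) could be detected with logic like the following. This is a minimal sketch with hypothetical field names; the actual BES model and its weighting are proprietary.

```python
# Sketch of two BES signals: bot repetition and repeated escalation requests.
# Message fields ("sender", "text") are hypothetical, not the product's schema.

def bot_repetition(messages):
    """True if the bot sends the same message more than once."""
    bot_msgs = [m["text"] for m in messages if m["sender"] == "bot"]
    return len(bot_msgs) != len(set(bot_msgs))

def repeated_escalation_requests(messages, keywords=("agent", "human")):
    """True if the customer asks for an agent more than once."""
    hits = sum(
        1
        for m in messages
        if m["sender"] == "customer"
        and any(k in m["text"].lower() for k in keywords)
    )
    return hits > 1

conversation = [
    {"sender": "customer", "text": "I need an agent"},
    {"sender": "bot", "text": "Can you tell me more?"},
    {"sender": "customer", "text": "Get me a human agent now"},
    {"sender": "bot", "text": "Can you tell me more?"},
]
print(bot_repetition(conversation))                # True: bot repeated itself
print(repeated_escalation_requests(conversation))  # True: two escalation requests
```

In practice these signals are combined with the others (abandonment, sentiment, feedback, profanity) into a single score.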
Bot Automation Score (BAS)
A Key Performance Indicator used to measure how effective the bot is at handling user problems. The following signals within a conversation are considered.
- Escalate to an agent - The conversation is escalated to a live agent.
- Abandonment - The customer leaves mid-conversation without reaching a legitimate end response configured on the bot.
- False positive - The customer received an unrelated response to their question.
- Negative feedback - Explicit negative feedback received in the conversation.
- Escalation requested but not connected - The customer requested an escalation but was not connected to a live agent.
NOTE All metrics under the Conversation Analytics section are calculated using only conversations that have a topic. Blacklisted topics are not shown in this section or included in metric calculations.
Total Conversations
Conversations in which at least one customer message is received (either typed or spoken).
Conversations with a topic
Conversations in which the model has identified a reason for the user reaching out to the Virtual Agent or Agent.
Total Sessions
The number of times a chat window is triggered open or a voice call is attempted between the customer and agent (virtual or live), whether initiated by the customer or the agent. A session is inactive when no customer message is received.
Virtual Agent (VA) Originated Conversations
Conversations in which the bot is the customer's first point of contact.
Live Agent Only Conversations
Conversations in which users directly interact with a live agent, without involving the bot.
Virtual Agent (VA) Engaged Conversations
Conversations with a bot in which at least one customer message is received that does not lead to an immediate escalation.
Virtual Agent (VA) Conversations with Immediate Escalations
Conversations in which users begin interaction with a bot and immediately get escalated (either because the customer requested the escalation or the journey is designed to escalate).
Virtual Agent (VA) Contained Conversations
Conversations in which only the bot is engaged, without a live agent escalation taking place.
Virtual Agent (VA) Engaged with Live Agent Requested but not Connected
Conversations with a bot in which an escalation is requested but the customer is not connected to a live agent.
Virtual Agent (VA) Engaged with Live Agent Connected Conversations
Conversations with a bot that end with the customer escalating to a live agent.
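The conversation categories above are mutually exclusive, which can be made concrete with a small classification sketch. The field names below (`bot_engaged`, `immediate_escalation`, and so on) are hypothetical, not the product's actual schema.

```python
# Sketch: classify a conversation into the VA categories defined above.
# All field names are hypothetical and illustrative only.

def categorize(conv):
    if not conv["bot_engaged"]:
        return "live_agent_only"
    if conv["immediate_escalation"]:
        return "va_immediate_escalation"
    if conv["escalated_to_agent"]:
        return "va_engaged_agent_connected"
    if conv["escalation_requested"]:
        return "va_engaged_agent_not_connected"
    return "va_contained"

conv = {
    "bot_engaged": True,
    "immediate_escalation": False,
    "escalation_requested": True,
    "escalated_to_agent": False,
}
print(categorize(conv))  # va_engaged_agent_not_connected
```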
Conversation Topics
Conversation topics are the precise reason why the customer contacted you. Bot Analytics' topic modeler automatically determines the main conversation topic brought up by customers during a conversation with a virtual agent or live agent. You can override the topic modeler by blacklisting a conversation topic.
EXAMPLE In the example conversation below, the extracted conversation topic is “remove package feature.”
| User | Message |
|---|---|
| Customer | I would like to remove a feature from my package. |
| Bot | Ok, let me put you in touch with the right agent. |
| Customer | great. |
| Agent | Hi I'm Ian. Sounds like you want to remove a feature from your package. Can you give me your account number? |
| Customer | XXXredactedXXX |
| Agent | Ok, got it. May I ask why you want to remove the feature? |
| Customer | It's too expensive. |
Each conversation topic has the metrics detailed below.
Volume
Identifies the total conversations about a topic that started and ended within a selected time period.
- Total - Shows the count of total conversations.
- Trend - Shows the volume trend for the selected date range.
- % of total - Shows the percentage of the total conversation count.
- % change - Shows the change in volume from the previous date range of the same length.
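The Volume sub-metrics reduce to simple arithmetic. A minimal sketch, with illustrative counts (real values come from the conversation data):

```python
# Sketch: the Volume sub-metrics for one topic.
# topic_count: conversations about the topic in the selected range
# all_topics_count: conversations across all topics in the range
# previous_count: the topic's count in the previous range of the same length

def volume_metrics(topic_count, all_topics_count, previous_count):
    pct_of_total = 100 * topic_count / all_topics_count
    pct_change = 100 * (topic_count - previous_count) / previous_count
    return {"total": topic_count,
            "% of total": round(pct_of_total, 1),
            "% change": round(pct_change, 1)}

print(volume_metrics(topic_count=120, all_topics_count=480, previous_count=100))
# {'total': 120, '% of total': 25.0, '% change': 20.0}
```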
Containment
The percentage of conversations that were managed by the virtual agent without an escalation to a human agent.
Agent Experience Score (AES)
A proprietary Key Performance Indicator that measures the user experience with the agent. It helps to identify topics that could be fully automated by the bot. The following signals are considered:
- Agent abandonment - The agent left during an active conversation.
- Long agent wait time - The customer waits too long to be transferred from the bot to a live agent.
- Long agent handle time - The conversation between the customer and the live agent runs too long.
- Long agent response time - The customer waits a long time for the live agent to reply to messages.
- Agent internal transfers - The customer was transferred from one agent to another more than once.
- Negative sentiment - Negative sentiment detected by an AI-based sentiment model of the conversation.
- Profanity - Profane words are present in the conversation.
Handle time
The average time (in minutes) between the first and last messages exchanged in a conversation between customer and an agent (virtual agent or live agent), for conversations in the topic.
- Total - Shows the average handle time of all conversations in the topic.
- Trend - Shows the handle time trend for the selected date range.
- % change - Shows the change in average handle time from the previous date range of the same length.
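Handle time, as defined above, is the span between the first and last messages of each conversation, averaged across the topic. A minimal sketch with made-up timestamps:

```python
from datetime import datetime

# Sketch: average handle time in minutes across a topic's conversations,
# measured from each conversation's first to last message.

def handle_time_minutes(conversations):
    times = []
    for msgs in conversations:
        first, last = msgs[0]["ts"], msgs[-1]["ts"]
        times.append((last - first).total_seconds() / 60)
    return sum(times) / len(times)

conversations = [
    [{"ts": datetime(2024, 1, 1, 9, 0)}, {"ts": datetime(2024, 1, 1, 9, 12)}],
    [{"ts": datetime(2024, 1, 1, 10, 0)}, {"ts": datetime(2024, 1, 1, 10, 8)}],
]
print(handle_time_minutes(conversations))  # 10.0
```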
Response time
The average time (in seconds) it takes an agent to respond to a customer message.
Sentiment
The type of sentiment (neutral, positive, or negative) for the conversations in the topic.
Handoffs
Average number of times a conversation was handed off to an alternate support channel such as another agent queue.
First found
The date the conversation topic was first identified in the conversation data.
NOTE Metrics on this page are calculated using all active conversations (not just conversations with a topic). Blacklisted responses are not included in any of the metrics in this section. Blacklisted intents are only included in bot repetition metrics.
Natural Language Understanding (NLU)
The percentage of total customer messages understood by the bot.
Virtual Agent (VA) Conversations Contained
The percentage of conversations where the customer did not request to escalate or get escalated to a live agent.
Negative Feedback Score
The percent of total feedback received by the bot that was negative (negative feedback / total feedback).
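The formula given above is a straightforward ratio; a one-function sketch (with a guard for the no-feedback case, which is an assumption on our part):

```python
# Sketch of the Negative Feedback Score: negative feedback / total feedback,
# as a percentage. Returning 0.0 when there is no feedback is an assumption.

def negative_feedback_score(negative, total):
    return 100 * negative / total if total else 0.0

print(negative_feedback_score(negative=15, total=60))  # 25.0
```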
Conversations
Number of two-way interactions between a virtual assistant and a customer.
Customer messages
The number of messages sent by the customer within a selected date range.
Virtual Agent (VA) Conversations Contained
A conversation between a customer and a bot is "virtual agent contained" when the customer did not request to speak to a human agent and was not handed off to one.
True Resolved
The percentage of all conversations in which the user received an end response and the virtual agent did not receive negative feedback or a false positive.
Received positive feedback
The percentage of “conversations contained with virtual agent requested feedback” that received positive feedback.
Received negative feedback
The percentage of “conversations contained with virtual agent requested feedback” that received negative feedback.
Conversations handed off
When the customer is transferred from a virtual agent to an alternate agent support channel.
Customer requested handoff
When a customer asks or taps a button to be transferred to an alternate agent support channel.
Automatic handoff after intent-based trigger
The percentage of handed off conversations where a customer intent is matched to a specified business rule and is automatically handed off to an alternate agent support channel.
Intents
Number of intents available for the virtual agent.
Number of utterances
The total number of utterances in the corpus.
Conversations with at least one end response
The percentage of all conversations where the customer received an end response.
Conversations that received positive feedback
The percentage of all conversations where the virtual agent received positive feedback from the customer.
Conversations that received negative feedback
The percentage of all conversations where the virtual agent received negative feedback from the customer.
Conversations with virtual agent repetition
The percentage of all conversations where the virtual agent sends the same message to the customer more than once in a conversation.
Conversations with customer messages not understood
The percentage of conversations where the classifier was unable to match the freeform customer message to an intent at a confidence level above the set production threshold.
Customer message understood
The percent of customer messages where the classifier can match the freeform customer message to an intent at a confidence level above the set production threshold. This includes situations where the system matched to an intent but was unable to provide an answer because a response had not been created in the system.
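Both "understood" and "not understood" hinge on whether the classifier's best intent match clears the production confidence threshold. A sketch of that decision; the threshold value and score format here are hypothetical:

```python
# Sketch: the threshold logic behind "understood" vs. "not understood".
# The 0.7 threshold and the score dictionary are illustrative assumptions.

PRODUCTION_THRESHOLD = 0.7  # hypothetical production threshold

def message_understood(intent_scores, threshold=PRODUCTION_THRESHOLD):
    """True if the top intent score clears the production threshold."""
    if not intent_scores:
        return False
    return max(intent_scores.values()) >= threshold

print(message_understood({"cancel_order": 0.91, "refund": 0.40}))  # True
print(message_understood({"cancel_order": 0.55, "refund": 0.40}))  # False
```

Note that, per the definition above, a message can be "understood" even when no response exists for the matched intent.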
False-positive rate
The percentage of freeform messages where the bot responded to a customer message but in fact misunderstood it, so the response it gave was incorrect.
Customer messages with did you mean (DYM)
The percentage of conversations where a clarification response was presented in the conversation.
Customer messages not understood
The percentage of freeform messages where the classifier was unable to match the freeform customer message to an intent at a confidence level above the set production threshold.
Candidates for new intents
The percentage of "customer messages not understood" that do not match an existing intent but are not considered "out of domain". These indicate opportunities to add new intents to the bot.
Candidates for existing intents
The percentage of "customer messages not understood" that match an existing intent, but at a lower confidence. This indicates opportunities to improve existing intents.
Out of domain
The percentage of "Customer messages not understood" where the customer message was deemed irrelevant to business or out-of-scope for a project.
Undertrained intents
The percentage of all intents with fewer than thirty utterances.
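The undertrained-intents metric is a simple share of intents below the thirty-utterance cutoff. A sketch with illustrative intent names and counts:

```python
# Sketch: undertrained intents are those with fewer than 30 utterances;
# the metric is their share of all intents. Data below is illustrative.

def undertrained_pct(utterances_per_intent, minimum=30):
    under = sum(1 for n in utterances_per_intent.values() if n < minimum)
    return 100 * under / len(utterances_per_intent)

print(undertrained_pct({"cancel": 45, "refund": 12, "upgrade": 30, "billing": 8}))
# 50.0  ("refund" and "billing" fall below 30 utterances)
```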
Similar intents
The proportion of intents that have similar utterances. This indicates training is not distinct enough between intents.
Similar utterances
Out of all utterances in the training set, the proportion that are similar. This is detected using a model. It indicates where training is not distinct enough, likely causing confusion between intents.
Intents with low quality utterances
The percentage of all intents in which utterances that are very different from each other are grouped in the same intent.
Priority (Responses)
Responses with the highest number of issues are classified as priority one, and they are displayed at the top of the table. Responses are ordered in the table by priority (Priority = handed off + negative feedback + bot repetition).
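The priority formula above is an additive issue count; responses with the largest sums surface first. A sketch of that ordering, with made-up response IDs and counts:

```python
# Sketch of response priority ordering:
# priority score = handed off + negative feedback + bot repetition,
# highest-scoring responses first. IDs and counts are illustrative.

def rank_responses(responses):
    return sorted(
        responses,
        key=lambda r: r["handed_off"] + r["negative_feedback"] + r["bot_repetition"],
        reverse=True,
    )

responses = [
    {"id": "r1", "handed_off": 2, "negative_feedback": 1, "bot_repetition": 0},
    {"id": "r2", "handed_off": 5, "negative_feedback": 3, "bot_repetition": 2},
]
print([r["id"] for r in rank_responses(responses)])  # ['r2', 'r1']
```

The intents table described later uses the same pattern with different signals (false positives + DYM candidates + bot repetition).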
Times presented (Responses)
Total number of times the response was served, % of total (times presented for response/sum of times presented for all responses).
Handed off
Total number of times the conversation was routed to a live agent after that response was presented to the user. Total number of handoffs, % of total (total handoffs/total times presented).
Bot repetitions
Total number of times the bot response was repeated more than once in a conversation, % of total (total bot repetition/total times presented).
Positive feedback
The total amount of positive feedback left by users in response to the specific bot response. Total number of positive feedback, % of total (total positive feedback/total times presented).
Negative feedback
The total amount of negative feedback left by users in response to the specific bot response. Total number of negative feedback, % of total (total negative feedback/total times presented).
Negative sentiment
This is based on the sentiment of the next user message after the bot response. Total number of negative sentiment, % of total (total negative sentiment/total times presented).
Language
The language of the bot response.
End response
A Yes/No flag indicating whether a bot response is an end solution or not.
Response ID
An ID that identifies the bot response.
Priority (Intents)
Intents with the highest number of issues are classified as priority one and displayed at the top of the table. Intents are ordered in the table in increasing order (Priority = false positives + DYM candidates + bot repetition).
Utterances
The number of utterances mapped to the intent.
Issue
The number of issues (false positives + DYM candidates + bot repetition) associated with the intent.
Times presented (Intents)
Total number of times the intent was served, % of total (times presented for specific intent/ sum number of times presented for all intents).
False positives
The percentage of user messages where, as determined by an AI model, the bot likely responded with an incorrect response. Total number of false positives, % of total (false positives/sum number of false positives).
DYM candidates (Did You Mean candidates)
The number of times that a clarification response was presented in the conversation. Total number of DYM candidates, % of total (DYM candidates/sum number of DYM candidates).
Bot repetitions
The number of times the bot sends the same message twice (or more) in a conversation. Total number of bot repetitions, % of total (bot repetitions/sum number of bot repetitions).
Training candidates
The percentage of "customer messages not understood" that match an existing intent, but at a lower confidence. This indicates opportunities to improve existing intents. Total number of training candidates, % of total (training candidates/sum number of training candidates).
Similar utterances
Out of all utterances in the training set, the proportion that are similar across intents. This is detected using a model. It indicates where training is not distinct enough, likely causing confusion between intents. Total number of similar utterances, % of total (number of similar utterances in the training set/ total number of utterances in the training set).
Low quality utterances
The percentage of intents whose utterances are too different from one another. This indicates that training for those intents is not specific enough. Total number of inconsistent utterances, % of total (number of intents with overly different utterances/total number of intents).
Missing channels
Number of channels where the intent was not used.
Last updated
The last date the intents were updated in Bot Analytics.