Configure predictions

With predictions, you can configure Calabrio ONE to estimate a contact’s net promoter score or evaluation score. Based on the predicted score, you can then create an automated workflow that sends the contact to an evaluator. Predictions help your evaluators focus on contacts that are most likely to result in agent coaching.

The predictive engine combines the scores from contacts over the past six months with Analytics data and creates a new model each week. The default day to create a new model is Sunday, but a system administrator can select a different day.

Prerequisites

This feature is available to Calabrio GovSuite users.

  • You have an Analytics license.
  • You have scope over the entire agent group that the prediction is for, not just over teams within the group.
  • For predictive net promoter scores, you have at least 1,000 contacts with associated net promoter scores.
  • For predictive evaluations, you have at least 1,000 contacts evaluated with the same form.

Page location

Varies. See the procedures below for more information.

Procedures

There are two steps to configuring the predictive features in Calabrio ONE: creating a model and applying the model. To create the model, you must first supply the predictive engine with scores for existing contacts.

Predictive Net Promoter Scores

The Predictive Net Promoter Score model uses customer contacts, agent performance, speech hits, and other factors to determine a likely net promoter score for the contact. The scores that the model creates appear in the Predictive Net Promoter Score field on the Interactions page and in the Predictive Net Promoter Score dashboard in Data Explorer. You can also add them to dashboards in both Data Explorer and Calabrio Data Management.

Calabrio ONE creates predictive net promoter scores for audio contacts only. Other kinds of contacts do not support predictive net promoter scores.

Configure the Predictive Net Promoter Score model

  1. On the Metadata Manager page (Application Management > QM > QM Configuration > Metadata Manager), create a new metadata field. See Manage metadata fields for more information.

    IMPORTANT    Enter “Net Promoter Score” in the Metadata Label field. This label is not case sensitive.

  2. Configure the Net Promoter Score metadata field that you just created with the Net Promoter Score information from your customers (see the data-preparation sketch after this procedure).

    NOTE   See Add post-call surveys to contacts to integrate net promoter scores captured in post-call surveys.

    The predictive engine will pull the necessary information and create a model on the following Sunday.
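
How the Net Promoter Score values reach that metadata field depends on your survey provider and integration. As a rough illustration only, the sketch below reduces a hypothetical export of post-call survey responses to one 0–10 score per contact; the file names, column names, and question label are assumptions for this sketch, not part of Calabrio ONE.

import csv

# Hypothetical survey export: one row per survey answer, with columns
# contact_id, question, and answer. These names are assumptions for this
# sketch, not a Calabrio ONE or survey-provider format.
nps_by_contact = {}

with open("surveys.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["question"] != "likelihood_to_recommend":
            continue
        score = int(row["answer"])
        if 0 <= score <= 10:  # keep only valid 0-10 responses
            nps_by_contact[row["contact_id"]] = score

# One score per contact, ready to map to the "Net Promoter Score"
# metadata field through your survey integration or import process.
with open("nps_metadata.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["contact_id", "net_promoter_score"])
    for contact_id, score in sorted(nps_by_contact.items()):
        writer.writerow([contact_id, score])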

Apply the Predictive Net Promoter Score model

  • On the Task Manager page (Application Management > Analytics > Task Manager), create a new task. Select Predictive Net Promoter Score from the Type drop-down list. See Create Analytics tasks for more information.

Predictive Evaluation Scores

The Predictive Evaluation Score model uses a variety of factors to determine a likely evaluation score (on a specific evaluation form) for the contact. Predicted scores appear in the Predictive Evaluation Score field on the Interactions page and in the Predictive Evaluations dashboard in Data Explorer. You can also add them to dashboards in both Data Explorer and Calabrio Data Management.

Calabrio ONE creates predictive evaluation scores for audio contacts only. Other kinds of contacts do not support predictive evaluation scores.

Each evaluation form needs its own model. To create models for multiple forms, follow the steps below for each form.

Configure the Predictive Evaluation Score model

  1. On the Evaluation Form Management page (Application Management > QM > QM Configuration > Evaluation Form Manager), create an active evaluation form. See Manage evaluation forms and Advice for evaluation forms for more information.
  2. On the Phrase Manager page (Application Management > Analytics > Phrase Manager), create phrases and categories that closely relate to the questions that the evaluation form asks. See Create and manage phrases and phrase categories for more information.

    EXAMPLE   The evaluation form asks, “Did the agent properly greet the caller?” You have an Analytics phrase category called “Greeting” with phrases like “thank you for calling,” “my name is,” and “how may I help you today.”

  3. Use the evaluation form to manually evaluate at least 1,000 contacts.
  4. On the Workflow Administration page (Application Management > QM > QM Contact Flows), create a workflow that applies the form to incoming calls.

    IMPORTANT   The workflow must apply the evaluation form either before or at the same time as the contact audio is uploaded to Calabrio ONE. Contacts do not receive predictive evaluation scores if the evaluation form is applied after the audio is uploaded. See Automate QM workflows for more information.

Apply the Predictive Evaluation Score model

  • On the Task Manager page (Application Management > Analytics > Task Manager), create a new task. Select Predictive Evaluation Score from the Type drop-down list. Select the Ongoing check box. See Create Analytics tasks for more information.

Best practices

General

Predictive scores are intended to supplement (not replace) manually scored contacts.

While there is no ideal number of scores that will create a perfect model, maintaining a large number of scored contacts (at least 1,000) is necessary for the model to become more accurate over time. The predictive model continues to learn as it receives more data, but it does not use data that is older than six months. To keep the model accurate, you must continue giving it data, either by pulling in more net promoter scores or by continuing to use the evaluation form for manual evaluations. If the number of available scores falls below 50, the model will stop working.
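
If you can export a list of scored contacts (for example, from Data Explorer or Calabrio Data Management), a quick check like the following sketch can confirm that you are staying above those thresholds. The file name, column name, and date format are assumptions for illustration only.

import csv
from datetime import datetime, timedelta

# Hypothetical export: one row per scored contact with the date the score
# was captured, in a column named "scored_date" (ISO format). Both names
# are assumptions for this sketch.
cutoff = datetime.now() - timedelta(days=182)  # roughly six months

recent = 0
with open("scored_contacts.csv", newline="") as f:
    for row in csv.DictReader(f):
        if datetime.fromisoformat(row["scored_date"]) >= cutoff:
            recent += 1

if recent < 50:
    print(f"Only {recent} scores in the last six months: the model stops working.")
elif recent < 1000:
    print(f"{recent} scores in the last six months: below the recommended 1,000.")
else:
    print(f"{recent} scores in the last six months: enough data for the model.")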

You can create an ad hoc Analytics task using the predictive features. However, the predictive models use data from only the past six months. If you run an ad hoc task on contacts older than six months, you are less likely to get accurate data.

For best results, the Analytics retention time (set on the Analytics Configuration page) should be at least six months. See Configure Analytics.

Tune your phrases and categories periodically. Tuning helps ensure your categories and phrases are relevant and return the calls you need for analysis. Follow tuning best practices.

Predictive Net Promoter Scores

WFM data greatly improves the accuracy of the Predictive Net Promoter Score model.

Predictive Evaluation Scores

Create phrases and categories using the language that agents and callers are most likely to use.

Create phrases and categories that cover a variety of possible phrases that agents could say. Agents typically do not repeat scripts verbatim. Using multiple phrases helps increase the probability that Calabrio ONE will return a phrase hit.
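
The sketch below is a simplified illustration of that point, not a representation of how the Analytics phrase engine works: a category with several greeting variants matches more transcripts than the scripted phrase alone. The phrases and transcripts are invented.

# Invented greeting phrases and transcripts, for illustration only.
greeting_phrases = [
    "thank you for calling",
    "my name is",
    "how may i help you today",
]

transcripts = [
    "thanks for calling acme, my name is dana",
    "hi there, how may i help you today",
    "thank you for calling acme billing, this is sam",
]

def has_hit(transcript, phrases):
    """Return True if any phrase in the category appears in the transcript."""
    return any(phrase in transcript for phrase in phrases)

# The scripted phrase alone matches one transcript; the category matches all three.
single = sum(has_hit(t, ["thank you for calling"]) for t in transcripts)
category = sum(has_hit(t, greeting_phrases) for t in transcripts)
print(f"Single phrase: {single} of {len(transcripts)} transcripts")
print(f"Full category: {category} of {len(transcripts)} transcripts")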

In addition to creating phrases and categories related to the questions the evaluation form asks, consider creating phrases and categories for the answers that callers are likely to provide. For example, in answer to the question, “Have you had any accidents in the past five years?” callers’ replies are likely to be something like, “No, I haven’t had any accidents” or “Yeah, I’ve had one accident.” If the agent deviates significantly from the script, the caller’s answer might help Calabrio ONE to recognize a phrase hit.

If you need to revise the evaluation form that Calabrio ONE is using to create a Predictive Evaluation Score model, create a new form instead of editing the existing form. If the evaluation form is changed and the form’s ID remains the same, the model will not update to account for the change. As a result, the predicted scores will be less accurate. See Manage evaluation forms.

Calibrate your evaluators using the evaluation forms that are used for predictive scores. The Predictive Evaluation Score model will become more accurate over time if the data that it is based on is accurate and consistent. For example, if there is a 10% to 15% difference in your manual evaluation scores, your predictive evaluation scores will have an even larger difference. See Calibrate evaluators.
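
To see why that matters, one simple way to quantify evaluator consistency is to compare the scores given to the same calibration contact, as in the sketch below. The scores are invented, and the spread calculation is just one basic measure.

# Invented scores from five evaluators scoring the same calibration contact
# with the same form (percent scores), for illustration only.
scores = [78, 82, 90, 85, 75]

average = sum(scores) / len(scores)
spread = max(scores) - min(scores)

print(f"Average manual score: {average:.1f}%")
print(f"Spread between evaluators: {spread} points")
# A 15-point spread among manual scores suggests the predictive scores
# trained on them will vary at least that much, so keep calibrating until
# the spread is within your organization's agreed tolerance.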

In many contact centers, it’s common for a contact to earn either a very high score or a very low score. However, a lack of mid-range scores can result in less accurate predictions because the model doesn’t have enough information to learn what a mid-range contact looks like. Coach your evaluators to give mid-range scores whenever they are appropriate instead of scoring a contact either very high or very low.

For a contact to receive a predictive evaluation score, it must be processed with a QM workflow that assigns an evaluation form either before or at the same time as it uploads the audio. If a contact’s audio is uploaded to Calabrio ONE before the contact receives an evaluation form, you must use ad hoc Predictive Evaluation Score tasks to generate a predictive score for this contact. See Create Analytics tasks.

Differences between predictive scores and manual scores

If you see significant differences between predictive scores and scores given by human evaluators, the cause might be one of two issues.

Differences among human evaluators

Because the Predictive Evaluation Score model uses scores from human evaluators to learn over time, inconsistencies in scores among evaluators can confuse the model. To correct this inconsistency, we recommend the following:

  • As an organization, decide what amount of variation among human evaluators is acceptable. Keep in mind that the Predictive Evaluation Score model will vary more.
  • Calibrate your evaluators regularly to maintain consistency in scoring.
  • Check to see that categories and phrases align with evaluation questions. Look for commonly occurring false positives or negatives and adjust as needed.

Differences in calls evaluated

Calabrio ONE creates a predictive evaluation score for all calls that are associated with the evaluation form(s) used to create the model(s). If your human evaluators manually evaluate specific kinds of calls instead of random calls, you might see differences in scores because the two groups of calls being compared are different. In other words, you’re comparing apples and oranges. Consider these examples:

  • You use workflows to select calls with long or short handle times and calls handled by new agents for manual evaluation. These are all different from average calls and are scored differently.
  • Your evaluators do not evaluate very long or very short calls. These calls would have very different scores from average calls.
  • When there are many calls in the queue, team leaders or supervisors handle calls to reduce hold time. Evaluators do not evaluate calls handled by team leaders or supervisors.
  • You recently retrained your agents or your evaluators. Manual scores are different right away in response to the training, but predictive scores might take some time to catch up. To see the impact of the retraining, adjust the date range for the predictive scores to match the date when the retraining took effect.

To accurately compare manual and predictive scores, make sure you are comparing scores for similar dates and call types.
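
If you export both manual and predictive scores, a like-for-like comparison can be as simple as the sketch below, which filters to one call type and one date range before comparing averages. The file layout, column names, and call type are assumptions for illustration, not a Calabrio ONE export format.

import csv
from datetime import date

# Hypothetical export: one row per contact with contact_date (ISO),
# call_type, predictive_score, and manual_score (blank if the contact
# was not manually evaluated). All names are assumptions for this sketch.
start, end = date(2024, 1, 1), date(2024, 3, 31)

manual, predictive = [], []
with open("scores.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["call_type"] != "billing":
            continue
        if not (start <= date.fromisoformat(row["contact_date"]) <= end):
            continue
        predictive.append(float(row["predictive_score"]))
        if row["manual_score"]:  # compare only contacts that were manually evaluated
            manual.append(float(row["manual_score"]))

if manual and predictive:
    print(f"Manual average:     {sum(manual) / len(manual):.1f}")
    print(f"Predictive average: {sum(predictive) / len(predictive):.1f}")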

Related topics