Configure predictions

With predictions, you can configure Calabrio ONE to estimate a contact’s net promoter score or evaluation score. Based on the predicted score, you can then create an automated workflow that sends the contact to an evaluator. Predictions help your evaluators focus on contacts that are most likely to result in agent coaching.

The predictive engine combines the scores from contacts over the past six months with Analytics data and creates a new model each week. The default day to create a new model is Sunday, but a system administrator can select a different day.

Prerequisites

This feature is available to Calabrio GovSuite users.

  • Your organization has an Analytics Essentials or Analytics Enterprise license.
  • You have the Administer Predictive Analytics permission.
  • You have scope over the agent group that the prediction is for. You must have scope over the group, not just teams within the group.
  • For predictive net promoter scores, you have at least 1,000 contacts with associated net promoter scores.
  • For predictive evaluations, you have at least 1,000 contacts evaluated with the same form.

Page location

Varies. See the procedures below for more information.

Procedures

There are two steps to configuring the predictive features in Calabrio ONE: creating a model and applying the model. To create the model, you must first supply the predictive engine with scores for existing contacts.

Predictive Net Promoter Scores

The Predictive Net Promoter Score model uses customer contacts, agent performance, speech hits, and other factors to determine a likely net promoter score for the contact. The scores that the model creates appear in the Predictive Net Promoter Score field on the Interactions page and in the Predictive Net Promoter Score dashboard in Data Explorer. You can also add them to dashboards in both Data Explorer and Calabrio Data Management.

Calabrio ONE creates predictive net promoter scores for audio contacts only. Other kinds of contacts do not support predictive net promoter scores.

Configure the Predictive Net Promoter Score model

  1. On the Metadata Manager page (Application Management > QM > QM Configuration > Metadata Manager), create a new metadata field. See Manage custom metadata fields for more information.

    IMPORTANT   Enter “Net Promoter Score” in the Metadata Label field. This label is not case sensitive.

  2. Configure the Net Promoter Score metadata field that you just created with the Net Promoter Score information from your customers.

    NOTE   See Add post-call surveys to contacts to integrate net promoter scores captured in post-call surveys.

    The predictive engine will pull the necessary information and create a model on the following Sunday.

Apply the Predictive Net Promoter Score model

  • On the Task Manager page (Application Management > Analytics > Task Manager), create a new task. Select Predictive Net Promoter Score from the Type drop-down list. See Create Analytics tasks for more information.

Predictive Evaluation Scores

The Predictive Evaluation Score model uses a variety of factors to determine a likely evaluation score (on a specific evaluation form) for the contact. Predicted scores appear in the Predictive Evaluation Score field on the Interactions page and in the Predictive Evaluations dashboard in Data Explorer. You can also add them to dashboards in both Data Explorer and Calabrio Data Management.

Calabrio ONE creates predictive evaluation scores for audio contacts only. Other kinds of contacts do not support predictive evaluation scores.

Each evaluation form needs its own model. To create models for multiple forms, follow the steps below for each form.

Configure the Predictive Evaluation Score model

  1. On the Evaluation Form Management page (Application Management > QM > QM Configuration > Evaluation Form Manager), create an active evaluation form. See Manage evaluation forms and Advice for evaluation forms for more information.
  2. On the Phrases page (Application Management > Analytics > Business Signal > Phrases), create phrases and categories that closely relate to the questions that the evaluation form asks. See Create and manage phrases and phrase categories for more information.

    EXAMPLE   The evaluation form asks, “Did the agent properly greet the caller?” You have an Analytics phrase category called “Greeting” with phrases like “thank you for calling,” “my name is,” and “how may I help you today.”

  3. Use the evaluation form to manually evaluate at least 1,000 contacts.
  4. On the Workflow Administration page (Application Management > QM > QM Contact Flows), create a workflow that applies the form to incoming calls.

    IMPORTANT   The workflow must apply the evaluation form either before or at the same time as the contact audio is uploaded to Calabrio ONE. Contacts do not receive predictive evaluation scores if the evaluation form is applied after the audio is uploaded. See Automate QM workflows for more information.

Apply the Predictive Evaluation Score model

  • On the Task Manager page (Application Management > Analytics > Task Manager), create a new task. Select Predictive Evaluation Score from the Type drop-down list. Select the Ongoing check box. See Create Analytics tasks for more information.

Best practices

General

Predictive scores are intended to supplement (not replace) manually scored contacts.

While there is no ideal number of scores that will create a perfect model, maintaining a large number of scored contacts (at least 1,000) is necessary for the model to become more accurate over time. The predictive model continues to learn as it receives more data, but it does not use data that is older than six months. To keep the model accurate, you must continue giving it data, either by pulling in more net promoter scores or by continuing to use the evaluation form for manual evaluations. If the number of available scores falls below 50, the model will stop working.
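
The thresholds in this paragraph (a six-month data window, at least 1,000 scores, and a floor of 50 scores) can be sketched as a simple eligibility check. The following Python is purely illustrative — the function names, record layout, and `scored_at` field are invented for this sketch and are not part of Calabrio ONE:

```python
from datetime import datetime, timedelta

# Thresholds as described above; names and record layout are illustrative.
MIN_SCORES_TO_TRAIN = 1_000            # scored contacts needed for a model
MIN_SCORES_TO_RUN = 50                 # below this, the model stops working
TRAINING_WINDOW = timedelta(days=182)  # roughly six months

def usable_scores(scored_contacts, now):
    """Keep only scores that fall inside the six-month training window."""
    return [c for c in scored_contacts
            if now - c["scored_at"] <= TRAINING_WINDOW]

def model_status(scored_contacts, now):
    """Classify the model's data supply based on the in-window score count."""
    count = len(usable_scores(scored_contacts, now))
    if count < MIN_SCORES_TO_RUN:
        return "stopped"               # too few scores; model stops working
    if count < MIN_SCORES_TO_TRAIN:
        return "below recommended minimum"
    return "active"
```

Note that in this sketch, scores older than six months are simply ignored, which is why a model can stall even when the total number of historical scores is large.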

You can create an ad hoc Analytics task using the predictive features. However, the predictive models use data from only the past six months. If you run an ad hoc task on contacts older than six months, you are less likely to get accurate data.

For best results, the Analytics retention time (set on the Analytics Configuration page) should be at least six months. See Configure Analytics.

Tune your phrases and categories periodically. Tuning helps ensure your categories and phrases are relevant and return the calls you need for analysis. Follow tuning best practices.

Predictive Net Promoter Scores

WFM data greatly improves the accuracy of the Predictive Net Promoter Score model.

Predictive evaluations

Create phrases and categories using the language that agents and callers are most likely to use.

Create phrases and categories that cover a variety of possible phrases that agents could say. Agents typically do not repeat scripts verbatim. Using multiple phrases helps increase the probability that Calabrio ONE will return a phrase hit.

In addition to creating phrases and categories related to the questions the evaluation form asks, consider creating phrases and categories for the answers that callers are likely to provide. For example, in answer to the question, “Have you had any accidents in the past five years?” callers’ replies are likely to be something like, “No, I haven’t had any accidents” or “Yeah, I’ve had one accident.” If the agent deviates significantly from the script, the caller’s answer might help Calabrio ONE to recognize a phrase hit.

If you need to revise the evaluation form that Calabrio ONE is using to create a Predictive Evaluation Score model, create a new form instead of editing the existing form. If the evaluation form is changed and the form’s ID remains the same, the model will not update to account for the change. As a result, the predicted scores will be less accurate. See Manage evaluation forms.

Calibrate your evaluators using the evaluation forms that are used for predictive scores. The Predictive Evaluation Score model will become more accurate over time if the data that it is based on is accurate and consistent. For example, if there is a 10% to 15% difference in your manual evaluation scores, your predictive evaluation scores will have an even larger difference. See Calibrate evaluators.

In many contact centers, it’s common for a contact to earn either a very high score or a very low score. However, a lack of mid-range scores can result in less accurate predictions because the model doesn’t have enough information to learn what a mid-range contact looks like. Coach your evaluators to give mid-range scores whenever they are appropriate instead of scoring a contact either very high or very low.

For a contact to receive a predictive evaluation score, it must be processed with a QM workflow that assigns an evaluation form either before or at the same time as it uploads the audio. If a contact’s audio is uploaded to Calabrio ONE before the contact receives an evaluation form, you must use ad hoc Predictive Evaluation Score tasks to generate predictive scores for this contact. See Create Analytics tasks.

Differences between predictive scores and manual scores

If you see significant differences between predictive scores and scores given by human evaluators, the cause might be one of two issues.

Differences among human evaluators

Because the Predictive Evaluation Score model uses scores from human evaluators to learn over time, inconsistencies in scores among evaluators can confuse the model. To correct this inconsistency, we recommend the following:

  • As an organization, decide what amount of variation among human evaluators is acceptable. Keep in mind that the Predictive Evaluation model will vary more.
  • Calibrate your evaluators regularly to maintain consistency in scoring.
  • Check to see that categories and phrases align with evaluation questions. Look for commonly occurring false positives or negatives and adjust as needed.

Differences in calls evaluated

Calabrio ONE creates a predictive evaluation score for all calls that are associated with the evaluation form(s) used to create the model(s). If your human evaluators manually evaluate specific kinds of calls instead of random calls, you might see differences in scores because the two groups of calls being compared are different. In other words, you’re comparing apples and oranges. Consider these examples:

  • You use workflows to select calls with long or short handle times and calls handled by new agents for manual evaluation. These are all different from average calls and are scored differently.
  • Your evaluators do not evaluate very long or very short calls. These calls would have very different scores from average calls.
  • When there are many calls in the queue, team leaders or supervisors handle calls to reduce hold time. Evaluators do not evaluate calls handled by team leaders or supervisors.
  • You recently retrained your agents or your evaluators. Manual scores are different right away in response to the training, but predictive scores might take some time to catch up. To see the impact of the retraining, adjust the date range for the predictive scores to match the date when the retraining took effect.

To accurately compare manual and predictive scores, make sure you are comparing scores for similar dates and call types.
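
One way to picture a like-for-like comparison: average each score set over the same date window and call type before looking at the gap. This is a hypothetical sketch of the idea; the record fields (`date`, `call_type`, `score`) are invented for illustration and are not a Calabrio ONE API:

```python
def mean_score(records, start, end, call_type):
    """Average the scores for one call type within a date window.

    Records are plain dicts with illustrative 'date' (ISO string),
    'call_type', and 'score' fields.
    """
    matched = [r["score"] for r in records
               if start <= r["date"] <= end and r["call_type"] == call_type]
    return sum(matched) / len(matched) if matched else None

# Compare manual and predictive averages over the SAME window and call type.
manual = [{"date": "2024-05-01", "call_type": "billing", "score": 80},
          {"date": "2024-05-02", "call_type": "billing", "score": 90}]
predicted = [{"date": "2024-05-01", "call_type": "billing", "score": 84},
             {"date": "2024-05-02", "call_type": "billing", "score": 88}]
gap = (mean_score(manual, "2024-05-01", "2024-05-31", "billing")
       - mean_score(predicted, "2024-05-01", "2024-05-31", "billing"))
# gap is -1.0 here: the two averages are close once the samples match
```

Comparing unmatched samples (for example, manually scored escalations against predictions for all calls) would produce a gap that reflects the sampling difference, not model accuracy.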

Related topics