Call Scoring is designed to help teams improve call quality without adding more manual work. After every conversation, calls are automatically evaluated based on a scoring template. This guide explains how the feature works, how scores are calculated, and how supervisors and agents can use the results for faster, more consistent coaching.
User Level:
- Admins
- Supervisors
- Analysts
Want to try CloudTalk AI Analytics?
If you’re a CloudTalk customer, you can start a free 30-day trial of AI Analytics here. You can cancel anytime, no commitment required.
Overview
Call Scoring automatically evaluates each eligible call and provides fast, consistent feedback. The system uses a company-defined template to score key parts of a conversation (for example: Discovery & Needs Analysis, Value Proposition & Solution Fit, Objection Handling, etc.), adds short written explanations, and attaches transcript highlights as evidence.
This helps supervisors focus on calls that need a human review, while agents get quick coaching moments after every call.
Where to find it in CloudTalk
Go to Analytics → AI Analytics.
In the list, each call shows a Score badge along with Talk/Listen ratio, Sentiment, Topics, and more.
Click the three dots on the right of any row to open Call Details, then select the Call Score tab to view the full breakdown.
Inside the Call Score tab you’ll see:
Section scores with labels (e.g., Rapport Building, Qualification Discovery).
A short AI-written summary for each question explaining why the score was given.
Transcript extracts linked directly to the relevant moment in the call (click "Check transcription" to jump straight there).
The overall score at the top, based on your template.
💡 Tip: Use the Call Score Filter
The Call Score filter in AI Analytics lets you narrow results by score range (0–100). For example:
- 0–40 → Focus on low-scoring calls that may need coaching.
- 80–100 → Highlight best-practice calls with very high scores.
- 0–100 → Review all calls.
You can also combine Call Score with Sentiment, Topics, Agent, or Department filters to build targeted coaching queues and streamline reviews.
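To illustrate the idea behind a coaching queue, here is a minimal sketch of a score-range filter over call records. The record fields and function names are hypothetical, not a CloudTalk API:

```python
# Hypothetical call records mirroring the AI Analytics list view;
# the field names are illustrative, not a CloudTalk API.
calls = [
    {"agent": "Ana",  "score": 35, "sentiment": "Negative"},
    {"agent": "Ben",  "score": 88, "sentiment": "Positive"},
    {"agent": "Cara", "score": 72, "sentiment": "Neutral"},
]

def by_score(calls, low, high):
    """Keep calls whose score falls in the inclusive range [low, high]."""
    return [c for c in calls if low <= c["score"] <= high]

coaching_queue = by_score(calls, 0, 40)    # calls that may need coaching
best_practice  = by_score(calls, 80, 100)  # best-practice examples
print([c["agent"] for c in coaching_queue])  # -> ['Ana']
print([c["agent"] for c in best_practice])   # -> ['Ben']
```

In the product, the same narrowing is done with the Call Score filter combined with Sentiment, Topics, Agent, or Department.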
How scoring works
Template-driven
Each score is based on a template that defines the sections to evaluate (e.g., Opening & Framing, Discovery & Needs Analysis, Value Proposition & Solution Fit, Objection Handling, Closing & Next Steps, Tone & Empathy).
Inside each section, you add one or more questions.
Questions can be either Yes/No or a 5-point scale (Very Poor → Very Good).
For scale questions, Admins can add descriptions for each level to keep scoring criteria clear and consistent.
Weights and calculation
1. Equal weight for sections: Each section contributes the same percentage toward the overall score.
Example: 10 sections = each worth 10%.
Example: 5 sections = each worth 20%.
2. Equal weight for questions within a section:
If a section has one question, it carries the full section weight.
If it has two questions, each counts for half, and so on.
3. Section score: Calculated as the average of its questions, then rounded.
Example:
Section 1 (2 questions) → (25% + 100%) ÷ 2 = 62.5% → rounded to 63
Section 2 (2 questions) → (100% + 0%) ÷ 2 = 50%
4. Overall score: Calculated as the average of all section scores.
Example: (63% + 50%) ÷ 2 = 57%
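The calculation steps above can be sketched in code. This is an illustrative model of the documented arithmetic, not CloudTalk's actual implementation:

```python
# Illustrative sketch of the scoring arithmetic described above:
# question percentages are averaged per section, rounded half up,
# and section scores are then averaged into the overall score.

def round_half_up(x):
    # Python's built-in round() gives round(62.5) == 62 (banker's
    # rounding), so we round half up explicitly to match the
    # documented 62.5 -> 63 example.
    return int(x + 0.5)

def section_score(question_percentages):
    """Rounded average of a section's question percentages."""
    return round_half_up(sum(question_percentages) / len(question_percentages))

def overall_score(sections):
    """Rounded average of all section scores."""
    scores = [section_score(qs) for qs in sections]
    return round_half_up(sum(scores) / len(scores))

# Worked example from above:
print(section_score([25, 100]))              # Section 1 -> 63
print(section_score([100, 0]))               # Section 2 -> 50
print(overall_score([[25, 100], [100, 0]]))  # Overall -> 57
```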
Answer type mapping
Yes/No answers are scored as Yes = 100% and No = 0%.
5-point scale questions are mapped to percentage values:
Very Poor = 0%
Poor = 25%
Fair = 50%
Good = 75%
Very Good = 100%
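The two mappings can be written out as simple lookup tables (the variable names here are illustrative):

```python
# Answer-to-percentage mapping as documented above.
YES_NO = {"Yes": 100, "No": 0}
FIVE_POINT = {
    "Very Poor": 0,
    "Poor": 25,
    "Fair": 50,
    "Good": 75,
    "Very Good": 100,
}

print(YES_NO["Yes"], FIVE_POINT["Good"])  # -> 100 75
```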
Handling Not applicable (N/A)
The system automatically returns N/A if it cannot score a question (for example, no transcript evidence) or if a category has no questions. When this happens, the question is excluded from scoring, and the weights for the section and overall score are recalculated using only the applicable questions.
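The recalculation works like averaging over only the applicable questions. A minimal sketch, assuming N/A is represented as `None`:

```python
# Sketch of N/A handling: questions that could not be scored
# (modeled here as None) are dropped, and the remaining applicable
# questions share the section's weight equally.

def section_score_with_na(question_percentages):
    """Rounded average over applicable questions; None if all are N/A."""
    applicable = [p for p in question_percentages if p is not None]
    if not applicable:
        return None  # the whole section is N/A and is excluded too
    return int(sum(applicable) / len(applicable) + 0.5)  # round half up

# The middle question could not be scored, so only the other two count:
print(section_score_with_na([100, None, 50]))  # -> 75
print(section_score_with_na([None, None]))     # -> None
```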
What CloudTalk generates
For each section, the Call scoring feature provides:
A numeric score
A short explanation of why that score was given
Transcript excerpts as evidence
This ensures you can quickly see what worked well and where improvements are needed.
Set up the scoring template
Only Admins can create or edit the template. One template is supported at a time.
1. Open Dashboard → Settings → AI Conversation Intelligence.
2. Scroll to the Call Scoring section, then click Edit scoring template.
When you open the Call Scoring settings, the default template is displayed.
The configuration page is divided into two sections:
Questions (left panel): Shows the list of questions included in the scoring calculation. Each question is labeled with the section it belongs to.
Question Detail (right panel): Provides details for the selected question, including:
Question: What the AI is evaluating.
Score Type: The scale used for evaluation (Yes/No or 1–5 scale).
Rating Definition: Describes what each rating means in the context of the question.
Example of Yes/No score scale:
Example of 1-5 Scale:
3. Select a question from the list to edit its configuration. To add a new question, click the + button in the top-right corner of the Questions panel.
4. Save the template. It will be applied to all eligible calls by default.
Best practices for template design
Make behaviors observable: Avoid vague terms like “Great at discovery.” Instead, write a clear description of the behavior and add one or more concrete examples.
Description: "Agent confirms the customer’s issue in one sentence."
Example: "So, you’re saying the system isn’t saving your changes - is that correct?"
Avoid context-dependent questions: Only include the questions that can be evaluated from the call transcript. For example, avoid questions like “Did the agent follow the script?” since the script itself isn’t visible to the scoring system.
Limit the scope: Fewer, sharper sections are better than many broad ones. Aim for 4–7 sections, each containing focused, specific questions with a definition and examples to keep evaluation consistent.
Balance score types: Use Yes/No for must-have behaviors (e.g., compliance checks). Use the 5-point scale for quality judgments. For both types, provide clear descriptions, and for the 5-point scale, back up Very Poor and Very Good with examples.
Yes: "Agent clearly disclosed call recording at the start."
No: "Agent did not mention call recording at any point."
Define the ends clearly: Provide both a definition and an example of what Very Poor looks like and what Very Good looks like, so expectations are unambiguous.
Very Poor (1)
Definition: Agent ignored the objection or dismissed it without addressing the concern.
Example: Customer: "That seems expensive." Agent: "Anyway, let’s move on."
Very Good (5)
Definition: Agent acknowledged the concern empathetically and reframed it positively.
Example: "That’s a fair concern. Others felt the same, but they found the time saved made it worthwhile."
Use multiple examples where possible: Adding more than one example per rating (e.g., two or three sample phrases for Very Poor or Very Good) helps make expectations clearer and avoids ambiguity.
Plan for N/A: If a step may not apply to every call type, the system will automatically return N/A when it cannot score a question. You don’t need to configure anything for this to work.
Review results regularly: After two weeks of usage, revisit your template. Check if the examples and definitions are driving useful coaching outcomes. Adjust where definitions are vague or examples don’t resonate with agents.
FAQ
Does the AI score affect call routing or KPIs automatically?
No. Scores are for coaching and analytics.
When does the score appear?
Call Score is calculated together with other Conversation Intelligence metrics, after the call has ended and once the transcript is available.
Which roles can see scores?
Admins, Supervisors, and Analysts can see scoring data in the Dashboard.
If scorecard questions change after scoring has been applied, will calls be rescored?
No. Changes to the scorecard apply only to future calls. Calls scored before the change are not recalculated.
Can I create custom questions?
Yes. You can add a question from the predefined library and then customize it. You can edit the question text, change the answer type (Yes/No or 5-point scale), and update the scoring descriptions. The only part that cannot be customized is the category, which is fixed.