
AI Readiness Scorecard (DDN3-J05)

Description

This job aid presents a scorecard assessment tool and an interpretive guide that can be used to determine whether a specific problem is fit for an AI-based solution.

Published: March 17, 2026
Type: Job aid

AI Readiness Scorecard

How to use this tool

This is a tool to help you assess whether a specific problem or friction point is a strong candidate for an artificial intelligence (AI)-based solution. You will complete the following five sections of the scorecard for each friction point you identified in your process map.

AI refers to a broad range of technologies that can perform tasks typically associated with the cognitive functions of humans, such as recognition, learning and logical reasoning.

An AI-based solution leverages AI to use the input it receives to generate outputs such as predictions, content, recommendations or decisions. AI systems rely on data to function. Examples include:

  • automation: AI relying on set workflows, rules and fixed logic to execute repetitive tasks efficiently, thereby reducing errors and streamlining processes
  • detection and prediction: the use of learned patterns in data to detect or predict outcomes of interest
  • generative AI: AI, often based on large language models, that generates new content such as text, images, audio, code or other forms of data in response to prompts
Who is this for?

This tool is designed for teams in the early stages of exploring an AI-based solution. It is for anyone who wants to bring a structured, evidence-based approach to their initial discovery and planning.

Why should you use it?

Many AI projects fail not because the technology is wrong, but because it is applied to the wrong problem. This scorecard helps you de-risk your project by:

  • moving beyond a vague "symptom" to a clear, data-backed diagnosis
  • ensuring that AI is the right tool for the job, not just a solution in search of a problem
  • building a powerful, evidence-based business case to gain support from leadership
What is the objective?

By the end of this assessment, you will have a clear profile of your AI opportunity, allowing you to make a strategic decision on whether to proceed, pivot or pause. The output is a completed five-part scorecard providing a holistic view of the task's fit, the problem's severity, and your readiness to use an AI solution.

When should it be done?

This scorecard should be completed during the discovery phase of your project. It is best used after you have a basic understanding of the problem you want to solve, but before you have committed to a specific solution and conducted a full technical assessment with IT.

How do you use it?

The process is simple:

  1. Focus on a single, specific problem. Take The Design Process: Understanding the Problem (DDN237) to learn more about problem framing and assessing the viability and feasibility of a given solution. If your AI solution will perform multiple distinct tasks, please complete a separate scorecard for each one. Individual scorecards will provide a clearer assessment.
  2. Complete each of the five sections on the following pages.
  3. For each criterion, provide an honest assessment using the scoring guide in each section.
  4. Use the final scores and the interpretation guide to facilitate a strategic discussion with your team.

A note on policy and ethical guardrails:

This scorecard is a tool for assessing operational and technical opportunity. It is not a substitute for a formal risk assessment, privacy impact assessment (PIA), or ethical review. All AI projects, especially those involving automated decisions, must comply with the Treasury Board's Directive on Automated Decision-Making, and use of generative AI should be in line with the Guide on the use of generative artificial intelligence. This scorecard should be used in conjunction with, not in place of, official policy guidance.

Section A: The Nature of the Task (The "AI Fit" Score)

This section assesses whether the task itself is a good fit for current AI capabilities. For each criterion, please enter a 0 for No or a 1 for Yes.

Scale: 0 = No; 1 = Yes

1. Is the task highly repetitive? Example repetitive tasks suitable for AI:

  • Recommendation systems
  • Anomaly detection
  • Forecasting

Example repetitive tasks suitable for generative AI:

  • Generating new reports
  • Creating social media content
  • Creating summaries

2. Is the task data-intensive? Does it require analyzing large volumes of text, numbers, or images?

3. Does the task rely on finding patterns in data? For example, screening resumes or identifying trends

4. Is the task about generating standardized or templated content? For example, drafting first versions of job descriptions or emails

5. Have you already explored non-AI solutions, and found they are not sufficient?

Subtotal A (the AI fit score): __ / 5


Section B: The Impact of the Problem (The "Pain Point" Score)

This section assesses the severity of the problem. Wherever possible, collect data on each criterion rather than relying on assumptions. For each criterion, please rate the impact on a scale from 1 (very low impact) to 5 (very high impact).

Scale: 1 = very low impact; 5 = very high impact

6. What is the resource impact of this friction point? For example, time, financial, physical

7. What is the psychological impact of this friction point on employees? For example, stress or frustration

8. How often does this friction point lead to errors? What is the error rate in terms of rework?

9. What is the impact of this friction point on the organization's goals? What is the strategic cost of doing nothing?

Subtotal B (the pain point score): __ / 20


Section C: Process Quality ("Don't Automate Bad Processes")

This section assesses whether the underlying process is stable and well defined enough to support an AI intervention. For each criterion, please enter a 0 for No or a 1 for Yes.

Scale: 0 = No; 1 = Yes

10. Is the current process well defined and stable? In other words, not constantly changing or completely ad hoc

11. Are the desired outcomes of the process clear and agreed upon? Do we know what "good" looks like?

12. Has the process already been simplified as much as possible?

Subtotal C (the process quality score): __ / 3


Section D: Data Quality ("Garbage In, Garbage Out")

This section assesses whether the data required for an AI tool is readily available and of sufficient quality. For each criterion, please enter a 0 for No or a 1 for Yes.

Scale: 0 = No; 1 = Yes

13. Do we have access to the right data needed for the AI to learn or operate? For example, historical records, relevant documents that are fit for purpose

14. Is the available data of sufficient quality? Is the data accurate, complete, relatively clean and up to date?

15. Is the data structured and in a usable format? For example, in a database with appropriate governance, in a machine-readable format

Subtotal D (the data quality score): __ / 3


Section E: Compliance and Ethics

This section is a preliminary check to identify mandatory policy requirements. It is not a substitute for a formal assessment. A "Yes" to any of the questions in this section indicates that you must engage with the appropriate experts in your department.

For each compliance question below, the action required if you answer "Yes" follows the question.

16. Does the AI tool process any personally identifiable information?

The Privacy Act defines personal information as information about an identifiable individual that is recorded in any form, such as names, employee numbers, performance ratings, financial information, social insurance numbers, mailing addresses, citizenship statuses or biometrics.

If "Yes," a privacy impact assessment (PIA) is mandatory. This is a key project stream that should run in parallel with development. Your first step is to contact your department's access to information and privacy (ATIP) coordinator.

17. Does the AI tool assist or replace a human in making an administrative decision about or impacting an individual?

For example, screening resumes, determining eligibility, assessing performance, automating approvals, facial recognition, generating a risk score

If "Yes," an algorithmic impact assessment (AIA) is mandatory. This assessment must be completed before the system is deployed.

18. Does the data used to train or operate the AI carry legal, compliance or accountability implications if errors occur?

For example, historical biases in training data, lack of data on marginalized groups

If "Yes," this is a significant ethical risk. This issue must be documented and a mitigation plan developed as a core part of your project's risk register and any formal PIA or AIA.


Guide to interpreting your AI readiness scorecard

This guide will help you interpret the results from your completed sections. Use it to facilitate a strategic discussion with your team and decide on the most appropriate next steps for your project.

Step 1: Calculate your subtotal scores

First, calculate the subtotal for each of the four scored sections. Section E is a compliance check and is not scored.

Record your subtotal for each section:

  • Section A: AI fit score: __ / 5
  • Section B: Pain point score: __ / 20
  • Section C: Process quality score: __ / 3
  • Section D: Data quality score: __ / 3

Step 2: Interpret each score individually

A: AI fit score (out of 5)

  • 4–5 (High): Proceed. The task is an appropriate fit for an AI solution.
  • 2–3 (Moderate): Pause. The task is a partial fit; a combined human and AI approach may be needed.
  • 0–1 (Low): Pivot. The task is not a good fit for AI; consider non-AI solutions.

B: Pain point score (out of 20)

  • 14–20 (High): The problem is severe and justifies a significant investment.
  • 8–13 (Moderate): The problem is a known inconvenience but may not be a top priority.
  • 0–7 (Low): The problem's impact is minimal.

C and D: Process quality and data quality scores (combined out of 6)

  • 5–6 (High): Proceed. Your project has a solid foundation and is ready to advance.
  • 3–4 (Moderate): Pause. Some foundational work (for example, process cleanup) is needed.
  • 0–2 (Low): Pivot. Significant foundational work is required before an AI project can succeed.
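The score bands described above amount to simple threshold checks. As a rough illustration only (these are hypothetical helper functions, not part of the official scorecard), the Step 2 thresholds could be expressed as:

```python
# Hypothetical helpers illustrating the Step 2 score bands; not part of the
# official tool. Thresholds are taken directly from the interpretation guide.

def band_ai_fit(score_a):
    """Band the Section A subtotal (out of 5)."""
    if not 0 <= score_a <= 5:
        raise ValueError("AI fit score must be between 0 and 5")
    if score_a >= 4:
        return "High"      # Proceed: appropriate fit for an AI solution
    if score_a >= 2:
        return "Moderate"  # Pause: partial fit; combined human-AI approach
    return "Low"           # Pivot: consider non-AI solutions

def band_pain_point(score_b):
    """Band the Section B subtotal (out of 20)."""
    if not 0 <= score_b <= 20:
        raise ValueError("Pain point score must be between 0 and 20")
    if score_b >= 14:
        return "High"      # Severe problem; justifies significant investment
    if score_b >= 8:
        return "Moderate"  # Known inconvenience; may not be a top priority
    return "Low"           # Minimal impact

def band_quality(score_c, score_d):
    """Band the combined Section C and D subtotals (out of 6)."""
    combined = score_c + score_d
    if not 0 <= combined <= 6:
        raise ValueError("Combined quality score must be between 0 and 6")
    if combined >= 5:
        return "High"      # Solid foundation; ready to advance
    if combined >= 3:
        return "Moderate"  # Some foundational work needed
    return "Low"           # Significant foundational work required
```

For example, a Section A subtotal of 3 bands as "Moderate", signalling a pause and a possible combined human and AI approach.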

Step 3: Review compliance and ethics (section E)

This is the most critical step in the assessment. This section is not scored; it is a compliance check to identify mandatory requirements.

  • If you answered "Yes" to question 16 (personally identifiable information) or question 17 (automated decision), it signals that a formal compliance process must be initiated and integrated into your project plan. These are not roadblocks, but essential parallel tracks to ensure a responsible rollout.
  • If you answered "Yes" to question 18 (data bias), this indicates a significant ethical risk that must be addressed through a formal mitigation plan.

Step 4: Determine your strategic recommendation

Finally, combine your score interpretations and compliance checks to determine the overall recommendation.

If your profile is high AI fit (A), high pain point (B), high quality (C + D) and clear compliance checks:

Prime candidate: This is an excellent project to proceed with, subject to completing any required PIA or AIA processes.

Recommendation: Review the GC AI register and consult with colleagues to determine whether there are similar AI projects you could reuse or adapt.

If your profile is high AI fit (A) and high pain point (B), but low quality (C + D) or unresolved compliance checks:

High potential, but action required: This is a great idea, but foundational work, such as identifying and correcting data inaccuracies, is required.

Recommendation: Pause and resolve the process and data work before proceeding with the project. Begin mandatory processes (PIA, AIA) where necessary. These readiness and compliance streams must be addressed before deployment.

If your profile is high AI fit (A), low pain point (B) and moderate or high quality (C + D):

High potential but lower priority: The project has potential but will not solve a high-priority problem.

Recommendation: Pause. Flag the project as a developmental opportunity for employees looking to build AI skills. If funding allows, this type of project is a low-risk way to build capacity.

If your profile is moderate AI fit (A) or moderate pain point (B):

Moderate candidate: This project has potential but requires a strong business case.

Recommendation: Pause. Consider a small-scale pilot to better evaluate the benefits of the AI solution, or reassess the problem with a different potential AI solution.

If your profile is low AI fit (A) or low pain point (B):

Low priority; re-evaluate: The task may not be right for AI, or the problem may not be big enough to solve.

Recommendation: Pivot and consider simpler, non-AI process improvements first.
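Taken together, the profiles above form a small decision procedure. As a sketch only (a hypothetical function, not part of the official tool, assuming banded inputs from Step 2 and the Section E compliance result):

```python
# Hypothetical sketch of the Step 4 decision logic; not part of the official
# scorecard. Inputs are the "High"/"Moderate"/"Low" bands from Step 2, plus
# whether the Section E compliance checks came back clear.

def strategic_recommendation(ai_fit, pain_point, quality, compliance_clear):
    """Return the strategic recommendation label for a banded profile.

    Profiles are checked in the same order as the guide, so the more
    specific rows (for example, high fit with low pain point) take
    priority over the catch-all low-priority row.
    """
    if ai_fit == "High" and pain_point == "High":
        if quality == "High" and compliance_clear:
            return "Prime candidate"
        return "High potential, but action required"
    if ai_fit == "High" and pain_point == "Low" and quality in ("Moderate", "High"):
        return "High potential but lower priority"
    if ai_fit == "Moderate" or pain_point == "Moderate":
        return "Moderate candidate"
    return "Low priority; re-evaluate"
```

For example, a profile of high AI fit, high pain point and high quality with clear compliance checks returns "Prime candidate", while the same profile with unresolved compliance checks returns "High potential, but action required".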

