The AI & Therapy Critical Thinking Matrix

A Practitioner’s Guide to Ethical AI Evaluation

This article introduces the AI & Therapy Critical Thinking Matrix: a structured, practitioner-centred framework designed to help UK counsellors and psychotherapists evaluate any AI or digital tool ethically, legally, and professionally. You’ll learn why the matrix was developed, how its ten ethical domains work in practice, and how it sits within the current UK regulatory landscape, including the Data (Use and Access) Act 2025 (DUAA), Medicines and Healthcare products Regulatory Agency (MHRA) guidance on Digital Mental Health Technologies, and the Shared AI Charter for UK Counselling and Psychotherapy Organisations.

Get your free Ethical AI resources: the Ethical AI Tool Checker (Custom GPT) and the Ethical AI Critical Thinking Guide (PDF).

Whether you’re evaluating a tool for your own practice, responding to a client’s use of AI, or navigating organisational decisions, this framework supports confident, defensible, and ethically grounded decision-making.

You’re a counsellor or psychotherapist. You care deeply about your clients, your ethics, and doing right by both. And increasingly, you’re encountering AI and digital tools in your professional world: practice management systems with ‘AI-powered insights,’ clients mentioning conversations with ChatGPT, mental health apps promising therapeutic support around the clock.

The pace of change is real. But here’s what matters: you already possess the ethical reasoning to navigate this. You evaluate complex situations every day in your practice. What you need isn’t a technology degree. You need a structured way to apply the professional judgement you already have to a new context.

That’s exactly what the AI & Therapy Critical Thinking Matrix provides.

Why Counsellors and Psychotherapists Need an AI Critical Thinking Framework

AI systems and digital mental health tools are entering the therapeutic landscape faster than most practitioners can realistically track. Many of these tools are marketed directly to your clients, or proposed by employers and agencies, without the kind of scrutiny we’d apply to any other intervention entering the therapeutic space.

The challenge isn’t a lack of ethical awareness among practitioners. It’s that until now, there hasn’t been a systematic, practitioner-centred framework for evaluating these tools in the UK context. Professional bodies are developing guidance, and collaborative efforts like the Shared AI Charter for UK Counselling and Psychotherapy Organisations represent important steps forward. But practitioners need something they can use now, in supervision, in contracting conversations, in the moment when a colleague says, ‘We’re thinking of adopting this new platform.’

The Critical Thinking Matrix was developed to fill that gap. It’s grounded in UK law and regulation, aligned with professional ethical frameworks, and designed to be used by any counsellor or psychotherapist regardless of their technical confidence. It doesn’t tell you what to think about AI. It helps you think clearly, systematically, and defensibly.

What the AI & Therapy Critical Thinking Matrix Does

The matrix is a reflective evaluation framework built around ten ethical domains. Each domain represents a critical area of consideration when you encounter any AI or digital tool in a therapeutic context, whether that’s a regulated Digital Mental Health Technology, a general-purpose AI system like ChatGPT, or a commercial mental health app.

It’s not a checklist. There are no tick boxes and no pass/fail scores. Instead, each domain poses questions that invite genuine professional reflection. The kind of reflection you’d bring to supervision, or that you’d want to articulate if a client, insurer, or professional body asked you to justify your decision.

Think of it this way: you already know how to evaluate whether a therapeutic intervention is appropriate for a particular client. You consider the evidence, the context, the relationship, the risks. The matrix applies that same evaluative discipline to digital and AI tools.

The Ten Ethical Domains of the AI & Therapy Critical Thinking Matrix

The matrix covers ten areas that, together, provide a comprehensive ethical lens for evaluating any AI tool you might encounter in counselling and psychotherapy practice.

Intended Purpose and Regulatory Status asks you to look past the marketing and identify what a tool is actually designed to do. Is it a wellbeing app, a clinical assessment tool, or something in between? If it claims to diagnose, monitor, treat, or prevent mental health conditions, it may fall within the scope of the Medicines and Healthcare products Regulatory Agency (MHRA) guidance on Digital Mental Health Technologies, and that changes your responsibilities significantly.

Confidentiality and Data Privacy addresses how client data is stored, processed, accessed, and protected. Under UK GDPR, mental health information is classified as special category data requiring heightened protection. With the Data (Use and Access) Act 2025 (DUAA) now in force as of February 2026, the landscape around automated decision-making and data accountability has shifted further. As a practitioner, you are typically the data controller, even when using third-party platforms. That responsibility doesn’t transfer to the tool provider.

Informed Consent considers whether clients can genuinely understand what a tool does and how their data is used. Consent in therapeutic contexts has always been more than a signed form. When AI enters the picture, the complexity increases. Can you explain, in plain language, what happens to the information a client shares with or through a digital tool? If you can’t, that’s a significant concern.

Therapeutic Relationship and Depersonalisation asks whether technology undermines human connection or clinical judgement. This is where practitioners’ existing expertise is most directly relevant. You understand the therapeutic relationship. You know that attunement, rupture and repair, and the relational foundations of therapy don’t happen by accident. This domain helps you evaluate whether a tool supports that relationship or quietly erodes it.

Bias and Discrimination examines how a tool identifies, mitigates, or inadvertently reinforces bias. AI systems learn from the data they’re trained on. If that data reflects existing inequalities, the tool’s outputs will too. Has the tool been tested on diverse populations? Does it allow for adjustments based on individual client characteristics? These aren’t abstract concerns; they directly affect the people sitting in front of you.

Professional Accountability and Responsibility clarifies where the therapist remains responsible for decisions and outcomes. When a tool generates a recommendation or a risk assessment, who is accountable if something goes wrong? The answer, professionally and legally, is almost always you. This domain ensures that reality stays visible.

Technical Reliability and Validity asks whether there’s genuine evidence supporting a tool’s safety, reliability, and effectiveness. Marketing claims aren’t evidence. This domain encourages you to look for independent evaluations, published studies, and transparent information about limitations. NICE’s Evidence Standards Framework for Digital Health Technologies provides a useful benchmark here, and from April 2026, digital health technologies will be subject to even more rigorous appraisal standards.

Impact on Client Autonomy considers whether a tool supports or diminishes client choice and agency. Does the tool use persuasive techniques? Does it limit client choices? Does it promote genuine empowerment, or does it create dependency? Client autonomy is a foundational therapeutic principle, and it applies just as much to digital interventions as to any other aspect of practice.

Regulatory and Ethical Alignment examines compliance with the frameworks that govern your practice: UK GDPR, the Data (Use and Access) Act 2025, MHRA guidance on Digital Mental Health Technologies, and your professional body’s ethical framework. This domain also invites you to consider whether the tool’s developers have engaged meaningfully with these frameworks, or whether compliance is an afterthought.

Implementation, Monitoring, and Review addresses the practical realities of integrating a tool into your work. How will you introduce it to clients? How will you monitor its impact? What happens if the tool updates its terms, changes its data handling, or produces a harmful output? This domain recognises that ethical evaluation isn’t a one-off event. It’s ongoing.

Using the Critical Thinking Matrix in Therapeutic Practice

The matrix is designed to be practical. It works in several contexts that will be familiar to anyone in therapeutic practice.

Before adopting a new tool, the matrix provides a structured pre-assessment. You work through the domains relevant to your situation, reflecting on each area before making a professional decision. This isn’t about creating bureaucratic barriers to innovation. It’s about ensuring that any tool entering your practice has been evaluated with the same care you’d give to any other clinical decision.

In supervision, the matrix gives you and your supervisor a shared language for discussing AI-related concerns. Rather than a vague sense of unease about a particular tool, you can identify exactly which ethical domains are raising questions. That’s a much more productive conversation.

During organisational decision-making, the matrix helps teams evaluate tools collectively. If your service or agency is considering a new platform, the matrix provides a framework everyone can engage with, aligning clinical ethics with data protection law and regulatory expectations.

For ongoing review, the matrix reminds us that tools change. Terms of service update. Data handling practices shift. Regulatory guidance evolves. A tool that was ethically sound when you adopted it may not remain so indefinitely. The matrix supports periodic re-evaluation.

And for client conversations, the matrix can support transparent, honest dialogue about the digital tools being used in or around therapy. Whether you’re discussing a tool you’re using in practice or responding to a client who mentions using AI between sessions, the matrix helps you ground those conversations in clear ethical thinking.

The UK Regulatory Landscape for AI in Therapy

The matrix doesn’t exist in a vacuum. It’s aligned with the regulatory frameworks shaping UK therapeutic practice, and those frameworks have seen significant movement recently.

The MHRA’s guidance on Digital Mental Health Technologies, published in 2025, provides a clear framework for distinguishing between wellbeing-focused tools and those that qualify as medical devices. If a tool claims to diagnose, monitor, treat, or prevent mental health conditions, it may be classified as a medical device under MHRA rules. The matrix’s first domain helps practitioners ask exactly these questions.

The Data (Use and Access) Act 2025 (DUAA) received Royal Assent in June 2025, with its main provisions entering into force in February 2026. While it modernises aspects of UK data protection, special category data (which includes mental health information) retains strong protections. Automated decision-making based on health data continues to face significant restrictions, something directly relevant to any AI tool processing client information.

Professional bodies including BACP, UKCP, and NCPS have signed the Shared AI Charter for UK Counselling and Psychotherapy Organisations, signalling coordinated commitment to ethical AI engagement across the profession. NCPS has published its own Relational Safeguards framework, emphasising that safe AI in mental health must be time-bound, supportive, adjunctive, transparent, user-autonomous, and safeguarded.

These developments matter because they confirm the premise on which the matrix was built: existing ethical principles are sufficient for navigating AI. They simply need to be applied systematically.

The Ethical AI Tool Checker for Therapists

Alongside the Critical Thinking Matrix, the Ethical AI Tool Checker for Therapists offers a complementary resource. Where the matrix provides the comprehensive ethical framework, the Tool Checker guides you through a structured review of a specific tool’s privacy policy, terms of service, and data handling claims, translating legal and technical language into practice-relevant considerations.

The Tool Checker mirrors how you’d justify a decision to a client, in supervision, to an insurer, or to a professional body. It doesn’t give yes-or-no verdicts. It supports the kind of contextual, relational, and accountable decision-making that defines ethical therapeutic practice.

Together, the matrix and the Tool Checker provide everything you need to evaluate any AI or digital tool with confidence and rigour.

An Ethical AI Framework for Confident Practice

It’s worth being clear about what the matrix is not. It’s not anti-technology. It’s not designed to discourage practitioners from engaging with digital tools. AI and digital technologies will continue to develop, and some will genuinely support therapeutic work.

What the matrix provides is a way to engage thoughtfully rather than reactively. To evaluate rather than assume. To make decisions you can articulate and defend, rather than decisions made by default or under pressure.

The guiding principle is straightforward: if you cannot clearly explain a tool’s data handling, purpose, and impact to a client, you cannot ethically use it with clients. The decision always remains yours. The matrix exists to ensure that decision is informed, defensible, and grounded in the ethical commitment that defines your profession.

You can download the full Critical Thinking Matrix and the Ethical AI Tool Checker below. Use them in your practice, bring them to supervision, share them with colleagues. They’re designed to support you in doing what you already do well: thinking critically, acting ethically, and putting your clients first.

Frequently Asked Questions

What is the AI & Therapy Critical Thinking Matrix?

The Critical Thinking Matrix is a structured ethical evaluation framework designed specifically for UK counsellors and psychotherapists. It helps practitioners assess any AI or digital tool across ten ethical domains, including confidentiality, informed consent, bias, professional accountability, and regulatory compliance. It’s a reflective tool that supports professional judgement rather than replacing it.

Do I need technical knowledge to use the Critical Thinking Matrix?

No. The matrix is designed for practitioners with no specialist digital or regulatory knowledge. It translates complex legal and technical considerations into practice-relevant questions you can work through using your existing professional skills. If you can evaluate a therapeutic intervention ethically, you can use this matrix.

How is the Critical Thinking Matrix different from a compliance checklist?

Checklists produce yes/no answers. The matrix invites genuine professional reflection. Each domain poses open questions that help you think through the ethical implications of a tool in your specific context. There are no pass/fail scores. The goal is informed, defensible decision-making that you could articulate in supervision, to a client, or to your professional body.

What is the Ethical AI Tool Checker for Therapists?

The Ethical AI Tool Checker is a companion resource to the Critical Thinking Matrix. It guides you through a structured review of a specific tool’s privacy policy, terms of service, and data handling practices, translating legal and technical language into plain, practice-relevant considerations. Together with the matrix, it supports thorough ethical evaluation of any AI or digital tool.

About the author: Kenneth Kelly is the developer of the AI & Therapy Critical Thinking Matrix and convener of the UK Expert Reference Group on the Use of Artificial Intelligence in Counselling and Psychotherapy. He is the founder of Counselling Tutor, supporting counsellors and psychotherapists across the United Kingdom.

The AI & Therapy Critical Thinking Matrix (Version 1.1) is aligned with UK GDPR, the Data (Use and Access) Act 2025, MHRA guidance on Digital Mental Health Technologies, and professional ethical frameworks including those of BACP, UKCP, and NCPS.

This article is not legal advice. Always consider supervision, organisational policy, and current regulatory guidance. The practitioner remains responsible for clinical decision-making.

References

British Association for Counselling and Psychotherapy (BACP). Ethical Framework for the Counselling Professions and digital practice guidance.

British Psychological Society (BPS). Statements and Commentary on New MHRA Guidance for Mental Health Apps and Technologies.

Expert Reference Group on the Use of Artificial Intelligence in Counselling and Psychotherapy. Shared AI Charter for UK Counselling and Psychotherapy Organisations and associated resources on ethical AI.

Information Commissioner’s Office (ICO). Guide to the UK General Data Protection Regulation (UK GDPR) and resources on data protection in health and social care.

Kelly, K. The AI & Therapy Critical Thinking Matrix: An Ethical Evaluation Framework for Counsellors and Psychotherapists. Counselling Tutor.

Medicines and Healthcare products Regulatory Agency (MHRA). Digital Mental Health Technologies: Regulation and Evaluation for Manufacturers, Healthcare Professionals and the Public. UK Government guidance collection.

Medicines and Healthcare products Regulatory Agency (MHRA). Digital Mental Health Technology (DMHT): Device Characterisation and Regulatory Expectations.

Medicines and Healthcare products Regulatory Agency (MHRA). MHRA Issues New Guidance for People Using Mental Health Apps and Technologies. UK Government news release.

MindEd. Digital Mental Health Technologies (DMHT): E-learning Resources for Practitioners, Parents and Carers.

National Counselling and Psychotherapy Society (NCPS). Ethical Framework and Guidance on the Use of Technology in Counselling and Psychotherapy.

National Counselling and Psychotherapy Society (NCPS). Relational Safeguards for AI Mental Health Tools.

National Institute for Health and Care Excellence (NICE). Guidance and Evidence Standards for Digital Health Technologies (including digital mental health tools).

UK Council for Psychotherapy (UKCP). Ethical Principles and Code of Professional Conduct, and guidance on online and digital practice.

UK Government. Data Protection Act 2018 and Data (Use and Access) Act 2025 (DUAA).

Transparency note
This article was written and reviewed by human contributors. ChatGPT 5.2 was used as a supportive tool to assist with formatting, layout clarity, and language refinement. All content, interpretations, and ethical positions were created and checked by the authors.
