AvISO

We Passed: AvISO becomes the first UK consultancy to achieve accredited ISO 42001 certification for AI governance – why we did it and what we learnt

November 26, 2025

A major milestone: Stage 2 audit success

We are delighted to share that AvISO has successfully passed our Stage 2 ISO 42001 audit with A-Lign, witnessed by ANAB (ANSI National Accreditation Board). This is a significant achievement for us and one we are extremely proud of. Having ANAB involved added an extra level to the certification journey and reinforces the credibility of what we have achieved.

The audit was thorough and comprehensive. However, the A-Lign auditor, Dhruv Chabra, was excellent and made the process constructive and engaging. We learnt a great deal, not just about the standard, but about how we can continue to improve as a team. This has been months of hard work and collaboration, and seeing it pay off is hugely rewarding. We are over the moon and excited about what comes next: supporting our clients through the same journey with confidence and clarity. This is a big moment for AvISO, and we could not be happier.

This approach also strengthens our relationship with A-Lign, both as a certification partner and as a client. Going through the process ourselves means we can empathise with the challenges our clients will face and offer guidance that is grounded in real experience. ISO 42001 is new to everyone, and by living the journey first-hand we have built genuine competency in the standard. How can you consult on something you have not experienced yourself? You cannot. This achievement gives us credibility and confidence, and it ensures that when we advise clients, we do so with practical insight rather than theory.

Introduction

Artificial Intelligence (AI) is no longer a future concept. It is embedded in everyday business processes, often without formal oversight. From ChatGPT to AI-driven SaaS tools, organisations are using AI without always understanding the risks.

At AvISO, we saw this as a turning point. ISO 42001, the new international standard for AI management systems, offers a structured way to govern AI responsibly. We decided to go beyond theory and achieve accredited certification ourselves.

Why? Because leadership means action. We wanted to:

  • Be first to market and show the importance of this new framework
  • Demonstrate competence and credibility
  • Avoid asking clients to do something we had not done ourselves
  • Build a stronger partnership with A-Lign, our certification body
  • Understand the journey our clients will take first-hand

Integration with existing ISO standards

ISO 42001 does not exist in isolation. For AvISO, it was essential to integrate AI governance into our Integrated Management System (IMS), which already includes:

  • ISO 27001 – Information Security
  • ISO 9001 – Quality Management
  • ISO 14001 – Environmental Management

This integration matters because:

  • It avoids duplication of controls
  • It streamlines documentation
  • It creates a holistic governance framework where AI risks sit alongside information security, quality, and environmental considerations

Lessons learnt – practical guidance

Our colleague Daren led this project and shared insights that every organisation should consider. Here is what we discovered.

Common challenges

Undefined roles and responsibilities

One of the first hurdles is governance. Many organisations have not appointed clear AI or risk owners, which creates gaps in accountability. Without defined roles, decisions about AI usage, risk assessment, and compliance often fall between departments. This can lead to inconsistent practices and missed risks. Establishing ownership early is critical to ensure oversight and effective decision-making.

Shadow AI usage

AI tools are everywhere, often embedded in everyday applications. Employees may use ChatGPT or AI-driven features in SaaS platforms without informing IT or compliance teams. While these tools can boost productivity, they introduce risks around data privacy, security, and bias. Unmonitored use can also lead to breaches of company policy or regulatory requirements. Organisations need clear policies and training to manage this hidden layer of AI activity.

Incomplete AI system inventory

Most organisations underestimate how many AI systems they actually use. Beyond obvious tools, AI is often integrated into CRM systems, HR platforms, and analytics software. Without a comprehensive inventory, it is impossible to assess risk or apply controls effectively. Building this inventory requires input from multiple teams and a structured approach to discovery.

Complex impact assessments

ISO 42005 provides guidance for AI Impact Assessments, but applying it in practice is challenging. These assessments require cross-functional collaboration—IT, compliance, legal, and operational teams all need to contribute. Defining clear criteria for identifying risks such as bias, security vulnerabilities, and ethical concerns is essential. Many organisations struggle to balance thoroughness with practicality.
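To make the "severity and likelihood" criteria above concrete, here is a minimal, illustrative sketch of how an organisation might score and prioritise the risk areas an AI Impact Assessment surfaces. The categories, scales, and scores are hypothetical examples, not AvISO's actual assessment methodology.

```python
from dataclasses import dataclass

@dataclass
class RiskRating:
    """One assessed risk area for an AI system (illustrative only)."""
    area: str        # e.g. "bias", "security", "privacy", "ethics"
    severity: int    # 1 (negligible) .. 5 (critical)
    likelihood: int  # 1 (rare) .. 5 (almost certain)

    @property
    def score(self) -> int:
        # A simple severity x likelihood matrix, a common risk convention
        return self.severity * self.likelihood

def prioritise(ratings: list[RiskRating]) -> list[RiskRating]:
    """Order risks so the highest-scoring ones are addressed first."""
    return sorted(ratings, key=lambda r: r.score, reverse=True)

# Hypothetical ratings for a single AI system
ratings = [
    RiskRating("bias", severity=4, likelihood=3),      # score 12
    RiskRating("security", severity=5, likelihood=2),  # score 10
    RiskRating("privacy", severity=3, likelihood=4),   # score 12
]
for r in prioritise(ratings):
    print(f"{r.area}: {r.score}")
```

Even a simple matrix like this gives cross-functional teams a shared vocabulary for deciding which risks to tackle first, which is the practical balance the paragraph above describes.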

Scope definition for IMS integration

When integrating ISO 42001 into an existing Integrated Management System (IMS), defining the scope is a critical step. The scope statement must clearly outline boundaries and applicability to avoid confusion and duplication. Key considerations include:

  • External and internal issues (Clause 4.1) such as regulatory changes, market expectations, and organisational culture
  • Stakeholder requirements (Clause 4.2) including clients, regulators, and internal teams

A well-defined scope ensures AI governance aligns with existing ISO frameworks like ISO 27001 and ISO 9001, creating a streamlined and efficient system.

Actions that worked

Define clear AI and risk ownership

Assigning responsibility early is essential. Without named AI and risk owners, governance falls apart. We created defined roles with clear accountability for AI systems, risk assessment, and compliance. This gave us a single point of contact for decisions and ensured issues were escalated quickly. It also helped embed AI governance into everyday operations rather than leaving it as a siloed compliance task.

Build a complete AI system inventory

You cannot manage what you do not know exists. We invested time in mapping every AI system in use, including vendor-provided solutions and shadow AI tools hidden in SaaS platforms. This process involved engaging multiple teams—IT, operations, and even marketing—to uncover where AI was being used. The inventory became the foundation for risk assessments and control implementation.
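As a rough illustration of the mapping exercise above, the sketch below shows one way to record inventory entries so that shadow AI can be flagged for follow-up risk assessment. The field names and example systems are hypothetical, not AvISO's actual inventory.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AISystem:
    name: str
    owner: str               # named accountable owner
    purpose: str
    vendor: Optional[str] = None  # None for in-house systems
    shadow: bool = False     # discovered outside formal procurement

@dataclass
class Inventory:
    systems: list[AISystem] = field(default_factory=list)

    def register(self, system: AISystem) -> None:
        self.systems.append(system)

    def shadow_systems(self) -> list[AISystem]:
        """Surface shadow AI so it can be risk-assessed."""
        return [s for s in self.systems if s.shadow]

# Hypothetical entries gathered from IT, operations, and marketing
inv = Inventory()
inv.register(AISystem("ChatGPT", owner="IT", purpose="drafting",
                      vendor="OpenAI", shadow=True))
inv.register(AISystem("CRM lead scoring", owner="Sales Ops",
                      purpose="lead prioritisation", vendor="CRM vendor"))

print([s.name for s in inv.shadow_systems()])  # ['ChatGPT']
```

Keeping the register in a structured form like this, rather than an ad-hoc spreadsheet, makes it straightforward to feed directly into risk assessments and control implementation.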

AI awareness and training

Technology alone does not solve governance challenges—people do. We rolled out training to build awareness of AI risks and responsible use. Staff were encouraged to report AI usage early, even if informal, so we could assess and manage it. This cultural shift was critical. It turned AI governance from a compliance exercise into a shared responsibility.

Supplier and vendor controls

AI risk does not disappear when you outsource. We reviewed contracts and added clauses covering bias and reliability, incident reporting, and model changes or retraining. These requirements ensure vendors remain accountable and transparent. It was a learning curve, but it reinforced that governance extends beyond your own systems.

Continuous monitoring, policies and traceability

AI systems evolve, and so must your controls. We implemented clear policies for responsible AI use, logging and audit trails, retraining and model updates, and incident response. These policies set expectations and gave teams practical guidance. Traceability was key: documenting what matters for compliance without drowning in paperwork. This balance kept the system practical and sustainable.
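The traceability point above can be sketched as a minimal append-only audit trail for AI lifecycle events such as model updates and incidents. This is an illustrative example only; the event names and fields are assumptions, not a description of AvISO's actual tooling.

```python
import json
from datetime import datetime, timezone

class AuditLog:
    """Minimal append-only audit trail for AI system events (sketch)."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, system: str, event: str, actor: str,
               detail: str = "") -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system": system,
            "event": event,   # e.g. "model_update", "retraining", "incident"
            "actor": actor,
            "detail": detail,
        }
        self.entries.append(entry)
        return entry

    def export(self) -> str:
        # JSON Lines keeps the trail easy to archive and review at audit time
        return "\n".join(json.dumps(e) for e in self.entries)

# Hypothetical usage
log = AuditLog()
log.record("CRM lead scoring", "model_update", actor="model owner",
           detail="retrained on latest quarter's data")
print(log.export())
```

The design choice here is proportionality: a compact, structured record captures what an auditor needs, the what, when, and who of each change, without generating the paperwork the paragraph above warns against.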

Additional considerations

Data governance and privacy

Managing AI responsibly starts with strong data governance. Organisations must ensure compliance with GDPR and other data protection laws. This includes understanding where data is stored, how it is processed by AI systems, and what safeguards are in place to prevent misuse. Data governance policies should cover consent, retention, and secure disposal, as well as transparency in how AI models use personal data.

Bias and fairness testing

AI systems can unintentionally introduce bias, leading to unfair or discriminatory outcomes. Regular testing is essential to identify and mitigate these risks. This involves reviewing training data, monitoring outputs, and applying fairness metrics. Bias testing should not be a one-off exercise but part of ongoing monitoring, especially when models are retrained or updated.

Ethical and legal compliance

AI governance is not just about technical controls. It must align with organisational values and legal obligations. This means embedding ethical principles into decision-making, ensuring accountability for AI-driven outcomes, and maintaining compliance with relevant regulations. Ethical compliance builds trust with stakeholders and reinforces the organisation’s commitment to responsible AI.

Change management

Introducing AI governance often requires cultural and operational shifts. Employees need to understand why these changes matter and how they affect their roles. Clear communication, training, and leadership support are vital to make governance part of everyday practice rather than a compliance burden. Change management should focus on engagement and collaboration to ensure adoption.

Incident simulation and drills

AI-related failures or breaches can have serious consequences. Organisations should test their response plans through simulations and drills. These exercises help identify gaps in procedures, improve coordination between teams, and build confidence in handling real incidents. A well-rehearsed response plan reduces risk and minimises disruption when issues occur.

Why ISO 42005 matters

ISO 42005 is not just an optional guideline—it is the backbone of effective AI risk management. This standard provides detailed guidance on conducting AI Impact Assessments, which are essential for identifying and prioritising risks before they escalate.

You cannot carry out a meaningful risk assessment without first completing a thorough AI Impact Assessment. Why? Because risk assessment depends on understanding how AI systems interact with data, processes, and people. Without that insight, you are guessing rather than managing risk.

An AI Impact Assessment helps you:

  • Identify where AI is used and what decisions it influences
  • Understand potential impacts on privacy, security, ethics, and fairness
  • Define clear criteria for evaluating risk severity and likelihood
  • Engage the right stakeholders early to ensure a complete picture

In short, ISO 42005 gives organisations the structure and clarity they need to make risk assessment practical and reliable. It turns AI governance from theory into actionable steps that protect your organisation and build trust with stakeholders.

Personal narrative

Daren Morley: Project Lead and Technical Specialist

Daren summed up the experience perfectly:

“The biggest surprise was how much shadow AI existed in everyday tools. People were using AI features without even realising it. Building an inventory was eye-opening. Another challenge was cultural: getting everyone to see AI governance as part of their role, not just a compliance exercise. The integration with our existing ISO standards was a game-changer. It made the process smoother and showed that AI governance is not a bolt-on. It is part of good management.”

Paul Stevens: Managing Director at AvISO and ISOvA

For me, this project was never just about achieving another certification. It was about positioning AvISO at the forefront of a critical conversation: how organisations govern AI responsibly. ISO 42001 is a new standard, and we recognised early on that it would shape the way businesses manage risk and build trust in the years ahead. Being first to market was important, but it was not the only driver. This was about leadership, credibility, and setting an example for our clients.

From a strategic perspective, achieving accredited certification under ANAB’s oversight sends a clear message: we do not just advise on best practice, we live it. That matters when you are asking clients to invest time and resources in something new. It also strengthens our relationships with partners like A-Lign, who share our commitment to quality and integrity. Working closely with them gave us valuable insight into the certification process and reinforced the importance of collaboration.

On a business development level, this milestone opens doors. Clients want to work with organisations that understand their challenges and can guide them with confidence. By going through this journey ourselves, we have gained practical experience that will make our advice more relevant and actionable. We have learnt what works, where the pitfalls are, and how to integrate AI governance into existing systems without creating unnecessary complexity.

Personally, I am proud of what the team has achieved. This was a collective effort, and it reflects the culture we have built at AvISO, one that values learning, innovation, and doing things properly. Passing the Stage 2 audit was a big moment for us, but it is only the beginning. The real opportunity lies in helping our clients navigate this new landscape and ensuring AI is managed in a way that is ethical, secure, and aligned with their goals.

Paul Guiney: Managing Director at Innovation Bureau

The journey to help AvISO become the first ISO consultancy to achieve ISO 42001 certification was shaped by our shared commitment to ensuring AI is implemented with integrity, transparency and accountability. Guiding the technical side of this process, specifically the AI Impact Assessment (AIIA) and policy drafting, has been a significant learning experience in understanding what it truly means to manage AI responsibly. Working through the AIIA made clear that AI risk is not a narrow technical issue but a multidimensional challenge that extends far beyond code and must be examined in the context of organisational structures, stakeholder impacts, data quality and system behaviour. AI introduces dynamic and sometimes hidden risks, such as shadow AI, that cannot be managed informally.

Navigating this process highlighted the critical role of ISO 42005 as the architecture for a robust AIIA. Anchoring the assessment in this guidance demonstrated that strong governance does not slow innovation but enables it. ISO 42005 provides organisations with a reliable method for scaling AI responsibly while strengthening their risk management, assurance and compliance capabilities.

Ultimately, I learned that the AIIA is not a checkbox exercise but the foundation of an effective AI Management System, shaping policies, informing monitoring practices and driving continuous improvement from the outset. I am incredibly proud to have partnered with AvISO Consultancy, combining my domain knowledge with their deep ISO expertise, to co-create real value. It is an exceptional achievement to be part of the team that has become the first ISO consultancy to attain ISO 42001 certification.

Practical next steps

Define clear roles for AI and risk ownership

Start by assigning responsibility. Without clear ownership, governance will fail. Identify who will oversee AI systems and who will manage risk. These roles should have authority and visibility across the organisation. Make sure responsibilities are documented and communicated so everyone knows who to turn to for decisions and guidance.

Build a complete AI system inventory

You cannot manage what you do not know exists. Begin with a discovery exercise to identify every AI system in use, including vendor-provided solutions and hidden AI features in SaaS platforms. Engage multiple teams—IT, operations, marketing, HR—to ensure nothing is missed. This inventory will form the foundation for risk assessments and compliance controls.

Integrate AI controls into existing ISO frameworks

ISO 42001 works best when it is not treated as a standalone system. Integrate AI governance into your existing ISO frameworks such as ISO 27001 for information security and ISO 9001 for quality management. This avoids duplication, streamlines documentation, and creates a single, coherent management system. Integration also makes adoption easier for staff who are already familiar with your current processes.

Establish monitoring, reporting, and retraining processes

AI systems evolve, and so must your controls. Put in place policies for continuous monitoring, including logging and audit trails. Define how incidents will be reported and escalated. Plan for retraining models when data changes or performance declines. These processes should be practical and proportionate, ensuring compliance without creating unnecessary complexity.

Train staff and raise awareness across the organisation

Governance is not just a technical challenge—it is a cultural one. Provide training on responsible AI use and the risks associated with shadow AI. Encourage early reporting of AI usage, even informal, so you can assess and manage it. Awareness campaigns and clear communication will help embed AI governance into everyday practice and make it a shared responsibility.

Thinking about ISO 42001 for your organisation?

We have been through the journey and can help you navigate it with confidence. Get in touch with AvISO today to find out more.
