AI services are becoming more prevalent, yet many organisations are still waiting to see their impact at scale and the financial returns from their efforts. A recent MIT-supported study reveals that just 5 per cent of enterprises have successfully deployed generative AI tools into production at scale, despite billions of dollars spent.

Failure to scale AI solutions can stem from many factors, including infrastructure and regulatory concerns, but one that is often overlooked is a flawed overall approach. Increasingly, we are seeing single-point AI initiatives that aren’t iterated over time, don’t adapt to context, and aren’t fully embedded into existing workflows.

Meanwhile, the UK’s Public Design Evidence Review emphasises that well-designed public services are those that truly work for the people who use them: they solve the intended problems, are accessible, and are grounded in context through practices such as user journey mapping, co-design, and iterative testing. Having gained experience in AI service development and an understanding of the unique challenges it brings, our UCD discipline has been reflecting on why putting users at the centre of our design has helped us create high-trust, high-impact, national-scale AI-based services for businesses and end-users. Below, we share some of the key reasons why we must take a user-centred approach to AI.

Solving the “Right” Problem With the “Right” Technology

Too often, AI projects begin with the technology rather than the problem. UCD flips this approach by first mapping the end-to-end user journey to uncover where the greatest impact can be made. By identifying the pain points that matter most to users, project teams can ensure they are addressing the right problem and driving the right business change. UCD also supports thoughtful decisions about whether AI is the right tool for the job. This approach ensures that AI is not used for its own sake but as a deliberate, valuable solution to real problems.

UCD Creates Trust Between Users and Services

Trust is a cornerstone of any service, and it is particularly crucial when AI is involved. User-centred designers and researchers can help establish this trust by drawing on their close knowledge of users’ needs and behaviours to design AI-driven services that clearly communicate their actions and decisions. Transparency and clarity about how personal and organisational data will be used also promote trust. Users must understand what the AI is doing, why it is doing it, and how they can influence the outcome. Engaging users throughout, from concept to delivery, is crucial to achieving this.

Promoting Agency and Control

Trust and transparency go hand-in-hand with agency and control. Good UCD practices, such as researching and testing designs with a wide range of users, empower users to feel in control of their interactions with AI. That sense of control is built through transparency and communication, and through providing opportunities for users to make meaningful choices: to customise their experience, give feedback, and make decisions that benefit them.

By designing AI tools that respect user agency, teams can avoid the common pitfalls of users feeling overwhelmed, misled, or excluded from decision-making. This instils confidence and promotes engagement with the service.

Ethics and Diversity at the Core of AI Design

AI is only as fair as the data and processes that shape it. This makes ethical design and diversity central to UCD practices. Designers play a critical role in bridging the gap between raw data and real-world user contexts. By actively engaging diverse groups of users in testing, designers can ensure that AI solutions are representative, inclusive and less likely to perpetuate bias or inequality.

Beyond the interface, designers need to take an active role in how AI models are trained, ensuring that teams use high-quality, contextually relevant data. This holistic approach from algorithm to interface reduces risks and helps ensure AI services are both fair and effective.

Co-Design Is a Two-Way Street

One of the most powerful aspects of UCD in AI development is co-design, actively involving users in shaping both the interface and the underlying AI model.

Through hands-on workshops, interviews, and design reviews, teams can better understand and align AI services with user expectations while also helping users understand the possibilities and limitations of the technology. This mutual learning boosts confidence in users and teams and highlights the tool’s role as a supportive partner rather than a threat.

By keeping users at the centre of design decisions, co-design builds solutions that not only function effectively but also carry the trust, buy-in, and long-term engagement of the people who use them.

Designing AI Services at Informed

Successfully designing AI-based services brings a unique set of challenges. To meet these, we’ve created our AI Readiness Assessment, shaped by real delivery experience and user-centred design expertise. It’s a proven, multidisciplinary approach that helps organisations unlock value fast, build trust, and scale AI solutions responsibly, while keeping user and business outcomes at its heart. By putting it into practice across a range of sectors, we’ve developed, and continue to tailor, our approach to ensure that companies see real-world impact from their AI adoption.

As organisations embrace AI adoption and scaling, our UCD community of practice is evolving in step: continually learning, innovating, and applying insights from real-world delivery success. We’re excited to keep partnering with organisations that share the ambition to design, develop, and deploy AI services that make a lasting positive impact.
