Designing Human-centered AI

User Needs and Defining Success

Larissa Suzuki
5 min read · Mar 21, 2021


Machine Learning (ML) is the science of helping computers discover patterns and relationships in data instead of being manually programmed. It is a powerful tool for creating personalised and dynamic experiences, and human perception drives virtually every facet of artificial intelligence.

There’s another layer of human-centred AI (HCAI) that’s significant. We may be product creators, but we’re also humans, people who both intentionally and unintentionally shape every facet of ML. Let’s take a look at the high-level process of how ML works:

  1. Training data are collected and labelled (Input)
  2. Models are built, trained and evaluated
  3. Predictions are generated (Output)
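The three steps above can be made concrete with a minimal sketch. This is not how production ML systems are built; it is a hand-rolled nearest-centroid classifier, with invented data, purely to show the collect → train/evaluate → predict loop:

```python
# Minimal illustration of the three-step ML loop described above,
# using a hand-rolled nearest-centroid classifier (no external libraries).
# All data and labels are invented for illustration.

# 1. Training data are collected and labelled (input).
training_data = [
    ((1.0, 1.2), "spam"),
    ((0.9, 1.1), "spam"),
    ((3.0, 3.2), "not_spam"),
    ((3.1, 2.9), "not_spam"),
]

# 2. A model is built and trained: here, the "model" is simply the
#    mean feature vector (centroid) of each label.
def train(data):
    sums, counts = {}, {}
    for features, label in data:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
    return {label: tuple(s / counts[label] for s in acc)
            for label, acc in sums.items()}

# 3. Predictions are generated (output): assign the label of the
#    nearest centroid to the new example.
def predict(model, features):
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

model = train(training_data)
print(predict(model, (1.1, 1.0)))  # a point near the "spam" centroid
```

Notice that every step involves human choices: which data to collect, which labels to assign, and which distance or evaluation metric to use.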

We are the ones who choose the data sources (input), we determine the evaluation metrics, and we are affected by the results (output).

To help you make AI product decisions that are human-centred, Google has created the People + AI Guidebook, written to make people + AI partnerships productive, enjoyable, and fair. It targets user experience (UX) professionals and product managers who want to follow a human-centred AI approach. Its main goal is to ensure that the time and resources you invest in building AI make a significant impact while putting the user first the entire time.

Through this project, Google aims to:

  1. Conduct and publish human-AI interaction research.
  2. Create and launch open-source tools and platforms to build AI responsibly.
  3. Widen the circle of who can participate in the development and application of AI.

The Guidebook has six chapters, which cover everything from how to identify user needs and define success for your AI-driven product to how to design for failure, to how to provide the right level of control to your users.

In Chapter 1 (User needs & defining success), the authors explore how to identify whether AI adds unique value to your product. The crucial question to ask is: “Should we AI/ML this?” Chapter 1 is one of the most critical chapters because it covers many of the considerations integral to the problem framing phase of the product development lifecycle. As many of us know, the problem framing phase is the bedrock for everything that follows.

The Guidebook essentially approaches the problem framing phase not from a technology-first perspective but rather from a people-first perspective, focusing on user needs and ensuring that the time and resources you invest in building AI result in usable and valuable products for people.

First, we ask whether the problem at hand is a valuable one to solve, and then we question whether AI can solve that problem in a unique way.

I have seen several teams state things like, “We have an OKR to deliver 10 AI features next quarter”. Often, teams have the misconception that AI itself is a use case. It is not. We must move away from:

“Can we use AI to ______?”

and move more towards:

“How might we solve ______?”

“Can AI solve this problem in a unique way?”

One helpful exercise in the ideation phase is to gather your team and plot their ideas into a 2x2 matrix, with “User Impact” on the Y axis and “Need for AI” on the X axis. Machine learning solutions take time to productionise, so focus first on the high-value user journeys where machine learning is a uniquely good solution.
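The exercise can be sketched in a few lines of code. The idea names, scores, and the 0.5 threshold below are all invented for illustration; in practice the team assigns the scores together during the workshop:

```python
# Hypothetical sketch of the 2x2 prioritisation exercise: each idea is
# scored (by the team, not by this code) on user impact and need for AI,
# both on a 0-1 scale, then bucketed into a quadrant.
# All idea names, scores, and the 0.5 threshold are illustrative.

ideas = {
    "fraud alerts":        {"user_impact": 0.9, "need_for_ai": 0.8},
    "dark mode":           {"user_impact": 0.7, "need_for_ai": 0.1},
    "auto-tag photos":     {"user_impact": 0.3, "need_for_ai": 0.9},
    "rename settings tab": {"user_impact": 0.2, "need_for_ai": 0.1},
}

def quadrant(scores, threshold=0.5):
    hi_impact = scores["user_impact"] >= threshold
    hi_ai = scores["need_for_ai"] >= threshold
    if hi_impact and hi_ai:
        return "build with ML first"   # high value, AI is the unique fit
    if hi_impact:
        return "solve without ML"      # high value, simpler tech suffices
    if hi_ai:
        return "deprioritise for now"  # AI-shaped but low user impact
    return "drop"

for name, scores in ideas.items():
    print(f"{name}: {quadrant(scores)}")
```

The point of the sketch is the ordering it produces: the top-right quadrant (high user impact, genuine need for AI) is where ML effort pays off first.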

Adapted from People + AI Guidebook

Note that this exercise frames ideas as “probably better with AI” versus “probably not better with AI” because, like all good rules of thumb, the final answer depends on your user needs and your team’s capabilities.

Probably better with AI

  • Anomaly detection: For example, credit card fraud is constantly evolving and infrequently happens to individuals but frequently across a large group. AI can learn these evolving patterns and detect new types of fraud as they emerge.
  • Personalisation: For example, recommending different content to different users.
  • Prediction of future events: For example, suggesting the next best action to take to close a deal.
  • Natural language understanding: Dictation software requires AI to function well for different languages and speech styles.

Probably not better with AI

  • Costly errors: If the cost of wrong predictions is very high and outweighs the benefits of a slight increase in success rate, such as a navigation guide that suggests an off-road route to save a few seconds of travel time.
  • Transparency: If users, customers, or developers need to understand precisely everything that happens in the code, like with Open Source Software. AI can’t consistently deliver that level of explainability.
  • People tell you they don’t want it: Certain tasks are ones people consider high-value and enjoy doing themselves, or they involve specific preferences that are hard to communicate to a machine.

When it comes to building products that are better with AI, everyone on your team should be aligned on what success and failure look like for your feature, and should alert the team if something goes wrong. After deciding which problem to solve, your team should build and use AI responsibly. The Google AI Principles and Responsible AI Practices offer practical steps to ensure that you create a feature with the greater good in mind.

As a next step, we must consider whether some parts of the experience should be automated or augmented. The Guidebook states that we should “Automate tasks that are difficult or unpleasant, and ideally ones where people who do it currently can agree on the ‘correct’ way to do it. Augment bigger processes that people enjoy doing or that carry social value”.

Lastly, design what success and failure look like for your AI product, a.k.a. its “reward function”, to create a great user experience for everyone over the long run. In other words, your team will want to design the reward function deliberately: optimise for long-term user benefit by visualising your product’s downstream outcomes and limiting their potential adverse consequences.



Larissa Suzuki

Engineer, inventor, entrepreneur, philanthropist • #Data/AI Practice Lead, #AIEthics fellow, Interplanetary Internet @google • Prof @ucl