Artificial Intelligence Experience Design Principles

As was the case with the mobile revolution, and the web before that, machine learning is causing us to rethink, restructure, and reconsider what's possible in virtually every experience we build. At Google, this approach is called Human-Centered Machine Learning: staying grounded in human needs while solving for them in ways that are uniquely possible through machine learning (ML), and understanding how best to integrate ML into the UX utility belt so that ML and Artificial Intelligence (AI) are built in inclusive ways.

In his article AI UX: 7 Principles of Designing Good AI Products, Dávid Pásztor shares these guidelines.

Distinguish AI content from normal content visually so people will know where the information is coming from.

In many cases, we use AI and machine learning to dig deep into data and generate new and useful content for people. This can come in the form of movie recommendations on Netflix, translations in Google Translate, or sales predictions in CRM systems. AI-generated content can prove extremely useful, but these recommendations and predictions are not always accurate. AI algorithms have their own flaws, especially when they don't have enough data or feedback to learn from. We should let people know when an algorithm has generated a piece of content, so they can decide for themselves whether to trust it.
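To make this concrete, here is a minimal sketch in Python (the data shape and names are hypothetical, not from the article) of tagging content with its source so the interface can show a visible label on anything a model produced.

```python
# Hypothetical data shape: tag each piece of content with its source so the
# UI can render an "AI-generated" marker next to anything a model produced.
from dataclasses import dataclass

@dataclass
class ContentItem:
    text: str
    source: str  # e.g. "editorial" or "model"

def render(item: ContentItem) -> str:
    # Prefix model-generated items with a visible label so users know the origin.
    label = "[AI-generated] " if item.source == "model" else ""
    return label + item.text

print(render(ContentItem("Movies you might like tonight", source="model")))
```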

Explain how machines think so people will understand the results.

Artificial intelligence often looks like magic: sometimes even engineers have difficulty explaining how a machine-learning algorithm comes up with a result. We see our job as a UX team as helping people understand how machines work so they can use them better. We should give users hints about what the algorithm does or what data it uses. A classic example comes from e-commerce, where we explain why we recommend certain products.
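As a rough illustration, assuming a hypothetical recommendation payload, the sketch below attaches a plain-language reason to each suggestion so the interface can show users which of their own data drove it.

```python
# Hypothetical shapes: attach a human-readable reason to each recommendation
# so the interface can hint at what data the algorithm used.
def explain_recommendation(product, purchase_history):
    # The reason points to the user's own data that drove the suggestion.
    return {
        "product": product,
        "reason": f"Recommended because you recently bought {purchase_history[-1]}",
    }

print(explain_recommendation("USB-C hub", ["laptop stand", "USB-C cable"]))
```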

Set expectations so people will know what they can or can’t achieve with the AI product.

We must set the right expectations, especially in a world full of sensational, superficial news about new AI technologies. Some chatbots use messages to clarify their level of advancement. In this case, we try to lower expectations with friendly copy and a likable character for the bot.

Find and handle edge cases so no weird or unpleasant things happen to your users.

AI can generate content and take actions no one had thought of before. For such unpredictable cases, we have to spend more time testing the products and finding weird, funny, or even disturbing or unpleasant edge cases.

Optimizing for recall means the machine-learning product uses all the right answers it finds, even if it displays a few wrong answers. Let's say we build an AI that can identify Picasso paintings. If we optimize for recall, the algorithm will list all the Picasso paintings, but some van Goghs will appear in the results too. Optimizing for precision means the machine-learning algorithm uses only the clearly correct answers, but it will miss some borderline positive cases. It will show only Picasso paintings (with no van Goghs), but it will miss some Picassos. It won't find all the correct answers, only the clear cases. When we work on AI UX, we help developers decide what to optimize for. Providing meaningful insights about human reactions and human priorities can prove the most important job of a designer in an AI project.
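As a rough illustration of this trade-off, the sketch below computes precision and recall for the Picasso example; the painting sets and the two model outputs are invented for illustration.

```python
# Standard precision/recall definitions applied to invented painting sets.
def precision_recall(predicted, actual):
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(actual) if actual else 0.0
    return precision, recall

actual_picassos = {"Guernica", "The Old Guitarist", "Girl before a Mirror"}

# Recall-oriented model: flags everything that might be a Picasso,
# so a van Gogh slips in but no Picasso is missed.
recall_oriented = actual_picassos | {"The Starry Night"}

# Precision-oriented model: flags only the clear cases, missing a borderline Picasso.
precision_oriented = {"Guernica", "The Old Guitarist"}

print(precision_recall(recall_oriented, actual_picassos))     # (0.75, 1.0)
print(precision_recall(precision_oriented, actual_picassos))  # (1.0, 0.67)
```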

Help engineers with insights about people's expectations and the right training data.

Engineers will need training data, specifically well-defined outcomes for different inputs they can feed into the machine learning algorithm. Google reportedly hires “content specialists”, experts in the domain of the product who help build this training data set. After collecting an initial data set, the engineers can train the algorithm and we can start doing user tests with early prototypes. With these tests, we double check the first trained models to see how they perform with real users. In an AI project, you will need even closer collaboration between developers and designers.
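As a simple illustration, such training data often boils down to inputs paired with well-defined outcomes; the format, field names, and example texts below are hypothetical.

```python
# Hypothetical format: training data as inputs paired with well-defined outcomes.
training_examples = [
    {"input": "I loved this movie, would watch it again", "label": "positive"},
    {"input": "Two hours I will never get back", "label": "negative"},
]

# Domain experts (the "content specialists" mentioned above) review and extend
# examples like these before engineers train the first model; early prototypes
# built on that model then go into user tests.
```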

Test the AI UX with methods like Wizard of Oz testing. Use the test participant's own data when you need to emulate AI content.

Testing the UX of AI products can prove much more difficult than for regular apps. These apps mainly promise to provide personalized content, and you can hardly emulate that with dummy content in a wireframe. Two methods work well, though: Wizard of Oz testing and using personal content. During Wizard of Oz studies, someone emulates the product's responses from the background. This method is often used to test chatbots, with a human answering each message while pretending the bot is writing. You can also use the test participant's personal content in test situations: ask for their favorite musicians and songs and use them when testing a music recommendation engine. This works very well for testing people's assumptions and how they react to good and bad recommendations.

Provide the opportunity for users to give feedback and add new training data to the system.

The user experience of AI products gets better and better as we feed more data into the machine-learning algorithms. Look at movie recommendation engines: for each movie displayed, you can indicate whether you like it or not, which collects vast amounts of training data for the algorithm. Also provide your customers the opportunity to give feedback about the AI content. On every screen where the system has made a recommendation or prediction, give the consumer the chance to give feedback easily and right away. This usually means one-tap feedback options displayed next to the AI content. In some systems, a button next to each prediction lets users report bad ones.
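As a sketch of how such one-tap feedback could flow back into the system, the example below (all names are hypothetical) records each tap as a labeled event that the team can later fold into retraining.

```python
# Hypothetical names: record each one-tap feedback action as a labeled event
# that can later be batched into new training data.
from dataclasses import dataclass

@dataclass
class FeedbackEvent:
    item_id: str   # the recommended or predicted item
    user_id: str
    verdict: str   # e.g. "like", "dislike", or "report"

new_training_data = []

def on_feedback_tap(item_id, user_id, verdict):
    # Store the signal immediately; retraining can pick it up in a later batch.
    new_training_data.append(FeedbackEvent(item_id, user_id, verdict))

on_feedback_tap("movie_42", "user_7", "like")
```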

AI has opened a new frontier for human experience. This frontier requires new methods and techniques to create great experiences for our customers.