“Nowadays, the design of many digital services does not only rely on data manipulation and information design but also on systems that learn from their users.” – Fabien Girardin, Experience Design in the Machine Learning Era
Behavioral data – human interactions (transactions) with systems – is fed as context to algorithms that generate knowledge. An interface communicates that knowledge to enrich an experience. Ideally, that experience seeks explicit user actions or implicit sensor events to create a feedback loop that feeds the algorithm with new learning material. Fabien Girardin shares the following design principles for designing experiences in this new era of machine learning systems:
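As a minimal illustration, that loop could be sketched like this in Python (the `Model` class and its methods are hypothetical, not any particular product's API):

```python
class Model:
    """A toy model that simply counts item interactions per user."""
    def __init__(self):
        self.counts = {}  # (user, item) -> interaction count

    def learn(self, user, item):
        # Behavioral data (a transaction) becomes learning material.
        self.counts[(user, item)] = self.counts.get((user, item), 0) + 1

    def recommend(self, user, catalog):
        # Knowledge: rank catalog items by this user's past engagement.
        return sorted(catalog, key=lambda item: -self.counts.get((user, item), 0))

model = Model()
catalog = ["song_a", "song_b", "song_c"]

model.learn("alice", "song_b")                   # 1. behavior feeds the algorithm
suggestions = model.recommend("alice", catalog)  # 2. the interface communicates the knowledge
model.learn("alice", suggestions[0])             # 3. an explicit click closes the feedback loop
```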
Design for Discovery
We have seen that recommender systems help discover the known unknowns or even the unknown unknowns. For instance, Spotify helps people discover music through a personalized experience built on the match between an individual's listening behavior and the listening behavior of hundreds of thousands of other individuals. That type of experience has at least three major design challenges.
First, recommender systems tend to create a “filter bubble” that limits suggestions – like products, restaurants, news items, people to connect with – to a world that is strictly linked to a profile built on past behaviors. In response, data scientists must sometimes tweak their algorithms to be less accurate and add a dose of randomness to the suggestions.
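A minimal sketch of that tweak, assuming a plain ranked list and an illustrative `epsilon` knob that controls the dose of randomness:

```python
import random

def diversified_recommendations(ranked_items, catalog, k=10, epsilon=0.2):
    """Swap a fraction of the top-ranked items for random picks to soften
    the filter bubble; `epsilon` is an illustrative tuning knob."""
    n_random = int(k * epsilon)
    picks = ranked_items[: k - n_random]
    pool = [item for item in catalog if item not in picks]
    picks += random.sample(pool, min(n_random, len(pool)))
    random.shuffle(picks)  # avoid signaling which items are the random ones
    return picks

catalog = [f"item_{i}" for i in range(100)]
ranked = catalog[:20]  # pretend the recommender ranked these highest
print(diversified_recommendations(ranked, catalog))
```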
Second, it is also good design practice to let users update the profile that influences their recommendations. Fabien calls this "profile detox". Amazon, for example, allows users to remove items that might negatively influence the recommendations. Imagine a customer purchasing gifts for others: those gifts are not necessarily good material for future personalized recommendations.
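What such a "profile detox" could look like is sketched below; the field names are hypothetical:

```python
def detoxed_profile(purchase_history, excluded_ids):
    """Drop purchases the user flagged (e.g., gifts for others) so they
    no longer feed the recommendation profile."""
    return [p for p in purchase_history if p["item_id"] not in excluded_ids]

history = [
    {"item_id": "novel_42"},
    {"item_id": "toy_train"},  # a gift bought for a nephew
]
profile = detoxed_profile(history, excluded_ids={"toy_train"})
print(profile)  # only "novel_42" remains as recommendation material
```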
Finally, organizations that rely on subjective recommendations, like Spotify, now enlist humans to give more subjectivity and diversity to the suggested music. This approach of using humans to clean datasets or mitigate the limitations of machine learning algorithms is commonly called "Human Computation" or "Interactive Machine Learning".
Design for Decision Making
Data and algorithms provide means to personalize decision making. For example, by considering a customer's bank account balances and savings behaviors, we can personalize investment opportunities according to each customer's capacity to save money.
These decision-supporting algorithms need to keep learning to become more precise, simply because they often rely on datasets that give only a partial perspective on reality. In the case of financial advisory, a customer could hold accounts with other banks, preventing a clear view of their saving behaviors. A good design practice is to let users flag, implicitly or explicitly, information that is poor or incomplete. It is the data scientist's responsibility to express the types of feedback that enrich their models and the designer's job to find ways to make giving that feedback part of the experience.
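One way such feedback could be captured is sketched below; the record structure and field names are assumptions, not any bank's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AdviceFeedback:
    """One explicit user signal that the model's view of reality is incomplete."""
    customer_id: str
    advice_id: str
    kind: str        # e.g. "external_account_exists", "irrelevant"
    note: str
    created_at: datetime

feedback_log = []  # in practice, a store the data science team can mine

feedback_log.append(AdviceFeedback(
    customer_id="c-123",
    advice_id="a-9",
    kind="external_account_exists",
    note="Most of my savings sit at another bank.",
    created_at=datetime.now(),
))
```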
Design for Uncertainty
Traditionally, the design of computer programs follows a binary logic with an explicit, finite set of concrete and predictable states translated into a workflow. Machine learning algorithms change this with their inherently fuzzy logic. They are designed to look for patterns within a set of sample behaviors and to probabilistically approximate the rules behind those behaviors. This approach comes with a certain degree of imprecision and unpredictable behaviors. In return, these algorithms often report how confident they are in the information they give.
For example, the booking platform Kayak predicts the evolution of prices based on the analysis of historical price changes. Its "farecasting" algorithm is designed to return a confidence on whether it is a favorable moment to purchase a ticket. A data scientist is naturally inclined to measure how accurately the algorithm predicts a value: "We predict this fare will be x". That 'prediction' is in fact information based on historical trends. Yet predicting is not the same as informing, and a designer must consider how well such a prediction can support a user action.
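A hypothetical sketch of how such a confidence score could gate the advice shown to the user (Kayak's actual farecasting logic is not public, so the names and thresholds below are assumptions):

```python
def fare_advice(predicted_fare, current_fare, confidence, threshold=0.7):
    """Turn a prediction into information a user can act on, and stay
    quiet when the model is not confident enough."""
    if confidence < threshold:
        return "Not enough signal to advise either way."
    if predicted_fare > current_fare:
        return f"Prices likely to rise (confidence {confidence:.0%}): consider buying now."
    return f"Prices likely to drop (confidence {confidence:.0%}): consider waiting."

print(fare_advice(predicted_fare=420.0, current_fare=380.0, confidence=0.83))
```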
The ideal for an algorithm is to deliver both high precision and high recall scores. Unfortunately, precision and recall often work against each other, so design decisions frequently have to weigh the trade-off between the two. For instance, in Spotify's Discover Weekly, a design decision had to be made about the size of the playlist based on the performance of the recommender system. A playlist of 30 songs highlights Spotify's confidence in delivering a rather large inventory, a wide-enough set to increase the opportunities for users to stumble on perfect recommendations.
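A toy calculation makes the trade-off concrete; the songs and the "loved" set below are made up:

```python
def precision_recall_at_k(recommended, relevant, k):
    """Precision@k and recall@k for one user."""
    top_k = set(recommended[:k])
    hits = len(top_k & set(relevant))
    return hits / k, hits / len(relevant)

recommended = ["s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8"]
relevant = {"s1", "s2", "s7"}  # songs this user actually loves

for k in (3, 5, 8):
    p, r = precision_recall_at_k(recommended, relevant, k)
    print(f"playlist of {k}: precision={p:.2f} recall={r:.2f}")
# Growing the playlist raises recall (more loved songs get included)
# while precision falls: the trade-off behind the playlist-size decision.
```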
Design for Engagement
Today, what we read online is based on our own behaviors and the behaviors of other users. Algorithms typically score the relevance of social and news content. The aim of these algorithms is to promote content for higher engagement or to send notifications that create habits. Obviously, these actions taken on our behalf are not necessarily in our own interest.
We are in the "attention economy", and major online services are fighting to hook people and grab their attention for as long as possible. Their business is to keep users active as long and as frequently as possible on their platforms. This leads to the development of sticky, needy experiences that often play with emotions like the Fear of Missing Out (FoMO) or other obsessions to dope user engagement.
The actors of the attention economy also use techniques that promote addiction, such as Variable Schedule Rewards. These are the exact same mechanisms as the ones used in slot machines. The resulting experience promotes the service's interest (the casino), hooking people into endlessly searching for the next reward. Our mobile phones have become slot machines of notifications, alerts, messages, retweets, and likes that some of us check on average 150 times per day, if not more. Today, designers can use data and algorithms to exploit people's cognitive vulnerabilities in their everyday lives. That new power raises the need for new design principles in the age of machine learning.
Design for Time Well Spent
There are opportunities to design radically different experiences than those optimized for engagement. Indeed, an organization like a bank has the advantage of being a business that runs on data and does not need customers to spend the maximum amount of time with its services. Tristan Harris' Time Well Spent movement is particularly inspiring in that sense. He promotes experiences that use data to be super-relevant or to stay silent: technology that protects user focus and respects people's time. Twitter's "While you were away…" feature is a compelling example of that practice. Other services are good at suggesting moments to engage with them. Instead of measuring user retention, that type of experience focuses on how relevant the interactions are.
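A minimal sketch of the "super-relevant or silent" rule, with an illustrative relevance threshold:

```python
def maybe_notify(user, event, relevance_score, threshold=0.9):
    """Only interrupt when the score for this user and event crosses a
    high bar; otherwise stay silent and protect the user's focus."""
    if relevance_score >= threshold:
        return f"Notify {user}: {event}"
    return None  # silence is the default, not the exception

print(maybe_notify("alice", "Your favorite artist released a new album", 0.95))
print(maybe_notify("alice", "A topic is trending nearby", 0.4))  # -> None
```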
Design for Peace of Mind
Data scientists are good at detecting normal behaviors and abnormal situations. Designers can work to promote peace of mind with mechanisms that give customers a general awareness when things are fine and trigger more detailed information in abnormal situations, as the sketch after this paragraph illustrates. The current generation of machine learning brings new powers to society, but it also increases the responsibility of its creators. Algorithmic bias exists and may be inherent to the data sources. In consequence, there is a need to make algorithms more legible for people and auditable by regulators so that their implications can be understood. Practically, this means the knowledge an algorithm produces should safeguard the interests of its users, and the results of the evaluation and the criteria used should be explained.
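A minimal sketch of that quiet-when-normal pattern, using a simple z-score over daily spending (a real system would rely on richer models):

```python
import statistics

def account_status(daily_spend, today, z_threshold=3.0):
    """Stay quiet when activity looks normal; surface details when it doesn't."""
    mean = statistics.mean(daily_spend)
    stdev = statistics.stdev(daily_spend)
    z = (today - mean) / stdev if stdev else 0.0
    if abs(z) < z_threshold:
        return "All is fine."  # general awareness, no details needed
    return (f"Unusual activity: {today:.2f} spent today vs a typical "
            f"{mean:.2f} (z-score {z:.1f}). Tap for transaction details.")

history = [42.0, 55.5, 38.0, 61.2, 47.3, 52.8, 44.1]
print(account_status(history, today=49.0))   # -> "All is fine."
print(account_status(history, today=480.0))  # -> detailed alert
```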
In addition, there is Design for Fairness, Design for Conversation, Design for Automation, and many more to consider in this brave new world of experience design and machine learning.