“Computing Power is Actually a Support System”

A UX designer finds the joy in digital ethics
If process reflects purpose then the process should be fun. In partnership with ustwo, House of Beautiful Business explores Play Thinking — a product design ethos that values emotion and experience just as highly as function and outcomes. For this series we spoke with experts in fields like psychotherapy, UX design, magic, even big pharma to explore how pursuing emotional responses as key outputs of the design process — and pursuing playfulness more generally — can improve our work lives, our digital experiences, indeed most any type of task.
In our first piece, Sophie Kleber, experience designer and thought leader in the field of emotional AI, discusses why we need to make the digital world an enjoyable one, and how designers have a responsibility to create a world we all want to live in.

In 2002, the entire U.S. stock market was worth about $10 trillion. By 2021, the combined stock market valuation of the world’s biggest tech companies — Apple, Alphabet, Nvidia, Tesla, Microsoft, Amazon, and Facebook — had reached that same amount, approximately $10 trillion. The shift to internet-based work and life has been staggering, with people now spending, on average, about one-third of their lives online.

As a result of all this time spent virtually, however, many people have reported feeling digital fatigue. Constant meetings, external self-awareness (being aware that others are watching you), screen lights and glare — experiencing these frequently can lead to boredom, disengagement, and burnout. To Sophie Kleber, this phenomenon demonstrates how little companies have prioritized user experience. “The experience should be joyful,” she says, speaking via video from Berlin.

“When we think about technology, and we think about the development of technology, you have experiences that are utilitarian and functional, and you can get in and out and do what you need to do,” she says. “And you’re gonna leave with, in the best-case scenario, no feeling. In the worst-case scenario, you left with a sad feeling, like you just went to the Department of Motor Vehicles or something.”

In Kleber’s mind, there is no trade-off between the easy usability of a website, app, or product, and the pleasant emotions people should experience during and after using them. Companies, however, have not traditionally prioritized the sensations that users actually feel while using their products and services, focusing instead on swiftness, efficiency, and outcomes. “User experience is the piece that brings that emotional component in,” she says.

The idea of the “minimum viable product” may be part of why we’re in this predicament. In MVP theory, companies should prioritize getting the most basic version of their product out into the market, as that enables customers to try the product, make sure it actually works, and give feedback to the company, which then incorporates customers’ ideas into better versions of the product. But whatever advantages an MVP offers, ensuring that users find joy in the product is not foremost among them. Kleber instead advocates a “minimum lovable product,” so that “you add some delight and some joy to the user experience.” She believes that approach actually better serves companies in the long run, because with a traditional MVP, it can be difficult for a product to differentiate itself from others on the market.

“When you offer a minimum lovable product, you have a better chance of standing out,” she says. Kleber is an advocate for “customer lifetime value,” arguing that positive experiences — not just positive outcomes — are more enticing to customers long-term. “When companies think about it like that, it’s a lot easier to say, ‘no, I’m not going to milk you for 12 hours of gaming. I want you to play an hour today, maybe you can play an hour tomorrow. Maybe on Thursday, you don’t and maybe on Friday, you actually have a moment where something irked you and you play two hours because it’s your escapism.’ There’s a lot to be said for balance,” she says.

For a long time, Kleber felt that designers and engineers should try to leave users with the sense that the way a product or service functioned was so smooth and flawless that it almost felt innate. That they should feel like, “of course this works the way it does.” Now she’s an advocate of integrating an alternative approach, where users are amazed by an experience and marvel at it. She encourages designers and engineers to consider this question: “Can you make the experience as quick and as easy and as fast as possible, but make it joyful?”

It’s a delicate balance, but Kleber believes it’s the approach that leaves the user with the most memorable experience. “Relief in the best case is an exhilaration, a feeling of, ‘that was so easy — and I loved it.’”

These sorts of considerations will be increasingly important as Artificial Intelligence (AI) advances in quality and becomes more ubiquitous. AI will be able to detect emotion and also mimic emotional responses back to humans. “Then you are in a space where you have to define the relationship between that humanoid AI, with emotions, and the person,” she says. “In that space, there’s a lot of room for interpretation of what type of relationship we want to have.”

With Alexa or Google Home, the relationship between robot and human is largely one of a servant or servile companion. Those devices are helpful and can answer questions. But as the technology advances, the computer can actually support a human’s well-being. “Instead of having to learn the computer’s language, the computer now learns our language,” says Kleber. In those cases, the particularities of the robot’s demeanor — its speech patterns, humor, and vocabulary, among other things — have an enormous impact on the user’s experience. A pleasant affect and good sense of humor will encourage users to interact with the machines and make those interactions enjoyable.

At the same time, the ability of humans to construct machines that simulate humans raises a host of ethical questions. In studies that Kleber conducted, people say that they feel personal relationships with machines such as Alexa. “Children develop much stronger relationships with these machines and talk to them much more, like it’s a friend or something like that,” she says. With both adults and children, there is a danger of machines substituting for human interactions. That’s especially true because machines can be created which avoid the pitfalls of humans — they aren’t disagreeable, and they can be less prone to misunderstanding.

“We have a loneliness crisis, and that loneliness is directly correlated with the rise of personal computing and smartphones,” she says. The influence runs in both directions: there is no harm in humans barking orders at machines, but if doing so becomes habitual, people may come to expect the same level of deference from other humans and change their behavior toward them. Kids may not understand that they should not speak to people the way they speak to machines. “Many kids originally thought that Alexa was someone who lived in their basement,” she says. “Ideally, we actually get past that point into a space where they’re clearly distinct.”

Kleber points to some of the more remarkable positive effects of robots’ increasingly human-like simulations. Some military veterans who suffer from post-traumatic stress disorder, for instance, have found they feel more comfortable engaging in therapy with a simulated therapist. “The creators of it are adamant that this is not a replacement for therapy, but it is a starting point,” Kleber hastens to add. For people who have vulnerabilities or difficulties with social interactions, spending time with machines can be beneficial, not as substitutes for human interaction but as waystations before doing the real thing.

“I have this idea that computing power is actually a support system that supports us throughout our day so that we can go do what we want to do,” she says. Some people may want more support, or support in different areas than others do. “With anything that we do technology-wise, it’s very important that we continue to design for complete freedom of choice and for equity in life experience.”

Designers have long done their work with human emotions in mind, with the hope that users and consumers will enjoy the experience of their product or service. The choices involved grow more complex as technology does, however. “Designers design a lot for likeability — when we use certain colors, there is a whole emotional spectrum involved,” Kleber says. For example, the combination of blue and gold tends to instill a feeling of trust. However, designing products that have two-way interactions with users is entirely different. Now advertisers are performing market research to determine how best to elicit a given emotional response from a user or viewer. “These sorts of things are ethically questionable — is that actually a manipulation that we’re comfortable with having at this point?” she asks rhetorically. For not just designers but many people at tech companies, these sorts of ethical quandaries are more salient than ever. “We are the guinea pigs in our own experiments. So we have a responsibility to design a world that we want to live in.”

To learn more about Play Thinking, download the Play Thinking Playbook at ustwo.com.

