Ethical UX Design Fosters Inclusivity

Cennydd Bowles, a former consultant for Twitter, Samsung and the BBC, thinks it’s time for tech to hear people’s concerns.

Written by Jeff Link
Published on Jul. 06, 2020

“Future historians will be asked which quarter of 2020 they specialize in,” tweeted fiction writer David Burr Gerrard, author of The Epiphany Machine, back in June.

It caught the attention of Cennydd Bowles, a London-based designer and futurist, who saw it as a succinct encapsulation of a massive shift underway. The world is moving ever faster, it seems, but responsible design practices have yet to catch up.

For Bowles, who spent nearly two decades advising clients such as Twitter, Ford, Samsung, Accenture and the BBC before turning his attention to the ethics of emerging technology, the race to innovate can be a double-edged sword. When technologists and designers do not take the time to weigh the downstream effects of their work, the consequences are often felt most severely by marginalized groups.

“Networked technology, worshipping the twin gods of simplicity and magic, has indeed done simply magical things, but simultaneously eroded people’s understanding of their possessions and eroded their privacy alike.”

“Networked technology, worshipping the twin gods of simplicity and magic, has indeed done simply magical things,” he writes in the blog post “After Dread,” “but simultaneously eroded people’s understanding of their possessions and eroded their privacy alike. The tech giants, once seen as useful innovators, have become vast repositories of power and wealth.”

Executives and product leaders have ignored a gaping blind spot in their data sets, he said in a video call: it is not only the apotheosized user personas with whom they need to empathize, but also those who, by virtue of their lack of representation or spending power, are largely ignored in the formulation of content guidelines and product strategy.

Bowles has lectured on ethical practice at Google, Stanford University and Facebook, and is the author of several books, including Undercover User Experience Design — a guide for introducing good UX practices under organizational and budget constraints — and, more recently, Future Ethics. The latter applies an ethical framework to consider algorithmic bias, systems of surveillance, and manipulative UX practices.

We spoke with him about what technologists and designers can learn from ethicists in a time of economic and cultural reckoning.


You recently said “if 2019 was the year of the algorithm, 2020 is the year of the logarithm.” What do you mean?

There was this meme that did the rounds in March. It’s a graph. On the x-axis is time. On the y-axis is time spent looking at exponential graphs. The plot goes up exponentially. We’ve spent a lot of this year just looking at exponentiality.

Technologists are, at least in theory, more comfortable with the idea of exponential growth. We firmly believe in Moore’s law. We recognize how things accelerate very quickly. But things have been accelerating in a very public way, very quickly, both with COVID-19 and, now, also, Black Lives Matter. So the pace of change that we are somewhat comfortable with in tech has come to the rest of the world in a big way.

 

What are the implications of this exponential rate of change in the public realm?

In unprecedented times, you don’t have a reliable trajectory for where things are going. Everything has been pretty much predicated upon, OK, there’s going to be jobs, there’s going to be travel, there’s going to be capitalism. And now all these things are suddenly at risk. And so there’s this rush to say, “now is finally the time for fully automated luxury communism, or digital libertarianism,” or whatever kind of shape you think the future should take. So the implications are this scramble, essentially, to create a narrative for the future.

 

The U.S. is engaged in a massive re-examination of race. What do you think is important for designers to be thinking about to support a more equitable future?

The people who should be consulted about that are designers and technologists from underrepresented groups. You’ve got to talk to them, and the community needs to start listening to them and not necessarily to a mid-career white guy like me.

“User-centered design has given us this idea that there’s this one archetype we’ll use.... [We] tend to design for that ‘idealized’ user.”

With that caveat, I think user-centered design has given us this idea that there’s this one archetype we’ll use. And usually that’s quite similar to the disproportionately white male representation of tech companies. They aren’t representative, in any way, of the world, yet we still, subconsciously or otherwise, tend to design for that “idealized” user.

So one thing we have to do is reframe how we think about users. They aren’t all going to be male; they aren’t all going to be highly technologically literate; they aren’t necessarily going to be cisgender. They’re not all going to live on the West Coast of the United States, and so on. So that’s certainly a large part of it.


 


What else should designers consider?

For me, the very idea of designing for a user has a big blind spot, which is that our work affects people, even if they don’t use our products.

Facebook would be a good example of that. Even if you’re not a Facebook user, they may still have a shadow profile of you, because you might be tagged in other people’s photographs. Or, through cookies, they might be tracking your traversal on the web. They’re building a picture of engineered interests behind some sort of arbitrary ID — even if you haven’t signed up with a name or a photo.
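
To make that mechanism concrete, here is a minimal, hypothetical Python sketch — not Facebook’s actual code, and all the names are invented — of how a third-party tracking pixel can accumulate a browsing profile behind an arbitrary ID for someone who never created an account:

import uuid
from collections import defaultdict

# Pages seen, keyed by an arbitrary tracker ID rather than a real identity.
visits_by_id = defaultdict(list)

def handle_pixel_request(cookies: dict, page_url: str) -> dict:
    """Simulate a tracking pixel embedded on a publisher's page."""
    tracker_id = cookies.get("tracker_id") or str(uuid.uuid4())
    visits_by_id[tracker_id].append(page_url)   # the profile grows silently
    return {"tracker_id": tracker_id}           # cookie returned to the browser

# One browser, no sign-up, three unrelated sites later:
cookies = {}
for url in ["news.example/story", "shop.example/shoes", "blog.example/post"]:
    cookies = handle_pixel_request(cookies, url)
print(visits_by_id)  # a profile now exists for a person who was never a "user"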

“Our work affects people, even if they don’t use our products.”

As designers, we’ve just overlooked those externalities. Even if someone isn’t a user of our products, if they are affected by our products, we still have to try to anticipate the potential harms to them. We have a moral obligation, if not eventually a legal one, to try to mitigate them in the same way tobacco firms are held liable for passive smoking.

 

Are there any companies leading the way in recognizing the interests of non-users, or users who have accessibility limitations?

I get asked that question a fair bit, and, over the past few years, I was hoping I’d have a better answer, but it still hasn’t really happened.

There are some companies that do specific tranches of that well: accessibility, for example. I would say Apple does a really good job. Their products are very much built with universal design principles in mind when it comes to recognizing a diversity of use cases and a diversity of users. I think Microsoft has historically done a good job. They’ve had some quite publicly visible design principles about how to design for people with, say, different motor capabilities or different skills, and pointed out these may be conditions that are temporary.

The world is not designed for people who don’t have what we consider normal capabilities. Microsoft has done some good work highlighting that, talking about, for instance, how a mother nursing a child may only have one arm free to use technology because she’s carrying a baby.

And then, probably, in the broader scale of things, the other company I think is doing a good job is Salesforce. They’re actually investing properly in ethical design and ethical innovation. They’ve hired a chief ethical and humane use officer, Paula Goldman, who heads a team of some good talent in this space. And they seem to have been given backing at a very senior level, as well, to fundamentally ask big, complicated questions about who should use their product.


 


In an article in Wired, you note that some newer apps use unfamiliar icons or subtle text overlays and menu options that defy best practice guidelines. What do you make of these newer interfaces?

It’s healthy, to an extent. That means the field is progressing a little bit. There is often tension, of course: Some of the design principles we’ve come to rely upon for the last few decades in interface design or UX design are there for reasons. And some of those reasons have to do with how well they are understood by a wider set of humanity. Any time you challenge, or you subvert, those conventions, there is a risk that, in doing so, you lose some of that universality.

TikTok is a big enough company that they already know these are going to be very hot issues for them. They have, obviously, a kind of interesting reputation in the West because this is the first social media platform that’s hit the global scale to come from China, rather than from California. As a result, they’re labeled with all sorts of sweeping generalizations: “Well, China doesn’t care about privacy, or this or that,” which is pretty stupid.

At the scale they’re at, they’re having to handle these issues very quickly. They’ve just formed a content advisory council to help them figure out what should and shouldn’t be allowed on the platform, and a tech ethicist, David Ryan Polgar, has been appointed to the panel.

So you can innovate, absolutely you should innovate, but eventually you still hit these issues. You can’t just innovate and say, “Well, we’ve got a new interface, our job is done here.” You then have to wrestle with the social implications of that interface.

 

How can designers continue to innovate while being mindful of users with non-normative needs and expectations?

I’m reminded of a talk that I saw this January by Liz Jackson, who’s a designer and disability advocate. And she kept repeating this phrase, “nothing about us, without us” — essentially, that designers particularly tend to objectify disability as a prop, or they problematize disability as something that we can solve with design interventions. She has a friend, an academic who uses a mobility device. And every year, her design students will say, “Well, for my project, I would like to design a way for your friend to walk without a cane.” It’s designing at someone rather than designing with them.

“For me, the most important thing is [to] genuinely try to include [people] in the process. This is where co-design, for example, is appealing.”

So, for me, the most important thing is not to design at people’s needs, but genuinely try to include them in the process. This is where co-design, for example, is appealing. We should be bringing in representatives of underrepresented communities and listening to them. Bring them in, hire them, whether as consultants or on staff, so that you can make decisions together.

 

Let’s turn to your book, Future Ethics. What do you hope to communicate to your audience?

I guess three points, if I had to summarize:

We’ve got to stop thinking we’re the first group of people on these shores. The tech community has this infuriating tendency to think we’re the smartest people in the room. All we need to do is let some genius software engineers or product managers or designers attack a problem and they will solve it with technology.

But, of course, there have been ethicists around for millennia. And so we should listen to them. We should read ethics, understand its theory and history, and then put it into practice in our everyday work. And that means listening to academics, primarily, which we’ve not been very good at. The title ethicist should go to people with Ph.D.s in philosophy. But when you’ve read enough, you can translate it and say, “OK, for designers, for technologists, here’s what it means.”

The second thesis, which I’ve touched on, is that we have to broaden our view of the actors within our system — not just users, but the people who are affected by our work. And you can broaden that to say, the systems that are affected by our work.

And then the third thesis, I suppose, is that we need to stretch our time horizons. Technologists have been very much concerned with, “OK, when I click this button, what happens next?” “What’s our roadmap in the next six months?” “How are we going to hit this target for the next earnings call in six weeks?” But we need a much longer-term view of the consequences of our work, because they may not become apparent for 10 years.


 


What missteps would you like to see reversed, or at least acknowledged?

Possibly the biggest mistakes have centered on not setting proper behavioral expectations on social media. Flickr spent a lot of time and effort trying to curate a good community: they set up good guidelines that said, “Here’s what you can do. Here’s what we won’t let you do. Be this person. Don’t be this person.” But social media, specifically Twitter and Facebook, have been very slow to set up any kind of robust guidelines.

Twitter didn’t even have a block function for about a year. That’s despite what I read about early female adopters, saying: “Look, I’m getting harassed on this platform. Why can’t I do something about it?”

The defense was, “free speech,” and “who are we to arbitrate?” And the company was made up largely of men who probably didn’t understand what harassment felt like.

“There’s a decolonizing design movement, which, as the name suggests, asks, ‘What would design practice look like, if it wasn’t based on all these principles of white normativity, if not supremacy?’”

I don’t think anyone anticipated the massive role these technologies would play in social and political discourse. When [Mark] Zuckerberg started Facebook in 2004, I’m sure he didn’t anticipate that, in 15 years, we’d be talking about how state actors are using it to hijack American democracy. We can’t expect that he should have. Still, as these companies grew, they didn’t take these issues seriously enough.

 

How can tech companies do better at setting community behavioral expectations, and ensuring these expectations are equitable?

In particular, there’s a decolonizing design movement, which, as the name suggests, asks, “What would design practice look like, if it wasn’t based on all these principles of white normativity, if not supremacy?” Could we design in a more globally sensitive and intelligent way? And so I think we should be listening to those people, because they’ve got very interesting ideas about how we should be changing our design practices.

 

What do you see as a first step designers can take to help people shape their own futures?

I’m going to take a very specific angle in my response to this question, which is around artificial intelligence. We should simply not be using inexplicable AI for anything that has meaningful impacts on people’s lives. We need to be pushing to say, “If we have an algorithmically generated decision we can’t explain to the user, then let’s not use it to make important determinations.” Will this person get a job? Will this person get a loan? Are they approved for housing?

If a computer is making a decision on people’s behalf, and they have no insight into that process, they have no way of contesting the decision. And I think we need to give people that potential by promising we will only use systems we can explain.
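
Read as code, that principle might look like the following minimal, hypothetical Python sketch — the names, rules and thresholds are invented for illustration — in which an automated decision is only accepted when it comes with an explanation the affected person could contest; otherwise the case is routed to a human reviewer:

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    approved: bool
    explanation: Optional[str]  # None means the system cannot explain itself

def decide(applicant: dict, model: Callable[[dict], Decision]):
    decision = model(applicant)
    if decision.explanation is None:
        # An inexplicable output doesn't get to decide a loan, a job or housing:
        # escalate to a person instead of acting on it.
        return "needs_human_review"
    return decision

# Example: an interpretable rule that can always explain itself.
def simple_model(applicant: dict) -> Decision:
    required = 3 * applicant["monthly_payment"]
    return Decision(applicant["income"] >= required,
                    f"Income {applicant['income']} vs. required {required}.")

print(decide({"income": 4000, "monthly_payment": 1200}, simple_model))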

