How can design protect us from AI? | by Jasper Keynes | October 2022

The inner workings of artificial intelligence are so complex that we often know very little about how its algorithms reach their decisions; many AI systems effectively act as 'black boxes'. It is essential for designers to recognize these limitations, and we must always consider contestability in AI-led systems.

In 2019, leading IT figures received Apple Card credit limits many times higher than their partners'. In a series of Twitter posts, David Heinemeier Hansson railed against Apple, claiming that the program was sexist.

Hansson, the creator of Ruby on Rails, filed the same financial statements as his wife, yet the algorithm decided he deserved a credit limit 20 times higher than hers. No appeal worked.

The tweets sparked a series of replies, including one from Apple co-founder Steve Wozniak. Wozniak explained that the same thing had happened to him and his wife, adding that "it's hard to get to a human for a correction though".

The algorithm behind the Apple Card, issued by Goldman Sachs, became the subject of an official investigation led by the New York Department of Financial Services.

Contesting AI decisions

What sets this story apart is the inability of users to contest decisions made by artificial intelligence. The systems, designed by Apple and Goldman Sachs, offered no way for users to object to the outcome.

Contestable artificial intelligence is perhaps the most neglected aspect of AI-led user experiences. The user's inability to have a say in decisions taken by AI can have far-reaching implications. This quote from David Collingridge probably sums it up best:

“When change is easy, the need for it cannot be foreseen; when the need for change is obvious, change has become costly, difficult, and time-consuming.”

– Collingridge, D. (1980). The Social Control of Technology. Frances Pinter

These problems are inherent in artificial intelligence: machine learning algorithms learn and change over time, but they also tend to pick up bias from their data sets. When these biases are reflected in decisions made by AI, they can have far-reaching consequences such as discrimination and exclusion.

Research into contestable AI systems

The 'Contestable AI' PhD research group at TU Delft focuses exclusively on contestable AI by design. Their goal is:

“To ensure that artificial intelligence (AI) systems respect human rights to autonomy and dignity, they must allow human intervention during development and post-deployment.”

We can protect users from harmful decisions by ensuring that AI-led systems are contestable by design. Such systems remain open to human intervention throughout the system life cycle.

Interpretability is an evolving area of ML research, with researchers actively looking for ways to make models less of a black box. But finding these solutions is not easy. In the meantime, design can lend a hand.

Problems like fairness, transparency, and accountability cannot be solved by technological innovation alone. Rather, they have to be designed for by creating human intervention points. This is where design comes in.

A protester covers a camera outside a government office in Hong Kong after protests over Chinese AI-led surveillance in 2019. Chris McGrath / Getty Images

Contestable AI systems are a strength

If we design the system in such a way that AI decisions are not final, we allow for compromise and improve the contestability of the system. When designing an AI-led system, think about ways to negotiate agreement between the user and the AI, finding consensus between both.

Implementing contestable AI is essentially an extension of UX research. Addressing the constant friction between user and system creates an influx of user-generated data that can continuously improve the system. Compromise is not a weakness but a strength.

Leveraging that user-generated data and designing together with users is essential to creating a system that benefits the user at all times, even within the confines of an AI black box. This continuous loop of generating data and designing better systems leads to a better product in the long run.

How to design a contestable system

Designing contestation points into a system is not difficult, it is simply necessary. Implementing moments where the user can object to decisions made by the system can be as simple as having a human check the output.
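As a minimal sketch of what such a contestation point could look like in code, the `Decision` and `ReviewQueue` classes below are invented names, not taken from any real framework; the idea is simply that an objected-to decision is marked as disputed and routed to a queue for human review:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    subject: str
    outcome: str
    contested: bool = False

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def contest(self, decision: Decision, reason: str) -> None:
        # Mark the automated decision as disputed and hand it to a human reviewer.
        decision.contested = True
        self.pending.append((decision, reason))

queue = ReviewQueue()
decision = Decision(subject="credit limit", outcome="denied")
queue.contest(decision, reason="same joint finances as an approved applicant")
```

The key design choice is that the model's verdict never becomes final on its own: every output carries a path back to a person.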

First, it is important to make the results interpretable. It should be clear how the algorithm produced its output. This may not always be obvious, especially given the 'black box' nature of some algorithms. It is also important to clarify how the data is being used and what the user can expect.
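For a simple linear scoring model, this kind of interpretability can be made concrete: each feature's contribution to the score is just its weight times its value, so the interface can show which inputs drove the result. The feature names and weights below are invented purely for illustration:

```python
# Hypothetical weights of a linear scoring model (illustrative only).
WEIGHTS = {"income": 0.6, "debt": -0.8, "account_age": 0.3}

def explain(features: dict) -> list:
    """Return (feature, contribution) pairs, largest absolute impact first."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return sorted(contributions.items(), key=lambda item: abs(item[1]), reverse=True)

result = explain({"income": 1.0, "debt": 2.0, "account_age": 0.5})
# Here "debt" dominates the outcome, so the UI should surface it first.
```

Real-world models are rarely this transparent, but even approximate per-feature explanations give the user something concrete to contest.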

It is also important to point out incorrect or less reliable answers by changing your visual design or layout. Don't be afraid to let the user know the system doesn't have the answer.
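A minimal sketch of that idea: below a freely chosen confidence threshold, the answer is rendered with an explicit warning instead of being presented as fact. The threshold value here is an arbitrary assumption:

```python
CONFIDENCE_THRESHOLD = 0.75  # arbitrary cut-off, tune per product

def present(label: str, confidence: float) -> str:
    """Render a model output, visually flagging unreliable answers."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return label
    return f"{label} (low confidence, please verify)"
```

In a real interface the flag would be a visual treatment rather than a text suffix, but the logic is the same: uncertainty must reach the user.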

AI may be suitable for certain situations and act as an extension of the user's capabilities. If the user works with AI, let them stay in control: users should always be able to intervene in, or ignore, the output of any AI-powered system.
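In code, keeping the user in control can be as simple as treating the model output as a suggestion that an explicit user choice always overrides. This is a sketch; `final_value` is an invented helper, not part of any real API:

```python
from typing import Optional

def final_value(ai_suggestion: str, user_override: Optional[str] = None) -> str:
    """The user's explicit choice always wins over the model's suggestion."""
    return user_override if user_override is not None else ai_suggestion
```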

I have written before about how to design AI-led systems, and those same general practices apply when designing for contestability. Amsterdam's scan cars offer a good example.

A fleet of scan cars is being used for extensive surveillance in Amsterdam. The scan cars are part of a municipal program to monitor overuse of parking spaces across the city.

Scan car in action in Amsterdam, from the AMS website

The scan cars use a roof-mounted camera and object recognition to identify parked vehicles and issue parking fines. With specialized scanning equipment and an AI-based identification service, they automate license plate recognition and background checks.

The service currently covers over 150,000 street parking spaces in the city of Amsterdam. Because the surveillance is fully automated, it has raised alarm among citizens.

The loss of autonomy for citizens and the problems of automating these processes have led to new developments around the service. The municipality therefore set out to build a more contestable AI system. Together with experts in the field, it opened up ways for users to contest the AI-led service. As their website explains:

“Together with UNSense, we invited representatives from the City of Amsterdam and researchers from Rotterdam, TADA and TU Delft to join us in a 3-day sprint to design the ‘scan car of the future’, one that also takes human values and unbiased technology into account.

During these sessions, several design strategies were explored. Among other things, participants examined whether the sensing of the car could be reduced, whether the function of the car could be made more transparent, and what features could be added to bring potential benefits to the individual citizen of Amsterdam.”

Asking questions like “What if you could talk back?” opens up huge user-centered design opportunities. The fearsome surveillance monster has since been transformed into a more contestable counterpart, empowering citizens in the process.

Human-scale scan car, from the Contestable AI website

I think the design of the scan car is a perfect example of how contestable AI systems are a force to be reckoned with. The city now has data on the use of parking spaces and can redesign the city accordingly. The system itself has also improved: it is no longer intimidating to use, and as long as these contestable qualities remain in place it cannot easily be abused. Both users and product owners now benefit from the system, which provides autonomy and independence to all.
