Is Your AI Use Case Ethical?

AI solutions offer plenty of tempting advantages, but because they rely on human data, they can also replicate human bias or other errors faster, more efficiently, at scale, and without the ability to question their input.
7/24/2023

Everyone’s favorite flying saucer-shaped autonomous vacuum, the Roomba, has a decision engine built on open-source code, making it possible to ‘hack’ that engine with homemade rules. One programmer thought the Roomba might be more efficient if it spent less time recalibrating after a bump and decided to test the theory: every time the Roomba bumped into an object or turned to avoid a collision, it lost points; the longer it went without turning or colliding, the higher its score.

The result? The Roomba started driving backwards.

Thinking like a Roomba

A Roomba’s sensors are in the front, so by driving in reverse the Roomba dutifully followed its instructions: it still hit objects, but rear-first, where it has no sensors to register the collision.
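
To make the loophole concrete, here is a minimal sketch of a scoring rule like the one described above. The point values and function are illustrative assumptions, not the programmer’s actual code:

```python
# Hypothetical sketch of the scoring rule described above, not the actual hack:
# penalize bumps and turns, reward time spent driving uninterrupted.

def score_run(events, run_seconds):
    """Score a cleaning run: deduct points for bumps and turns, reward longevity."""
    score = run_seconds  # a longer uninterrupted run earns a higher score
    for event in events:
        if event in ("bump", "turn"):
            score -= 10  # penalty value chosen arbitrarily for illustration
    return score

# Driving backwards means the forward-facing sensors never report a bump,
# so the penalties vanish and the score climbs -- exactly the loophole found.
print(score_run(["bump", "turn", "bump"], run_seconds=120))  # 90
print(score_run([], run_seconds=120))                        # 120
```

The rule is doing exactly what it was told to do; the problem is that “no recorded bumps” and “no collisions” are not the same thing.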

There are plenty of funny examples of how AI can misunderstand or misinterpret seemingly clear instructions. But as AI grows more complex and more common in everyday interactions, it’s becoming increasingly important to consider another way AI can go wrong: learning more than we bargained for by picking up and replicating trends and correlations that we ourselves might not even notice.

AI solutions, and most recently Generative AI like Bard or ChatGPT, offer plenty of tempting advantages, including processing or replicating information faster, more efficiently, and at scale. The flip side is that, because they rely on human data, they can also replicate human bias or other errors in the same way: faster, more efficiently, at scale, and without the ability to question their input.

Generative AI has the potential to improve the hospitality industry at every customer touchpoint: by streamlining check-in processes, for example, or providing guests with personalized recommendations tailored to their interests — and these capabilities should not be ignored. So how can you explore the efficiencies and conveniences offered by generative AI solutions, while ensuring that your solution doesn’t lead to larger consequences down the road?

Here are some considerations.

Data Lineage – You’re given a data set, but do you know where it came from or how it was created? Did it capture someone’s biases when it was being made? Biases don’t have to be malicious or obvious to have an effect. As an example: did someone collecting names at a conference misspell the names they were less familiar with, or did everyone type their own name?

Input Bias – Does the way you take in information reflect your expectations and reject input that falls outside them? Do you need to reconsider what those expectations are? Let’s say the conference from the last example had a minimum input of three characters and the name is Li. How do you even fill out the form? When your data is excluded from the start, how likely are you to trust the outcome?
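
Here is a hypothetical sketch of that kind of intake rule. Only the three-character minimum comes from the example above; the rest is an assumption for illustration:

```python
# Hypothetical registration-form check with the three-character minimum
# described above. The "obvious" sanity check quietly encodes a bias.

MIN_NAME_LENGTH = 3  # illustrative threshold, not a real form's rule

def validate_name(name: str) -> bool:
    """Accept a name only if it meets the minimum length."""
    return len(name.strip()) >= MIN_NAME_LENGTH

print(validate_name("Alexandra"))  # True
print(validate_name("Li"))         # False: a real name, rejected at intake
```

Every record the form rejects is a record your downstream model never sees.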

Privacy – There is no single correct model, and the balance between privacy and utility will have to be weighed in each individual scenario, but there are some best practices. Avoid using sensitive data when less sensitive data will suffice, and anonymize and aggregate wherever you can. I recommend Google’s page on privacy best practices for further reading.
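
As a rough illustration of “anonymize and aggregate,” here is a minimal sketch. The field names, salt, and hashing step are assumptions, and salted hashing is pseudonymization rather than true anonymization, so treat it as a starting point, not a guarantee:

```python
# Illustrative sketch: replace direct identifiers with salted hashes and
# share only aggregated counts, never row-level records.

import hashlib
from collections import Counter

SALT = "rotate-me-regularly"  # illustrative; manage real salts and keys securely

def pseudonymize(guest_id: str) -> str:
    """Return a short token in place of the raw identifier."""
    return hashlib.sha256((SALT + guest_id).encode()).hexdigest()[:12]

guests = [
    {"guest_id": "g-1001", "loyalty_tier": "gold"},
    {"guest_id": "g-1002", "loyalty_tier": "silver"},
    {"guest_id": "g-1003", "loyalty_tier": "gold"},
]

# Keep only what the downstream use actually needs: a token and a coarse attribute.
anonymized = [{"id": pseudonymize(g["guest_id"]), "tier": g["loyalty_tier"]} for g in guests]

# Aggregate before sharing: counts by tier reveal far less than individual rows.
print(Counter(g["tier"] for g in anonymized))  # Counter({'gold': 2, 'silver': 1})
```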

Diverse Perspectives – Bring diverse perspectives to the table and think about the entire process holistically. What will these perspectives influence? What are the possible unintended outcomes, good or bad? What will the downstream effects be?

Reinforcing Prejudices – In the past, some companies have found that including data inputs like gender makes some AI models perform better at determining things like credit eligibility. You want the most accurate data possible, right? What’s the harm in reflecting the world as you see it, even if it feels unfair or uncomfortable?

Take this short quiz. True or False: AI can’t be prejudiced – it’s just an algorithm. You should use the data available to you, even if it perpetuates unethical decisions or practices.

If you picked True, that was the wrong answer. When it comes to AI, there is no such thing as an impartial observer. By reinforcing the wrong things, you can and will cause long-lasting damage.

If you picked False, congratulations, you chose correctly! Unlike AI, we humans get to question our inputs. The reward for answering correctly is continuing to learn and grow by observing, reflecting, and questioning your own assumptions.
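
In practice, questioning your inputs can start with deciding which fields a model is even allowed to see. Here is a minimal sketch with illustrative column names, not a recommended credit-scoring pipeline, and remember that dropping a column does not remove the proxies that correlate with it:

```python
# Illustrative sketch: strip protected attributes before they reach a model.
# Dropping these columns is necessary but not sufficient; correlated proxy
# features (like zip code) can still leak the same information.

PROTECTED_ATTRIBUTES = {"gender", "race", "age"}

def select_features(record: dict) -> dict:
    """Drop fields we have decided the model should not learn from."""
    return {k: v for k, v in record.items() if k not in PROTECTED_ATTRIBUTES}

applicant = {"income": 52000, "tenure_months": 18, "gender": "F", "age": 34}
print(select_features(applicant))  # {'income': 52000, 'tenure_months': 18}
```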

Consider the Consequences

What is the consequence of a false negative versus a false positive? If you can’t be 100 percent accurate – and it’s rare that we can be – what percentage of inaccuracy is acceptable? And more importantly, is one kind of inaccuracy preferable to another? A fire alarm with the occasional false positive is an annoyance, but a single false negative is an unacceptable risk – and defeats the entire purpose of a fire alarm.
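
One way to make that tradeoff explicit is to assign different costs to the two kinds of errors and let the alarm threshold follow from them. The sketch below assumes a model that outputs a probability of fire; the cost figures are purely illustrative:

```python
# Illustrative sketch of asymmetric error costs for a fire alarm.
# A nuisance false alarm costs little; a missed fire is catastrophic.

COST_FALSE_POSITIVE = 1      # an unnecessary alarm: annoying
COST_FALSE_NEGATIVE = 1000   # a missed fire: unacceptable

def expected_cost(threshold: float, p_fire: float) -> float:
    """Expected cost of alarming only when the model's probability exceeds the threshold."""
    if p_fire >= threshold:
        return (1 - p_fire) * COST_FALSE_POSITIVE   # we alarm; worst case is a false alarm
    return p_fire * COST_FALSE_NEGATIVE             # we stay silent; worst case is a missed fire

# With costs this lopsided, even a small chance of fire justifies sounding the alarm.
for threshold in (0.5, 0.1, 0.01):
    print(threshold, expected_cost(threshold, p_fire=0.05))
```

The “right” threshold is not a purely technical choice; it is a statement about which mistake your organization is willing to live with.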

As the Roomba example shows, you must consider the unexpected ways your project can turn out differently than you intended. Even if you’re getting the kinds of outcomes you expected, you should subject your AI solution and process to continuous evaluation and evolution.


ABOUT THE AUTHOR

Aaron Schroeder is director of analytics and insights at TTEC Digital, one of the largest global CX technology and services innovators. The company delivers leading CX technology and operational CX orchestration at scale. TTEC Digital’s 60,000 employees operate on six continents and bring technology and humanity together to deliver happy customers and differentiated business results.

