Responsible AI: Why Ethics Are Needed in AI (Part 1)

24th Sep 2021

From translating languages and analysing data, to diagnosing diseases and automating customer service, it’s a well-known fact that Artificial Intelligence (AI) is enhancing our working lives.

We’re all increasingly using AI to make sense of vast amounts of data, inform strategic thinking, create predictions and generate solutions to problems. We’re relying on it more and more to help us process data quickly, improve efficiency, bring down costs, and speed up research and development.

For example, pharmaceutical companies are using AI to analyse patterns in data and complete pre-market tests and trials more quickly and at a lower cost.

On the surface, humans and AI seem to have a harmonious working relationship: Humans bring leadership, teamwork, creativity and social skills to the table, whilst AI brings speed, scalability, and quantitative capabilities.

“What comes naturally to people can be tricky for machines, and what’s straightforward for machines remains virtually impossible for humans. Business requires both kinds of capabilities.” – Harvard Business Review

But there’s a dark, sinister cloud looming.

Because AI has no consciousness, emotion or empathy, and simply interprets the data it’s given, it lacks a moral compass. It doesn’t know right from wrong. And although the benefits it brings to businesses are clear, the potential for systematic discrimination, inequality and unethical practices is lurking: Many worry that AI is set to do more harm to society than good to the economy.

Let’s find out more…

What do ethics mean in relation to AI?

Ethics are a set of principles or standards that determine how someone should or shouldn’t behave. For example, common ethical practices in the workplace tend to be things like “have mutual respect for colleagues” or “take accountability for your actions”.

In AI, ethics mean the same thing, only instead of guiding employees’ behaviour, they provide guidance on the right and wrong ways to develop, implement and use AI tools. After all, we’re the ones building these robots, so it’s our responsibility to create and use them within an ethically sound framework.

“AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.” – Turing

Using ethics to guide the development and use of AI means different things to different companies. For some, it means adopting AI in a way that’s transparent and accountable. For others, it means making sure that their use of AI remains consistent with certain laws and regulations.

But in general, ethical AI ensures that any AI system, tool or software that’s built or used follows a set of ethical standards and complies with specific laws and regulations.

But what happens if ethics aren’t applied to AI?

What happens if ethics aren’t considered during the build or use of AI?

While AI tools can often perform tasks much better and a lot quicker than us mere mortals can, they can also deliver biased results which are unfair and unethical.

The Google search engine is a prime example. The search results we get from Google are heavily biased, based on the number of clicks a website, page or article gets, as well as on our location and preferences.

“Search-engine technology is not neutral as it processes big data and prioritises results with the most clicks relying both on user preferences and location. Thus, a search engine can become an echo chamber that upholds biases of the real world.” – UNESCO
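
To make that feedback loop concrete, here’s a deliberately over-simplified sketch in Python (the site names and click counts are invented for illustration, and real search engines use far richer signals): a ranker that orders results purely by historical clicks keeps promoting whatever is already popular, so the gap only widens.

```python
# Toy click-based ranker: not how any real search engine works,
# just the bias mechanism in miniature.
clicks = {"popular-site.com": 1000, "niche-site.com": 10}  # invented history

def rank(results):
    # Results with the most historical clicks float to the top...
    return sorted(results, key=lambda url: clicks[url], reverse=True)

def simulate_user(results):
    # ...and users mostly click the top result, reinforcing the ordering.
    top_result = rank(results)[0]
    clicks[top_result] += 1

for _ in range(100):
    simulate_user(["popular-site.com", "niche-site.com"])

print(clicks)  # the popular site pulls further ahead: an algorithmic echo chamber
```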

Without an ethical framework to follow, AI tools have the potential to produce unbalanced, unexpected and undesirable results. Which is exactly what happened to Microsoft and Amazon…

Unethical AI example #1: Microsoft chatbot

Microsoft, along with Apple, has always been a pioneer in the world of technology. But it seriously messed up back in 2016 with its newly developed Twitter chatbot, Tay.

Tay was initially an “experiment in conversational understanding.” 

“The more you chat with Tay,” said Microsoft, “the smarter it gets, learning to engage people through ‘casual and playful conversation’.” – The Verge

But ‘casual and playful conversation’ it was not! As was to be expected with the general public, as soon as people got wind of this cool new chatbot called Tay, they started sending it racist and misogynistic messages, just to see what it would do. And because of the way Tay had been developed, trained and programmed, i.e. without an ethical framework, no thought had been given to how it should process and respond to these kinds of vile remarks and messages. So, Tay responded in kind, with equally disgusting racist and misogynistic messages of its own.

Not a good look for Microsoft.
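
To see the failure mode in miniature, here’s a heavily simplified sketch (this is not Tay’s actual code, and the blocklist below is just a stand-in for a real content-moderation layer): a bot that learns from every message users send will happily absorb abuse, whereas one basic ethical guardrail changes what it’s even allowed to learn.

```python
import random

# Deliberately naive chatbot: it "learns" by storing user messages verbatim
# and reusing them in later replies.
class NaiveChatbot:
    def __init__(self):
        self.learned_phrases = ["Hello! Let's chat."]  # seed phrase

    def chat(self, user_message: str) -> str:
        # No ethical guardrail: abusive input becomes future output.
        self.learned_phrases.append(user_message)
        return random.choice(self.learned_phrases)

# Hypothetical fix: screen messages before learning from them.
class GuardedChatbot(NaiveChatbot):
    BLOCKLIST = {"slur_a", "slur_b"}  # stand-in for a real moderation model

    def chat(self, user_message: str) -> str:
        if any(term in user_message.lower() for term in self.BLOCKLIST):
            return "Let's keep things friendly."  # refuse to learn from abuse
        return super().chat(user_message)
```

In the naive version, a single abusive user is enough to poison every later conversation; the guarded version at least refuses to learn from flagged input.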

Since then, Microsoft has overhauled its internal policies and introduced ethical AI guidelines for the development of AI, particularly where sensitive use cases are involved.

Unethical AI example #2: Amazon recruitment tool

Amazon is just as pioneering as Microsoft when it comes to the technology it develops and implements to make processes more efficient, both internally and for customers.

But, like Microsoft, a lack of AI ethics meant that it developed a machine learning tool, designed to speed up recruitment, that generated gender-biased results.

The recruitment AI tool was built to take over the manual and time-consuming job of reviewing thousands of job applicants’ CVs. It used AI to score the candidates’ CVs from one to five stars. The tool would then produce the top five CVs, which the HR department could use to hire the best person for the role.

But the new system wasn’t rating candidates for software development and technical positions in a gender-neutral way. The AI system had been trained to vet applicants by observing patterns in the CVs that had been submitted to Amazon over the previous 10 years. During that period, though, the software development industry was heavily male-dominated. So, Amazon’s system taught itself that male candidates were preferable to female ones.

“It penalised resumes that included the word ‘women’s’ (e.g. ‘women’s chess club captain’). And it downgraded graduates of two all-women’s colleges.” – Reuters
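
Here’s a toy illustration of how that kind of bias creeps in (the CVs, words and hiring labels below are invented, not Amazon’s data or model): a system that learns word weights from historically skewed hiring decisions ends up penalising the word “women’s” without anyone ever programming it to.

```python
from collections import defaultdict

# Invented "training data": (words on the CV, was the candidate hired?).
# Because past hires were overwhelmingly male, CVs mentioning "women's"
# rarely carry a positive label.
history = [
    ({"python", "chess"}, True),
    ({"java", "football"}, True),
    ({"python", "women's", "chess"}, False),
    ({"java", "women's"}, False),
]

# Learn a crude weight per word: how often it appears on hired vs rejected CVs.
counts = defaultdict(lambda: [0, 0])  # word -> [hired, rejected]
for words, hired in history:
    for w in words:
        counts[w][0 if hired else 1] += 1

def score(cv_words):
    # Sum of (hired - rejected) counts per word: the model has silently
    # learned that "women's" predicts rejection, purely from biased history.
    return sum(counts[w][0] - counts[w][1] for w in cv_words if w in counts)

print(score({"python", "chess"}))             # higher score
print(score({"python", "women's", "chess"}))  # penalised for "women's"
```

No one wrote a rule against female candidates; the bias is inherited entirely from the training data, which is why ethically reviewing that data matters as much as reviewing the code itself.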

So, as you can see, not instilling ethical practices when developing or using AI can be disastrous.

How to apply ethics when developing or using AI

“Customers and employees will reward organisations that practice ethical AI with greater loyalty…and in turn, punish those that don’t. There’s both reputational risk and a direct impact on the bottom line for companies that don’t approach the issue thoughtfully.” – Search Enterprise

Stay tuned for part 2 next week, where we’ll discover how to apply ethics when developing or using AI…

This article was brought to you by Tax Cloud

Tax Cloud is an online self-service platform designed to help accountants, like you, submit R&D tax claims on behalf of their clients. The R&D tax claim process can be long-winded and complex and, if you’re not a specialist in this niche area of tax, it can be difficult to obtain the maximum amount of relief. Because Tax Cloud is backed by the R&D specialists at Myriad Associates, you’ll be supported throughout the entire process, at a fraction of the cost of hiring a specialist R&D tax expert.

To find out more, visit the website, call the team on 020 7360 4437 or drop them a message.