Responsible AI: Why Ethics Are Needed in AI (Part 2)

1st Oct 2021

“Customers and employees will reward organisations that practice ethical AI with greater loyalty…and in turn, punish those that don’t. There’s both reputational risk and a direct impact on the bottom line for companies that don’t approach the issue thoughtfully.” – Search Enterprise

In part 1, which you can read here if you missed it, we discovered why ethics are needed when developing or using AI. We learned what ethics are in the context of AI, and we saw what can happen if they’re not considered during the development process (racist chatbots and sexist recruitment tools).

In this article, we’ll find out how to make sure that your AI tools, systems and software are developed and used within an ethical framework.

Why ethics are important when developing or using AI

Although part 1 covered the reasons why companies need to consider ethics when developing or using AI, it’s worth a quick recap.

It’s impossible to create an AI system, software or tool that’s completely neutral. AI systems, software and tools are built, programmed and trained by humans, so the choices those humans make during the development, programming and training stages will unconsciously (and sometimes consciously) introduce some form of bias into how the system, software or tool behaves in certain situations.

This is why it’s key to have an ethical framework in place that can level the playing field and ensure that morally sound decisions are made during the development or use of AI.

We’ve already seen what happened to Amazon and Microsoft when they didn’t consider ethics when developing their AI tools, so let’s look at another example of an epic ethical failure on the AI front.

Uber & their failed facial recognition tool

Back in 2017, Transport for London suspended Uber’s licence over safety concerns. In response, Uber launched new facial recognition software in 2020, named the Real-Time ID system. The system would scan a driver’s face and check it against a database of photos to make sure that the driver was verified to pick up passengers or deliver food orders.

However, hundreds of Uber drivers who were using the tool to clock in and out of their shifts began to find that their faces weren’t being recognised. This triggered Uber’s safeguarding protocol - which prevents imposters from using drivers’ accounts - and they were subsequently fired. By text.

What had happened was this: due to a lack of ethical input during the development, testing and training of the AI tool, the facial recognition software had inadvertently learned a biased algorithm. This made the software more prone to error when trying to identify people with darker skin - it had an error rate of up to 21% for dark-skinned women, for instance. These errors meant that hundreds of legitimate Uber drivers needlessly lost their jobs.
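Bias like this is typically uncovered by measuring a system’s error rate separately for each demographic group, rather than in aggregate. Here’s a minimal sketch of that kind of per-group breakdown - the audit records and group labels below are hypothetical stand-ins for a real labelled audit dataset:

```python
# Minimal sketch: measuring a facial recognition system's error rate per
# demographic group. The audit records below are hypothetical - in practice
# they would come from a labelled audit dataset.
from collections import defaultdict

# Each record: (demographic_group, was_the_driver_verified_correctly)
audit_results = [
    ("dark-skinned women", False), ("dark-skinned women", True),
    ("dark-skinned women", False), ("dark-skinned women", True),
    ("light-skinned men", True), ("light-skinned men", True),
    ("light-skinned men", True), ("light-skinned men", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in audit_results:
    totals[group] += 1
    errors[group] += not correct  # count each failed verification

for group, count in totals.items():
    print(f"{group}: error rate {errors[group] / count:.0%} over {count} checks")
```

A system that looks accurate overall can still fail badly for one group, which is exactly what an aggregate accuracy figure hides.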

But it’s not just Uber that has had this problem with facial recognition tools: in the last year alone, over 200 people have complained of unfair dismissal caused by flawed facial recognition software.

So, how can companies make sure that the AI they develop and use is ethically sound?

How to implement an ethical framework when developing or using AI

“Google was one of the first companies to vow that its AI will only ever be used ethically – i.e. it will never be engineered to become a weapon.” – IT Pro

AI, if developed or used unethically, has the potential to cause harm to people and organisations. It can expose them to risks that could have been avoided if careful consideration had been given to decisions around the development and use of these tools, software and systems.

Google may have been the first to acknowledge the importance of ethics in AI, but Facebook, IBM, Amazon and Microsoft (after their early disastrous AI attempts) have all joined forces to create a set of best practices for the development and use of AI. This has clearly set a precedent as over 86% of business leaders now consider ethics to be a priority when designing, building, training and using AI systems.

So how do we go about implementing ethics into our AI?

4 ways to implement ethics into AI
 

Ways to implement AI ethics #1: Mitigate unethical bias
As we discovered in part 1, AI has no morals, no empathy, no compassion and no awareness. We build AI models to simply collect, translate and learn from the vast amounts of data they receive. AI, therefore, doesn’t know if it’s behaving ethically or not. It just does as it’s told and takes on the bias of whichever dataset it’s given.

For example, if you train your AI on data from a mortgage lender that has notoriously denied loans to minority applicants, it will naturally learn to become biased against minority groups.
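To make that inheritance effect concrete, here’s a minimal sketch - the data is entirely synthetic, and the 0/1 “group” column is a hypothetical stand-in for a protected attribute. A toy model is trained on a discriminatory lending history and then scores two applicants who differ only by group:

```python
# Minimal sketch of how a model inherits bias from its training data.
# Everything here is synthetic: 'group' is a hypothetical protected
# attribute (0 = majority, 1 = minority) and 'income' is a crude
# creditworthiness proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, n)
income = rng.normal(50, 15, n)

# Historical decisions: the same income threshold for everyone, except
# half of the minority group's qualified applicants were denied anyway,
# mimicking a discriminatory lending history baked into the labels.
approved = (income > 45).astype(int)
approved[(group == 1) & (rng.random(n) < 0.5)] = 0

model = LogisticRegression(max_iter=1_000)
model.fit(np.column_stack([group, income]), approved)

# Score two applicants who are identical except for group membership.
for g in (0, 1):
    X = np.column_stack([[g], [55.0]])
    prob = model.predict_proba(X)[0, 1]
    print(f"group {g}, income 55: predicted approval probability {prob:.0%}")
```

The model was never told to discriminate - it simply learned that group membership predicted denial in the historical data, and carried that pattern forward.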

So, as a company, you must give your AI tools broad rule sets to follow and try to build up a diverse work culture to help identify and mitigate human bias.

Ways to implement AI ethics #2: Employ ethics managers and compliance officers
Ethics managers, or “sustainers” as some call them, are people that are employed to make sure that a company’s AI systems are functioning safely and responsibly. They’re responsible for continually checking the AI algorithms and data platforms, identifying potential pitfalls and finding workable solutions to problems created by unethical AI behaviour.

For instance, if an AI system for credit approval was found to be discriminating against a certain group of people, ethics managers would investigate the issue and address it quickly. It’s also common practice to employ data compliance officers to make sure that the data being fed into the AI tool, system or software complies with the GDPR and other data protection and licensing laws.
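One routine check an ethics manager might run is a disparate-impact screen: compare approval rates across groups and flag any group whose rate falls below four-fifths of the best-off group’s, echoing the “four-fifths rule” used as a discrimination screen in US employment law. A minimal sketch, using a hypothetical decision log:

```python
# Minimal sketch of a disparate-impact screen an ethics manager might run
# on a credit approval system: flag any group whose approval rate falls
# below 80% of the best-off group's rate (the "four-fifths rule").

def disparate_impact_check(decisions, threshold=0.8):
    """decisions: list of (group_label, approved: bool) pairs."""
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [a for g, a in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    best = max(rates.values())
    # Map each group to (approval rate, passes the four-fifths test?)
    return {g: (r, r / best >= threshold) for g, r in rates.items()}

# Hypothetical audit log of recent approval decisions.
log = [("A", True)] * 80 + [("A", False)] * 20 \
    + [("B", True)] * 50 + [("B", False)] * 50

for group, (rate, ok) in disparate_impact_check(log).items():
    print(f"group {group}: approval rate {rate:.0%} -> {'OK' if ok else 'FLAG'}")
```

Here group B is approved at 50% against group A’s 80% - a ratio of 0.625, well under the 0.8 threshold, so it gets flagged for investigation.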

Ways to implement AI ethics #3: Create ethical AI guidelines
Creating a set of ethical guidelines or standards for employees to follow when developing or using AI is an easy way to instil ethics into AI activity. To eliminate potential bias, these ethical AI guidelines should be created by a diverse team which includes employees of different races, genders and job roles.

“Organisational structures should ensure any review committees be cross-functional within the organization. Enable this culture to speak freely on ethical concepts revolving around AI and bias.” – Search Enterprise

Ways to implement AI ethics #4: Train both your employees and your AI
Training employees on how to build, use and maintain ethical AI is crucial: they need to learn how to spot bias, correct problems and identify potential pitfalls. But it’s also important to teach your AI tools, systems and software how to handle the data they receive. Even though they can’t understand right from wrong, machine-learning algorithms must be taught, using training data sets, how to perform the work they’re designed to do ethically. These data sets give the AI tool, system or software experience of handling a variety of different scenarios, people and transactions, so it can learn to behave in an ethically sound way.

Huge training data sets could be used to teach translation apps how to handle idiomatic expressions, for instance. Or take AI assistants like Siri: massive amounts of complex data from poets, novelists and playwrights were needed to develop a confident, caring and helpful personality, enabling it to respond in exactly the right way to different questions or scenarios.
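One small, concrete part of preparing such a training set is making sure no group is drowned out by the others. Here’s a minimal sketch of simple random oversampling to rebalance a skewed data set before training - the records and group labels are hypothetical:

```python
# Minimal sketch: rebalancing a skewed training set so under-represented
# groups appear as often as the largest group (simple random oversampling).
# The records and group labels are hypothetical.
import random
from collections import defaultdict

random.seed(0)
training_set = [{"group": "A", "features": i} for i in range(900)] \
             + [{"group": "B", "features": i} for i in range(100)]

by_group = defaultdict(list)
for record in training_set:
    by_group[record["group"]].append(record)

target = max(len(records) for records in by_group.values())
balanced = []
for records in by_group.values():
    balanced.extend(records)
    # Top up smaller groups by resampling with replacement.
    balanced.extend(random.choices(records, k=target - len(records)))

print({g: sum(r["group"] == g for r in balanced) for g in by_group})
# -> {'A': 900, 'B': 900}
```

Oversampling alone won’t fix biased labels, but it stops a model from simply ignoring a minority group because it rarely appears in the data.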

This article was brought to you by Tax Cloud

Tax Cloud is the UK’s first online, self-service R&D tax portal, designed to help accountants file R&D tax claims on behalf of their clients in a quick, easy and cost-effective way. The online portal is supported by the R&D tax experts at Myriad Associates, who will guide you through the entire process and make sure your claim receives the maximum amount of tax relief possible.

To find out more, visit the website, call the team on 020 7360 4437 or drop them a line here.