Unpacking AI Bias

Imagine this: A predictive AI program is integrated into a dermatology lab to aid doctors in diagnosing malignant skin lesions. The AI was built using thousands of diagnostic photos of skin lesions already known to be cancerous, training it to recognize, with great precision, what a malignant lesion looks like. The potential benefits are obvious — an AI might notice details that human doctors could miss and could accelerate treatment for patients with cancer.

But there’s a problem. Patient outcomes do not improve as expected, and some even tragically get worse. Upon reviewing the AI’s training materials, programmers discover that the AI wasn’t making its diagnostic decisions based on the appearance of the lesions it was shown. Instead, it was checking whether a ruler was present in the picture. Because the diagnostic photos of confirmed malignancies used in training typically include a ruler for scale, the AI learned to treat rulers as a defining characteristic of malignant skin lesions. The AI saw a pattern that its designers hadn’t accounted for and was consequently rendered useless for its intended purpose.
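Shortcut patterns like this can often be surfaced before deployment by auditing training data for spurious correlations between incidental features and the labels. A minimal sketch of such an audit in Python — the data and the `has_ruler` metadata flag are purely illustrative, not taken from the actual case:

```python
# Hypothetical audit: check whether an incidental feature (a ruler in the
# photo) predicts the label far better than it should by chance.
# Each record is an (has_ruler, is_malignant) pair; data is illustrative.
records = [
    (True, True), (True, True), (True, True), (True, True),
    (False, False), (False, False), (False, False), (True, False),
]

def label_rate(records, flag_value):
    """Fraction of malignant labels among records where has_ruler == flag_value."""
    subset = [malignant for has_ruler, malignant in records if has_ruler == flag_value]
    return sum(subset) / len(subset) if subset else 0.0

rate_with = label_rate(records, True)      # malignancy rate when a ruler is present
rate_without = label_rate(records, False)  # malignancy rate when it is absent

# A large gap flags the incidental feature as a shortcut a model could learn.
gap = abs(rate_with - rate_without)
print(f"with ruler: {rate_with:.2f}, without: {rate_without:.2f}, gap: {gap:.2f}")
```

In this toy dataset the ruler predicts malignancy almost perfectly, which is exactly the kind of gap that should trigger a review of how the photos were collected.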

This is a real example of unchecked AI bias. Thankfully, in reality, the bias in question was caught before the AI was put into use, but the risks posed were no less significant. Without due diligence and proper screening procedures, organizations that rely on AI could introduce significant risks to their operations — up to and including loss of life.  

So where does AI bias come from, and how can it be mitigated? 

The Origins of AI Bias

AI bias isn’t present by design. Instead, this type of bias is often a consequence of unseen patterns in training material and sample data. It can manifest in hard-to-detect ways, making it challenging to combat.  

AI programs are trained using a curated set of preselected data, from which they identify patterns and draw conclusions. If training data contains unintended patterns, the AI could inherit an unhelpful bias that may produce inaccurate or inequitable results. Key facts to consider about AI bias include: 

  • AI bias comes from people. Humans possess a wide range of implicit biases, and people developing an AI can work their biases into the algorithm without even realizing it. 

  • AI biases can relate to anything. They can sometimes be discriminatory, operating along lines like race, sex, gender, age, religion, or geographic location. For example, a facial recognition AI could struggle to recognize darker skin tones because they were not adequately represented in its sample images. 

The Risks of AI Bias

The risks of relying on a biased AI can be significant — as high as life or death in medical contexts, as explored above — and could cause businesses to face reputational damage or legal liability in other areas.  

  • In 2019, Facebook was sued by the U.S. Department of Housing and Urban Development after its ad algorithm developed a bias that caused it to discriminate along racial lines when displaying real estate advertisements. This type of discrimination caused harm on both sides of the equation, as businesses lost potential buyers, and some customers did not see the full scope of available properties.  

  • In 2018, an AI used by Amazon to assist with recruitment was found to have inherited biased hiring practices that discriminated against women. This bias made the AI unfit for its intended function and potentially harmed the company by causing it to miss out on more qualified applicants.  

As these examples show, biased AI programs can cause harm in many different areas, from productivity and sales to customer access to products and services. A lack of awareness is not a valid excuse for failure to take appropriate action. Businesses must work proactively to guard against these unintended biases and combat them if they occur. 

Mitigating AI Bias

To manage the risks and maximize the benefits of incorporating AI, there are several approaches and criteria that businesses should keep in mind: 

  • Because AI bias originates from people, businesses can address it just as they address human bias — through training. Expanding existing employee bias training and awareness initiatives to include the potential impact of AI could go a long way in preventing AI bias.  

  • Routine testing of data sources and training modules can help identify undetected patterns that could cause AI bias.  

  • There is no set-and-forget solution. AI bias can be introduced at any point in the creation or integration process, and companies must establish a regular cadence of review as models are updated and inputs change. 
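The routine-testing point above can be made concrete with even a very simple representation check run each time training data is refreshed. A minimal sketch, assuming a demographic group label is recorded for each training sample — the group names and the 10% threshold are illustrative assumptions, not a recommended standard:

```python
# Hypothetical routine check: flag any subgroup whose share of the training
# set falls below a minimum threshold before (re)training a model.
from collections import Counter

def underrepresented_groups(samples, min_share=0.10):
    """Return groups making up less than min_share of the dataset."""
    counts = Counter(samples)
    total = sum(counts.values())
    return sorted(group for group, count in counts.items()
                  if count / total < min_share)

# Each entry is the demographic group recorded for one training sample.
samples = ["group_a"] * 70 + ["group_b"] * 25 + ["group_c"] * 5

flagged = underrepresented_groups(samples)
print(flagged)  # group_c makes up only 5% of the data
```

A check like this would not have caught the ruler problem on its own, but running it on a regular cadence — alongside audits for spurious feature correlations — helps catch the underrepresentation issues behind failures like the facial recognition example.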

How We Can Assist

At BDO Digital, we take a responsible approach to AI consulting and can help companies guard their operations and reputations against AI risks. We help clients address ethical considerations, mitigate bias, and comply with relevant regulations and guidelines. We also help define and implement AI governance policies and deploy tools for ethical and responsible AI usage.  

For more information about identifying AI risks and implementing effective solutions for your business, talk to our team today.