How To Help Tame Cognitive Bias In Your AI System

Over the years, AI has furnished solutions to a host of everyday challenges. Voice assistants like Alexa and Siri, for example, are now reasonably good at interpreting human speech correctly, and in many instances they already provide precise, targeted information.

Beyond private use, implementing AI systems has become a real game-changer in the corporate environment. AI-based tools are capable of making decisions on their own and of helping people make informed decisions, transforming entire workflows in the process.

One thing that has to remain top of mind in this context, however, is that machines are only ever as intelligent as their algorithms and data sets allow them to be. If the algorithms and data sets exhibit unconscious biases to begin with, the machine will simply adopt them. The training data, and the actions derived from it, reflect the experiences of the people who, for example, assigned the documents or started the processes. Their views on the subject of an action are carried into the training data and, by extension, into the decisions made by the systems and machines. So, AI’s “bias” is acquired through training. If that bias isn’t accounted for during the development phase, the machine will incorporate it.
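To make this concrete, here is a minimal sketch of how one employee’s biased document assignments end up baked into a trained system. The routing data and the deliberately naive word-frequency “model” are invented for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical training set: past document routings made by one employee.
# The assigner sent everything containing "report" to finance, even an
# HR document -- a human bias, recorded as ground truth.
training = [
    ("annual budget report", "finance"),
    ("quarterly revenue report", "finance"),
    ("hiring report for new staff", "finance"),  # arguably an HR document
    ("employee onboarding checklist", "hr"),
]

# A naive model: count how often each word co-occurs with each label.
word_label_counts = defaultdict(Counter)
for text, label in training:
    for word in text.split():
        word_label_counts[word][label] += 1

def predict(text):
    """Route a new document to the label its words voted for most often."""
    votes = Counter()
    for word in text.split():
        votes.update(word_label_counts.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else None

# The model has inherited the bias: any "report" now leans toward finance.
print(predict("sick leave report"))  # -> finance, although it's an HR topic
```

Nothing in the algorithm is malicious; it faithfully learned exactly what the biased assignments taught it.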

In other words, if we want to make AI systems truly intelligent, we need to design their learning behavior to be as open and unhindered as possible and minimize any potential distortion or bias. The more measures and techniques we develop to minimize the transmission of our biases, the more reliable AI will be. To provide an AI-based system with the most neutral starting point possible, CEOs and business leaders should consider the following four questions:

1. Which problem should the AI-based system solve?

AI projects often begin with an unfocused urge to act. As such, the first step toward sustainable implementation ought to be figuring out which problem you want AI to solve. Selecting a specific problem in a particular department and considering how AI can solve it is a targeted, efficient way to home in. A low-hanging-fruit approach based on the processes that have the greatest positive impact on the company (high ROI and relatively little effort) is the best way to get off to a successful start.
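The prioritization described above can be sketched as a simple impact-versus-effort scoring. The candidate processes and their 1-to-5 scores here are invented for illustration:

```python
# Hypothetical candidate processes, scored by the business on a 1-5 scale.
candidates = [
    {"process": "invoice routing",    "impact": 5, "effort": 2},
    {"process": "contract drafting",  "impact": 4, "effort": 5},
    {"process": "meeting scheduling", "impact": 2, "effort": 1},
]

# Low-hanging fruit first: the highest impact-to-effort ratio wins.
candidates.sort(key=lambda c: c["impact"] / c["effort"], reverse=True)
print([c["process"] for c in candidates])
# -> ['invoice routing', 'meeting scheduling', 'contract drafting']
```

Any scoring scheme will do; the point is to make the "high ROI, little effort" trade-off explicit before committing a team.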

2. Is the training data high-quality and relevant?

Incomplete or inconsistent data sets will yield poor results or, in our case, distorted results or information that could have an adverse impact on the company. That makes it critically important to ensure that the training data sets are of high quality and, ideally, drawn from every available data source to counteract bias and mitigate “overfitting” (a model memorizing its training data instead of generalizing from it).
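As a starting point, a data-quality audit can flag incomplete and duplicate records before they ever reach training. A minimal sketch, where the record format and required fields are assumptions for illustration:

```python
def audit_records(records, required_fields):
    """Flag incomplete or duplicate records before they enter training."""
    issues = []
    seen = set()
    for i, rec in enumerate(records):
        # Empty or absent fields make a record incomplete.
        missing = [f for f in required_fields if not rec.get(f)]
        if missing:
            issues.append((i, f"missing fields: {missing}"))
        # Exact duplicates silently over-weight one person's choices.
        key = tuple(sorted(rec.items()))
        if key in seen:
            issues.append((i, "duplicate record"))
        seen.add(key)
    return issues

records = [
    {"text": "invoice q3", "label": "finance"},
    {"text": "invoice q3", "label": "finance"},  # duplicate
    {"text": "", "label": "hr"},                 # incomplete
]
print(audit_records(records, ["text", "label"]))
# -> [(1, 'duplicate record'), (2, "missing fields: ['text']")]
```

Checks like these are cheap compared with retraining a model after a bias has already been learned.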

3. Is there diversity among the project staff and their tasks?

Each department employee has his or her own preferences when it comes to allocating or evaluating documents and content, and the keywords those documents contain weigh heavily in what the system learns. To give an AI-based system a neutral view of a topic or use case, it makes sense to train it on a variety of documents from different employees. Using relevance models, the system can then create a ranked list and proactively deliver the appropriate content in response to queries. That’s an enormous helping hand, especially for exhaustive research topics or complex decisions.
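The article doesn’t specify which relevance model is meant, but one common way to build such a ranking is TF-IDF term weighting, sketched below on invented documents:

```python
import math
from collections import Counter

def rank_documents(query, documents):
    """Rank documents for a query using basic TF-IDF term weighting."""
    tokenized = [doc.lower().split() for doc in documents]
    n = len(documents)
    # Inverse document frequency: terms found in fewer documents weigh more.
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    idf = {term: math.log(n / df[term]) for term in df}

    def score(tokens):
        tf = Counter(tokens)
        return sum(tf[t] * idf.get(t, 0.0) for t in query.lower().split())

    return sorted(documents, key=lambda d: score(d.lower().split()), reverse=True)

# Documents contributed by different employees (invented examples).
docs = [
    "q3 budget forecast and revenue outlook",
    "parental leave policy update",
    "leave request form for contractors",
]
print(rank_documents("parental leave", docs)[0])  # -> parental leave policy update
```

Because the ranking is driven by term statistics across the whole corpus, feeding it documents from many employees dilutes any single person’s keyword habits.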

4. Have we made provisions for continuous measurability and optimization?

Once the AI-based solution is in live operation, it shouldn’t be ignored or neglected. Continuous measurement, evaluation and validation of the results are paramount; potential biases may have been overlooked and will need to be corrected. Consistent feedback and optimization help both to maintain accuracy and to improve it systematically.
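A lightweight way to keep the live system measurable is a sliding-window accuracy monitor that flags when results drift below an agreed baseline. The window size, baseline and tolerance below are placeholder values:

```python
from collections import deque

class AccuracyMonitor:
    """Track live prediction accuracy over a sliding window and flag drops."""

    def __init__(self, window=100, baseline=0.90, tolerance=0.05):
        self.results = deque(maxlen=window)  # keeps only the latest outcomes
        self.baseline = baseline
        self.tolerance = tolerance

    def record(self, predicted, actual):
        self.results.append(predicted == actual)

    def accuracy(self):
        return sum(self.results) / len(self.results) if self.results else None

    def needs_review(self):
        acc = self.accuracy()
        return acc is not None and acc < self.baseline - self.tolerance

monitor = AccuracyMonitor(window=100, baseline=0.90, tolerance=0.05)
# Simulate a batch of live predictions that drifted to 80% accuracy.
for _ in range(80):
    monitor.record("finance", "finance")
for _ in range(20):
    monitor.record("finance", "hr")
print(monitor.needs_review())  # -> True (0.80 < 0.90 - 0.05)
```

When `needs_review()` fires, the validated examples in the window double as fresh, corrected training data for the next optimization round.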

Training AI-based systems can be compared to the lifelong learning process of humans: like people, a system needs to keep learning in order to improve. There’s just one small but significant difference. We can consciously set aside our biases and prejudices in favor of a more neutral view; AI-based systems can’t. That means it’s up to humans to provide support in the form of training data built on quality and diversity.