How to babysit your AI

AI systems are not yet mature and capable enough to operate independently, but they can still work wonders with human help. We just need a few guardrails.

Despite the remarkable advancements in artificial intelligence over the last several decades, time and again the technology has fallen short of delivering on its promise. AI-powered natural language processors can write everything from news articles to novels, but not without occasionally producing racist and discriminatory language. Self-driving cars can navigate without driver input, but they can’t eliminate the risk of avoidable accidents. AI has personalized online advertising, yet every now and then it misses the context badly.

We can’t trust AI to make the correct decision every time. That doesn’t mean we should halt the development and deployment of next-gen AI technologies. Instead, we need to establish guardrails: by having humans actively filter and validate data sets, by keeping decision-making control in human hands, or by defining guidelines that are later applied automatically.

An intelligent system makes its decisions based on the data fed to the algorithms that create and train its AI model, teaching the model how to interpret data. That ability to “learn” and make decisions autonomously sets it apart from an engineered system, which operates solely on its creator-supplied programming.

Is it AI or just smart engineering?

But not every system that appears to be “smart” uses AI. Many are examples of smart engineering, in which robots are trained either through explicit programming or by having a human perform the action while the robot records it. There’s no decision-making process. Rather, it’s automation technology working in a highly structured environment.

The promise AI holds for this use case is enabling the robot to operate in a more unstructured environment, truly abstracting from the examples it has been shown. Machine learning and deep learning technologies enable the robot to identify, pick up, and transport a pallet of canned goods on one trip through the warehouse, and then do the same with a television, without requiring humans to update its programming to account for the different product or location.

The challenge inherent in building any intelligent system is that its decision-making capability is only as good as the data sets used to develop its AI model and the methods used to train it.

There is no such thing as a 100% complete, unbiased, and accurate data set. That makes it extremely hard to create AI models that aren’t themselves potentially incorrect and biased.

Consider OPT-175B, the new large language model (LLM) that Facebook’s parent company, Meta, recently made available to researchers studying natural language processing (NLP) applications such as voice-enabled virtual assistants on smartphones and other connected devices. A report by the company’s researchers warns that the new system “has a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt, and adversarial prompts are trivial to find.”

The researchers suspect that the AI model, trained on data that included unfiltered text taken from social media conversations, is incapable of recognizing when it “decides” to use that data to generate hate speech or racist language. I give the Meta team full credit for being open and transparent about their challenges and for making the model available at no cost to researchers who want to help solve the bias issue that plagues all NLP applications. But it’s further proof that AI systems are not mature and capable enough to operate independently of human decision-making processes and intervention.

If we cannot trust AI, what can we do?

So, if we can’t trust AI, how do we nurture its development while reducing the risks? By embracing one (or more) of three pragmatic ways to fix the issues.

Option #1: Filter the input (the data)

One approach is applying domain-specific data filters that prevent irrelevant and incorrect data from reaching the AI model while it’s being trained. Let’s say an automaker building a small car with a four-cylinder engine wants to incorporate a neural network that detects soft failures of engine sensors and actuators. The company may have a comprehensive data set covering all of its models, from compact cars to large trucks and SUVs. But it should filter out irrelevant data to ensure it does not train its four-cylinder car’s AI model with data specific to an eight-cylinder truck.
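As a rough sketch of how such a filter might look in practice, suppose the training data lives in a pandas DataFrame with hypothetical columns like engine_cylinders and vehicle_class (names invented here for illustration). The idea is simply to drop rows that don’t belong to the target domain before the model ever sees them:

```python
import pandas as pd

def filter_training_data(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only rows relevant to the four-cylinder compact-car domain.

    Column names (engine_cylinders, vehicle_class) are illustrative;
    a real data set would use whatever schema the automaker defines.
    """
    mask = (df["engine_cylinders"] == 4) & (df["vehicle_class"] == "compact")
    filtered = df[mask]
    print(f"Kept {len(filtered)} of {len(df)} rows for training")
    return filtered

# Usage: train the sensor-failure model only on the filtered frame, e.g.
# model.fit(filter_training_data(raw_sensor_data))
```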

Option #2: Filter the output (the decision)

We can also establish filters that protect the world from bad AI decisions by confirming that each decision will lead to a good outcome and, if not, preventing the system from acting on it. This requires domain-specific inspection triggers: we trust the AI to make certain decisions and take action within predefined parameters, while any other decision requires a “sanity check.”

In a self-driving car, for example, the output filter establishes a safe operating range and tells the AI model, “I’m only going to allow you to make adjustments within this safe range. If you’re outside that range, for example by deciding to reduce the engine to less than 100 rpm, you will have to check with a human expert first.”
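A minimal sketch of what such an output filter could look like, with purely illustrative limits (the 100 rpm floor and speed range below are placeholders, not values from any real vehicle): the AI’s proposed action is applied only if it falls inside the predefined safe envelope, and anything else is escalated to a human.

```python
from dataclasses import dataclass

# Illustrative safety envelope; real limits would come from domain experts.
MIN_ENGINE_RPM = 100
SAFE_SPEED_RANGE_MPH = (20, 70)

@dataclass
class Decision:
    target_speed_mph: float
    target_engine_rpm: float

def apply_with_sanity_check(decision: Decision, ask_human) -> bool:
    """Apply the AI's decision only if it stays inside the safe envelope.

    `ask_human` is a stand-in for whatever escalation path exists
    (an operator console, a remote supervisor, and so on).
    """
    in_speed_range = (
        SAFE_SPEED_RANGE_MPH[0] <= decision.target_speed_mph <= SAFE_SPEED_RANGE_MPH[1]
    )
    rpm_ok = decision.target_engine_rpm >= MIN_ENGINE_RPM

    if in_speed_range and rpm_ok:
        return True                # trusted: act autonomously
    return ask_human(decision)     # outside the envelope: require approval
```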

Option #3: Employ a ‘supervisor’ model

It’s not uncommon for developers to repurpose an existing AI model for a new application. This allows for the creation of a third guardrail by running an expert model based on a previous system in parallel. A supervisor checks the new system’s decisions against what the previous system would have done and tries to determine the reason for any discrepancies.

For example, say a new car’s self-driving system incorrectly decelerates from 55 mph to 20 mph while traveling along a highway, while the previous system would have maintained a speed of 55 mph in the same circumstances. The supervisor could later review the training data supplied to both systems’ AI models to determine the reason for the disparity. And right at decision time, we may want the system to suggest the deceleration rather than make the change automatically.
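One way to picture the supervisor, purely as an illustration with invented interfaces, is as a thin wrapper that queries both the new model and the legacy model, logs any large discrepancy for later review of the training data, and downgrades the change to a suggestion when the two disagree:

```python
import logging

DISCREPANCY_THRESHOLD_MPH = 10  # illustrative tolerance

def supervised_speed_decision(new_model, legacy_model, observation):
    """Compare the new AI model's speed decision with the legacy model's.

    Both models are assumed to expose a predict_speed(observation) method;
    that interface is hypothetical, chosen only for this sketch.
    """
    new_speed = new_model.predict_speed(observation)
    legacy_speed = legacy_model.predict_speed(observation)

    if abs(new_speed - legacy_speed) <= DISCREPANCY_THRESHOLD_MPH:
        return {"action": "apply", "speed_mph": new_speed}

    # Large disagreement: log it for offline analysis of both training sets,
    # and downgrade the change to a suggestion instead of applying it.
    logging.warning(
        "Supervisor discrepancy: new=%.1f mph, legacy=%.1f mph",
        new_speed, legacy_speed,
    )
    return {"action": "suggest", "speed_mph": new_speed, "fallback_mph": legacy_speed}
```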

Think of the need to control AI as akin to the need to babysit children when they’re learning something new, such as how to ride a bicycle. An adult serves as the guardrail by running alongside, helping the new rider maintain their balance and feeding them the information they need to make intelligent decisions, like when to apply the brakes or yield to pedestrians.

Care and feeding for AI

In sum, developers have three options for keeping an AI on the straight and narrow during the production process:

  1. Only pass validated training data to the AI’s model.
  2. Implement filters that double-check the AI’s decisions and prevent it from taking incorrect and potentially dangerous actions.
  3. Run a parallel, human-built model that compares the AI’s decisions against those of a similar, pre-existing model trained on the same data set.

However, none of these options will work unless developers choose their data and learning methods carefully and establish a reliable, repeatable production process for their AI models. Most importantly, developers need to realize that no law requires them to build their new applications or products around AI.

Make sure to use plenty of natural intelligence, and ask yourself, “Is AI really necessary?” Smart engineering and classic technologies may offer a better, cleaner, more robust, and more transparent solution. In some cases, it’s best to avoid AI altogether.

Michael Berthold is founding CEO at KNIME, a data analytics platform company. He holds a doctorate in computer science and has more than 25 years of experience in data science. Michael has worked in academia, most recently as a full professor at Konstanz University (Germany) and previously at the University of California at Berkeley and at Carnegie Mellon, and in industry at Intel’s Neural Network Group, Utopy, and Tripos. Michael has published extensively on data analytics, machine learning, and artificial intelligence. Connect with Michael on LinkedIn and at KNIME.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.
