
Better together: How AI plus people improves some decision-making

New research shows that, in some cases, artificial intelligence can be used more effectively through collaborative decision-making between humans and machines.

Last year, three out of four survey respondents told Gallup researchers they thought AI might take away their jobs. Yes, AI can take on some tasks independently, but for many jobs it delivers more value through collaborative decision-making.

That’s a finding from research conducted by Tian Lu, an assistant professor of information systems. His study showed that, under certain conditions, decision-making outcomes improve when people incorporate AI input. "Because humans and AI have different strengths and weaknesses, collaboration is key," says Lu, whose investigation also shows managers how to motivate that collaborative approach and realize its value.

Overcoming AI angst

"It’s not natural for humans to collaborate with AI," Lu says. "Much research has found that some humans, especially experienced humans and domain experts, tend to disrespect AI. They inherently don’t trust it because AI is still a black box. People don’t know how outcomes are derived, so humans resist AI." This, he adds, is particularly true when high-stakes decisions are involved.

On the other end of the spectrum, some people over-rely on AI, letting machine recommendations dictate their decisions altogether.

Neither extreme is necessarily the way to go, but Lu points out that the differing abilities of machines and humans can be complementary. Humans are good at making decisions in uncommon situations and in novel circumstances that introduce new information into the equation, he explains. "In contrast, AI has huge storage capacity and computational capability, so it’s good at handling decision-making in information-rich situations with large quantities of data," he continues.

So, how do you leverage these capabilities, and when should you? To find out, Lu and his colleague Yingjie Zhang of Peking University ran field experiments in partnership with a microlending platform operating in China. The team set out to determine when collaboration works best and what conditions promote it. Since their partner was in the lending business, they used loan default rates to measure when collaboration delivers its highest value.

Good thinking

Underpinning Lu’s research is the book Thinking, Fast and Slow by renowned psychologist and Nobel laureate Daniel Kahneman. He postulates that people have a dual information-processing system. System 1 — fast thinking — is quick, automatic, and instinctual, relying on emotions and mental shortcuts to reach conclusions. System 2 — slow thinking — is the analytical, deliberate, and deep thinking we use to solve complicated problems or reason our way into understanding something unfamiliar.

System 1 thinking may be fast, but it’s also more prone to errors and biases. System 2 is more accurate but also more mentally demanding. Still, it was System 2 thinking that Lu and Zhang were trying to motivate, because they believed it would contribute to greater value from human-machine collaboration.

To push loan officers into that analytical System 2 mode, the researchers used two conditions that, based on prior research, they expected would prompt people to switch from System 1 to System 2 thinking. "We know that in general, people think more deeply when facing a challenging task," Lu says. "We used information complexity to proxy this condition."

The complexity he’s speaking about is the amount of information loan officers had to parse before saying "yes" or "no" to a loan applicant. In the small-information scenario, loan officers looked at only 12 pieces of information commonly used in lending, such as income, loan-to-income ratio, home ownership, payment history at other microlending organizations, and loan interest rate.

The large-information scenario added another 60 pieces of information to the mix, many of which were things most loan officers never see: caffeine, alcohol, and tobacco purchases; online shopping behavior; cellphone usage statistics; money spent on online gaming cards; and mobility data revealing how often the applicant visited entertainment venues like movie theaters, commercial locations like malls, or social institutions such as hospitals and schools.
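The study’s actual feature schema and model aren’t spelled out here, but a toy sketch can make the two information conditions concrete. In the Python sketch below, the feature names are paraphrased from the article, the data are synthetic, and the logistic model is a stand-in rather than the platform’s real scoring system; the point is only that extra variables can sharpen a default predictor:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Illustrative feature sets; names are paraphrased from the article,
# not the study's actual schema (the study used 12 vs. many more variables).
SMALL_INFO = [
    "income", "loan_to_income_ratio", "home_ownership",
    "payment_history_score", "interest_rate",
]
LARGE_INFO = SMALL_INFO + [
    "tobacco_purchases", "alcohol_purchases", "caffeine_purchases",
    "online_shopping_freq", "cellphone_usage_hours",
    "gaming_card_spend", "mall_visits", "hospital_visits",
]

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicants: default risk depends on more variables than
# the small feature set captures.
X = rng.normal(size=(n, len(LARGE_INFO)))
true_weights = rng.normal(scale=0.5, size=len(LARGE_INFO))
default_prob = 1.0 / (1.0 + np.exp(-(X @ true_weights)))
y = (rng.random(n) < default_prob).astype(int)

for label, features in [("small info", SMALL_INFO), ("large info", LARGE_INFO)]:
    Xf = X[:, : len(features)]
    model = LogisticRegression(max_iter=1000).fit(Xf, y)
    auc = roc_auc_score(y, model.predict_proba(Xf)[:, 1])
    print(f"{label}: in-sample AUC = {auc:.3f}")
```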

Before AI was introduced at this company, the loan default rate was 12.8%. When humans evaluated loans using the large data set on applicants, the default rate was 10.6%, an improvement of roughly 2 percentage points, suggesting that information complexity prompted humans to use a little more System 2 thinking. AI alone dropped the default rate to 5.2% using the larger data set, demonstrating its value in evaluating complex information.

Along with information complexity, the research team tested a second condition meant to boost collaboration between humans and machines. Lu explains that when people seek to engage in deep thinking, they often look for good role models. To simulate a role model and supply referent information, the researchers explained to loan officers why the AI made the decisions it did, making the explained AI a stand-in for that role model.
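The article doesn’t describe the exact explanation format the researchers used. One common way to make a scoring model’s decision legible is to surface the features that contributed most to an applicant’s risk score; the hypothetical sketch below (invented feature names, coefficients, and recommendation) shows the kind of summary a loan officer might see alongside the AI’s verdict:

```python
import numpy as np

# Hypothetical linear scoring model: positive coefficients push the
# score toward "likely to default." All values here are invented.
FEATURES = ["income", "loan_to_income_ratio", "payment_history_score",
            "gaming_card_spend", "cellphone_usage_hours"]
COEFS = np.array([-0.8, 1.2, -1.5, 0.9, 0.3])

def explain(applicant: np.ndarray, top_k: int = 3) -> list[str]:
    """List the features contributing most to this applicant's risk score."""
    contributions = COEFS * applicant
    top = np.argsort(-np.abs(contributions))[:top_k]
    return [
        f"{FEATURES[i]}: {'raises' if contributions[i] > 0 else 'lowers'} "
        f"risk by {abs(contributions[i]):.2f}"
        for i in top
    ]

applicant = np.array([0.4, 1.1, -0.9, 1.6, 0.2])  # standardized inputs
print("AI recommendation: reject (illustrative)")
for line in explain(applicant):
    print(" -", line)
```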

With the small, 12-data-point applications, adding the AI explanation delivered a 2.5-percentage-point drop in the default rate compared with humans working alone. And when the large-information condition added complexity to the mix, the explanation brought the default rate down to 3.1%. That’s 7.3 percentage points better than humans evaluating the large data set without help from artificial intelligence, and about 2 percentage points better than AI’s independent decisions with the larger data set.
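For clarity, the improvements quoted above are differences in default rates, i.e., percentage points rather than relative percentages. Recomputing them from the rounded rates reported here (small mismatches with the quoted figures are presumably rounding):

```python
# Default rates reported in the article, in percent.
rates = {
    "pre-AI baseline": 12.8,
    "humans alone, large info": 10.6,
    "AI alone, large info": 5.2,
    "humans + explained AI, large info": 3.1,
}

joint = rates["humans + explained AI, large info"]
print(f"vs. humans alone: {rates['humans alone, large info'] - joint:.1f} pts")
print(f"vs. AI alone:     {rates['AI alone, large info'] - joint:.1f} pts")
# Output: 7.5 pts and 2.1 pts
```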

In addition to dramatically reducing loan defaults, collaboration between AI and people also helped cut down on the gender bias AI introduced. The AI used online gaming activity as a key determinant of creditworthiness, and because men are more likely than women to game, the AI’s solo decisions were biased. "AI explanations motivated humans to think more actively, and additional analysis showed that rethinking by humans tended to mitigate the gender bias caused by AI," Lu says.
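The article doesn’t say how the gender bias was quantified. A standard first diagnostic is to compare rejection rates across groups under each decision regime; the sketch below runs that check on synthetic decisions (all rates invented) to show the shape of the analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
gender = rng.choice(["M", "F"], size=n)

# Synthetic decisions: suppose the AI alone rejects men more often
# (e.g., via a gaming-spend proxy) and human review narrows the gap.
ai_reject = rng.random(n) < np.where(gender == "M", 0.30, 0.22)
joint_reject = rng.random(n) < np.where(gender == "M", 0.26, 0.24)

def rejection_gap(reject: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in rejection rates between the two groups."""
    return abs(reject[group == "M"].mean() - reject[group == "F"].mean())

print(f"AI alone:   gap = {rejection_gap(ai_reject, gender):.3f}")
print(f"Human + AI: gap = {rejection_gap(joint_reject, gender):.3f}")
```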

The analytical framework Lu created shows that adding complexity (more information) and explaining how AI makes its decisions can help humans shift into System 2, active thinking, which improves decision-making overall. Lu says this framework is something others could test on their own data for decisions such as hiring or medical recommendations.

Still, he also cautions business managers to apply some of that System 2 thinking to the question of whether they need to motivate cooperation at all.

"Data are expensive, so if you only have limited data, think about whether you should invest your budget on training good AI or gaining more data," he says. "Do we need to motivate humans to collaborate with AI? Not necessarily in some contexts. We must figure out the conditions that realize the value of collaboration between humans and machines."
