summary: Researchers have developed a neural network that mimics human decision-making by incorporating uncertainty and accumulating evidence. Trained on handwritten digits, the model produces more human-like decisions than traditional neural networks.
It exhibits human-like accuracy, response times, and confidence patterns. This advancement could lead to more trustworthy AI systems and reduce the cognitive load of everyday decision-making.
Key Facts:
- Human-like decision making: Neural networks mimic human uncertainty and evidence accumulation in decision making.
- Performance comparison: The model exhibits patterns of accuracy, response time, and confidence similar to those of humans when tested on noisy digit images.
- Future possibilities: This approach could make AI more trustworthy and reduce the cognitive burden of everyday decision-making.
Source: Georgia Institute of Technology
Humans make approximately 35,000 decisions every day, from whether it’s safe to cross the road to what to eat for lunch. Every decision involves weighing options, recalling similar scenarios from the past, and building some confidence in the right choice. Even decisions that seem like snap judgments are actually made by gathering evidence from the surrounding environment. And the same person often makes different decisions in the same scenario at different times.
Neural networks, by contrast, make the same decision every time. Now, researchers in the lab of Georgia Tech Associate Professor Dobromir Rahnev are training neural networks to make more human-like decisions.
This science of human decision-making is just beginning to be applied to machine learning, but researchers say developing neural networks that more closely resemble real human brains could make machine learning more reliable.
In a paper published in Nature Human Behaviour, titled “Neural network RTNet exhibits characteristics of human perceptual decision-making,” a team from the School of Psychology presents a new neural network trained to make decisions similar to those of humans.
Deciphering decision-making
“Neural networks make decisions without telling us whether they’re confident about the decision,” says Farshad Rafiei, who earned his doctorate in psychology from Georgia Tech. “This is one of the fundamental differences with how humans make decisions.”
For example, large language models (LLMs) are prone to hallucinations: when asked a question they don’t know the answer to, they make something up rather than admitting it. In contrast, most humans in the same situation would admit they don’t know the answer. Building more human-like neural networks could prevent this kind of duplicity and lead to more accurate answers.
Creating a model
The researchers trained a neural network on handwritten digits from MNIST, a well-known computer science dataset, asking it to identify each digit. To see how well the model performed, they ran it on the original dataset and then added noise to the digits to make them harder for humans to distinguish.
To compare their model’s performance with that of humans, the researchers trained it, along with three other models (CNet, BLNet, and MSDNet), on the original, noise-free MNIST dataset, then tested all of them on the noisy version used in the human experiments and compared results across the two datasets.
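As a rough illustration of that train-on-clean, test-on-noisy setup, the sketch below degrades test images with additive Gaussian noise. The noise type and level used in the study are not specified here, and `add_noise` and `load_mnist` are hypothetical names, so treat this as a sketch of the idea rather than the authors’ actual procedure.

```python
# Minimal sketch (assumed details, not the paper's exact pipeline):
# train on clean MNIST digits, then evaluate on a noise-degraded copy.
import numpy as np

def add_noise(images: np.ndarray, noise_std: float = 0.5, seed: int = 0) -> np.ndarray:
    """Add Gaussian pixel noise to a batch of images scaled to [0, 1]."""
    rng = np.random.default_rng(seed)
    noisy = images + rng.normal(0.0, noise_std, size=images.shape)
    return np.clip(noisy, 0.0, 1.0)

# Hypothetical usage with any MNIST loader returning arrays in [0, 1]:
# x_train, y_train = load_mnist("train")            # train on clean digits
# x_test, y_test = load_mnist("test")
# x_test_noisy = add_noise(x_test)                  # evaluate on degraded digits
```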
The researchers’ model relies on two key components: a Bayesian neural network (BNN), which uses probabilities to make decisions, and an evidence accumulation process that tracks the evidence for each choice. The BNN generates a slightly different response each time.
As more evidence accumulates, the accumulation process sometimes favors one option and sometimes favors another. Once enough evidence has been accumulated to make a decision, RTNet stops the accumulation process and makes a decision.
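A minimal sketch of how those two components could fit together is shown below. It uses a stand-in stochastic network (random evidence with a small bias toward one digit) in place of a trained Bayesian neural network; `stochastic_forward`, `accumulate_to_decision`, the threshold value, and the evidence scale are illustrative assumptions, not details taken from the paper.

```python
# Sketch of stochastic forward passes feeding an evidence accumulator.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(image: np.ndarray, true_class: int = 3) -> np.ndarray:
    """Stand-in for one sampled forward pass of a Bayesian network.

    The image is ignored here; a real BNN would produce evidence that
    varies from sample to sample because its weights are distributions.
    """
    evidence = rng.normal(0.0, 1.0, size=10)
    evidence[true_class] += 0.5  # weak signal favoring the "correct" digit (assumed)
    return evidence

def accumulate_to_decision(image: np.ndarray, threshold: float = 5.0, max_steps: int = 1000):
    """Sum evidence across samples until one digit class crosses the threshold."""
    total = np.zeros(10)
    for step in range(1, max_steps + 1):
        total += stochastic_forward(image)
        if total.max() >= threshold:
            break
    choice = int(np.argmax(total))
    margin = float(np.sort(total)[-1] - np.sort(total)[-2])  # crude confidence proxy
    return choice, step, margin

choice, rt, conf = accumulate_to_decision(np.zeros((28, 28)))
print(f"decision={choice}, samples used (RT proxy)={rt}, margin={conf:.2f}")
```

In this toy version, the number of samples drawn before the threshold is crossed plays the role of response time, and the margin between the top two accumulated totals serves as a rough stand-in for confidence.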
The researchers also measured the model’s decision-making speed to see if it followed a psychological phenomenon known as the “speed-accuracy trade-off,” which states that when humans have to make decisions quickly, their decision-making accuracy decreases.
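Continuing the sketch above (same illustrative assumptions), the same mechanism reproduces that trade-off qualitatively: a lower evidence threshold forces faster decisions, and accuracy drops accordingly. The numbers printed are from the toy simulation, not results from the paper.

```python
# Lower threshold -> fewer samples (faster) but less accurate choices.
for threshold in (2.0, 5.0, 10.0):
    outcomes = [accumulate_to_decision(np.zeros((28, 28)), threshold=threshold)
                for _ in range(200)]
    accuracy = np.mean([c == 3 for c, _, _ in outcomes])   # 3 is the assumed true digit
    mean_rt = np.mean([rt for _, rt, _ in outcomes])
    print(f"threshold={threshold:5.1f}  mean RT={mean_rt:6.1f}  accuracy={accuracy:.2f}")
```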
Once the model’s results were in, the researchers compared them to human results. When 60 students from Georgia Tech looked at the same dataset and shared their confidence in their decisions, the researchers found that the accuracy, response times, and confidence patterns of the humans and neural networks were similar.
“Generally speaking, there is not enough human data in the existing computer science literature to know how people will behave when they see these images. This limitation hinders the development of models that accurately replicate human decision-making,” Rafiei said.
“This study provides one of the largest datasets of humans responding to MNIST.”
Not only did the team’s model outperform all competing deterministic models, it was also more accurate in high-speed scenarios, because RTNet essentially behaves like a human. The model also reflects another fundamental element of human psychology: confidence. For example, when we make the right decision, we feel more confident. Rafiei noted that the model didn’t need to be specifically trained to prioritize confidence; it exhibited that behavior automatically.
“If you try to make the model more like the human brain, you’ll see that in the behavior without any tweaking,” he said.
The research team hopes to train the neural network on more diverse datasets to test its potential, and also to apply this BNN model to other neural networks so they can rationalize more like humans.
Ultimately, algorithms could not only mimic human decision-making abilities, but also help ease some of the cognitive burden of the 35,000 decisions we make every day.
About this AI research news
author: Tess Malone
Source: Georgia Institute of Technology
contact: Tess Malone – Georgia Institute of Technology
image: Image courtesy of Neuroscience News
Original Research: Closed access.
“Neural network RTNet exhibits characteristics of human perceptual decision-making” by Dobromir Rahnev et al. Nature Human Behaviour
Abstract
Neural network RTNet exhibits characteristics of human perceptual decision-making
Convolutional neural networks show promise as models of biological vision. However, their decision-making behavior differs markedly from that of humans: they are deterministic and use the same amount of computation for easy and difficult stimuli, which limits their applicability as models of human perception.
Here, we develop a novel neural network, RTNet, that generates probabilistic decisions and human-like response time (RT) distributions. Furthermore, we conduct comprehensive testing and show that RTNet reproduces all basic features of human accuracy, RT, and confidence, and outperforms all current alternatives.
To test RTNet’s ability to predict human behavior on novel images, we collected accuracy, RT, and confidence data from 60 subjects performing a digit identification task. We found that the accuracy, RT, and confidence produced by RTNet for each novel image correlated with the same quantities produced by human subjects.
Importantly, human participants whose performance was close to the average human were also found to be close to RTNet’s predictions, suggesting that RTNet successfully captured average human behavior.
Overall, RTNet is a promising model of human RT that exhibits key features of perceptual decision making.