Researchers at the Georgia Institute of Technology have developed a neural network called RTNet that mimics the human decision-making process, including its variability and its sense of confidence, improving the network’s reliability and accuracy in tasks such as digit recognition.
Humans make approximately 35,000 decisions every day, from determining whether it is safe to cross the road to choosing what to eat for lunch. Each decision involves evaluating options, recalling similar situations from the past, and building confidence in the correct choice. What may seem like a snap decision is actually the result of gathering evidence from the environment. Furthermore, the same person may make different decisions at different times in the same situation.
A neural network would do the opposite, making the same decision every time. Researchers in the lab of Georgia Tech Professor Dobromir Rahnev are training neural networks to make decisions more like humans do. This science of human decision-making is only just beginning to be applied to machine learning, but the researchers say that building neural networks that more closely resemble real human brains could make them more reliable.
In a paper published in Nature Human Behaviour, a team from Georgia Tech’s School of Psychology unveiled a new neural network trained to make human-like decisions.
Deciphering decisions
“Neural networks make decisions without telling us whether they’re confident about the decision,” says Farshad Rafiei, who earned his doctorate in psychology from Georgia Tech. “This is one of the fundamental differences with how humans make decisions.”
For example, large language models (LLMs) are prone to hallucinations: when asked a question they don’t know the answer to, they make something up rather than admitting it. In contrast, most humans in the same situation would admit they don’t know the answer. Building more human-like neural networks could prevent this duplicity and lead to more accurate answers.
Creating a model
The research team trained their neural network on handwritten digits from a well-known computer science dataset called MNIST, asking it to decipher each digit. To probe the model’s accuracy, they added noise to the digits to make them harder for humans to discern. To compare the model’s performance with humans, they trained their model, along with three rival models (CNet, BLNet, and MSDNet), on the original noise-free MNIST dataset, tested all of them on the noisy version used in the human experiments, and compared results across the two datasets.
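A minimal sketch of this train-clean, test-noisy setup, assuming a torchvision-based MNIST loader and additive Gaussian pixel noise; the noise level here is an illustrative placeholder, not the value used in the study:

```python
import torch
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
train_set = datasets.MNIST("data", train=True, download=True, transform=to_tensor)
test_set = datasets.MNIST("data", train=False, download=True, transform=to_tensor)

def add_noise(image, sigma=0.6):
    """Corrupt a [0, 1] digit image with additive Gaussian pixel noise."""
    noisy = image + sigma * torch.randn_like(image)
    return noisy.clamp(0.0, 1.0)

# Models would be trained on train_set as-is and evaluated on noisy test digits.
clean_img, label = test_set[0]
noisy_img = add_noise(clean_img)
```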
The researchers’ model relied on two main components: a Bayesian neural network (BNN), which uses probability distributions rather than fixed weights to make decisions, and an evidence-accumulation process that tracks the evidence for each choice. Because the BNN samples its weights, it generates a slightly different response each time, so the accumulating evidence sometimes favors one choice and sometimes another. Once enough evidence has accumulated, RTNet stops the accumulation process and commits to a decision.
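A rough sketch of this sample-and-accumulate idea, using Monte Carlo dropout as a stand-in for a full Bayesian neural network; the network size, threshold, and confidence measure are illustrative assumptions rather than the authors’ implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticNet(nn.Module):
    """Small MNIST classifier kept stochastic at inference via dropout
    (a simple stand-in for sampling weights from a Bayesian network)."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 256)
        self.drop = nn.Dropout(p=0.3)   # left active so each forward pass differs
        self.fc2 = nn.Linear(256, 10)

    def forward(self, x):
        x = F.relu(self.fc1(x.flatten(1)))
        return self.fc2(self.drop(x))

def accumulate_decision(model, image, threshold=5.0, max_steps=100):
    """Sample predictions repeatedly and accumulate evidence per class.

    Returns (choice, confidence, steps); steps serves as a response-time proxy.
    """
    model.train()                       # keep dropout on: each pass is a new sample
    evidence = torch.zeros(10)
    for step in range(1, max_steps + 1):
        with torch.no_grad():
            probs = F.softmax(model(image.unsqueeze(0)), dim=1).squeeze(0)
        evidence += probs               # noisy evidence accumulation
        if evidence.max() >= threshold: # enough evidence: stop and decide
            break
    choice = int(evidence.argmax())
    confidence = float(evidence.max() / evidence.sum())  # simple confidence proxy
    return choice, confidence, step
```

In use, something like accumulate_decision(net, noisy_img) would return a choice, a confidence estimate, and the number of samples drawn before the model committed.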
The researchers also measured the model’s decision-making speed to see whether it followed a psychological phenomenon known as the “speed-accuracy trade-off”: when humans have to make decisions quickly, their accuracy decreases.
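Building on the sketch above, one way to illustrate the speed-accuracy trade-off is to sweep the evidence threshold and record both accuracy and the number of accumulation steps (a stand-in for response time); the threshold values and trial count below are arbitrary examples:

```python
def speed_accuracy_curve(model, dataset, thresholds=(1.0, 3.0, 6.0), n_trials=200):
    """Measure accuracy and mean accumulation steps at several evidence thresholds."""
    results = []
    for thr in thresholds:
        correct, total_steps = 0, 0
        for i in range(n_trials):
            image, label = dataset[i]
            choice, _conf, steps = accumulate_decision(model, image, threshold=thr)
            correct += int(choice == label)
            total_steps += steps
        results.append({
            "threshold": thr,
            "accuracy": correct / n_trials,
            "mean_steps": total_steps / n_trials,   # proxy for response time
        })
    return results
```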
Once the model’s results were in, the researchers compared them to data from humans. Sixty Georgia Tech students viewed the same noisy digits and reported their choices and their confidence in those choices, and the researchers found that the humans’ and the neural network’s accuracy, response times, and confidence patterns were similar.
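One simple way to quantify such a comparison is sketched below, under the assumption that matched per-condition summary statistics (accuracy, response time, or confidence) are available for both humans and the model; none of the values come from the study:

```python
import numpy as np

def pattern_similarity(human_values, model_values):
    """Pearson correlation between matched human and model summary statistics
    (e.g., accuracy, response time, or confidence per stimulus condition)."""
    human_values = np.asarray(human_values, dtype=float)
    model_values = np.asarray(model_values, dtype=float)
    return float(np.corrcoef(human_values, model_values)[0, 1])
```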
“Generally speaking, there is not enough human data in the existing computer science literature to know how people will behave when they see these images. This limitation prevents the development of models that accurately reproduce human decision-making,” Rafiei said. “This study provides one of the largest datasets of humans reacting to MNIST.”
Not only did the team’s model outperform all of the competing deterministic models, it was also more accurate in high-speed scenarios, thanks to another fundamental element of human psychology that RTNet reproduces: like people, the model is more confident when its decisions are correct. Rafiei noted that the model did not need to be specifically trained to prioritize confidence; the behavior emerged automatically.
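A small sketch of how this confidence pattern could be checked with the accumulation model above, comparing mean confidence on correct versus incorrect trials; the function name and defaults are illustrative:

```python
def confidence_by_correctness(model, dataset, n_trials=500, threshold=5.0):
    """Mean confidence on correct versus incorrect trials for the model above."""
    conf_correct, conf_wrong = [], []
    for i in range(n_trials):
        image, label = dataset[i]
        choice, conf, _steps = accumulate_decision(model, image, threshold=threshold)
        (conf_correct if choice == label else conf_wrong).append(conf)
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return mean(conf_correct), mean(conf_wrong)
```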
“If you try to make the model more like the human brain, you’ll see that in the behavior without any tweaking,” he said.
The team hopes to train the neural network on more diverse datasets to test its potential, and to apply the BNN approach to other neural networks so that they reason more like humans. Ultimately, the algorithm could not only mimic human decision-making capabilities, but also help ease some of the cognitive burden of the 35,000 decisions we make every day.
Reference: “The neural network RTNet exhibits the signatures of human perceptual decision-making” by Farshad Rafiei, Medha Shekhar, and Dobromir Rahnev, 12 July 2024, Nature Human Behaviour.
DOI: 10.1038/s41562-024-01914-8