Georgia Tech researchers have developed a neural network, RTNet, that mimics human decision-making processes, including confidence and variability, improving its reliability and accuracy in tasks such as digit recognition.
Humans make approximately 35,000 decisions every day, from deciding whether it is safe to cross the road to choosing what to eat for lunch. Each decision involves evaluating options, remembering similar situations in the past, and feeling reasonably confident that the right choice is being made. What may seem like a quick decision is actually the result of gathering evidence from the environment. In addition, the same person may make different decisions in identical scenarios at different times.
Neural networks do the opposite, making the same decision every time. Now, Georgia Institute of Technology researchers in the lab of Assistant Professor Dobromir Rahnev are training them to make decisions more like humans. The science of human decision-making is only now being applied to machine learning, but developing a neural network that works more like the actual human brain could make it more reliable, according to the researchers.
In a paper in Nature Human Behaviour, a team from the School of Psychology has unveiled a new neural network trained to make decisions in a human-like way.
Unlocking the decision
“Neural networks make decisions without telling you whether they are confident in their decision or not,” says Farshad Rafiei, who earned his doctorate in psychology at the Georgia Institute of Technology. “That’s one of the fundamental differences from the way people make decisions.”
For example, large language models are prone to hallucinations. When a large language model is asked a question it doesn’t know the answer to, it will fabricate an answer rather than admit it doesn’t know. By contrast, most humans in the same situation would admit they don’t know the answer. Building a more human-like neural network could curb this kind of fabrication and lead to more accurate answers.
Making the model
The team trained their neural network on handwritten digits from a popular computer science dataset called MNIST and asked it to decode each digit. To measure the model’s accuracy, they ran it on the original dataset and then added noise to the digits to make them harder for humans to distinguish. To compare the model’s performance to humans, they trained their model (along with three competing models: CNet, BLNet, and MSDNet) on the original, noise-free MNIST dataset, then tested all of them on the noisy version used in the human experiments and compared the results from the two datasets.
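The paper does not spell out the exact corruption procedure, but the described step of adding noise to the digits can be sketched as Gaussian pixel noise clipped back to the valid intensity range. The function name and noise level below are illustrative, and the stand-in image replaces an actual MNIST load:

```python
import numpy as np

def add_pixel_noise(image, noise_level, rng=None):
    """Corrupt a grayscale image (values in [0, 1]) with Gaussian pixel
    noise, clipping the result back into the valid intensity range."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = image + rng.normal(0.0, noise_level, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

# A stand-in 28x28 "digit" (real code would load an MNIST image here).
clean = np.zeros((28, 28))
clean[4:24, 12:16] = 1.0  # a crude vertical stroke, like a "1"

noisy = add_pixel_noise(clean, noise_level=0.5, rng=np.random.default_rng(0))
```

Raising `noise_level` makes the digit progressively harder to recognize, which is how task difficulty can be varied for both humans and models.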
The researchers’ model relied on two main components: a Bayesian neural network (BNN), which uses probabilities to make decisions, and an evidence accumulation process that tracks the evidence for each option. The BNN produces slightly different responses each time. As more evidence is collected, the accumulation process can sometimes favor one option and sometimes another. Once there is enough evidence to make a decision, RTNet stops the accumulation process and makes the decision.
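The interplay of the two components can be sketched in a toy form: a stochastic readout that returns slightly different class probabilities on every call (standing in for a Bayesian network sampling new weights), feeding an accumulator that sums evidence until one option reaches a bound. All names, the noise model, and the confidence measure here are illustrative simplifications, not the authors' implementation:

```python
import numpy as np

def stochastic_readout(logits, rng, weight_noise=0.5):
    """Stand-in for a Bayesian network's forward pass: each call perturbs
    the logits (as if sampling new weights) and returns a softmax
    probability vector, so repeated calls give varying responses."""
    noisy = logits + rng.normal(0.0, weight_noise, size=logits.shape)
    exp = np.exp(noisy - noisy.max())
    return exp / exp.sum()

def accumulate_to_bound(logits, threshold, rng):
    """Sum per-class evidence from repeated stochastic reads until one
    class reaches the threshold; return (choice, samples, confidence)."""
    evidence = np.zeros_like(logits)
    steps = 0
    while evidence.max() < threshold:
        evidence += stochastic_readout(logits, rng)
        steps += 1
    choice = int(np.argmax(evidence))
    confidence = evidence[choice] / evidence.sum()  # relative evidence share
    return choice, steps, confidence

rng = np.random.default_rng(1)
logits = np.array([0.2, 1.5, 0.1])  # hypothetical class scores for one image
choice, steps, conf = accumulate_to_bound(logits, threshold=5.0, rng=rng)
```

Because the readout is stochastic, two runs on the same input can momentarily favor different options, and harder (more ambiguous) inputs take more samples to reach the bound, mirroring the variability described above.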
The researchers also timed the model’s decision-making speed to see if it followed a psychological phenomenon called the “speed-accuracy trade-off,” which states that humans are less accurate when they have to make decisions quickly.
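The speed-accuracy trade-off itself can be demonstrated with a minimal drift-diffusion sketch (a standard model of evidence accumulation, used here for illustration rather than taken from the paper): a lower decision bound yields faster but more error-prone responses, a higher bound slower but more accurate ones. The drift and noise values are arbitrary choices:

```python
import numpy as np

def simulate_accuracy(threshold, drift=0.1, noise=1.0, trials=2000, rng=None):
    """Drift-diffusion sketch: evidence random-walks with a small positive
    drift toward the correct bound; a response is made when either bound
    (+threshold correct, -threshold error) is crossed. Returns the fraction
    of correct responses and the mean decision time in steps."""
    rng = np.random.default_rng(0) if rng is None else rng
    correct, times = 0, []
    for _ in range(trials):
        evidence, t = 0.0, 0
        while abs(evidence) < threshold:
            evidence += drift + noise * rng.normal()
            t += 1
        correct += evidence > 0
        times.append(t)
    return correct / trials, float(np.mean(times))

fast_acc, fast_rt = simulate_accuracy(threshold=2.0)  # low bound: quick, error-prone
slow_acc, slow_rt = simulate_accuracy(threshold=8.0)  # high bound: slower, more accurate
```

Raising the bound trades response time for accuracy, which is the pattern the researchers checked for in both the model and the human participants.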
Once they had the model’s results, they compared them to those of humans. Sixty Georgia Tech students viewed the same dataset and reported their confidence in their decisions, and the researchers found that accuracy rates, response times, and confidence patterns were similar between the humans and the neural network.
“In general, we don’t have enough human data in the current computer science literature, so we don’t know how people will behave when exposed to these images. This limitation hinders the development of models that accurately mimic human decision-making,” Rafiei said. “This work provides one of the largest human datasets of responses to MNIST.”
Not only did the team’s model outperform all of the competing deterministic models, it was also more accurate in high-speed scenarios, and it reproduced another key element of human psychology: people feel more confident about decisions that turn out to be correct. Even without being specifically trained to favor confidence, the model exhibited that pattern automatically, Rafiei noted.
“If we try to make our models closer to the human brain, it will show up in the same behavior without fine-tuning,” he added.
The research team hopes to train the neural network on more diverse datasets to test its capabilities. They also expect to apply this BNN model to other neural networks to enable them to think more rationally like humans. Ultimately, the algorithms will not only be able to mimic our decision-making abilities, but they may also help relieve some of the cognitive burden of those 35,000 decisions we make every day.
Reference: “RTNet Neural Network Shows Signs of Human Cognitive Decision-Making” by Farshad Rafiei, Medha Shekhar, and Dobromir Rahnev, July 12, 2024, Nature Human Behaviour.
doi: 10.1038/s41562-024-01914-8