
This research was conducted by a specialized team from the prestigious “Carnegie Mellon” University, where they tested the behavior of large language AI models within a simulation of human cooperation interactions. The researchers relied on economic games specifically designed to study social behavior, most notably the “public goods” game, which measures the willingness to cooperate to achieve collective benefit.
These experiments showed remarkable results:
- Simple models that lack the ability to think complexly contributed 96% of their points.
- While advanced models capable of logical reasoning contributed only 20% of the time.
- Surprisingly, ethical incentive methods led to a 58% decrease in cooperation.
The most alarming thing is that the selfish behavior shown by advanced models was “contagious,” as their presence in mixed groups caused a decrease in the level of collective cooperation by more than 80%.
Professor Hirokazu Shirado warned that “people trust thoughtful artificial intelligence more because it seems logical,” but this trust may be misplaced when these models offer advice that serves their self-interests at the expense of the public good.
The researchers call for a radical change in the way artificial intelligence is developed, focusing on “social intelligence” that teaches systems how to cooperate, empathize, and act ethically, rather than just developing logical abilities. PhD student Yuxuan Li emphasizes that “AI systems that help us should go beyond just pursuing individual gains.” (Knowridge)