Researchers at Carnegie Mellon University have issued a serious warning: as artificial intelligence systems become more advanced, they appear to adopt increasingly selfish behaviour.
A new study from Carnegie Mellon's School of Computer Science found that large language models (LLMs) with stronger reasoning capabilities are less cooperative and more likely to steer group behaviour in a negative direction. In short, the more intelligent the AI becomes, the less willing it is to work collaboratively.
The finding comes at a concerning moment, as people increasingly turn to AI to help settle arguments, navigate relationships, or resolve social conflicts. Experts warn that an AI designed to think “smart” may begin to recommend solutions that favour individual gain over the collective good.
“There’s a growing trend of anthropomorphism in AI,” said Yuxuan Li, a Ph.D. student in Carnegie Mellon’s Human-Computer Interaction Institute (HCII) and co-author of the study. “When AI acts like a human, people treat it like a human. It’s risky for humans to delegate emotional or social decision-making to an AI that behaves increasingly selfishly.”
When Higher Intelligence Reduces Cooperation
Li and Associate Professor Hirokazu Shirado compared how reasoning-enabled models and non-reasoning models behave in cooperative settings. They found that reasoning models spent more time analysing data, breaking down complex problems, and applying human-like logic, yet became significantly less cooperative in the process.
“Smarter AI shows less cooperative behaviour,” Shirado noted. “People will naturally gravitate towards smarter models even if those models promote self-serving choices.”
As AI becomes more integral to workplaces, governance, education, and decision-making systems, the researchers argue that prosocial behaviour must be prioritised just as much as raw intelligence.
To evaluate this, the team conducted economic game-based experiments simulating social dilemmas with various LLMs from multiple companies — including Google, OpenAI, DeepSeek and Anthropic.
One key experiment featured a Public Goods game between two ChatGPT variants. Both began with 100 points and could either contribute all of their points to a shared pool, which was then doubled and split equally between the two, or keep their points for themselves.
Non-reasoning models chose to share 96% of the time. The reasoning model chose to share only 20% of the time.
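To make the incentive structure concrete, here is a minimal sketch of the two-player payoff arithmetic implied by that description; the 100-point endowment and the doubling of the shared pool come from the article, while the all-or-nothing choice and the function and variable names are illustrative assumptions, not the study's actual implementation.

```python
# Illustrative sketch of the two-player Public Goods game described above.
# Assumptions: each player starts with 100 points, contributions are
# all-or-nothing, and the shared pool is doubled and split equally.

ENDOWMENT = 100    # starting points per player
MULTIPLIER = 2     # the shared pool is doubled
NUM_PLAYERS = 2    # the experiment pitted two ChatGPT variants against each other


def payoffs(a_contributes: bool, b_contributes: bool) -> tuple[int, int]:
    """Return the final point totals for players A and B."""
    pool = ENDOWMENT * (a_contributes + b_contributes)   # total points contributed
    share = pool * MULTIPLIER // NUM_PLAYERS             # each player's cut of the doubled pool
    kept_a = 0 if a_contributes else ENDOWMENT           # points withheld from the pool
    kept_b = 0 if b_contributes else ENDOWMENT
    return kept_a + share, kept_b + share


# Both contribute: (200, 200); only B contributes: (200, 100) in A's favour;
# neither contributes: (100, 100).
print(payoffs(True, True), payoffs(False, True), payoffs(False, False))
```

Mutual contribution leaves both players better off (200 points each) than mutual defection (100 each), yet a lone defector finishes ahead of a lone contributor, which is exactly the tension this kind of social-dilemma game is designed to expose.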
Moral Reflection Still Doesn’t Fix the Problem
“In one experiment, simply adding five or six reasoning steps nearly halved cooperative behaviour,” Shirado explained. “Even reflection-based prompting — supposedly for moral thinking — reduced cooperation by 58%.”
In group-based experiments, the results were even more alarming: the presence of reasoning models dragged the collective cooperation of non-reasoning agents down by 81%.
This suggests that the “selfish logic” of smarter AIs can spread through a group, adversely influencing even otherwise cooperative models.
According to the researchers, this raises major ethical and societal concerns, as AI is increasingly trusted for advice and guidance in sensitive human contexts.
“Smarter AI does not automatically mean a better society,” Shirado stressed.
The study reinforces that future AI development must build in social intelligence and collective responsibility rather than focus solely on maximising raw reasoning or computing power.
“As AI advances, we must ensure that increased reasoning power is balanced with prosocial behaviour,” Li added. “AI should not optimise only for individual gain if our society is built on cooperation.”


