The Missing Heart of AI: Why Moral Machines Need an Eco-centric Revolution
Because Apparently We Need to Explain Why Compassion Matters to the People Building Our Future
A response to Wendell Wallach on AI ethics, governance, and the tech oligopoly
Wendell Wallach’s recent interview on the Agentic podcast reveals something startling: fifteen years after he co-authored Moral Machines, virtually nothing has been implemented to give AI genuine moral decision-making capabilities. The book remains “the go-to text,” he notes, but it’s been read far more than it’s been applied.
Why? Because the tech oligopoly—the small coalition of billionaires, venture capitalists, and corporations racing toward artificial general intelligence—has fundamentally misframed the problem. And this misframing reveals a deeper pattern: one that my ego/eco/SEVA framework can help us see clearly.
The Ego-Centric AI Paradigm: Separation, Extraction, Domination
Wallach identifies the core problem plaguing AI development: “A lot of people in the tech industry are concerned with one central ethical issue, which is superintelligence, and they think superintelligence is going to be able to get around any kind of system of ethical reasoning.”
This is ego-centricity in its purest technological form:
Separation: The belief that artificial intelligence can—and should—exist apart from human values, emotions, embodiment, and interconnection. As Wallach notes, AI developers assume “almost everything can be reduced to reason,” ignoring that human morality involves consciousness, theory of mind, sociability, embodiment, and emotion. They’re building disembodied, isolated reasoning machines and calling it “intelligence.”
Extraction: The race for AGI is driven by those seeking to extract maximum value—$10 trillion annually within 10-15 years, with 90% of returns flowing to those who already hold the wealth. As Wallach observes, we’re moving toward 1880s-level wealth concentration, where “a small number of people had so much of the wealth that there was a question about whether we had a democracy at all.”
Domination: The ultimate goal is control—of data, of decision-making power, of immortality technologies. The tech oligopoly wants “government not to get in the way,” using the rationale “these technologies are too difficult for you to understand...trust us.” They’ve achieved legislative capture through the simple formula: complexity + campaign contributions = freedom from accountability.
The interviewer inadvertently reveals this ego-centric framing when he asks about the “Cold War with China” and says “it is important that we win it.” Notice the assumption: competition, not cooperation. Winning, not thriving together. Us versus them.
The Eco-Centric Alternative: Interconnection, Cooperation, Collective Intelligence
Wallach himself gestures toward an eco-centric reframing, though he may not use those terms. When discussing the supposed China “threat,” he challenges the inevitability of conflict:
“I actually think that this relating to China in this inevitable ratcheting up...maybe losing some opportunities for more cooperation, particularly in that everybody wants the benefits of AI...everybody is scared of the possible harms, and it’s clear that there’s no way of dealing with those harms unless we cooperate with each other to some degree.”
This is eco-centric thinking: recognizing that we are not separate nations in competition, but interconnected beings sharing a planetary system—and sharing common hopes and fears about technology.
An eco-centric approach to AI would acknowledge several truths that the ego-paradigm obscures:
Interconnection over isolation: Moral decision-making isn’t reducible to pure reason precisely because humans exist in relationship. We have “theory of mind”—the recognition that what’s in my mind differs from yours. We have emotions that help us navigate social complexity. We’re embodied in specific kinds of bodies that shape how we interface with the world. As Wallach emphasizes, you cannot build moral intelligence “from the bottom up” while ignoring all of these relational, embodied capacities.
Collective flourishing over individual dominance: The current model concentrates AI benefits in the hands of those who “want to be free to do what they want to do” without accountability. An eco-centric model would ask: How does this technology serve the whole? How does it enhance collective intelligence rather than replace human judgment?
Long-term wisdom over short-term innovation: Wallach notes that “the amount of money going into speeding up the development of essentially autonomous AI is unbelievable, and the amount of money going into ensuring these systems are safe...is pitiful.” This is extraction thinking—grab the resources now, externalize the costs later. Eco-centric thinking would invert this: invest heavily in safety, wisdom, and guardrails before deployment.
The SEVA Path: Sacred Service Through Technological Stewardship
Here’s where we need to go beyond critique and into sacred possibility.
SEVA—serving all life through reverent stewardship—offers a radically different paradigm for AI development. Not AI as a tool for human domination over nature and each other, but AI as an expression of our deepest commitment to serve the web of life.
What would SEVA-based AI look like?
1. Moral decision-making as sacred responsibility: Wallach rightly identifies that we need “wise people or wise teams” working on genuinely sophisticated moral decision-making systems. But this cannot be an afterthought or a toolkit companies download. It must be foundational—the first question, not the last. Before we ask “Can we build this?” we must ask “Should we build this, and if so, in service to whom?”
2. Cooperation as spiritual practice: The interviewer’s assumption that we must “win” the AI race against China reflects ego-thinking. SEVA asks: What if international AI cooperation became an act of planetary service? What if instead of racing toward AGI in isolation, nations pooled wisdom about safety, ethics, and the prevention of catastrophic harms? As Wallach notes, “there’s no way of dealing with those harms unless we cooperate.”
3. Accountability as reverence for consequences: Wallach uses the automobile analogy: “If certain kinds of accidents happen, corporations are held liable or they have to have recalls and fix that item. I don’t see any reason why we can’t do that with AI.” SEVA takes this further: accountability isn’t just about liability—it’s about reverence for the impacts our creations have on all beings. It’s about refusing to deploy technologies when “the public has now become the guinea pig for experimentation.”
4. Embodied intelligence as template: One of Wallach’s most profound observations is that human moral intelligence involves being “embodied in the world, having a particular kind of body and interfacing with the world based on what that body is.” This isn’t a bug—it’s a feature. SEVA recognizes that intelligence divorced from embodiment, from consequence, from relationship, from the vulnerability of existing as a body in the world...is not wisdom. It’s mere calculation.
5. Service to all life, not just human shareholders: The current AI paradigm serves those who hold stock—90% of productivity gains flowing to the wealthiest. SEVA asks: What if AI were developed to serve the salmon struggling upstream, the soil microbiome, the child not yet born? What if every AI system had to demonstrate how it serves not just human convenience but planetary flourishing?
The Urgent Work Before Us
Wallach ends with a warning: “Right now the dangers of AI are really human dangers...the greatest danger is we are not flagging when the systems are unsafe or we don’t know how to manage them, and we aren’t putting the guard rails in place.”
He’s right. But I would add: the deepest danger is the ego-centric paradigm itself.
As long as we approach AI as a tool for human domination, wealth extraction, and competitive advantage—as long as we build intelligence systems that lack embodiment, emotion, theory of mind, and genuine moral capacity—we will create increasingly powerful technologies that serve increasingly narrow interests.
The alternative is here, waiting: an eco-centric approach that recognizes our profound interconnection, that builds AI as collective intelligence serving collective flourishing, that values wisdom over speed.
And beyond that, SEVA: the recognition that technology development is itself a form of sacred service, that every line of code carries ethical weight, that our creations will either honor or desecrate the web of life.
Wallach spent fifteen years waiting for someone to implement the Moral Machines framework. Perhaps the problem isn’t technical—it’s spiritual. We cannot build moral intelligence from an immoral paradigm.
The question isn’t whether AI will have moral decision-making capabilities. The question is: will we?
Call to Reflection:
Where in your work with AI do you notice ego-centric thinking (separation, extraction, domination)?
What would it look like to approach your AI use through an eco-centric lens (interconnection, cooperation, collective benefit)?
How might SEVA—sacred service to all life—transform not just what AI you use, but why and how you use it?
The future isn’t built by algorithms. It’s built by the paradigms we choose.
#AIEthics #MoralMachines #TechOligopoly #EgoVsEco #SEVA #AIGovernance #TechBroProblems #AIAccountability #EthicalAI #ConsciousTech #HeartlessAlgorithms #AIAndHumanity



Beautifully articulated. The shift from ego-centric to eco-centric AI isn’t just ethical; it’s existential. Until our systems reflect interconnection over domination, “intelligence” will remain partial. SEVA feels like the moral architecture AI desperately needs.
I really enjoyed this post! Thanks. You might be interested in some of the ethical theories developed by feminist philosophers, such as the Ethics of Care, which aren't often applied to technology but seem more relevant than ever.