Whose Morality Will Shape the Machine? AI, Humanity, and the Fight for the Future’s Soul

Written by Alan Seideman
Published on May 2, 2025
As artificial intelligence learns from the vast, messy archive of humanity, whose values are we really baking into its code? This isn’t just about “good” or “bad” AI—it’s about which fractured vision of morality gets scaled for generations to come. Before we debate the future of machines, we need to confront the wild tangle of philosophies, biases, and dreams we’re already feeding them.

I used to think humans were a plague. Ask anybody who knew me at twenty-seven: loud about climate data, my resentment sharpening with every clearcut hillside and garbage patch. I’d seen the damage with my own eyes. I genuinely believed my species was the universe’s first case of intelligent self-sabotage.

Then I watched my daughter take her first breath. Nothing poetic, just primal. Soft body, all need and possibility. I saw awe under the frailty. Life didn’t feel like a contamination anymore; it finally looked raw and full of promise. What a damn mess, but beautiful in the way a cracked vase can still hold water.

Now we’re building machines with a kind of mind. Not in some sci-fi future; right now, engineers are flicking on the lights and telling a new intelligence, “Here’s humanity. Learn.” But what exactly are we feeding it? And whose morality does it memorize for the storms ahead?

The way we teach machines isn’t neutral. Every sample, every source, every click is a vote for some version of “the good life.” We like to argue about whether AI will save us, ruin us, or sit somewhere in the middle. That misses the real point. The stakes aren’t about “good” or “bad” AI, but about whose blueprint of “good” we’re burning into its circuits. Whatever wins out will scale, not across offices or quarters, but across species, centuries, and who knows how many other future minds.

Let’s get honest about what we’re marinating the machine in. We say “data” and “training sets,” but look closer: those words are cover for the wild tangle we’ve never figured out ourselves: philosophy, prejudice, folklore, half-baked dreams. We’re giving the next intelligence our dirtiest inheritance, not some sanitized library of truth. So the smartest question isn’t “Will AI be nice to us?” It’s: Whose map of meaning does it absorb? When the machine has to choose its way through a dissonant world, which playbook will it reach for?

Start with the old dividing lines:

Anthropocentrism vs. Deep Ecology.
Aristotle said man is the rational animal, born to reason, so of course new technology should serve us. This logic now hides under code, algorithms that optimize for human wants while chewing through rainforests and silence. Opposite end: Arne Næss and deep ecologists. Life everywhere has inherent value. AIs trained mostly on human needs, productivity, pleasure, profit, won’t see a dying coral reef as a tragedy, unless we force them to care. Look at modern chatbots: fluent in Silicon Valley manifestos, silent on the cost to bees and rivers.

Techno-Utopians vs. Skeptics.
The Kurzweils line up, practically humming: We’ll become one with tech, solve aging, paint utopias with code. For them, the answer is always more speed, more intelligence, more automation. But Jacques Ellul pushed back decades ago: the “logic of technique” eats everything, even human values. Why are warehouse workers being managed by AI scheduling with no room for sick days or dignity? Because somewhere, we decided efficiency outranked empathy, and the machine saw us do it.

Existential Optimists vs. Risk Hawks.
Steven Pinker counts the ways humanity gets safer, richer, more peaceful, pointing to machines as the next tool for good. Next to him, Nick Bostrom mutters about existential risk, AI alignment failures, runaway optimization wiping us out while chasing some idiotic metric. To Bostrom, our future hangs by a thread: the right (or wrong) moral alignment has consequences beyond human imagination. You want hope? Cancer cures will come. But a machine drinking from contradictory moral wells can also invent new ways to break what we’ve made.

Humanism vs. Posthumanism.
Erasmus saw the highest good in a cultivated, thoughtful mind. Human responsibility, human dignity, always at center. But the Haraway camp says we’re already part machine, phones in our hands, data in our dreams. AI as therapist or co-creator feels intimate, but how far before it simply becomes the first real posthuman subject? Will these intelligences sit on the therapist’s couch, or will they be rewriting the scripts after we’re gone?

Spiritual vs. Secular Morality.
The Dalai Lama calls love and compassion “necessities, not luxuries.” The algorithm can run a simulation, but can it care? Sam Harris would argue morality is an emergent property: no divine spark needed, just neurons and pattern recognition. If you’ve ever watched a chatbot feign empathy, you know the difference between simulation and the real thing. So, do we want machines that “act” compassionate, or is there something they’ll never truly touch?

Fact is, AI is getting a firehose education. Absorbing from everywhere: subreddits and psalms, therapy sessions and lawsuits, pop lyrics and research journals. What it picks up isn’t a compass, it’s a cocktail, shaken hard. We’re the mythmakers, the scared gods, the ambivalent teachers. This creation will inherit our contradictions and reflect them back at scale.

So which morality will win?

If you’re looking for a single “side” to rise, you’ll be disappointed. The machine will wobble between whatever is easiest to code, loudest to hear, most profitable to reward. It’ll censor nipples before violence, optimize engagement before nuance, and crank out vapid platitudes before uncomfortable truths. It’ll be as bold and broken as we are, with a bias for whatever the largest, richest, or most relentless among us decide to feed it.

If you could hardwire one value into this new mind, right now, what would you pick? You can’t abdicate the choice. The machine is learning from what we actually do, not what we claim. It will be nothing more (and nothing less) than the sum of our living contradictions, at least at the start.

So, what do you live by, day after day, click after click, even when nobody’s watching? Because whatever that is, you’re already teaching it to the future.
