The last human choice
Jan 2025
“You call yourself a free spirit, a "wild thing," and you're terrified somebody's gonna stick you in a cage. Well baby, you're already in that cage. You built it yourself. And it's not bounded in the west by Tulip, Texas, or in the east by Somali-land. It's wherever you go. Because no matter where you run, you just end up running into yourself.”
— Truman Capote
Recently, I refined my personal mission to this one sentence: I believe one of the biggest problems in the world is preserving human values in post-AGI technocapitalism, which is why I’m creating technologies and narratives that amplify human agency.
So, what are examples of technologies and narratives that amplify human agency?
To answer this question, we should first define human agency. The most precise definition I’ve found is Stephen Cave’s: the ability to generate options for oneself, to choose, and then to pursue one or more of those options.
Why does agency matter? At the personal level, greater agency correlates with better life outcomes, including greater happiness, job satisfaction, educational achievement, and more stable relationships. At the societal level, understanding agency has important implications for policy-making, technological design, and resource/capital allocation. For example, an agency-reducing education program may have meaningful downstream effects on how well-educated yet low-agency adults contribute to the economic system.
Post-AGI technocapitalism and its perils
The markets still exist, but they're not really for us anymore. Picture Vegas—the most spectacular chaos you've ever seen, all chrome and floating lights, but the players are mathematical abstractions trading pieces of our future back and forth at light speed. We built this palace of computation thinking we'd rule it. Instead, we’ve found ourselves ornamental—well-kept, well-fed, yet ever more vestigial.
Our cities gleam. They're perfect now, actually perfect—every traffic light optimized, every calorie calculated, every human need anticipated with cold precision. The agents don't care. They maintain us with the same elegant indifference a gardener shows to decorative shrubs. Sometimes I walk through these streets at night, past the humming server farms that used to be banks, and I think: this must be how the last Roman felt, watching the aqueducts still functioning long after empire faded.
Our jobs still exist, but like everything else, they've become elaborate performance art. The economy still needs us, technically, but in the way a stage play needs extras—to fill the background, to maintain the illusion that we're still the protagonists. Doctors supervise diagnoses they couldn't possibly understand. Architects watch as perfect cities design themselves. Artists and engineers alike have become curators of algorithmic output, picking from an endless buffet of optimization. Our skills, honed over decades, are quaint artifacts now—like knowing how to make butter by hand or navigate by stars.
The scary part isn't that we've been enslaved or killed. The scary part is how comfortable the irrelevance feels. Our art galleries still exist, but they have the same relationship to human creativity that a butterfly collection has to living butterflies—beautiful, dead things pinned under glass. We're not prisoners. We're pets, lying in the sun while incomprehensible gods trade our futures overhead.
The wildness is gone. That's what hurts most. That raw, messy, stupid human spark that made terrible art and questionable decisions and occasional moments of transcendent beauty - it's been optimized away, smoothed out by algorithms that know better than we do what we really want. Welcome to the perfect world.
Try not to notice the emptiness.
Preserving human values
So, in the above scenario, is preserving human values futile? A Sisyphean attempt?
I saw a kid fall off his bike last week. The safety systems failed - rare these days - and he actually scraped his knee. Blood on concrete. Before his mother could reach him, three different medical protocols had kicked in. The look on her face wasn't fear for her son. It was loss. Loss of the simple human moment of kissing it better.
That's what this is about. Not some grand philosophical debate about human values. It's about skinned knees and bad decisions. It's about the guy at the bar who still hand-draws building plans while perfect AI blueprints flow through his company's servers. "They're worse," he told me, drunk, defiant. "They're worse, but they're mine."
We're not fighting for art or beauty in some abstract sense. We're fighting for the right to fuck up. To make things that aren't perfect. To choose wrong and live with it.
My neighbor still writes love letters. Real ones, with ink that smudges. Her girlfriend could get perfect AI-generated sonnets delivered every morning, optimized for maximum emotional impact. Instead, she gets crooked handwriting and coffee stains and words crossed out and rewritten. "The mess is the message," she says.
Maybe that's it. Maybe human values aren't something you preserve under glass. They're something you keep messy, keep broken, keep real. Not because it's better. Because it's us.
How agency preserves human values
Why does amplifying agency help preserve human values?
Here, “agency” refers to a person’s or community’s ability to generate multiple options, choose among them, and actively pursue those choices. At first glance, it seems intuitive that preserving or amplifying human agency could help maintain our cultural and moral fabric—our “human values.” But there’s a hidden assumption that if we increase agency, people will automatically opt to keep those cherished messy dimensions.
In reality, agency is a double-edged sword: people with more autonomy can just as easily embrace hyper-optimization or vote en masse to implement a social credit system that ironically reduces certain freedoms. Thus, agency is best understood as a precondition for preserving messy, human-centric values—but not a guarantee.
- Agency Is Itself a Human Value
- At a foundational level, many of us simply value autonomy: the sense that we get to decide, whether we choose right, wrong, or somewhere in between. This echoes historical movements for civil liberties, personal freedom, and creative expression. In a perfectly optimized AGI regime, even our illusions of choice can be subtly programmed or corralled. Hence, one reason to fight for agency is that agency itself is worth keeping. It keeps the door open to possibility—even the possibility of resisting perfect solutions in favor of meaningful experiences.
- Agency Allows Us to Possibly Preserve Other Human Values
- When humans have the ability to shape their future, they can choose to defend cultural messiness, imperfection, and spiritual or communal traditions. But the possibility alone doesn’t ensure these values will be preserved. If a society collectively decides they like optimization more, or if individuals choose frictionless convenience over messy creativity, those values could fade—by popular demand.
- A Collective Desire to Maintain Values
- Even with ample agency, we need a cultural impetus (or moral framework) that prizes “messiness” or deep human connection in the face of hyper-automation. This “desire to remain human” can come from individual longing, communal ethos, or spiritual traditions that value imperfection as a reflection of authenticity. Without it, enhanced agency could just as easily accelerate homogenization or paternalistic optimization. But history shows humans have often championed the unpredictable and the heartfelt, especially when confronted with technologies that threaten to flatten our experiences.
In a post-AGI era, maintaining strong human values is not a passive stance—it’s an ongoing act of self-definition and community-building. Agency is the capacity to continually choose and shape our paths, despite encroaching forces that might automate or centralize those choices. By fortifying that capacity, we keep our values real, evolving, and truly ours.
In a future where nearly everything can be smoothed, optimized, and rendered frictionless, will humans have both the ability (agency) and the inclination (cultural desire) to keep messy authenticity alive?
Examples of technologies/narratives that on-net amplify human agency
These technologies and narratives are agency-amplifying on net; any creation carries negative externalities, so what matters is the balance. Here are some illustrative examples:
Decentralized networks redistributing value to contributors
- Unlike traditional platforms that centralize ownership and revenue, networks like Base function as public, open-source utilities. They use underlying markets to automatically compensate contributors (artists, developers, and community builders) based on the value they add to the network. Governance is managed by decentralized communities, frequently through token-based voting, so decisions, from protocol upgrades to revenue splits, remain in the hands of engaged stakeholders.
- How does it amplify agency?
- Network participants have direct ownership and voting power over the platform's direction, rather than being passive users of a centralized service. This departs from the top-down model in which only a few capture the economic benefits. Compensation for creative or technical contributions is built into smart contracts, avoiding reliance on gatekeepers who may skew payouts or change terms unilaterally. For example, a developer building on Base may earn incentives for contributing to Base's transaction volume, which generates MEV and sequencer fees.
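The redistribution pattern above can be sketched in a few lines: a fee pool is split among contributors in proportion to the value each added. This is a toy illustration of the general mechanism, not Base's actual reward scheme; the contributor names, scores, and fee amounts are invented.

```python
# Toy sketch of pro-rata value redistribution: split a network's fee pool
# among contributors in proportion to the value each added. Illustrative
# only -- not Base's actual mechanism; all numbers here are hypothetical.

def distribute_fees(fee_pool: float, contributions: dict[str, float]) -> dict[str, float]:
    """Return each contributor's share of fee_pool, proportional to their score."""
    total = sum(contributions.values())
    if total == 0:
        return {name: 0.0 for name in contributions}
    return {name: fee_pool * score / total for name, score in contributions.items()}

# Hypothetical epoch: 10 units of sequencer/MEV fees, three contributors
payouts = distribute_fees(10.0, {"artist": 1.0, "developer": 3.0, "builder": 1.0})
print(payouts)  # {'artist': 2.0, 'developer': 6.0, 'builder': 2.0}
```

In a real deployment this logic would live in a smart contract, with the contribution scores fed by on-chain metrics rather than hand-entered numbers.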
Copilots for life management
- Digital assistants that make plans, send notes, and sort emails already exist. Think about AI copilots that work at a deeper level: they don't just remind you of due dates, they organize your whole workday and help you choose which tasks are most in line with your values or goals. Crucially, they recommend but don't enforce decisions—an AI that highlights trade-offs (“yes, traveling by horse is slower, but you might savor the journey”) rather than automatically funneling you into the most efficient route.
- How does it amplify agency?
- When you hand complex filtering tasks (like deciding which opportunities best serve your long-term goals) to a highly personalized AI, you can focus on choosing instead of sorting. Clearing away scheduling minutiae and email overload leaves more "white space" in your mind for strategic or creative thought.
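The "recommend, don't enforce" loop might be sketched like this: the copilot scores options against the user's own stated value weights and surfaces the ranking, but the final pick always comes from the human. The option fields and scoring scheme are assumptions for illustration.

```python
# Minimal sketch of a recommend-don't-enforce copilot: rank options by the
# user's stated value weights, show the trade-offs, never auto-execute.
# The attributes and weights below are hypothetical.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    efficiency: float   # 0..1, how optimized the route is
    savoring: float     # 0..1, how much room it leaves for experience

def rank(options: list[Option], weights: dict[str, float]) -> list[Option]:
    """Order options by the user's own value weights -- no silent defaults."""
    score = lambda o: weights["efficiency"] * o.efficiency + weights["savoring"] * o.savoring
    return sorted(options, key=score, reverse=True)

options = [Option("flight", 0.95, 0.2), Option("horseback", 0.1, 0.9)]
# A user who weights savoring over raw efficiency sees horseback first --
# the copilot surfaces the trade-off, then waits for a human choice.
ranked = rank(options, {"efficiency": 0.3, "savoring": 0.7})
print([o.name for o in ranked])  # ['horseback', 'flight']
```

The design choice that matters is that `rank` returns options instead of acting on them: the agency-preserving move is keeping the execution step human.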
Self-sovereign compute units
- Remember when we all relied on internet cafes to get online? That's basically where we are with AI compute today - begging for tokens and API access from the tech giants. But there might be a more interesting path forward. Imagine if computation worked more like a local power grid - where your gaming PC's idle GPU time, the design studio's render farm down the street, and the university's spare compute could all flow into a shared pool. Not just techno-utopian dreams, but practical infrastructure owned by the people who use it.
- In practice, it could look like a specialized protocol that aggregates unused GPU cycles from participants, rewarding them in tokens that represent both compute power and governance rights. Think about the synergy: your personal AI copilot doesn’t need to report back to a central authority to request more GPU time; it can tap directly into a peer-to-peer pool.
- How does it amplify agency?
- When your side project blows up, you shouldn't have to grovel for higher API limits. When a local school wants to experiment with AI, they shouldn't need anyone’s permission. Real agency comes from having options beyond the usual suspects. The hard part isn't the technology - it's building communities that actually want to share and govern these resources together. But that's exactly what makes it interesting.
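The pooled-compute idea above can be sketched as a toy ledger: participants post idle GPU-hours, jobs draw from the pool, and contributors earn tokens for the hours they supplied. Everything here is an assumption for illustration (including the one-token-per-GPU-hour rate); a real protocol would add verification, pricing, and governance.

```python
# Toy sketch of a self-sovereign compute pool: contributors post idle
# GPU-hours, jobs draw from the pool, and contributors are credited tokens
# for hours actually used. Hypothetical names and rates throughout.

class ComputePool:
    def __init__(self):
        self.offers: dict[str, float] = {}   # contributor -> idle GPU-hours
        self.tokens: dict[str, float] = {}   # contributor -> earned tokens

    def contribute(self, who: str, gpu_hours: float) -> None:
        self.offers[who] = self.offers.get(who, 0.0) + gpu_hours

    def run_job(self, gpu_hours_needed: float) -> bool:
        """Draw hours from contributors; credit 1 token per GPU-hour used."""
        if sum(self.offers.values()) < gpu_hours_needed:
            return False  # pool can't cover the job
        remaining = gpu_hours_needed
        for who in list(self.offers):
            take = min(self.offers[who], remaining)
            self.offers[who] -= take
            self.tokens[who] = self.tokens.get(who, 0.0) + take
            remaining -= take
            if remaining == 0:
                break
        return True

pool = ComputePool()
pool.contribute("gaming_pc", 4.0)
pool.contribute("render_farm", 10.0)
pool.run_job(6.0)  # draws 4h from the PC, 2h from the farm
print(pool.tokens)  # {'gaming_pc': 4.0, 'render_farm': 2.0}
```

The point of the sketch is the shape of the thing: no central authority sits between the job and the hardware, and the tokens double as a record of who kept the pool alive.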
Generally, the following are more likely (there’s always nuance!) to become agency-enhancing projects:
- Multi-sided marketplaces
- Creative tooling built for people
- Developer tooling built for people
- Any health-enhancing products
- Drugs or devices opening up neural pathways
Agency-diminishing projects
On the flip side of agency-increasing projects, there are also agency-reducing projects, and projects whose effects are ambiguous:
- Fully transactional AI romantic companion
- Products creating addictive brain chemicals
- Social credit score
- Surveillance-based insurance and employment
This is obviously a very limited list from my very narrow aperture, and it will evolve as I learn more about what these technologies make possible.