Survivor's Guilt: Processing Through AI Layoffs

The Dark Flattery

Last week, my employer performed a corporate Thanos snap. Every other person I used to collaborate with on a daily basis was laid off. My employer publicly stated the layoffs were driven by AI efficiency gains.

I logged on to LinkedIn to find a direct message from a former colleague at a previous company. He wrote:

“Brooooo, You are doing too good of a job at AI Data Engineering if the company lays off thousands of people! jk jk. Hope you weren’t impacted, but if you were, LMK if I can help you find a new home.”

I don’t think he realized these words would sting. He was trying to be supportive, maybe even flattering.

But it made my survivor’s guilt about ten times worse.

Because there’s usually some truth hiding inside a joke. And the fact that the thought crossed his mind meant it might’ve crossed other people’s minds too. That someone could look at a mass layoff and connect it to the kind of work I do. The tools I help build. The future I’ve been excited about.

I don’t want to overstate my individual impact. I was one contributor in a large company building toward the same goal of efficiency through AI. But I was a tech lead for AI & Automation in the Data Engineering org, where I’d been leading a working group to build tooling that could effectively automate the traditionally defined jobs of a data engineer and a data scientist.

The thought had occasionally crossed my mind: Are we building tools to replace ourselves? But I’d convinced myself we were just automating the mundane to free everyone up for more strategic work.

That narrative came crashing down when thousands of colleagues no longer had jobs.


Why I’m Writing This

I posted something raw on LinkedIn the day it happened. The post received over 10,000 impressions and dozens of comments from people on every side of the debate. People who were concerned, skeptical, grieving, and everything in between. This essay is the “a few days later” version. The part after the initial shock, after the reading, researching, and reflecting.

One important boundary up front: I’m not here to analyze my employer’s internal decision-making or share anything confidential. I’m still at the company, and more importantly, that’s not the point. This piece is about something bigger than one company or one headline.


The Debate Everyone’s Having (And What It’s Missing)

In the days after any layoff, the internet argues about causality.

Was it really AI? Was it stock price spin? A pandemic-era overhiring correction? Margins? Restructuring?

Some of those explanations can be simultaneously true. My LinkedIn thread was full of this tension. Skeptics argued the layoffs were a headcount correction dressed up in an AI narrative to impress Wall Street. Others pointed to the stock surging 25% as proof it was theater. Even Sam Altman has acknowledged that some companies are “AI-washing” layoffs using AI as cover for cuts they’d make regardless.

But here’s where I’ll form an opinion. Having worked inside an AI-first company and seen cutting-edge automations deployed at scale across every business function, I don’t believe the automation narrative is fabricated. Were there other contributing factors? Almost certainly. But I believe AI-driven productivity gains were a primary catalyst.

And here’s why the AI-washing debate matters for everyone, not just people inside one company: if the narrative leads knowledge workers to believe AI is still hype, it creates a false sense of security while the underlying capabilities are accelerating week over week. Whether any single layoff was “because of AI” matters less than whether AI is capable of fundamentally reshaping knowledge work. And the data says yes.


What the Research Actually Says

I still believe augmentation over displacement is the best-case outcome. But the research is forcing a more honest conversation about what’s happening in the near term.

  • The scale of exposure is real. The International Labour Organization found that 1 in 4 workers globally are in occupations with some level of generative AI exposure. Clerical and administrative roles face the highest risk, with 24% of tasks “highly exposed.”

  • The transition math is uncomfortable. McKinsey estimates that by 2030, up to 30% of hours currently worked in the US could be automated. Workers earning less than $38K/year are 14 times more likely to need to change occupations than the highest earners.

  • The market is already pricing this in. In 2025, roughly 245,000 tech workers were laid off, with about 28% of those cuts explicitly citing AI. And the Citrini Research report, a speculative scenario that shook Wall Street when it was published, sketched a hypothetical future of sweeping, AI-driven job displacement.


The Part That Makes It Personal

This week hit harder than an abstract future-of-work debate because some of the most AI-fluent colleagues I know were affected by the layoffs.

One of those former colleagues commented on my LinkedIn post:

“I went from writing all of my code by hand, to not writing a single line by hand and instead carefully guiding agents to do it… Was I designing tools which would displace me? Judging by the layoffs, I’d say the answer is ‘yes.’ Yet, despite my disappointment, I don’t regret building the tools or learning to use them.”

From my perspective, the efficiency acceleration has been staggering. In September 2025, I watched a headless AI agent move autonomously from a JIRA ticket to a completed pull request. By Q4 2025, what used to require manual data engineering work was being handled autonomously by agentic workflows. I used an AI agent in December to clear a 52-ticket backlog in 48 hours while writing zero lines of code myself.

The human role shifted from technical coder to strategic reviewer. That’s a fundamentally different job description.


From Fear to Conditional Optimism

I started processing this in a darker place. Not full doomsday, but close enough to feel it. Close enough to think: maybe we really are accelerating into something we can’t control.

After a weekend of reflection and research, I’ve landed on what I’d call conditional optimism. The kind that says: the upside is plausible, but only if we steer the transition.

Dario Amodei’s essay “The Adolescence of Technology” gave me language for what this feels like. He describes humanity going through a rite of passage: rapid growth, uneven coordination, massive potential and massive risk coexisting. The technology is evolving faster than society’s ability to metabolize it. We’re watching AI mature in public, in real time, while it’s already wired into the incentives of capitalism and geopolitics.

And yet. I still believe AI can create enormous good. Amodei made that case directly in “Machines of Loving Grace”, a concrete and detailed vision of what the upside looks like if we get this right: disease cured, poverty lifted, biology compressed into a decade of progress. It’s worth reading alongside the adolescence essay as the other half of the picture.

This belief is also grounded in a book that continues to anchor my thinking: Steven Pinker’s Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. Pinker makes a data-driven case that technological advancement improves quality of life at a civilizational scale. In 1820, 90% of the world lived in extreme poverty. By 2015, less than 10% did. That progress was messy and painful and uneven. But it was real.

The economist Joseph Schumpeter called this Creative Destruction: technology destroys some jobs while creating others we couldn’t predict ahead of time. Most people used to be farmers. That sounds obvious now, but it wasn’t obvious then. The transition was brutal for a lot of people. On the other side, entirely new categories of work existed. McKinsey projects that while AI could displace 92 million jobs by 2030, it could also create 170 million new ones.

Another article shared with me over the weekend introduced an economic principle, Jevons Paradox, that reinforces this: when a critical input becomes dramatically cheaper, total demand for it expands rather than contracts. Coal, semiconductors, bandwidth, lighting. Each time efficiency improved, the market grew because new use cases became economically viable. If intelligence follows the same pattern, cheaper cognitive labor could unlock entirely new categories of work we can’t yet imagine.

That doesn’t mean the same outcome is guaranteed. It does mean it’s plausible.


Agency: AI Fluency Is Career Hygiene Now

There’s a sentence that sounds like victim-blaming if you read it with the wrong tone. So let me say it carefully.

Becoming AI-fluent is not a moral virtue. It’s not a guarantee of safety. AI-fluent people got cut too. But it’s the single highest-leverage thing most knowledge workers can do right now to reduce the risk of being disrupted.

The risk asymmetry is real: those who never engage with AI are easier to displace than those who do. Daeus Jorento framed it well in a thought-provoking essay I read over the weekend: People using AI will replace people who aren’t. Not always. Not universally. But often enough that you should treat it like career hygiene.

And when I say “AI fluency,” I don’t mean prompt engineering or chatbots. I’m talking about redesigning how you work so the machine does what machines do best and you do what humans do best.

The shift in mindset looks like this:

  • Chatbot mode: “Help me brainstorm.”
  • Workflow mode: “Take this messy task and turn it into a repeatable system.”

The second one is where the real leverage is. That’s where agentic workflows show up: AI that can take steps, use tools, and operate inside guardrails.
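To make "workflow mode" concrete, here is a deliberately toy sketch of the agentic pattern described above: a loop that steps through a plan, calls tools, and enforces a guardrail. The tool names, tickets, and `Agent` class are all illustrative stand-ins I made up for this post, not any real framework or my employer's tooling.

```python
# Toy sketch of an agentic workflow: plan -> guardrail check -> tool call.
# In a real system the tools would hit APIs (ticket tracker, repo, warehouse);
# here they just return strings so the loop itself is visible.
from dataclasses import dataclass, field

def summarize(ticket: str) -> str:
    return f"summary of {ticket}"

def draft_fix(ticket: str) -> str:
    return f"draft fix for {ticket}"

TOOLS = {"summarize": summarize, "draft_fix": draft_fix}

@dataclass
class Agent:
    allowed_tools: set          # the guardrail: an explicit allowlist
    log: list = field(default_factory=list)

    def run(self, plan: list) -> list:
        # Each step is (tool_name, ticket). Blocked steps are logged
        # for human review instead of being executed.
        results = []
        for tool_name, ticket in plan:
            if tool_name not in self.allowed_tools:
                self.log.append(f"BLOCKED: {tool_name} on {ticket}")
                continue
            results.append(TOOLS[tool_name](ticket))
        return results

agent = Agent(allowed_tools={"summarize"})
out = agent.run([("summarize", "JIRA-101"), ("draft_fix", "JIRA-101")])
print(out)        # ['summary of JIRA-101']
print(agent.log)  # ['BLOCKED: draft_fix on JIRA-101']
```

The point of the sketch is the shape, not the code: the machine executes repeatable steps inside explicit boundaries, and the human's job moves to designing the plan and reviewing what got blocked.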

This is where I’m bullish for high-agency individuals, despite the macro uncertainty. I’ve seen what happens when someone goes from passively using ChatGPT to actively redesigning their workflow around AI. The productivity leap is not incremental. It’s a step function. And that kind of leverage gives individuals real agency over their career trajectory.


Responsibility: We Owe People a Softer Landing

If AI allows one person to do the work of ten, the economy probably won’t automatically invent nine new roles overnight.

One commenter on my LinkedIn thread said it plainly: “If you really want to know what we need to do, it’s to get real clear, real fast, on Universal Basic Income.” Another commenter, an accounting VP, shared that he’d shifted his entire focus to risk architecture and relationship management, the skills he believes AI can’t replicate. A few others pushed back and said we should slow down or stop building with AI altogether.

I respect all of those perspectives. Both things can be true: individuals should take agency with their careers, and society should not leave displaced workers to absorb all the downside alone.

If productivity is rising, we need mechanisms that help people transition. I don’t yet know exactly what those mechanisms look like, but I’m confident policymakers are already thinking about them, and I can commit to advocating for them when we need them.


Start Now

I can’t pretend AI isn’t happening. I can’t unsee the leverage.

So here’s how I’m channeling it: by sharing resources, workflows, and lessons openly. And if you’ve read this far, if anything here convinced you that becoming AI-fluent matters, then don’t wait. Start now.

Here are the resources I’d point anyone to, whether you’re just getting started or looking to accelerate:

  • 📘 Rewiring Your Mind for AI by David A. Wood: Helps you unlearn traditional thinking patterns to get comfortable collaborating with probabilistic systems. A mindset shift, not a technical manual.

  • 👤 Ethan Mollick (LinkedIn): Clear, research-backed thinking on how humans and AI can thrive together. Practical experiments you can copy.

  • 🛠 Angie Jones (Technical content & demos): Concrete examples of applying Agentic AI to engineering workflows that are reproducible and scalable.

  • 📰 The Signal by Alex Banks: Thoughtful distillation of economic and technical AI trends. A weekly newsletter I always make sure to read even if I can’t keep up with the daily headlines.

  • 🎙 The AI Daily Brief: Daily context focused on the “so what” behind the headlines.

  • 🧑‍💻 Everyday AI: Practical, grounded content for bridging the gap between AI hype and your actual 9-to-5.

  • ✍️ AI with Zach: Follow the blog to see the experiments and applications I’m building in real time. Follow on LinkedIn for ongoing takes on how AI automation is reshaping knowledge work from inside a company that’s moved beyond the experimentation cycle to scalable efficiency returns.


A Final Thought

Image: two paths diverging. “Resist” fades into cold fog and dissolving offices; “Build” leads toward a warm, glowing horizon. A lone figure walks forward, toward Build.

This is how I think about the choice in front of every knowledge worker right now. You can resist, wait it out, and hope the disruption doesn’t reach you. Or you can build: learn the tools, redesign your workflow, and use the leverage to do more meaningful work.

I’m choosing to build. Not because it erases the grief of watching colleagues lose their jobs. It doesn’t. But because staying still isn’t a neutral choice, and the best thing I can do with a seat inside an AI-first company is share what I’ve learned with anyone willing to do the same.

  • If you were impacted by a layoff recently: I’m genuinely sorry. If I can help (referrals, intros, signal boosting, reviewing your resume, pointing you to roles), please reach out.

  • If you’re still employed but uneasy: You’re not alone. This is a rational emotional response to a real shift.

Survivor’s guilt doesn’t go away by denying reality, but it can dissolve when you lean into it and decide how you’re going to process it and move forward with the information available.

– Zach
