
AI will replace you at your job if you let it

A look at the thin line between using AI smartly and writing your own pink slip.


Published: December 16, 2025

Like you, I’m extremely tired of the recurring headlines about “AI replacing the workforce,” written almost exclusively by people who both know nothing about AI and have never been part of said workforce. Ironic.

Let me start by saying that I do not think AI is going to put massive swaths of people out of work. But that’s only going to be true as long as we are smart, adaptive, and embrace these new tools to make us all more productive and more creative. The problem is that there are tons of lazy people out there who are using AI completely carelessly, with so little oversight that they are essentially writing their own pink slips.

If you carelessly offload the core and soul of your job to AI, you cannot be surprised when someone decides AI can do it instead of you.


If you are a software engineer for whom Cursor now writes all of your code with minimal oversight or creative input from you, this is bad. If you are a marketer who is generating entire blog posts and site pages with vanilla Claude prompts, this is bad. If you are an SDR who has ChatGPT write all of your outbound emails with no customization whatsoever, this is bad.

At some point, your boss is going to realize that the AI is doing your job, not you.

The brain drain from junior talent and financial modeling

I see this pattern most with junior people. They are at a point in their careers where they have minimal experience and, maybe more importantly, minimal taste. The risk of overusing AI is huge here, because without going through the motions and developing that earned intuition for the mechanics of your work, you will never develop that taste. IMO at the highest levels, you are paid for exactly this.

I was talking to my friend CJ of the wonderful Mostly Metrics newsletter about how this manifests itself in finance. A lot of what finance teams at startups do is build financial models, which, as any investment bankers in the audience can tell you, involves a lot of manual, repetitive work; in theory a great candidate for some help from AI. Yes, some help. But CJ told me he’s seeing junior talent offload the entire financial-model-building process to AI, no longer building any part of it from scratch.

(I’ll admit, the use of the word “model” here is a bit confusing.)

This is not good. To intimately know how something works and develop an intuition for it, you need to do the work yourself. In this case, to build the model from scratch, to develop an understanding of what the sensitive variables are, and maybe most importantly, to be able to explain it to someone.

“Maybe I'm like an old man yelling at windmills just because I spent at least 10 years in the trenches doing that from scratch. But they're skipping steps one, two, and three and jumping straight to four.”

Prompting an AI model to build your entire forecast for you will get you a quick answer, sure. But you didn’t build it, so you don’t understand it. What happens when the assumptions baked into the model change?

This is true across so many different disciplines. Just think about the explosion in vibe coding. It’s amazing that a single prompt in Replit can get you a working app. But if you don’t understand what’s going on in the app at all, and don’t try to refine and improve it…what have you really created? And what’s stopping someone else from doing it better than you?

Something something Icarus flying too close to the overheating GPUs.

Use AI to augment your work, not replace it: the facked vs. cracked framework

When it comes to using AI, the best work products are the ones that combine your unique insight and creativity with the automation firepower from models. If the work product you’re creating is not in some way authentically you, or clearly borne of your handiwork, then you are giving up too much to AI and this does not bode well for your long term career prospects.

Across conversations with teams who are trying to use AI at work, plus some of my own experience, here are a few examples of facked vs. cracked uses of AI. In other words, finding the sweet spot where AI is helping you but not doing your entire job for you.

Finance

  • Facked: prompt the AI model to build your entire financial model for you.
  • Cracked: build the bones of the financial model yourself, use AI to automate repetitive spreadsheet copying and pasting and/or to test new scenarios.
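To make the "cracked" split concrete, here’s a minimal sketch (all numbers, names, and assumptions are made up): you write the model logic yourself, so you know exactly which assumptions it’s sensitive to, and the only thing you hand off is the repetitive part — sweeping scenarios.

```python
# Hypothetical toy revenue model: the logic is hand-built, so you can
# explain every line of it. The copy-paste-y part -- running many
# scenarios through it -- is what's safe to automate.

def project_revenue(start_arr, monthly_growth, churn_rate, months=12):
    """Project ending ARR from assumptions you built and understand."""
    arr = start_arr
    for _ in range(months):
        arr = arr * (1 + monthly_growth) * (1 - churn_rate)
    return round(arr, 2)

# The scenario sweep: repetitive, mechanical, a fine job for automation.
scenarios = {
    "base": {"monthly_growth": 0.05, "churn_rate": 0.01},
    "bull": {"monthly_growth": 0.08, "churn_rate": 0.01},
    "bear": {"monthly_growth": 0.02, "churn_rate": 0.03},
}

results = {
    name: project_revenue(1_000_000, **assumptions)
    for name, assumptions in scenarios.items()
}
```

When an assumption changes, you know which line to change and why — which is exactly the understanding you forfeit if the whole model is generated for you.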

Marketing

Good marketers are using AI to help them generate better content faster, not entirely offload content creation…after all, who the fuck wants to read that.

  • Facked: use vanilla prompts to generate an entire finished blog post of slop.
  • Cracked: add style guide to context, use AI to generate a starting template, write and edit from there.
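Here’s a rough sketch of what "add style guide to context" can look like in practice. Everything below is a placeholder (the style guide text, the function names, the topic), not a real product’s API — the point is that the style guide rides along with every request, and the model is only asked for a template, not a finished post.

```python
# "Cracked" marketing flow, sketched: style guide always in context,
# model output is a starting template with gaps for the author's voice.

STYLE_GUIDE = """\
Voice: direct, first person, lightly irreverent.
Never use: "delve", "in today's fast-paced world", generic intros.
Structure: hook, concrete example, takeaway."""

def draft_prompt(topic, outline_points):
    """Build a prompt that asks for an outline-level draft only."""
    bullets = "\n".join(f"- {p}" for p in outline_points)
    return (
        f"Follow this style guide strictly:\n{STYLE_GUIDE}\n\n"
        f"Topic: {topic}\n"
        f"Points to cover:\n{bullets}\n\n"
        "Write a rough section-by-section template, not a finished post. "
        "Leave [TODO] markers where the author's own examples belong."
    )

prompt = draft_prompt(
    "Why junior analysts should still build models by hand",
    ["earned intuition", "sensitive variables", "explaining your work"],
)
```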

Software engineering

Before you say “OK, but this is a straw man,” no. People are absolutely doing this.

  • Facked: offload massive features and parts of your codebase to AI without supervision.
  • Cracked: tabbed auto-complete, heavily customized agents with clear guardrails, testing, and code review.

Sales (well, debatably)

SDRs are perhaps the most guilty party here. If there’s one really clutch thing to use ChatGPT for, it’s hyper-personalization at scale…and yet who is doing this.

  • Facked: use vanilla prompts to generate a bare minimum amount of personalization, like “I noticed you just finished up an impressive 3 years at Meta.”
  • Cracked: use models to do standardized deep research on prospects, store the data in Clay, use a second highly customized prompt to personalize based on research.

Use models, do not let them use you!

Start small: pick a recurring, manual process to automate instead of trying to tackle an entire function or workflow. AI models work better when you give them a narrow scope, clear context, and a tight feedback loop. Find one specific, annoying thing you do every week, like summarizing meeting notes or categorizing customer feedback, and try to automate that.

Where to next?

Keep learning how to understand and work effectively with AI and ML models and products.

Comparing available LLMs for non-technical users

How do ChatGPT, Mistral, Gemini, and Llama3 stack up for common tasks like generating sales emails?

What does OpenAI do?

OpenAI is the most popular provider of generative AI models like GPT-4.

Databricks is apparently worth $100B. What do they even do?

What we should really be asking is “What does Databricks not do?”

Written with 💔 by Justin in Brooklyn