The post about GPUs

Why these chips pair perfectly with AI, and how NVIDIA became the most valuable company in the world.

Last updated Jul 8, 2025
Justin Gage

If you’re following what’s going on in AI you’ve probably heard about GPUs, the specialized chips powering all of your favorite new GenAI models. Nobody can seem to get enough of them: NVIDIA, the undisputed leader in making these things, has seen its stock price roughly 10x since 2023. GPUs are so scarce, there are even entire companies that run markets for them (hi Evan).

So what’s all the rage, exactly? Why are GPUs so good for AI? Our laptops are full of incredibly fast chips that run Chrome and Slack just fine…why don’t they work for training AI models? And why does it seem so difficult to make enough of them?

Fear not, intelligent reader. Technically is here.


CPUs and GPUs: a brief history

Chips are the brain of the computer. They, and the billions of little transistors we’ve somehow figured out how to put on them, are responsible for executing your code fast and reliably.

The CPU

The most popular type of chip is the CPU, or central processing unit. It’s the workhorse of most computers, and is probably working to display this post as we speak. And CPUs have been getting faster and faster and faster, thanks to an age-old observation called Moore’s Law: the number of transistors on a chip (and with it, processing power) roughly doubles every two years. The more transistors we fit on a chip, the faster it is.

To give you some context for how far we’ve come: the first CPU ever developed was the Intel 4004, released in 1971. It had 2,300 transistors and could perform 60,000 operations per second. The Apple M4 chip – today’s state of the art – has…wait for it… 28 BILLION transistors. Someone smarter than me can check my math, but I believe this is roughly a 12 million times improvement.
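If you want to check that math yourself, it’s two lines of Python (the transistor counts are the figures cited above):

```python
import math

intel_4004_transistors = 2_300            # released 1971
apple_m4_transistors = 28_000_000_000     # figure cited above

improvement = apple_m4_transistors / intel_4004_transistors
print(f"{improvement:,.0f}x")             # about 12 million times

# Expressed as Moore's Law doublings:
print(f"~{math.log2(improvement):.1f} doublings")  # about 23.5
```

About 23.5 doublings over 50-odd years – reassuringly close to the one-doubling-every-two-years rule of thumb.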

But it’s not just transistors – we’ve also figured out how to pack more processing power into a single computer. Laptops shipping today commonly have 10 or more cores, which you can roughly think of as 10 small, independent chips on one piece of silicon. We’ve also gotten better at making those cores work together to get tasks done.

When you combine these two dynamics – more transistors, and more cores working better together – you get much faster hardware. Readers graced by the sweetness of age will recall how incredibly slow computers of even 10 years ago were compared to the snappy experiences we have with modern laptops and servers.

The GPU

Powerful as the CPU is, it’s not perfect – like me, it has things it’s good at (writing) and things it’s not so good at (walking past a Shake Shack). CPUs are designed to do complex, intertwined computations in order. Think of it like a really long, convoluted checklist of things that have to happen in a specific sequence: that’s what most code looks like, and that’s what CPUs excel at. And for most of the tasks a computer needs to do, like swimming aimlessly through a sea of Chrome tabs, that’s exactly what you want.


But sometimes you need your computer to do the opposite: tons and tons and tons of very simple operations, but all at the same time. This is where GPUs come in.

I think a cooking analogy is appropriate here. Imagine you’re cooking some old school Pasta Genovese for your wife (or cellmates) and following a complex recipe. The recipe is sequential, and each step depends on the previous one: having an extra 5 people to help won’t really speed things up that much. That’s the CPU. Now imagine you instead need to chop 500 onions: no chop depends on any other, so 5 extra helpers really would get it done about 5x faster. That’s the GPU.
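The contrast between the CPU’s ordered checklist and the GPU’s many-at-once style can be sketched loosely in Python. Here numpy’s vectorized operation is a CPU-side stand-in for GPU-style data parallelism (real GPU code would use something like CUDA or CuPy), but the shape of the difference comes through:

```python
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# CPU-style: a long checklist, one element after another.
def add_one_at_a_time(a, b):
    out = np.empty_like(a)
    for i in range(len(a)):
        out[i] = a[i] + b[i]
    return out

# GPU-style: the same simple operation applied to every element "at once".
def add_all_at_once(a, b):
    return a + b

t0 = time.perf_counter()
slow = add_one_at_a_time(a, b)
t1 = time.perf_counter()
fast = add_all_at_once(a, b)
t2 = time.perf_counter()

assert np.allclose(slow, fast)  # same answer either way
print(f"one at a time: {t1 - t0:.2f}s, all at once: {t2 - t1:.4f}s")
```

Same result, wildly different running time – and the gap only grows as you throw more data (or more parallel hardware) at the problem.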

GPUs and AI: a match made in…science?

Continue reading with an all-access subscription

In this post

  • GPUs and AI: a match made in…science?
  • Neural networks are basically giant onions
  • You also need software to use a GPU
  • The supposed GPU shortage, and alternatives
  • You should know about ASICs
