
AI and neuroscience

AI models seem to approximate the brain, intentionally or otherwise.


Published: January 13, 2026

Like many of you, I’ve been watching (listening to?) a lot of the Dwarkesh podcast over the past six months, and one theme that comes up a lot is the relationship between AI and the brain. The way we train and use GenAI models today strongly resembles how the pathways in the human brain actually work, and many neuroscientists and AI researchers believe the key to unlocking real superintelligence will lie in our ability to better understand and exploit that connection.

This post explores a few ways in which that’s true and explains some rather complicated ideas in simpler language.


Neural networks, the basis for everything

The obvious place to start is neural networks, the architecture behind pretty much all of the AI models you use today: the GPT family, Claude, Nano Banana, and the like. Obviously the first word – neural – likens these models to the animal brain. The human brain has something in the range of 86B neurons, specialized cells that transmit nerve impulses – essentially the core unit of how our brains move information and signals around. The idea is that neural networks work in roughly the same way.


And indeed, you’d be hard pressed to find an explanation of neural networks that doesn’t make an analogy to the human brain. Take, for example, Technically’s very own breakdown of neurons from the prolific Nicole Errera:

Neurons are the basic building blocks of AI architectures, modeled after the actual biological neurons that transmit signals throughout the human brain. Remember, AI models are essentially pattern investigators; they find the underlying pattern in the data. You can think of these neurons as the mathematical functions that are doing this hard investigative work, getting into the weeds of the data and figuring out what’s going on.
The math performed by individual neurons is actually pretty simple – it’s usually just basic multiplication and addition that you could do with a calculator. So how are AI models able to capture such complex patterns, like the ones involved in language and vision? The trick is to string together a lot of neurons – like hundreds of millions of them.
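
To make Nicole’s point concrete, here’s a minimal sketch of a single artificial neuron in Python. This is my illustration, not code from the post – the function names and numbers are made up – but the shape is the standard one: multiply each input by a weight, add everything up with a bias, then squash the result with a nonlinearity. A “layer” is just many of these reading the same inputs.

import math

def neuron(inputs, weights, bias):
    # The "investigative work" really is just multiplication and addition:
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A nonlinearity (here, the classic sigmoid) is what lets stacked
    # neurons capture patterns more complex than straight lines.
    return 1 / (1 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    # A layer is just many neurons looking at the same inputs in parallel.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# One neuron's output: a single number between 0 and 1.
print(neuron([0.5, -1.0], [0.8, 0.2], 0.1))

Each neuron on its own is calculator-level math; the complexity comes entirely from wiring hundreds of millions of them together and tuning the weights during training.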

So in practice, the neural net in a model like GPT-5 does, at least loosely, resemble how a mammalian brain works. This is no accident. If you trace the history of the neural network you’ll end up back in 1943 (when I was born), when Warren McCulloch (a neurophysiologist) and Walter Pitts (a mathematician) wrote a paper proposing a mechanism for how neurons might actually work.

To illustrate their hypothesis, they modeled a simple neural net using electrical circuits. Further attempts culminated in a breakthrough at Stanford in 1959, when MADALINE (an acronym for Multiple ADAptive LINear Elements) became the first neural network applied to a real-world problem: eliminating echoes on phone lines. So in short, the fact that neural networks (roughly) approximate how the brain works is not an accident – the insight is core to their entire historical origin.
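
For a sense of how simple these early models were, here’s an illustrative sketch of a McCulloch-Pitts-style neuron – my reconstruction in Python, not code from the 1943 paper. It takes binary inputs, computes a weighted sum, and “fires” only if the sum clears a fixed threshold. Even this toy version can implement basic logic gates.

def mp_neuron(inputs, weights, threshold):
    # Fire (output 1) only if the weighted sum of inputs clears the threshold.
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

# With the right weights and thresholds, the neuron acts like a logic gate:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0

What the ADALINE units inside MADALINE added on top of this fixed structure was a learning rule – the Widrow-Hoff “least mean squares” update, which nudges the weights in whatever direction reduces the error – and that adaptability is what made the echo-cancellation application work.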

Now, any neuroscientist worth their salt will tell you that there’s more we don’t know about the brain than we do; the true inner workings of the organ are still largely a mystery. So it would be naive to argue that neural networks work the same way the brain does. But it’s safe to say they’re inspired, at least loosely, by what we do know.

Evolution as an analogy for pre-training

The opening of Dwarkesh’s recent episode with neuroscientist Adam Marblestone is a good distillation of where researchers’ brains are at (😈) on the topic and why the stakes are so high.

