The relationship between Brain Model and Large Language Model

Let’s start with a simple definition of a Large Language Model.

Think of it as an API endpoint. You send in a string as input, and the output is not predetermined. This input acts as an activation parameter — once received, the API begins to respond.

A conventional API returns a fixed, server-defined response based on the input. But an LLM works differently: instead of returning the entire response at once, it predicts the next token based on all the tokens that came before, one step at a time.

In other words, a Large Language Model is essentially a typewriter: it produces text one token after another, each choice informed by everything written so far.
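That loop can be sketched in a few lines. The lookup table below is a stand-in for the neural network a real LLM uses; the names and vocabulary are invented for illustration. What matters is the shape of the loop: the growing sequence is fed back in, and exactly one token comes out per step.

```python
# Toy "language model": given the tokens so far, list possible next tokens.
# A real LLM computes this with a neural network over the full context;
# this hand-written table only stands in to show the loop's shape.
NEXT = {
    ("the",): ["cat"],
    ("the", "cat"): ["sat"],
    ("the", "cat", "sat"): ["<end>"],
}

def generate(prompt, max_tokens=10):
    """Autoregressive generation: each step feeds the whole sequence
    so far back into the model and appends one predicted token."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        candidates = NEXT.get(tuple(tokens), ["<end>"])
        token = candidates[0]  # greedy pick: always the first candidate
        if token == "<end>":
            break              # the model decides it is done responding
        tokens.append(token)
    return tokens

print(generate(["the"]))  # → ['the', 'cat', 'sat']
```

Note that `generate` never plans the whole sentence: each token exists only because the previous ones did, which is the typewriter behavior described above.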

Now let’s look at the brain through the same lens.

Picture the inside of your brain as a universe filled with neurons, each glowing at different intensities — some as bright as the sun, others barely flickering.

When the brain is active, every thought and behavior emerges from multiple neurons firing together. But what triggers them in the first place? Our senses — especially our eyes — serve as the input parameters. Once they receive information, the brain is activated.

From there, neurons light up in sequence, like a first domino toppling over and setting off a chain reaction.

This is the core of my analogy: both LLMs and the brain take an input, activate, and then produce output through a cascade — one token at a time, one neuron at a time.
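The analogy can be made concrete with a toy cascade. Everything here is invented for illustration (the thresholds, the attenuation factor, the chain topology); real neurons are vastly more complex. The point is structural: an input arrives, each unit activates only because the one before it did, and the output is the trail the cascade leaves behind, one unit at a time.

```python
# Toy neural cascade: a chain of "neurons", each firing only after the
# one before it, mirroring the one-token-at-a-time loop of an LLM.
# Thresholds and the 0.9 attenuation are arbitrary illustrative values.

def cascade(stimulus, thresholds):
    """Return the indices of neurons that fire, in order.
    A neuron fires if the signal reaching it exceeds its threshold;
    firing passes the signal on, slightly attenuated at each hop."""
    fired = []
    signal = stimulus
    for i, threshold in enumerate(thresholds):
        if signal < threshold:
            break          # the chain stops: no further neurons activate
        fired.append(i)
        signal *= 0.9      # each hop weakens the signal a little
    return fired

# A strong stimulus propagates three hops, then dies out at the
# fourth neuron, whose threshold the attenuated signal can't reach.
print(cascade(1.0, [0.5, 0.6, 0.7, 0.9]))  # → [0, 1, 2]
```

Swap "signal" for "context" and "neuron" for "token" and the two loops read almost identically, which is the heart of the analogy.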