A single soap bubble floating against a soft green background, its surface swirling with iridescent color

Look, I'm just a guy. But I think that artificial general intelligence (AGI) isn't really a discrete threshold anyway. There's a gradient between auto-completing sentences and actual reasoning. While mechanistically LLMs ARE "stochastic parrots," on a purely representational level they could end up being something more profound.

We're used to models that are forced to take deterministic paths through the vector space, but an LLM that exploits the superposition of those representations could link concepts in ways that produce novel reasoning. It still wouldn't necessarily be anything even close to consciousness, but we'd likely find it incredibly impressive (to put it mildly).
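A minimal sketch of what "superposition" means geometrically, assuming nothing beyond basic linear algebra: you can pack far more nearly-orthogonal feature directions into a space than it has dimensions, so each dimension carries pieces of many features at once. The specific numbers here (512 dims, 5,000 features) are illustrative, not from any model.

```python
import numpy as np

# Illustrative only: pack many more "feature directions" into a space than it
# has dimensions. Random unit vectors in high dimensions are nearly orthogonal,
# which is the geometric fact that makes feature superposition possible.
rng = np.random.default_rng(0)
dims, n_features = 512, 5000  # 5,000 candidate features in a 512-dim space

# One random unit vector per feature.
feats = rng.standard_normal((n_features, dims))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)

# Pairwise interference between features: off-diagonal dot products stay small,
# so the features remain almost independently readable despite the compression.
gram = feats @ feats.T
np.fill_diagonal(gram, 0.0)
max_overlap = np.abs(gram).max()
print(f"{n_features} features in {dims} dims, worst pairwise overlap {max_overlap:.2f}")
```

The point isn't that models literally store random vectors; it's that the geometry leaves room for roughly ten times as many features as dimensions with only modest interference.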

The rough math

The following has caveats, but it's the rough math I just did.

The human brain: 100 trillion connections across 86 billion neurons, or around 1,163 connections per neuron.

Current LLMs (today, classically, on ordinary hardware): going by Anthropic's superposition research, even the most conservative estimate you could make is roughly 5-10x feature compression, which works out to about 1.6 quadrillion interaction terms per forward pass. That's already 16x the brain's synapse count just from attention geometry, and it's the absolute floor.

To put that differently: every time you send a prompt to one of these models, the number of representational interactions happening in a single pass is over an order of magnitude larger than every synapse in your head firing at once. And that's the conservative number. The real number depends on how much information is being packed into each dimension, and the research suggests it's a lot more than we assumed.
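The arithmetic above fits in a few lines. Note that the 1.6 quadrillion figure is the post's own conservative estimate, not a measured value; the script just checks that the ratios quoted around it are internally consistent.

```python
# Back-of-the-envelope check of the numbers in this section.
brain_synapses = 100e12  # ~100 trillion connections
brain_neurons = 86e9     # ~86 billion neurons

connections_per_neuron = brain_synapses / brain_neurons
print(f"~{connections_per_neuron:.0f} connections per neuron")

# The post's conservative estimate of interaction terms per forward pass.
llm_interactions = 1.6e15  # 1.6 quadrillion (assumed, per the text)

ratio = llm_interactions / brain_synapses
print(f"~{ratio:.0f}x the brain's synapse count per forward pass")
```

Run it and the two quoted figures fall out: about 1,163 connections per neuron, and a 16x multiple over the brain's synapse count.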

The caveats

I said there were caveats, and there sure are. The brain runs its 100T connections continuously and in parallel, has persistence and constant feedback loops, and probably a bunch of other stuff that will always make you an infinitely more wondrous machine (one you should be proud of). A forward pass happens once and is done. Your brain never stops. That's a fundamental difference that raw numbers can't capture.

But to be honest, as much as I think there's an AI bubble, and that a lot of you are talking absolute nonsense, there IS a chance that things keep improving even though virtually the entirety of human knowledge, on the internet and elsewhere, has already been consumed. The superposition research suggests that models are compressing far more information into their parameters than we thought, which means there may be room to get more out of existing architectures before we hit the wall everyone keeps predicting.

Feel free to have dreams or nightmares about it as you please.

TL;DR: The LLMs are a bubble, but there's still some science left to do.


It's been a while since my first post. I'm still here, still writing. More to come.