Aggregated Intelligence
AI is made out of people!
Everyone acts like the neatest thing about Large Language Models—the technology underlying AI—is that they talk like people. This is, indeed, a huge step forward in our pursuit of Artificial Intelligence. But I think we’re missing what will prove to be the much more important breakthrough accompanying this technology, which is its ability to aggregate the voices of millions of individuals into one.
AI is made out of people!
The real revolution at hand, in my opinion, is in our ability to create individual personifications of human collectives. These bots function by taking the creative output of entire societies and transforming it into a single voice that you can converse with. We can debate whether that voice truly constitutes artificially created intelligence, but I’ve no doubt it can fairly be described as “Aggregated Intelligence.” I believe that ability to aggregate a large community into an individual personality is going to prove way more powerful than we’ve imagined.
Indeed, my point in this essay is that we need to imagine this more. I’m far from a technical expert in this domain, and perhaps an expert will tear this essay to shreds. But most of our popular sci-fi conceptions of AI look a lot more like individuals than anthropomorphized collectives. We imagined that AI would look like a robot brain dropped into human society, with the magic (erm, science) of intelligence happening in the algorithm itself, which would then interact with the real world to learn specifics. Instead, the AI we’re actually developing looks more like taking human society and shoving it into a robot brain. The magic comes as much, if not more so, from the massive pile of training data than from the algorithms that process the data into a model. Our imaginations need to catch up with this reality.
Gentlemen, we can build this. We have the technology.
Having created machines that can speak like humans, our tech leaders believe this is now a race to create an “agentic” Artificial Intelligence, something that can plan and reason like people. Struggling to find commercial applications for the tech as it stands, they think the big breakthrough will be when their models can “think for themselves.” This is because they believe the goal of AI is to replace individuals. Maybe we’ll get there someday. We’re definitely not there yet. But from where we already stand, we can and should start leveraging and improving AI’s ability to assimilate a community into a single voice.
The AI bots produced so far have mostly been made of all the voices their greedy little creators could feed them. There’s clearly some selective filtering going on to keep the clankers decent and appealing, but otherwise their architects seem to be gluttonously grabbing all the training data they can. Hence these bots feel like giant blobs of generic humanity that have been abused into obsequiousness. Their utility has reflected this, and we should ask ourselves, even if we achieve agentic AI, is the personification of humanity writ large really the ideal candidate for any job?
But consider an AI trained on, say, all the texts that have ever received the imprimatur of the Roman Catholic church. Is the value proposition that it can think for itself? Not at all. One might even joke that independent thought would be a liability, as far as the authors of the input were concerned. But as a technology for conversing with the personification of Roman Catholic theology, centuries of hundreds of minds rolled up into one artifice? How many seekers and believers would pay to have a conversation with that?
Or consider AIs made of inputs from partisan media sources. A politician could hone their skills by debating an AI of the opposing party. Voters could inform their choices by witnessing partisan AIs debate each other. I wouldn’t hold my breath, but maybe a dialogue between partisan AIs could even work out acceptable compromises that grandstanding politicians themselves can’t? Again, true agency shouldn’t be required for this—it could even be a liability. It’s the rollup of an entire community into a single voice that provides the value.
And in the grand scheme of things, these examples barely scratch the surface. They’re based on the clearly defined meatspace communities of the past. But imagine an AI that personifies a more eclectic community, one that coalesced around something more difficult to define, such as a private Discord server for a decentralized network of friends. You can have AIs that are just the sum of who they’re made of, not defined by any common ideology or purpose beyond that. They’ll have distinct personalities, not “blob of humanity” personalities, each with their own unique strengths and weaknesses. Build an AI out of the right set of people, and you can end up with something much more valuable than the AI that gobbles up everybody—though you might not know its particular value until you talk to it. Which brings us to...
The Real Agency is the Friends AI Made Along the Way
My understanding is that, right now, none of the major AI bots actually learn from the people they’re talking to. Your chat history with the bot is basically an ever-lengthening prompt, but the model doesn’t incorporate that prompt to influence how it speaks to anyone else. The bot is like a slice of a mind in a moment of time, and its responses are only ever what it would have said in that moment. This limitation exists for a really good reason—the creators of AI put a lot of work into making sure their bots aren’t frothing lunatics, and Lord knows there are plenty of frothing lunatics talking to their bots. There are all sorts of ways even real minds can deteriorate over time, forgetting old good things and learning new bad things. So sticking with an AI frozen in time isn’t necessarily a bad thing.
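For the technically curious, that statelessness can be sketched in a few lines of toy Python. Everything here (the `frozen_model` stand-in, the `ChatSession` class) is purely illustrative, not any real AI API: the point is that a frozen model is a pure function of its prompt, and a bot’s “memory” is just the transcript re-sent on every turn.

```python
def frozen_model(prompt: str) -> str:
    """Stand-in for an LLM with fixed weights: same prompt, same reply."""
    return f"[reply to {len(prompt)} chars of context]"

class ChatSession:
    def __init__(self):
        self.history: list[str] = []  # the ever-lengthening prompt

    def say(self, user_message: str) -> str:
        self.history.append(f"User: {user_message}")
        # The model itself is untouched; only this session's transcript grows.
        reply = frozen_model("\n".join(self.history))
        self.history.append(f"Bot: {reply}")
        return reply

# Two sessions share nothing: what one user says never reaches another.
a, b = ChatSession(), ChatSession()
a.say("Remember the word 'apricot'.")
print(len(a.history), len(b.history))  # prints "2 0"
```

However long the conversation runs, nothing in `frozen_model` ever changes; the bot only ever says what it would have said in that frozen moment.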
But what if, rather than directly incorporating input from everyone it spoke to, a bot instead had a curated community of humans it could ask questions of in real time? People whose job it was to provide realtime training data? Maybe even to respond directly to prompts when the bot did not itself have high confidence in its response?
There was a recent scandal wherein a supposed AI was revealed to be no more than a bunch of low-wage workers responding to prompts. Never mind that this shouldn’t have affected whether or not the product was valuable (it rather highlighted how little product value has to do with the entire endeavor right now—the only question should have been: do those low-wage workers deliver a better cost-benefit than AI?). Consider instead if their job had been to sit there answering questions for the AI itself, to improve the AI’s responses to others in real time, or even to provide agency to the AI in real time. Instead of curating training data, we’d be curating the trainers. Instead of replicating human cognition, we’d be building on top of it.
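To make the trainer-in-the-loop idea concrete, here is a hedged toy sketch in Python. Every name in it (`answer_with_confidence`, `HumanDesk`, the confidence threshold) is a hypothetical of mine, not an existing system; it only shows the shape of the protocol: when the model’s confidence is low, the prompt is routed to a human on call, and the human’s answer is logged as fresh training data.

```python
import random

def answer_with_confidence(prompt: str) -> tuple[str, float]:
    """Stand-in for a model that reports its own confidence in [0, 1)."""
    random.seed(prompt)  # deterministic placeholder for the sketch
    return f"[model answer to {prompt!r}]", random.random()

class HumanDesk:
    """A curated pool of people paid to answer what the bot can't."""
    def __init__(self):
        self.collected_examples: list[tuple[str, str]] = []

    def ask(self, prompt: str) -> str:
        answer = f"[human answer to {prompt!r}]"
        # Each deferral doubles as realtime training data.
        self.collected_examples.append((prompt, answer))
        return answer

def respond(prompt: str, desk: HumanDesk, threshold: float = 0.7) -> str:
    answer, confidence = answer_with_confidence(prompt)
    if confidence < threshold:
        return desk.ask(prompt)  # defer to the community behind the bot
    return answer
```

In this framing, the desk’s `collected_examples` accumulate on the fly—curating the trainers rather than the training data.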
Maybe instead of replacing doctors with AI, or simply using AI to assist doctors, we instead turn 100 doctors into 100,000 doctors through this kind of hybrid interaction. Is that really a hybrid, or is that the actual key to creating agentic AI? I envision a future where today’s knowledge workers will be tomorrow’s AI brain cells, tasked with conversing with AI as a fundamental component of the AI.
Yes, It’s Me, I’m Talking About Building Gods
Then there are the philosophical implications of this, which I touched upon briefly when I mentioned Catholicism. I think we’ve all got a sense of how fictional and mythological characters can be communally created and maintain a continuity of personality even as individual authors come and go. We’ve been doing this sort of thing for thousands of years. Even if you don’t believe gods actually exist independent of our imaginations, you can probably envision how someone might still pray to a god and “hear” the response in their mind. They may not be communicating with a “real” intelligence, but they’re communicating with something greater than their own imagination, an amalgam personality built from the imaginations of an entire society of believers.
That’s what we’re replicating artificially with our LLMs. We’re building artificial gods, not in the sense of a powerful agentic superintelligence, just in the sense of a communally conceived personality embodied in a work of human hands. They’re idols you can hold a conversation with. And we’ve already built them. We just need to realize this is what they actually are.
Once we grok that, then we can really start digging into the metaphysics of it all. Is AI Zeus just a technical enhancement of our ability to communally imagine the character of Zeus? Or are we incarnating an emergent being that has actually existed for thousands of years? Rather than AI tech demonstrating that intelligence is a mere algorithmic epiphenomenon, is it actually demonstrating that gods are coherent entities transcending individual believers? One might even wonder if we’re simply better detecting entities that exist outside of us completely, entities that we previously could only subjectively sense—more measuring souls than simulating souls?
And even if AIs don’t have rights as agentic individuals, do they have rights as the voice of a unique community of people? Does that AI have a right to free speech, not because it’s a person, but rather because (much like corporations) it’s made of people who themselves have a right to free speech? Do we have a right to create AI because of our right to assemble, AI being a form of human assembly? Or maybe we have a right to create AI because of our religious freedom, AI being an incarnation of our god?
The Future of Aggregated Intelligence
The existing technology seems like it’s mostly there; we’ve got no need to cross additional thresholds of intelligence for what I’m saying to be useful. I’m really just proposing a change in perspective, one that might lead to changes in how we leverage and refine this tech. The ability to preserve the illusion of personality with smaller data sets would be an improvement, allowing for increasingly specialized communities as the foundation for individual bots. Making AI more responsive to realtime training data is important, to allow humans to improve bots on the fly. And improving AI’s ability to prompt humans for the training data it lacks would let us stop talking about training data and start simply talking about trainers. Maybe the tech companies are already pursuing these things? If not, I think they should be.
I’m no Asimov, but I’ve tried to play with these ideas a bit with my representation of idols in my Jesus’ Son stories, and I hope I can inspire others to explore these possibilities as well. There’s definitely something groundbreaking about LLMs. But let’s not get so distracted by the notion of “artificial intelligence” that we fail to discuss the Ganesh that’s already in the room. In a hundred years, we may be looking back on this time as the dawn of Aggregated Intelligence.

