A little over a decade ago, Felix Salmon and I co-wrote a WIRED cover story on algorithmic trading, "Algorithms Take Control of Wall Street." One of the parts of this article that was left on the cutting room floor due to the limitations of print was the idea that the different trading bots battling it out in the electronic trading realm could synchronize with one another to form a kind of accidental, market-destroying Voltron.
You can see a hint of this thesis in the article’s closing reference to emergent behavior, but what we had really been getting at was something like the following: through a combination of two interrelated phenomena — what I'm calling "coupling" and "drift" — "the market" was possibly becoming something other than it had been before the advent of electronic trading. One kind of human-centered collective had now been translated to software — shades of "we'll upload the human brain into a computer," if you will — and in making that transition it had become another thing entirely. Something alien, with a mind of its own that operates according to an alien logic and alien values.
Lately, I've begun to wonder if this exact transition isn't also playing out on the web, as sophisticated text-producing bots like GPT-3 work overtime to game discovery algorithms (i.e., search and social media feed curation algos), so that there's a stock-market-style transition happening from the old web of humans to... well, something else.
Coupling
In the stock market, different algorithmic trading bots are explicitly designed to communicate with one another via price signals — one bot bids, another bot asks, and eventually they settle on a price and trade (or not). The bots are also tracking price movements, and driving them up and down either as a deliberate strategy or as a byproduct of some other kind of play. So it makes sense to reason that the bots are also communicating via the larger price signals and market movements they're a part of, and even synchronizing and coordinating in some fashion.
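To make the coupling idea concrete, here's a toy sketch. The strategies and numbers are made up and nothing like real trading systems, but the shape of the thing is the point: two bots that have never heard of each other, each reading only the public price, end up feeding each other's behavior through that one shared signal.

```python
# Toy illustration of "coupling": two independently written bots that only
# see the public price end up driving each other's behavior through it.
# All strategies and parameters here are made up for illustration.

import random

price = 100.0
history = [price]

def momentum_bot(history):
    """Buys when the price has been rising, sells when it has been falling."""
    if len(history) < 3:
        return 0
    trend = history[-1] - history[-3]
    return 1 if trend > 0 else -1

def mean_reversion_bot(history, anchor=100.0):
    """Sells when the price is above its anchor, buys when it is below."""
    return -1 if history[-1] > anchor else 1

for step in range(200):
    # Each bot submits an order based only on the shared price signal.
    net_order = momentum_bot(history) + mean_reversion_bot(history)
    # The aggregate order flow nudges the price (plus a little noise), and
    # that new price becomes the next "message" both bots read. This feedback
    # loop is the only channel through which they "communicate."
    price += 0.5 * net_order + random.gauss(0, 0.2)
    history.append(price)

print(f"final price: {price:.2f}")
```

Neither bot "knows" the other exists, but each one's output becomes the other's input. Scale that up to thousands of strategies and you have the kind of feedback system I'm calling coupling.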
The "flash crash" concerns that were being raised at the dawn of algorithmic trading, then, were not just around the idea that humans had lost control of the market to some collection of bots, but rather that "the market" itself had morphed into a single, alien entity — that "the market" is now really just one giant AI that consists of a number of nominally/institutionally separate but de facto federated and synchronized machine learning modules that all come together to produce a unified entity that operates by its own internal logic, untethered from old-fashioned human considerations like "value."
Now let’s think about this in the context of SEO. I follow a black hat SEO forum where there are numerous threads from hackers who are interested in training large language models (LLMs) to produce high-quality content that's optimized for search engine discovery.
Some of these folks have GPT-3 API access, while others are looking at competing models, including the open-source GPT-Neo. All of them are looking to farm out the task of writing SEO bait to bots, with the result that an increasing amount of the content online will be just content bots talking to SEO bots for the purpose of honeypotting passing humans into an ecommerce conversion funnel.
But this bot-vs.-bot thing isn't just a black hat reality. It's already here on the white hat side, where quite a few of the well-funded startups based on GPT-3 are explicitly aimed at producing marketing copy and other types of content that performs — meaning it performs in search (SEO) and it performs in terms of virality (i.e., it can game social media curation algorithms).
So what happens when these bots that are already talking to each other start training each other? What happens when they get good at manipulating each other — when they meld into a tightly coupled system, where each individual bot is just a sub-module in a larger, unified AI?
The above is not a rhetorical question. I don't know the answer. I don't know what it would look like or really even mean for humanity if we had sort of accidentally and without knowing it built a giant AI out of a loose federation of ML models that had started talking to each other and eventually found some kind of coupled, synchronous steady state.
What would that system do to the unwitting humans that were embedded in it as actors? How might such an entity train us and manipulate us? How would we even know this was happening?
Drift
The GPT-3 LLM was trained on a corpus of texts that came from the wilds of the web. Much has been made of the many -isms represented in that corpus, and of the ways GPT-3 re-presents and possibly reinforces those -isms as it's used to generate new texts.
But however "toxic" and "problematic" the texts GPT-3 was trained on are, at least you can say one thing for sure about them: they were generated by humans.
This will not be the case for some future version of GPT-3. A successor LLM will, at some point, be trained on a corpus of texts that were largely generated by ML models like itself.
GPT-3 is good enough that we often have no reliable way of knowing whether a given text was generated by an AI. And while OpenAI might somehow be equipped to recognize all the output of its own language models (possibly by just storing a copy of all the API output) so that this material can be rejected from future training runs, it will have no way of knowing which texts in its training corpus were generated by a competitor's bot.
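Just to make the "store a copy of all the API output" idea concrete, it would amount to a provenance filter something like the sketch below. The function names are hypothetical, not any real OpenAI tooling:

```python
# Sketch of the "store a copy of all API output" idea: keep a fingerprint of
# every text the model has ever emitted, and drop matching documents from
# future training corpora. Function names here are hypothetical.

import hashlib

emitted_fingerprints = set()

def fingerprint(text: str) -> str:
    """Normalize and hash a text so trivially identical copies collide."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def record_model_output(text: str) -> None:
    """Call this on every text the API returns."""
    emitted_fingerprints.add(fingerprint(text))

def filter_training_corpus(documents):
    """Keep only documents we did not generate ourselves."""
    return [doc for doc in documents if fingerprint(doc) not in emitted_fingerprints]
```

And the hole is exactly the one described above: a fingerprint set like this only catches your own model's verbatim output. A competitor's bot, or even a lightly paraphrased copy of your own text, sails right through.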
Unless the training corpus is locked to some known-good, verifiably human-sourced state, these LLMs are destined to drift: future versions will be trained on the output of previous versions, and each generation will get a little further away from human-generated text.
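Here's a cartoon version of that drift, just to give the idea a shape. The numbers are arbitrary and this is nothing like how real LLM training works; it just treats each retraining generation as noisy copying of the previous generation's language statistics and watches the distance from the human baseline creep upward.

```python
# Toy model of "drift": each generation's training corpus is mostly the
# previous generation's output plus a small copying error, so the language
# distribution wanders away from the original human baseline.
# The numbers are arbitrary; this is a cartoon, not a claim about real LLMs.

import random

VOCAB = 50
human_dist = [1.0 / VOCAB] * VOCAB   # stand-in for "human" word frequencies

def regenerate(dist, noise=0.02):
    """One generation: sample-and-retrain, modeled here as noisy copying."""
    new = [max(0.0, p + random.gauss(0, noise * p + 1e-4)) for p in dist]
    total = sum(new)
    return [p / total for p in new]

def distance(a, b):
    """Total variation distance between two distributions."""
    return 0.5 * sum(abs(x - y) for x, y in zip(a, b))

dist = human_dist
for gen in range(1, 11):
    dist = regenerate(dist)
    print(f"generation {gen}: distance from human baseline = {distance(dist, human_dist):.3f}")
```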
What will this linguistic drift look like and what will it mean in practical terms? Again, I have no idea. Not a clue. I can't really imagine what it would do to human language and brains if we spent a significant portion of our days in conversation with LLMs that were trained on corpora multiple generations removed from human-authored text.
Yeah, this all sounds crazy. Probably is
I'm going to be thinking about these issues for a long time, and I certainly don't have anything concrete or even non-crazy sounding to offer you by way of conclusion about the above. But I am noticing a few things that I'll put out there:
We modern media professionals are cyborgs. We're all very heavy users of machine learning, whether we know it or not, with the result that ML subtly influences everything we write, say, and even think.
The current era of ML, characterized by massive amounts of GPU power, giant models, and blockbuster results, is very new, as in about 2014 or later.
At least in my circles, a lot of us date the start of the big "going off the rails" and the "omg we're in a simulation" feelings to the ~2015 timeframe.
The Discourse really does seem, to everyone, to be getting more insane at an ever faster rate lately.
So, things are weird and getting weirder, and frankly even this post is kinda nuts, right? Like, this very post sort of reads like a conspiracy theory type of thing.
Does any of the present weirdness have anything to do with major advances in ML in the past ~5 years, or with any of the coupling and drift issues I'm talking about, above? I have no idea. I also have no idea how I’d know if there was a connection.
Our fundamental inability to really know what is going on out there in the wilds of the web brings me to one last parallel with the stock market battle bots: pervasive secrecy and competition make it impossible to know for sure what’s really going on.
In the case of the markets, we eventually just sort of stopped wondering about “the bots” and embraced an uneasy agnosticism. Maybe “the market” is some kind of mega-AI that does its own thing and bears little resemblance to the collection of emergent human behaviors it replaced. Or, maybe that’s all silliness and speculation, and it’s still meaningfully a human collective that’s somehow mediated by machines. Who knows, and who cares, as long as it more or less looks like this, right?
Same with social media and, ultimately, all of online publishing. In the absence of any way to verify what is human and what isn't — and in the presence of systemic incentives to pass off bot behavior as authentically human — there's just no way of knowing what, if anything, is changing about the way we interact with one another. All we can have are suspicions and allegations, and vague, conspiratorial rumblings.
As long as nothing too crazy happens, maybe not looking at it too hard is the best approach?
Things will probably get extremely weird once you introduce higher-order monetization. That is, when you have an ML algorithm that can produce monetizable content, and it's designed to reserve a fraction of its income to reward whichever sources informed its more successful pieces.
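A toy sketch of what that might look like — all the names and attribution weights here are hypothetical, and figuring out which sources actually "informed" a piece is the hard, unsolved part:

```python
# Toy sketch of "higher-order monetization": a content bot earns revenue per
# piece, reserves a fraction of it, and pays that out to whichever sources it
# drew on for that piece. Everything here (names, attribution weights) is
# hypothetical; real influence attribution is the hard, unsolved part.

from collections import defaultdict

REVENUE_SHARE = 0.10  # fraction of income reserved for sources

source_balances = defaultdict(float)

def settle_piece(revenue: float, influences: dict[str, float]) -> None:
    """Split the reserved share of one piece's revenue across its sources,
    proportionally to their (assumed known) influence weights."""
    pool = revenue * REVENUE_SHARE
    total_weight = sum(influences.values()) or 1.0
    for source, weight in influences.items():
        source_balances[source] += pool * weight / total_weight

# One hypothetical piece: $200 of conversions, traced to two sources.
settle_piece(200.0, {"source_blog_A": 0.7, "source_forum_B": 0.3})
print(dict(source_balances))
```

Once that loop exists, sources have an incentive to produce whatever the bot rewards, which then feeds back into what the bot writes next: the coupling story all over again, with money attached.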