AGI Is Silicon Valley’s First Native Cryptid
I know a guy who knows a guy whose cousin has GPT-4 access.
I’ve been chronicling the hand-wringing and fear-mongering over AI — what it will do to our jobs, our art, our morals — since the start of this newsletter. The fears around AI are escalating daily on my feed, and they’re coming from a really wide array of actors — BigCos like Google, independent artists and creators, effective altruists, woke AI ethicists, etc.
I have a “The Story So Far” post for tomorrow’s newsletter that surveys these fears and how they’re resulting in the formation of a powerful, diverse coalition that’s fighting against decentralized AI in favor of centralized AI that’s controlled by a handful of mega-platforms. So if you’re not signed up for this newsletter, then do it now so you don’t miss it.
😵‍💫 But there’s a really weird and slightly scary layer to all of the above hand-wringing and fearmongering about AI, a layer separate enough from the other concerns that it deserves to be pulled out into its own post: chatter about an incredibly powerful artificial intelligence that some have seen, or have talked to someone who has seen, and about what it’s going to do to our entire civilization in short order (as in, this year or next at the latest).
One sees and hears stuff like the above and reads the feverish chatter about the supposedly godlike powers of the coming GPT-4 model from OpenAI, and one gets the impression that there’s a kind of new cryptid lurking about Silicon Valley — Bigfoot, the Loch Ness Monster, the chupacabra, GPT-4, some AGI deep inside of Google or Meta.
Something wild and alien is Out There — something that just breaks people’s reality when they encounter it first-hand. Everybody knows a guy who knows a guy who has seen one of these and who swears it’s gonna break the world.
👎 There’s even a crowd of debunkers who are trying to calm everyone down. They’re telling us to relax, that AGI is still way far off, and everyone needs to chill out about GPT-4 because it’s just not that big a leap.
🛸 To switch metaphors from cryptids to an adjacent area of High Strangeness: those who follow me on Twitter know that I’m a follower of the UFO/UAP scene, and what’s fascinating to me is just how much overlap there is between “a guy on Twitter saw a crazy-powerful AI” and “a guy on Twitter has seen the mind-blowing UFO videos that the Pentagon is sitting on.”
(Is it just me, or are we in a collective moment where everyone is waiting for some big reveal? UFO disclosure, an AGI product launch, smoking-gun confirmation that Trump was getting direct orders from the Kremlin the whole time, proof that covid is a bioweapon… There’s something in the water online, I guess.)
Collapse at the speed of meme
I bring up the cryptid angle because if there’s something to this chatter — even if it’s a bit overheated — then it gives us another view of who is trying to rein in AI and why.
I don’t want to get too far into this because it’s pretty speculative and I really don’t have a good sense of how much stock to put in the imminent AGI chatter, but I will offer one possible scenario for what could happen soon.
🦠 Remember how I framed machine learning as a contagion vector for deflation? The part of that story I want to stress is that everyone is already carrying around billions of network-connected transistors in their pockets, purses, backpacks, and briefcases, which means that vector can move across the world at viral speed.
The decentralized AI revolution isn’t going to be like the mobile phone revolution, the internet revolution, or the computer revolution — it’s probably not even going to be like the social media revolution. Unlike those other revolutions, this revolution doesn’t require a big infrastructure build-out.
🚢 The AI revolution can happen at the speed of code deploys and app downloads. If someone makes an ML model that’s a better doctor than my GP, I can get it within hours of them releasing it — possibly even minutes. The same is true of a legal advice model, a K-12 teacher model, a technical writer model, a tax preparation model, and the list goes on.
A model that is good enough could drive the expected future earnings of an entire white-collar profession to zero within a matter of hours.
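To put a toy number on that claim, here’s a minimal back-of-the-envelope sketch in Python. Every figure in it is a made-up assumption of mine (the wage, career length, discount rate, and per-year displacement probability are illustrative, not from the post); the point is only to show how the discounted value of a profession’s future earnings behaves once a good-enough model starts displacing it:

```python
# Toy illustration (my assumptions, not the post's): the present value of a
# profession's remaining earnings, discounted at rate r, where each year the
# profession survives displacement by a good-enough model with probability
# (1 - p_displaced).

def career_present_value(annual_wage: float, years: int, r: float,
                         p_displaced: float) -> float:
    """Sum of discounted future wages, weighted by the probability the
    profession is still intact in each year."""
    survival = 1.0  # probability the profession hasn't been displaced yet
    pv = 0.0
    for t in range(1, years + 1):
        survival *= (1.0 - p_displaced)           # odds of surviving year t
        pv += survival * annual_wage / (1 + r) ** t
    return pv

# A hypothetical white-collar career: $150k/year for 30 years, 3% discount rate.
for p in (0.0, 0.05, 0.25, 0.9):
    print(f"p_displaced={p:.2f}: ${career_present_value(150_000, 30, 0.03, p):,.0f}")
```

The exact numbers are beside the point; the takeaway is that the moment the displacement probability jumps toward 1 (which, per the above, can happen at app-download speed), the whole expected-earnings column heads toward zero.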
To be clear: we may not get to this point with AI any time soon, or we may get there next week. It’s hard to say, which brings me to my next point…
Collapse at the speed of banking
🏦 It’s hard to say what is about to happen with AI the same way and for some of the same reasons that it was hard to say in the summer of 2008 that the entire global banking system was about to collapse.
I’ve previously used leverage as an analogy for tech in general and AI in particular. I’ve talked about two important ways AI is like leverage in the financial system:
It’s a force multiplier that lets a few people do a whole lot with very few resources. In other words, starting with a tiny bit of something and applying a lot of leverage is the same as starting with a whole lot of that same thing.
It increases risk because you can blow up the system with it. This bit about blowing up the system is, of course, still theoretical when applied to AI.
I’d like to add a third way that AI is like leverage in the financial system: no party inside the system has much visibility into how much leverage the other parties are carrying or what they’re doing with it, which makes for two ways the combination of leverage plus asymmetric information can blow up the system:
One party has secretly amassed a huge amount of leverage and is now a critical node that can blow up the system if it goes down.
Many parties have all used leverage to pile into the same trade (without explicitly coordinating), so that if that trade goes sideways it takes everyone out.
🤖 To unpack the above two conditions in explicitly AI terms:
One company (or country, like China) has secretly built a super-powerful AI that it cannot or will not control, and that, once launched, single-handedly dominates one or more parts of our civilization — doctoring, lawyering, news publication, warfighting, etc.
Many companies have secretly built moderately powerful AIs that are all tackling the same few parts of our civilization (doctoring, lawyering, news publication, warfighting, etc.) from slightly different angles, and that are then launched at roughly the same time to dominate those areas in aggregate.
🥸 My point here is that AI research efforts have a kind of hedge fund quality to them, in that there are different groups competing with each other in secret, so that nobody has a bird’s-eye view of the whole space and nobody can say with any confidence when something really big is about to unfold in it.
In this respect, the AGI chatter is the Valley version of Wall Street rumors about a big trade that some whale is stuck in that’s supposedly going sideways, or about some other major move deep in the plumbing of the financial system.
And as is the case with market chatter, experts and insiders — all of whom, again, are operating with imperfect visibility into what their competitors are doing — are going to disagree on whether there’s anything to the rumors. Outsiders will have no way of making sense of any of this talk, other than to just wait and see if they wake up to the news that their financial prospects are suddenly far dimmer courtesy of the inner workings of some distant tangle of math and electrons.