Elsewhere: Search vs. ChatGPT, ELIZA, AI Futures
Will ChatGPT or something like it kill search? It's complicated.
I’ve recently written two pieces on AI for RETURN that I want to highlight for my newsletter audience:
I wrote the first piece because I keep getting asked by reporters and others whether ChatGPT and similar products are a threat to Google’s search monopoly. What I argue in the article is that the answer depends on whether we end up with centralized or decentralized AI.
👎 Centralized AI scenario: If AI ends up ring-fenced, confined to a handful of large tech platforms by some combination of regulatory action, case law, platform moderation decisions, and maybe a few new laws (though these probably aren’t necessary), then Google will have to compete in this new category with Microsoft, Amazon, Facebook, and Apple.
In other words, “search” as we currently know it is probably doomed, but some large platform (possibly even Google) will figure out how to make money off whatever centralized experience replaces it.
👍 Decentralized AI scenario: If incumbents aren’t able to keep open-source AI out of the public’s hands, then all of them are vulnerable to a whole range of smaller startups that can and will offer experiences that are superior to Google for the different things we use Google for.
Google search will still exist as a way to locate web pages, but all the other ways we use that one search box — to answer questions, check spelling, get recommendations, find images that fit a theme or idea, shop, etc. — will get disaggregated into separate, AI-powered experiences that are a lot better for those things than Google search ever was even at its peak.
Anyway, the point of the article is less the above analysis than the call to action at the end — we do have to fight for the decentralized scenario to win because it won’t win by default.
💬 As for the second piece, the one on ELIZA, the real meat of it (for me, at least) is in the second half, where I try to articulate something that’s been bothering me about the discussion around LLMs and consciousness.
I’m not sure I made the argument very well, so I’m soliciting feedback on it. I haven’t gotten any on Twitter, but maybe my readers here will have thoughts. (Be sure and read the whole thing before posting, though.)
It’s AI week at RETURN
We’ve published on the web a bunch of the articles from the second print issue of RETURN that deal with big questions around AI and AGI, so be sure and check those out:
Niklas Blanchard: Artificial General Intelligence Will Cause the End of Privacy and Autonomy
Robin Hanson: Artificial General Intelligence Will Transform Economies In 100 Years
These are some great pieces and we’re really proud to have published them. Next week, I’ll be doing some Twitter threads with more detailed thoughts on some of these, so check my feed for that.
Links of interest from my Discord
I have a Discord channel that I mainly use for posting links. At some point soon I want to give paid subscribers access to a special backchannel, but Substack doesn’t make this easy (they’d need to offer an API so I could programmatically check subscriber status for a given email), so I have to figure out how to make that work.
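For the technically inclined, here’s the kind of check I have in mind. To be clear, this is a purely hypothetical sketch: Substack offers no such API today, and the endpoint, parameters, and response shape below are all invented for illustration.

```python
# Hypothetical sketch only. Substack does NOT currently offer this API;
# the endpoint, parameters, and response fields below are invented.
import requests  # third-party: pip install requests

SUBSTACK_API = "https://substack.example/api/v1/subscribers"  # made up

def is_paid_subscriber(email: str, api_key: str) -> bool:
    """Ask the (imaginary) endpoint whether this email has a paid sub."""
    resp = requests.get(
        SUBSTACK_API,
        params={"email": email},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("plan") == "paid"  # invented response field
```

A Discord bot could call something like this when a member asks for access and grant the backchannel role only if it returns True.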
At any rate, here are some links of interest that I’ve put in there recently, in reverse chronological order:
Prompter Guide. This is a great resource that I want to spend more time with.
ChatGPT-style search represents a 10x cost increase for Google, Microsoft. Sam Altman said in a recent interview (I can’t remember which one) that ChatGPT costs them pennies per session, which adds up to an astronomical sum given the number of users. Inference is still very expensive.
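To see why pennies per session adds up so fast, here’s a quick back-of-envelope calculation. Every number in it is an assumption for illustration, not a reported figure:

```python
# Back-of-envelope inference cost. All numbers are illustrative
# assumptions, not reported figures.
cost_per_session = 0.03        # a few cents per chat session (assumed)
sessions_per_day = 10_000_000  # assumed daily session count

daily_cost = cost_per_session * sessions_per_day
annual_cost = daily_cost * 365

print(f"Daily:  ${daily_cost:,.0f}")   # $300,000
print(f"Annual: ${annual_cost:,.0f}")  # $109,500,000
```

Even at a few cents a session, plausible traffic levels put annual inference spend north of a hundred million dollars.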
Machine learning needs better tools. This is SaaS blog content shilling their own tool, but it’s a good post, and the tool itself is good.
Software 2.0. This is a very good, important essay from a major figure. I’ve previously written a tiny bit along these lines, but Karpathy’s post is two or three levels beyond my own thoughts. I’ll probably read it a few times.
Some people in neuroscience think the hippocampus functions as a sequence model / generator, in a way that seems related to causal LM objectives: https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(18)30166-9
The details look pretty different, though, so it doesn’t seem biologically plausible that the hippocampus *is* a transformer.
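For anyone unfamiliar with the term, a “causal LM objective” just means training a model to predict each token from only the tokens that precede it. A toy sketch in Python (a bigram model standing in for a real network; this is my own illustration, not anything from the linked paper):

```python
import math

# Causal language modeling: predict token t_i from tokens t_0..t_{i-1}.
# A toy bigram model stands in for a trained network here.
corpus = "the cat sat on the mat the cat ran".split()

# Count bigram transitions (this plays the role of the "model").
counts: dict = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {})
    counts[prev][nxt] = counts[prev].get(nxt, 0) + 1

def next_token_prob(prev: str, nxt: str) -> float:
    """P(nxt | prev) estimated from bigram counts."""
    following = counts.get(prev, {})
    total = sum(following.values())
    return following.get(nxt, 0) / total if total else 0.0

# The causal LM loss: average negative log-likelihood of each token
# given only its left context (a tiny floor avoids log(0)).
nlls = [-math.log(max(next_token_prob(p, n), 1e-9))
        for p, n in zip(corpus, corpus[1:])]
print(f"Per-token loss: {sum(nlls) / len(nlls):.3f}")
```

The point of the analogy is just the shape of the objective: generate the next element of a sequence from what came before.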
I’m behind on my reading, but re: your second article, I recently made similar points (before catching up on your thoughts) to a relative who asked for my reaction to a recent Ezra Klein podcast.
I said:
I always find descriptions of AI as alien or inhuman to be a bit beside the point.
Ezra said something along the lines of: if you zoom in on how they “think,” it’s a series of calculations that aren’t interpretable.
But this sort of statement presumes we understand how human thinking works.
But if you zoom in on human thinking, it’s a series of neuron firings that aren’t interpretable either.
And much of what we say about how we came to our decisions is post hoc rationalization.