What If The Future Of AGI Rhymes With The History Of Flight?
The story of Daedalus is useful for thinking about AGI, but not for the reason you might think.
👶 I often read that machine learning is a long way from being as smart as a cat, a dog, or a human baby. It seems we’re pretty far away from being able to recreate the most salient aspects of biological intelligence — whatever those aspects are (this is an open question).
Confession time: lately, I’ve quit caring about artificial general intelligence (AGI), because I think it probably doesn’t matter. We humans sometimes spend a long time trying to imitate some particular feat of this or that animal via technology, then abandon the goal of imitation entirely once it stops mattering on a practical level.
🦅 Flight is probably the best example of this, and lately I’ve been thinking: “what if the quest for AGI plays out like the very ancient quest for human flight?”
In the course of my life, I have flown in the following devices:
A jet airplane
A propeller airplane
A helicopter
A blimp
Not one of those machines really operates like a flying animal. We spent a long time trying to imitate birds and insects, and we did manage to glide like some of the raptors. But the whole business of flapping to get lift just never worked out for us, so we came up with a set of alternatives that work dramatically better for different purpose-built scenarios.
🛩️ I now suspect our AGI quest is going to play out much like our “human-powered flight via large, flappy wings” quest — a quixotic and ultimately futile series of efforts to imitate nature that ended in a suite of technologies that are actually better for our purposes than any successful natural imitation would’ve been.
If you think about it, this is already more or less what’s going on with machine learning. We’re quite successful at building AI models to do specific types of tasks, so a generative language model like ChatGPT might be better viewed as the text-production equivalent of a propeller-powered aircraft rather than as a not-so-smart fake brain.
🩺 To give a concrete example: Imagine a set of powerful medical models that replace most of the kinds of labor currently done by a general practitioner:
A question-answering model that can do basic diagnosis via a smartphone camera and/or sensor data from wearables.
A generative text model that can take the output of the QA model and give detailed treatment plans.
A model that checks for drug interactions.
A model that can refer you to specialists based on the output of any of the above.
None of this requires “AGI” — these models don’t need much in the way of bedside manner. It’s just a suite of specialized models that know how to interact with you and with one another to do the types of labor we currently call “medical care.” Really high-quality, ubiquitous medical models — jet-powered models with all the kinks worked out — might offer such a superior healthcare experience for everyone that we’d wonder why anyone ever wanted to replace “the doctor” with a single robot.
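To make the division of labor concrete, here is a minimal sketch (in Python) of how such a suite of specialized models might hand work to one another. Every name, data shape, and stub below is hypothetical: placeholders standing in for real models, not any existing API.

```python
# Hypothetical sketch: composing narrow "specialist" models into one
# general-practitioner-shaped workflow. All names and stub logic here
# are invented placeholders, not real models or a real medical API.
from dataclasses import dataclass, field


@dataclass
class Diagnosis:
    condition: str
    confidence: float


@dataclass
class CarePlan:
    diagnosis: Diagnosis
    treatment: str
    interaction_warnings: list[str] = field(default_factory=list)
    referrals: list[str] = field(default_factory=list)


def diagnose(image_bytes: bytes, wearable_readings: dict) -> Diagnosis:
    """Stand-in for the question-answering model (camera + sensor data)."""
    return Diagnosis(condition="seasonal allergies", confidence=0.87)


def plan_treatment(diagnosis: Diagnosis) -> str:
    """Stand-in for the generative model that writes a treatment plan."""
    return f"Antihistamine daily for two weeks for {diagnosis.condition}."


def check_interactions(treatment: str, current_meds: list[str]) -> list[str]:
    """Stand-in for the drug-interaction model."""
    return [f"Check {treatment!r} against {med}" for med in current_meds]


def refer(diagnosis: Diagnosis) -> list[str]:
    """Stand-in for the referral model; escalates low-confidence cases."""
    return [] if diagnosis.confidence >= 0.8 else ["allergist"]


def virtual_gp(image: bytes, readings: dict, meds: list[str]) -> CarePlan:
    """Thin orchestration layer playing the role of 'the doctor'."""
    dx = diagnose(image, readings)
    tx = plan_treatment(dx)
    return CarePlan(dx, tx, check_interactions(tx, meds), refer(dx))


print(virtual_gp(b"", {"heart_rate": 72}, ["lisinopril"]))
```

The point of the sketch is the composition: each stub does one narrow job, and a thin orchestration function does the work we’d otherwise assign to a single general intelligence.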
Ultimately, I suspect that our quest for artificial human-level intelligence, or even cat-level intelligence, is a fascinating and educational science project that can putter along in parallel with whatever machine learning is actually doing as it moves out of the Wright Brothers phase, through propellers and jets, to whatever the ML equivalent of SpaceX will be.
🚨 Having said all this, I should offer these crucial caveats:
I’m reasoning by analogy from history and suggesting that the development of machine learning will rhyme with the development of flight. This is always a bit suspect, so your mileage may vary.
Machine learning continues to surprise. One of those surprises could certainly be something everyone would agree is an AGI.
Everyone who is thinking hard about AGI, especially the risks, should keep doing that. It’s way better to prepare for an AGI that never arrives than to not prepare for one that surprises us.
I recently read The Wright Brothers by David McCullough, and it was interesting how much time the brothers spent observing birds.
True, their machine didn't look like a bird or flap its wings, but they learned a ton from the birds.
AGI might be the same. Yes, it will look different from biological intelligence, but it might be heavily inspired by it. I might use this thought as the start of an article for https://mythicalai.substack.com/ :)
Yann LeCun recently pointed to this article:
https://www.linkedin.com/pulse/real-revolution-ai-still-come-hussein-mehanna
which touches on some things that are still unsolved.
Relatedly, I’m not sure I would say that AGI research is trying to precisely mimic how human brains achieve general intelligence; it’s more that there are still areas, like hierarchical thinking and planning, that have been difficult to achieve with AI.
(But I’m in the low-level trenches of AI runtimes, so I’m not claiming to be an expert on the current state of the art.)