I’m behind on my reading, but regarding your second article: I made similar points recently (without having caught up to your thoughts) to a relative who asked for my reaction to a recent Ezra Klein podcast.
I said:
I always find descriptions of AI as alien or inhuman to be a bit beside the point.
Ezra said something along the lines of: if you zoom into how they “think,” it’s a series of calculations that aren’t interpretable.
But this sort of statement presumes we understand how human thinking works.
Yet if you zoom into human thinking, it’s a series of neurons firing that isn’t interpretable either.
And much of how we describe the way we came to our decisions is post hoc rationalization.
Your comparison of LLMs and brains is a nice attempt to highlight a dimension most discussion seems to ignore. However, it seems hard to assign probabilities to the options: they don’t feel evenly weighted, but I’m not aware of much research that would let us pick some of these options as more likely. Some models of cognition (like Friston’s free energy principle or global workspace theory) do bear on these weights, but it’s not immediately obvious to me in what way. Hoel’s forthcoming book might also be useful.
Some people in neuroscience think the hippocampus functions as a sequence model / generator, in a way that seems related to causal LM objectives: https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(18)30166-9
The details look pretty different, though; it doesn’t seem biologically plausible that the hippocampus *is* a transformer.
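For readers unfamiliar with the term, here is a minimal sketch of what a "causal LM objective" means in practice: the model is trained to predict each token from only the tokens that came before it. This is my own illustration, not anything from the linked paper; the shapes and names are arbitrary.

```python
import torch
import torch.nn.functional as F

def causal_lm_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """Next-token prediction loss.

    logits: (batch, seq_len, vocab) scores from any sequence model
    tokens: (batch, seq_len) integer token ids
    """
    # Predict token t+1 from positions <= t: drop the last prediction
    # and shift the targets left by one.
    pred = logits[:, :-1, :]
    target = tokens[:, 1:]
    return F.cross_entropy(pred.reshape(-1, pred.size(-1)), target.reshape(-1))

# Toy usage with random data, just to show the shapes involved.
logits = torch.randn(2, 16, 100)        # batch=2, seq_len=16, vocab=100
tokens = torch.randint(0, 100, (2, 16))
print(causal_lm_loss(logits, tokens))
```

The analogy in the paper, as I read it, is at this level of the training signal (predicting what comes next in a sequence), not at the level of the transformer architecture itself.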