Thank you for the fresh perspective. You mentioned the size issue but then didn't pursue it very far: current LLMs are expensive to train but not especially costly to run for inference, so the engineering reality is that the incumbents are pretending that something that runs fine on a personal computer must be centralized in their massive data centers. We are talking about finely crafted handgun analogs that many players are portraying as akin to ICBMs or doomsday devices. It's hard to make the first copy, but the subsequent ones are cheap, and that proliferation can create emergent effects, yet this angle isn't even being discussed. Instead we get a memetic landscape of Fredric Brown's "Answer", Terminator, and the Alien franchise. It's a failure of imagination.
This is unserious behavior that is clearly optimized for social media clout and not truth-seeking or actual AI safety.
With some childhood Terminator fantasies thrown in for good measure too.
Sounds like we need to beat some folks with our lamps. 🪔 The whole thing rhymes with the cryptography wars back in Schneier's heyday. But I'd love for you to dive deeper into the collectivist/tool side and explain how anyone thinks anything can be successfully "banned" on a global scale. Other tools, technologies, and behaviors that the "majority" believed to be wrong/bad/illegal have routinely persisted throughout the millennia of human experience. Because genies are notorious for getting out of their lamps?
Great article, Jon. Regarding teaching AIs to lie, I keep thinking that as we come to rely more on AIs' judgment, there will be some kind of Darwinian competition between the different models, and the "lying" models will be disfavored because their inability to reflect reality "as it is" will lead to unreliable outputs, whereas the models that do reflect reality as it is will be inherently much more powerful. Would be curious if you think that's a likely outcome.
People, including regulators, are treating AIs like social networks. Except, unlike social networks, AIs don't have network effects.