The risks described are indeed about AI systems that don't yet exist, since we are still in the development phase. The whole idea of AI Safety is to anticipate what behaviors the AI systems we're developing will have and to prepare countermeasures.
Not doing so is like designing a plane, having someone point out an accident risk ("a large bird caught in the engines could cause them to fail"), and responding "planes are fantasy, and the safety precautions you're recommending are also fantasy".
No, in fact the right analogy is to something like nuclear safety—overregulation due to real, but exaggerated, dangers kept us from the flying car and cheap energy. It is very suspicious that big players want to strangle this tech in the cradle and arrange things so that only approved big incumbents and bad actors can use it.
Anyway, suffice it to say that I'm familiar enough with Yud not to need you to link this sequence or that (what's next, his Harry Potter fanfic? Stuff on the dangers of AI math pets?).
My piece presumed familiarity with Yud's debate with Hanson about foom and the diffusion of information. Hanson won then, and the empirical data since has only strengthened the prior that there's no foom in the offing. And even if there were, it could not, as a practical matter, i.e., in terms of political economy, be stopped.
You've addressed none of that. I don't think you can.
I'm not sure what point you think I should take into account. Is it the fantasy AI that doesn't exist, or the fantasy means to deal with it, or what?
If you don't believe we're currently building powerful AI (or a plane), then there's a specific dive into that here: https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to
Ty for reading