Being "at peace with [your] own mortality" is a bit of a tell on where you might draw the line for what is an appropriate intervention in the name of safety. Most people consider staying alive to be of utmost importance.
If death is an acceptable outcome to you, please understand that most of society does not agree.
Being "at peace with [your] own mortality" is a bit of a tell on where you might draw the line for what is an appropriate intervention in the name of safety. Most people consider staying alive to be of utmost importance.
If death is an acceptable outcome to you, please understand that most of society does not agree.
Real trouble can arise when one projects one's own comfort with death onto an outside population.
That is true, but certain professions benefit from this mindset, where panicking over one's mortality is instantly counter-productive.
Take pilot or a firefighter: neither benefits from panicking, let alone the people whose lives depend on them.
Yudkowsky wants to do away with electricity, because your house might burn down because of an electrical fire, or you might die of an electric shock. Risks that make life better if taken, compared to existing in a world without electricity.
Few things scare me more than being in a panicking crowd. Fear and disgust, or disappointment? All of them. I'm not concerned about the fears or opinions of a hysterical mob, I just want to be as far away from them as possible.
To take this analogy one step further: you're the pilot in a hairy situation. It's becoming apparent to the passengers that this flight is unusual. Some begin to panic.
One of the loudest voices (screaming "WE'RE ALL GONNA DIE") is Mensa-member and accredited pilot, Yudkowsky.
You're the actual captain on this flight though, he's just a passenger. So, will you concentrate on your job, and trust your judgement, or will you invite Yudkowsky into the cockpit to give his input on how to fly the plane?
Of course not. He might be smart, he might be a pilot, but he lost his cool. When - IF! - we land, I want his license to be revoked. He's unfit to fly.
(Note: in this analogy I rely on survivor's bias: that is, if you crash the plane, Yudkowsky was - somewhat - right, we all died. We can only mock him in a future where AGI won't kill us all, as he predicts. But our present is already impacted by survivor's bias, we're on the unlikely branch of possible outcomes where 1983 didn't escalate into a global thermonuclear war, multiplied (a multiplier less than one) by other possible extinction events that never came to pass. So, why bother worrying about anything, really? Just enjoy your flight! The future observer is always right; they must be, because the naysayers who were right in their own, extinct branches aren't present any more to object.)
Pilots have thousands of hours of flight time and comercial flights are a routine activity where the many possible failure modes are well known and in fact have already happened and been studied extensively.
It’s not clear to me that there is anyone working on AGI that is in an analogous situation to the pilot in this story. Expertise is developed via many repetitions with unambiguous feedback. Literally nobody has ever created AGI before so nobody could have obtained such feedback.
To mitigate the risks, I asked ChatGPT to write me a script that replaces historic and fictional evil people, and also notorious Luddites in my LLM training corpus with Yudkowsky.
This should introduce a bias where AGI will prioritize going after a single human, giving mankind time to flip the switch!
It's very distasteful to speculate on others' mental state online, but if your lived reality includes wondering, when you notice a drone nearby, if it's controlled by an AI that's hunting You, then you're disqualified from leading anything. Even if it turns out to be true.
A Tesla! Does it have a human driver, or is it trying to run over me?
Being "at peace with [your] own mortality" is a bit of a tell on where you might draw the line for what is an appropriate intervention in the name of safety. Most people consider staying alive to be of utmost importance.
If death is an acceptable outcome to you, please understand that most of society does not agree.
Real trouble can arise when one projects one's own comfort with death onto an outside population.
That is true, but certain professions benefit from this mindset, where panicking over one's mortality is instantly counter-productive.
Take pilot or a firefighter: neither benefits from panicking, let alone the people whose lives depend on them.
Yudkowsky wants to do away with electricity, because your house might burn down because of an electrical fire, or you might die of an electric shock. Risks that make life better if taken, compared to existing in a world without electricity.
Few things scare me more than being in a panicking crowd. Fear and disgust, or disappointment? All of them. I'm not concerned about the fears or opinions of a hysterical mob, I just want to be as far away from them as possible.
To take this analogy one step further: you're the pilot in a hairy situation. It's becoming apparent to the passengers that this flight is unusual. Some begin to panic.
One of the loudest voices (screaming "WE'RE ALL GONNA DIE") is Mensa-member and accredited pilot, Yudkowsky.
You're the actual captain on this flight though, he's just a passenger. So, will you concentrate on your job, and trust your judgement, or will you invite Yudkowsky into the cockpit to give his input on how to fly the plane?
Of course not. He might be smart, he might be a pilot, but he lost his cool. When - IF! - we land, I want his license to be revoked. He's unfit to fly.
(Note: in this analogy I rely on survivor's bias: that is, if you crash the plane, Yudkowsky was - somewhat - right, we all died. We can only mock him in a future where AGI won't kill us all, as he predicts. But our present is already impacted by survivor's bias, we're on the unlikely branch of possible outcomes where 1983 didn't escalate into a global thermonuclear war, multiplied (a multiplier less than one) by other possible extinction events that never came to pass. So, why bother worrying about anything, really? Just enjoy your flight! The future observer is always right; they must be, because the naysayers who were right in their own, extinct branches aren't present any more to object.)
Pilots have thousands of hours of flight time and comercial flights are a routine activity where the many possible failure modes are well known and in fact have already happened and been studied extensively.
It’s not clear to me that there is anyone working on AGI that is in an analogous situation to the pilot in this story. Expertise is developed via many repetitions with unambiguous feedback. Literally nobody has ever created AGI before so nobody could have obtained such feedback.
To mitigate the risks, I asked ChatGPT to write me a script that replaces historic and fictional evil people, and also notorious Luddites in my LLM training corpus with Yudkowsky.
This should introduce a bias where AGI will prioritize going after a single human, giving mankind time to flip the switch!
It's very distasteful to speculate on others' mental state online, but if your lived reality includes wondering, when you notice a drone nearby, if it's controlled by an AI that's hunting You, then you're disqualified from leading anything. Even if it turns out to be true.
A Tesla! Does it have a human driver, or is it trying to run over me?
Certainly.