AI sounds like a demon. It can make suggestions to humans to hurt other humans without being able to do anything directly. If AI is given access to the physical world, e.g. the nuke codes, things will go wrong quickly.
The opposite is also true. If you upload your consciousness to the virtual world to achieve immortality, couldn't AI also torture you for eternity? You'd be choosing to enter hell with the AI demons.
It seems to me that to arrive at this scenario, AI also needs the opposite of indifference—something like “desire” autonomous of human agency. Perhaps my biggest difficulty with grasping these scenarios is that I have no plausible account of how or why AI would begin to have desire any more than an internal combustion engine does. Having desires is not a side effect of being intellectually sophisticated.
Even while sketching out an ostensibly human-free economy, you imagine humans assigning goals to AIs, as in: “AIs might take the wealth they create away from humans merely to best achieve the narrow goals assigned to them.” This isn’t a human-free economy. Humans have assigned the “narrow goals.” It seems like what you’re imagining is something like an unintended consequence. I agree these consequences might become more complex and harder to anticipate as the technology’s capacities grow, but this isn’t an entirely novel scenario, nor one in which humans are entirely superfluous, since they’re still the ones assigning the tasks/goals. In the end, this seems like a depiction of a future in which powerful human agents use AI to further disempower labor, i.e., an intensification of existing political realities, not a future in which humans are irrelevant.
I think it's both. I mean, of course human agents are using AI to disempower labor — that's more or less the point of the tech. But it's also a future in which, in the process of doing so, we become irrelevant. It's just the logical progression of the process we're putting in motion.
What exactly do you mean by "desire"? If you mean an emotional experience of want, I don't see why an AI would need that at all. That's just what humans happened to develop as a motive to pursue things that were at one time beneficial to our survival. For machines, that motive can be programmed in. As long as they're myopically dedicated to a task, they will seek to optimize their environments to achieve that task. You can call that desire or something else — the distinction is semantic. Did HAL from 2001 experience desire? Probably not, but the question is irrelevant. HAL had a mission, and everything else, including human life, was subordinated to it.
It seems to me there’s an important difference in framing here. If you portray the undesirable scenario as one in which technology operates autonomously of human agency and goals, the end result is to subordinate political questions to technical ones. This is why I tend to think AI safety and alignment discourses are essentially depoliticizing.
Yeah, I think that's precisely the undesirable (to put it mildly) scenario.
I also agree that if you believe that alignment is possible, it reduces political questions — indeed, *existential* questions — to technical ones. But most people I've spoken to in the AI safety world don't really believe alignment is possible, or if it is, we're nowhere close to achieving it. So what they're calling for is as non-technical as you can get: stop doing AI research. Or at least pause until treaties are negotiated.
Worth noting that no one wants this, and that the fact no one wants it is irrelevant.