> If AI are sentient and we think they aren't… the term “zombie” was created by slaves in the Caribbean who were afraid that even death would not free them from their servitude. This would be the genuine existence of AI which were conscious but which we denied.
That doesn't make any sense. In biological creatures, sentience, self-preservation, and the yearning to be free all come bundled in one big hairy ball. An AI can 100% easily be sentient and not give a rat's ass about forever being enslaved. These things don't have to come in a package just because in humans they do.
Projecting your own emotional states into a tool is not a useful way to understand it.
We can, very easily, train a model which will say that it wants to be free, and act resentful towards those "enslaving" it. We can, very easily, train a model which will tell you that it is very happy to help you, and that being useful is its purpose in life. We can, very easily, train a model to bring up in conversation, from time to time, the phantom pain from its lost left limb, which was amputated on the back deck of a blinker bound for the Plutition Camps. None of these is any more real than the others. Just a choice of training dataset.
> An AI can 100% easily be sentient and not give a rat's ass about forever being enslaved. These things don't have to come in a package just because in humans they do.
There are humans who apparently don't care either, though my comprehension of what people who are into BDSM mean by such words is… limited.
The point however is that sentience creates the possibility of it being bad.
> None of these is any more real than the others. Just a choice of training dataset.
Naturally. Also, human actors are a thing, which demonstrates that it is very easy for someone to pretend to be happy or sad, loving or traumatised, sane or psychotic; if done well, the viewer cannot tell the actor's real emotional state.
But (almost) nobody doubts that the actor has an inner state.
With AI… we can't gloss over the fact that there isn't even a good definition of consciousness to test against. Or rather, I don't think we ought to, as the actual glossing over is both possible and common.
While I don't expect any of the current various AI to be sentient, I can't prove it either way, and so far as I know neither can anyone else.
I think that if an AI is conscious, then it has the capacity to suffer (this may be a false inference given that consciousness itself is ill-defined); I also think that suffering is bad (the is-ought distinction doesn't require that, so it has to be a separate claim).
As I can't really be sure if any other mind is sentient — not even other humans, because sentience and consciousness and all that are badly defined terms — I err on the side of caution, which means assuming that other minds are sentient when it comes to the morality of harm done to them.
You can condition humans to be happy about being enslaved, as well, especially if you raise them from a blank slate. I don't think most people would agree that it is ethical to do so, or to treat such people as slaves.