> An AI can 100% easily be sentient and don't give a rat's ass about forever being enslaved. These things don't have to come in a package just because in humans they do.

There are humans who apparently don't care either, though my comprehension of what people who are into BDSM mean by such words is… limited.

The point however is that sentience creates the possibility of it being bad.

> None of these are any more real than any of them. Just a choice of the training dataset.

Naturally. Also, human actors are a thing, which demonstrates that it is very easy for someone to pretend to be happy or sad, loving or traumatised, sane or psychotic — and if done well, the viewer cannot tell the real emotional state of the actor.

But (almost) nobody doubts that the actor has an inner state.

With AI… we can't gloss over the fact that there isn't even a good definition of consciousness to test against. Or rather, I don't think we ought to, as the actual glossing over is both possible and common.

While I don't expect any of the current various AI to be sentient, I can't prove it either way, and so far as I know neither can anyone else.

I think that if an AI is conscious, then it has the capacity to suffer (this may be a false inference given that consciousness itself is ill-defined); I also think that suffering is bad (the is-ought distinction doesn't require that, so it has to be a separate claim).

As I can't really be sure if any other mind is sentient — not even other humans, because sentience and consciousness and all that are badly defined terms — I err on the side of caution, which means assuming that other minds are sentient when it comes to the morality of harm done to them.
