Could you resist a true virtual reality and should you?
Let's assume we develop the capacity to create virtual worlds that are near indistinguishable from the real world. We hook you up to a machine and you now find yourself in what is effectively a parallel reality where you get to be the king of your own universe (if you so desire). Nothing is off limits - everything you've ever dreamt of is possible. You can be the only person there, you can populate it with unconscious AI that appears conscious, or you can have other people visit your world and visit theirs, as well as spend time in "public worlds" with millions of other real people.
Would you try it, and do you think you'd prefer it over the real world? Do you see it as a negative from an individual perspective if a significant part of the population basically spends their entire lives there?
This is where we start getting into the realm of philosophy as it relates to science-fiction-esque "true" Artificial Intelligence.
Taking the post at face value, these AI persons that populate your individual pocket dimension would be, for all intents and purposes, sentient artificial minds, or at least controlled by one central mind.
So does that AI deserve human rights? Do laws apply to them and to interactions had with them? If all they know is humanity, then are they also "human"? Is this theoretically infinitely intelligent supercomputer even capable of truly understanding humanity, emotions, and life in all of its facets?
I fully accept that I am getting too deep into this funny internet post, but there have been hundreds upon thousands of books, thought experiments, and debates over this EXACT premise. Short answer: there is no answer. It's Schrödinger's morality lol
I guess it depends on how realistic the fake consciousness is. Is it indistinguishable from real consciousness? Or would I be acutely aware that every relationship I create is fake? I mean, I guess if we're claiming it absolutely is not real, then I'll always know that, and it kind of taints the whole idea. It makes me wonder about the whole concept. Like, if we did find a way to determine consciousness somehow, could that knowledge interfere with building an emotional relationship with an indistinguishable but fake conscious AI?
How do you test that? How do you know that the people around you are actually conscious and don't just seem to be? If you can't experience anything, how do you fake consciousness? And is this fake consciousness really any less real than ours? I think anything that resembles consciousness well enough to fool people could be argued to be real, even if it's different from ours.
I don't think it matters in this case. I decided that they are not conscious and only seem to be, because I didn't want this thread to turn into a debate about whether it's immoral to abuse AI systems or not.
No. I'm very certain that my Roomba is not conscious. But if we can't tell whether or not these people are conscious, then I don't think it's right to have this power over them. A better parallel than a Roomba would be an animal.
No. I wrote the premise myself, and I specifically said they appear conscious, not that they are conscious. I get what you're saying, but that does not apply here. In this specific case we know for a fact that they're not conscious. The only other conscious beings there, besides you, are the other real people in the simulation. Not the AI characters.
I'm saying that "appears conscious" and "is conscious" could very well be the same thing, we don't know, so in this imaginary world I would not trust anyone who told me "don't worry, you can torture them, they are not actually conscious".
We do know. Consciousness is what you're experiencing now. Then again, general anesthesia is what non-consciousness feels like: nothing. It by definition cannot be experienced.
What we don't know is how to measure it. There's no way to confirm that something is or isn't conscious.