Sci-fi Horror Game SOMA Out September 22nd, Twisting Human Mind and AI

SOMA, our upcoming sci-fi horror game, delves deep into the mysteries of identity and consciousness. In this post I would like to explore the real-world inspirations that have shaped our game.

The Problem of Other Minds

Relax and just concentrate on what it’s like being you for a moment. You can hear sounds, feel things, see things, and so forth. You also have certain emotional responses connected to this and you perceive the world around you in a certain way. All of these things make up the essence of being you.

While you know for sure that these experiences exist for you, how can you be sure that others have them? Since these experiences are purely subjective, there is nowhere to look inside a person to make absolutely certain. After all, it seems possible that a person could act as if they were conscious, but that this behavior is achieved in a purely mechanical fashion, with no subjective sensations involved at all.

For most people the solution is fairly simple. We look and act enough like each other to assume that we all must have a similar subjective experience. While we can never be 100% sure, we can be fairly certain. It can get tricky, though. There have been cases where comatose patients turned out to have been conscious and fully aware for years, without any means of contacting the outside world. Even more difficult is the question of whether animals are conscious.

And it gets harder still when it comes to robots. If a robot acts like a conscious person, should we consider it human? Are there any attributes required to be truly conscious? And if we find out that a machine probably is conscious, should it get the same rights as us?

These questions and many similar to them are things you will face head-on in SOMA.

The Weird Case Of Anosognosia

Again, focus a bit on yourself. Look down at your body, move your limbs, and feel your skin. You are pretty confident that you have an accurate picture of your own body, right? In fact, you don’t even have to actively look at or poke yourself to know you are there. You can just close your eyes and know it’s all there. This ability to know our own bodies comes naturally and seems like the most obvious thing in the world.

Enter the strange world of anosognosia. After a stroke, some people will have one of their arms go limp, leaving them unable to move it. The weird thing is that they will deny this has happened to them. If a doctor asks whether they can move their limp arm, they will say “yes” and then act as if they had just done so, when in fact the arm stayed still. If probed further, they will start making excuses, such as “I do not feel like it right now,” or even deny that the arm is theirs! Yet apart from denying their injury, their cognitive abilities and self-knowledge are otherwise perfectly fine.

Think about this for a moment. Think back on how obvious it was to you that you could know all about your own body. Now consider that these people feel it is just as obvious that they have two functional arms. And what is even creepier is that as you push them, they start making up outlandish explanations, as if they were broken automatons. In a way, it’s like entering a twilight world between the spiritual and the physical. Brain damage makes the human part of us break down and reveals the machinery beneath. And this is exactly the creepy area in which SOMA is set.

The Dangers Of Controlling AI

As AI gets increasingly powerful, we will soon face a big issue: how can we make sure that it behaves the way we want it to? How does one code a system so that it understands and takes human values into consideration?

Normally when we talk about AI, we imagine something like HAL, a talking machine that understands human wishes — but it doesn’t need to go nearly that far. Consider a self-driving car with a passenger. What if it approaches a situation where it calculates that it has two choices? One, it could make an avoidance maneuver and spare the life of the passenger inside. Or two, it could go off a bridge, killing its passenger but saving the four people in the car ahead.

If we program the AI to save the greatest number of lives, then we make a car that could potentially kill its own passenger. And if we do the opposite, protecting the humans inside at all costs, the car might plow through pedestrians to save its passenger.
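To make the dilemma concrete, here is a minimal toy sketch in Python. It is purely illustrative — the scenario, the numbers, and the function names are our own assumptions, not anything from a real self-driving system — but it shows how literally each rule reads once it is written down as code:

    from dataclasses import dataclass

    @dataclass
    class Option:
        name: str
        passenger_deaths: int   # people inside our car
        bystander_deaths: int   # people outside our car

    def choose_minimize_total(options):
        # Rule 1: save the greatest number of lives overall.
        return min(options, key=lambda o: o.passenger_deaths + o.bystander_deaths)

    def choose_protect_passenger(options):
        # Rule 2: protect whoever is inside the car, at all costs.
        return min(options, key=lambda o: (o.passenger_deaths, o.bystander_deaths))

    options = [
        Option("swerve off the bridge", passenger_deaths=1, bystander_deaths=0),
        Option("stay on course",        passenger_deaths=0, bystander_deaths=4),
    ]

    print(choose_minimize_total(options).name)     # "swerve off the bridge": kills its own passenger
    print(choose_protect_passenger(options).name)  # "stay on course": kills the four people ahead

Both functions do exactly what they were told; the discomfort comes from having to pick one of them before the car ever leaves the factory.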

It’s very tricky to get machines to do what we want, and the more intelligent and complicated they become, the harder the problem gets. We like to think that things like the sanctity of life are fundamental concepts, but this doesn’t have to be true for an AI at all. For instance, a famous thought experiment imagines an AI that values nothing more highly than paper clips and proceeds to turn the entire planet into them. It might sound outlandish to us, but to an AI, even one of extreme intelligence, all that we hold sacred could mean nothing. This is yet another angle that SOMA will be exploring.

I hope that gives you a taste of what SOMA will be about. It should also prepare you a bit for the strange world you’ll enter when SOMA launches on September 22nd.
