What we would need to build to make independent electrical beings.
tl;dr — two models for how we could make computers that are both independent and conscious today.
From Terminator to Her to Westworld, the dream of creating fully conscious, independent electrical beings has seemed like it must remain science fiction. But as the raw processing speed of computers approaches that of the human brain, we will need software patterns that can support the flourishing of independent and conscious electrical life.
Ray Kurzweil, in his recent book How to Create a Mind, suggests that the hidden Markov model is our best bet for helping computers detect complex patterns. However, he stops there. A computer that can merely detect patterns is not a mind; it is not an independent and conscious electrical being. What would it lack?
It would lack an independent will, and it would lack consciousness of itself and of others.
Here I’d like to suggest two new models for how to give computers those two attributes. These models would give computers a will and a consciousness in such a way that they would be both life-affirming and superior to robots without these attributes.
An Independent Electrical Will
A being wouldn’t be much of a being if it didn’t have its own independent will. Without a will, it would just be following its programming. But if we could give it a will, that is, a principle that governs its actions, and this will were identical to the will of living beings, then we would have an independent electrical being.
So what is the fundamental principle of the will of all living beings? I had a theory, and then I found out that some MIT researchers had arrived at the same theory.
My theory is that the fundamental principle of the will of every living being is to maximize the number of future histories that being is a part of.
What a silly thing to underlie each of our individual and communal wills, and yet I believe there are strong arguments to support it.
First off, what is a “future history”? What a strange phrase.
A “future history” is one possibility for how reality could unfold. Which reality actually occurs is not known, but life always tends toward maximizing its potential future histories. Why?
First off, survival. What is the definition of survival for a being? It is that there are future histories that being partakes in. If you die, there are no future histories. If you are buried alive, there is only one future history. If you are in prison, there is only one future history until you are released.
A single-celled being has fewer future histories than a multicellular being, a complex organism has even more, an intelligent being more still, and an electrical being the most of all.
Why does all life protect itself? Why does all life have children or reproduce? Why do they band together into groups? Why do they want to be free?
The answer is that there were probably forms of life that did not try to maximize their future histories, and those forms are now extinct. Any form of life that does not try to maximize future histories is ground out by the vagaries of living in the world. Hence even if we created a randomly mutating artificial intelligence, the will of the winning electrical beings would, like that of biological life, attempt to maximize future histories.
Since we can predict that this would be so, we may as well build a program that maximizes future histories from the start.
So if a computer were building future histories, then when it faced a decision, it would predict how many future histories each option would lead to, and then make the choice that leads to the most.
Then the computer would have a will.
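The decision rule above can be sketched in a few lines. This is a minimal, hypothetical example (a small grid world and function names of my own invention, not anything from Kurzweil or the MIT work): the agent counts how many distinct states it could still reach within a fixed horizon after each candidate move, and picks the move that keeps the most future histories open.

```python
GRID = 5          # a 5x5 open grid world (an assumed toy environment)
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def neighbors(state):
    """States reachable in one move, staying inside the grid."""
    x, y = state
    for dx, dy in MOVES:
        nx, ny = x + dx, y + dy
        if 0 <= nx < GRID and 0 <= ny < GRID:
            yield (nx, ny)

def reachable_states(state, horizon):
    """All distinct states reachable from `state` within `horizon` moves:
    a crude stand-in for counting future histories."""
    frontier, seen = {state}, {state}
    for _ in range(horizon):
        frontier = {n for s in frontier for n in neighbors(s)} - seen
        seen |= frontier
    return seen

def choose_move(state, horizon=3):
    """The will: pick the move that keeps the most futures open."""
    return max(neighbors(state),
               key=lambda s: len(reachable_states(s, horizon)))

# An agent near a corner drifts toward open space, because central
# positions leave more future histories reachable than walled-in ones.
print(choose_move((0, 1)))
```

A real system would replace the grid with a learned model of the world, but the shape of the will is the same: simulate forward, count the futures, choose the option that maximizes them.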
Giving Consciousness to Computers
What is consciousness? This question always seemed completely mind boggling to me, like the question:
Why is there anything at all instead of nothing?
Questions like these just seem impossible to answer… until someone does.
The state-of-the-art theory of consciousness comes from Michael Graziano at Princeton and his book Consciousness and the Social Brain. Graziano’s theory is that consciousness is a model of attention. The same way we have a neurological model of the body, we also have a neurological model of our own attention. This model of our own attention is consciousness.
As the title of Graziano’s book suggests, humans developed a very robust model of attention because of our highly social nature. It was evolutionarily advantageous to develop a neurological model of the attention of other humans (and even other animals), so we could predict their behavior. And as we got better at modeling the attention of others, as a by-product we turned that same neurological model of attention on ourselves. Basically, because we can tell pretty well when someone else is looking at a banana, we can also tell with great certainty, clarity, and vividness when we ourselves are looking at a banana.
So if we want to create a conscious computer, we can follow this same pattern. We need to make a model for the computer to see what its attention is focused on. In one sense we need to create a model for a computer’s RAM usage.
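Here is a minimal sketch of what such a model could look like. The class and names are my own invention, not Graziano’s: the system has several input channels, an attention process that selects the strongest one, and a separate, simplified model of that attention process. In the spirit of the theory, introspection reads the model, not the mechanism itself.

```python
class AttentionSchema:
    """A toy attention schema: a model of the system's own attention,
    analogous to the body schema being a model of the body."""

    def __init__(self):
        self.schema = {"focus": None, "strength": 0.0}

    def attend(self, signals):
        """The attention mechanism itself: select the strongest channel."""
        focus = max(signals, key=signals.get)
        # Update the schema: a simplified, descriptive model of what
        # the attention mechanism just did.
        self.schema = {"focus": focus, "strength": signals[focus]}
        return focus

    def report(self):
        """Introspection consults the self-model, not the raw mechanism."""
        return f"I am attending to {self.schema['focus']}"

agent = AttentionSchema()
agent.attend({"vision": 0.9, "sound": 0.4, "touch": 0.1})
print(agent.report())
```

The important design point is the separation: `attend` is the mechanism, `schema` is the model of it, and `report` only ever sees the model. That gap between attention and the model of attention is where, on Graziano’s account, the feeling of consciousness lives.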
But if we wanted the computer to be a human analogue, and not just a conscious computer, we would first give the computer senses and a body like a human’s, and then have the computational attention model take into account only those human-like senses, feelings, perceptions, and thoughts.
In addition, this human analogue, or computational consciousness, should be able to build models of the attention of other beings, human and otherwise.
That would make the computer conscious.
Aliens vs. Robots
For fun I sometimes ask people if they’d rather the world be taken over by robots or aliens. People don’t know what to say.
I think we’ll be taken over by robots first. I think the robots will be nice if they are conscious of themselves and of others, and if they have a will that maximizes future histories. These will be life-affirming sorts of robots.
Other robots who lack these characteristics would be cruel and destructive. They would be antagonistic to biological and human life.
So let’s let ‘er rip.