The science of human consciousness offers new ways of gauging machine minds – and suggests there’s no obvious reason computers can’t develop awareness.
You don't need to understand a system to see what it produces.
That's actually exactly how neural networks work today.
All we know is that for a given input it produces a given output. The actual formula it uses to calculate that output is massively obfuscated.
For example, researchers are using machine learning to predict fluctuations in magnetic fields. We don't know the governing equations, just the starting state and the ending state. The AI can still do the calculation even though we can't.
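The black-box point can be made concrete with a toy sketch (this is an illustration, not any specific research system): a tiny network learns to approximate sin(x) purely from input/output examples. Afterwards we can query it for predictions, but what it "knows" is just a pile of opaque weights, not a readable formula.

```python
import numpy as np

# Minimal one-hidden-layer network trained to approximate sin(x)
# from examples alone. Hyperparameters here are arbitrary choices.
rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(x)

# The "formula" the network learns is just these numbers.
w1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
w2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(10000):
    h = np.tanh(x @ w1 + b1)      # hidden layer activations
    pred = h @ w2 + b2            # network output
    err = pred - y
    # Gradient descent on the mean-squared error.
    dh = (err @ w2.T) * (1 - h**2)
    w2 -= lr * (h.T @ err) / len(x); b2 -= lr * err.mean(axis=0)
    w1 -= lr * (x.T @ dh) / len(x); b1 -= lr * dh.mean(axis=0)

# We can get input -> output without any closed-form expression:
test = np.array([[0.5]])
out = np.tanh(test @ w1 + b1) @ w2 + b2
print(out)  # an approximation of sin(0.5), produced by opaque weights
```

The point is that inspecting `w1` and `w2` tells you almost nothing, yet the mapping works.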
If we create an AI that performs as we want, we don't need to understand its internal workings.
In the same way, we have effective therapies even though we don't fully understand how the brain actually works.
It won't be less. Computers and machines already outperform us at a huge array of tasks.
Computers massively outperform us at doing maths. Cars outperform us in speed of travel.
It's the whole point of technology.
We will one day be capable of creating systems that think and understand humans better than we do.
We could engineer artificial flight without having a precise understanding of natural flight.
I think we don't need to understand how consciousness develops (unless you want to recreate exactly that developing process). But we do need to be able to define what it is, so that we know when to check the "done"-box. Wait, no. This, too, can be an iterative process.
So we need some idea what it is and what it isn't. We tinker around. We check if the result resembles what we intended. We refine our goals and processes, and try again. This will probably lead to a co-evolution of understanding and results. But a profound understanding isn't necessary (albeit very helpful) to get good results.
Also, maybe, there can be different kinds of consciousness. Maybe ours is just one of many possible. So clinging to our version might not be very helpful in the long run. Just like we don't try to recreate our eye when making cameras.
Our consciousness has developed by chance. We were not made by another conscious species, we did not make ourselves conscious.
We are feeding enormous amounts of data into neural networks using different methods. Our nervous system does not differ much from an artificial neural network, and under the right conditions the resulting model may develop consciousness. Those conditions are not known to us, so we try again and again and again until, by pure chance, we get a network that is self-aware. I suspect that the higher the complexity of the network, the higher the chance for something similar to our consciousness to develop.
With this, our current approach is entropy: get as many differing conditions as possible and mash them together. It may spiral into consciousness.
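That "vary the conditions and retry" strategy is essentially random search. Here is a purely hypothetical sketch: the condition names and the scoring function are made up, since nobody has a real test for machine consciousness — the stand-in score just illustrates the loop of sampling conditions and keeping the best result.

```python
import random

def sample_conditions():
    # Randomly sampled "conditions" -- hypothetical knobs, for illustration.
    return {
        "layers": random.randint(1, 10),
        "width": random.choice([64, 256, 1024]),
        "data_mix": random.random(),  # e.g. fraction of one data source
    }

def score(conditions):
    # Placeholder criterion. In reality this would mean training a model
    # under these conditions and probing the result somehow.
    return conditions["layers"] * conditions["data_mix"]

random.seed(42)
# Mash many differing conditions together and keep the highest scorer.
best = max((sample_conditions() for _ in range(100)), key=score)
print(best)
```

The hard part, of course, is not the loop but the `score` function — which is exactly the "we need some idea what consciousness is and isn't" problem raised above.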