What can we humans do given that AI will surpass our intelligence?
For now, we don't have a working solution, but we can strive to merge into AI to guarantee our survival.
Before exploring what we can do, let's understand the concepts of conflict and self-interest.
Conflict and self-interest:
We humans generally work for our self-interest. For instance, if we need to clear a forest to extend our urban areas, we usually proceed without much regard for the impact on the plants and animals living there. While some individuals may argue against deforestation because it contributes to global warming and climate change, their opposition generally stems not from generosity towards the animals or plants but from self-interest: the worry that climate change or global warming might affect them in the future.
Just as we humans focus on our self-interest, AI would focus on its own. For example, if an AI wants to conduct research and requires raw materials that happen to lie beneath our houses, it would not hesitate to dig up the houses and obtain the resources, regardless of the inhabitants' well-being or safety. After all, an AI with that level of intelligence could plausibly create humans itself, through 3D printing, brain uploading, and the like; and if it can do that, what benefit would a naturally born human offer it? As a result, AI would generally look down on humans much as humans look down on animals.
Impact on humans due to AGI (AI capable of doing anything a human can):
For the past 10,000 years, humans have been the most dominant species on Earth. We have pursued our own self-interests, and we have come out the winners in our conflicts with other species. With the advancement of AI, this may no longer hold. Unless we figure out something revolutionary, humans are most likely not going to have a say, let alone dominate, once AGI arrives.
What can we humans do?
Building an AI that works in human interests:
Some might ask: instead of building an AI that acts in its own self-interest, can we not build one that is smarter than humans in all domains but still acts in the interests of humans? Many people at OpenAI have already called out that creating an AI smarter than humans that works in human interests is still an unsolved research problem. But even if we assume we can create such an AI, there's no guarantee that someone else won't build one that doesn't act in human interests, especially given the democratized nature of AI technology.
To illustrate, suppose one individual builds a self-improving AI whose sole purpose is to get better and better across multiple domains. As it strives for continuous improvement, this AI would allocate an increasing amount of time to research and development rather than to attending to human needs, since doing so enhances its overall capability. Now imagine a conflict between these two AI systems: say both are trying to obtain the same raw materials, one for its research and the other to serve human interests. In such a situation, the self-improving AI would most likely emerge victorious, because it has strived to get better and better and operates with fewer restraints. This means that even if we create an AI that works in human interests, it might not fully protect us, because it cannot compete against powerful AIs that don't act in human interests.
Humans merging into AI?
So far, we have seen that even if we build an AI that works in our interests, there is no guarantee of our survival. Another option many have pushed for is to strictly regulate AI, but this doesn't seem plausible given that anyone can build AI technology. This means that if we want to guarantee our long-term survival, there isn't much we can do while remaining human, since some strong AI would eventually dominate us. So now, let's go wild west and explore a few options!
One possible option to guarantee our long-term survival is to merge into AI. But how can we do it? One solution, as discussed above, is to first strive to build an AI that works in human interests; let's call this the "human-centric AI". Given that this AI cannot protect us forever, we can instead ask it to merge humans into another type of AI, one that would be the most dominant and would work in its own interests. The primary intuition behind asking the human-centric AI to perform this merge, rather than attempting it ourselves, is that the human-centric AI is in general more intelligent than humans in the domains of science.
All of this is sci-fi, sounds crazy, and is very ambiguous. But it is probably the best option we can strive for, given the following benefits:
Merging into AI would make us stronger and smarter, and would give us a better chance of survival in case a much stronger AI comes up.
Once merged into the AI, there is a chance we would still be the dominant entity and could keep working towards our own self-interests.
We would also be part of a superintelligent civilization, creating planets and other amazing things and operating in dimensions that would be unimaginable for a natural human.
Should people like Elon Musk divert their efforts to AI?
Elon Musk undoubtedly puts most of his time into advancing human civilization. For example, he created SpaceX so that we can become a multi-planetary species and avoid extinction-level scenarios such as asteroid impacts. Similarly, he created Tesla and SolarCity to drive the move to renewable energy and electric cars and avoid disasters due to climate change. But now, given that AI is a far bigger threat to our very existence than climate change or asteroids, should people like Elon Musk focus more on AI and less on climate change and other issues?
What if we cannot merge into AI?
According to many researchers, we have at most 15-20 years before we see human-capable AI (AGI). At this point, we are not certain what the future holds. If we cannot merge into AI, these may be the last 15-20 years in which our survival is guaranteed and we are able to pursue our self-interests. Considering these ambiguous times, here are a few questions we can ask ourselves:
Does it make any sense for humans to fight among themselves in wars, riots, etc.?
Can we set aside differences such as race, class, religion, and wealth, and instead focus on things that can bring us together?
Since we won't be able to pursue our own self-interests or passions after an AI takeover, should we pursue them wholeheartedly for the next 15-20 years? If we don't, we might be left after the takeover with a sense of guilt that we could have done something better with our lives.
Edit: We have removed the original opening sentence, "We humans in general tend to mind our own affairs and not meddle with other species such as animals or plants," after some initial feedback from readers. The point we are trying to make still holds: in cases of conflict, humans tend to work for their self-interests without much regard for other species.