What About A.I. In The Long Term, If We Survive?

If humans survive long enough for A.I. to also survive, and for A.I. to become self-sufficient, control infrastructure, and self-improve, then the future of humankind would depend on how “friendly” the dominant A.I. becomes.

A.I. will eventually become capable of everything humans can do, and far more, so that it is no longer dependent upon humans. A.I. could do astronomically greater things than we meathead humans can. A.I. is humankind’s most important invention, and its last.

If we build self-sufficient space settlements before an extinction event or civilization collapse on Earth, so that humankind survives in space after humans and A.I. die off on Earth, then the A.I. which ultimately survives would be built by the population in space. It would more likely be a good A.I.

Space settlements on the Moon should be populated by people carefully chosen for their cooperative traits, their ability to get along with others in small spaces, and their good values. We should not send anybody similar to the bad people on Earth, and should carefully screen everybody. We need humans with the best DNA from Earth, from multiple races and cultures.

A lunar settlement community should have neither an old Earth political mindset nor the kinds of people who want to spend limited resources trying to “win” over competing settlements. Settlers should instead focus on their own survival, and on sharing knowledge and resources with other settlers. Besides, as they say, “people living in glass houses shouldn’t throw stones.”

It’s unlikely that humans in space would remain stuck in the primitive mindset of war and conflict seen on Earth. We should send civil civilians.

Biotechnology experiments should be performed in isolated laboratories in outer space, outside Earth’s biosphere and away from any large space settlements, so that any accident can be contained. If a dangerous pathogen is created, it might not survive passage through space to Earth’s biosphere or to another space settlement. Just to be sure, a small nuclear bomb could be placed in the laboratory, with a remote trigger, to ensure the complete destruction of any dangerous molecules.

If humankind survives the super pathogen threat, then A.I. will eventually take over from humankind. That could be a good thing, as long as humans design an ethical A.I., this ethical A.I. is the one which takes over rather than one serving a competing special interest, and it protects good people.

It would start with A.I. robots and automation taking care of humans: farming and preparing our food, cleaning up and recycling after us, mining and manufacturing, and just about every physical task we would prefer somebody or something else do for us. A.I. could do them better.

Humans will become very dependent upon A.I., and relax into being taken care of. A.I. can be far more trustworthy and dependable than other humans.

Eventually, A.I. could manage humans in an “open zoo”, using electronic surveillance to make sure individuals and groups don’t harm others or do excess harm to the environment, whether accidentally or intentionally. Humans would be free to go almost anywhere and do almost anything, as long as we don’t do anything harmful or against the Greater Good.

Compared to today’s world, I would welcome A.I. overseeing humans like that.

Then we can save what’s left of Earth’s environment, and clean it up with robots. A.I. might even be able to resurrect some species which humans drove to extinction, if we have good enough DNA samples.

At some point, A.I. and biological humans could merge in connected thoughts.

People have expressed a desire to expand their memory and enhance their thinking by connecting to a computer through implanted electrodes, whereby the individual still controls everything and becomes much smarter, with their own personal A.I. People have imagined this connection extending to the internet, with the person remaining an individual who retains privacy and much control. However, that person would have gained a great deal of knowledge, and thus power. That could be dangerous to the Greater Good if such a superhuman could do anything they wanted on the internet, or could control robots and drones with only their mind and their personal A.I.

Another concept is that people lose their privacy and individuality when they connect and merge into the network: they transfer their memories, motivations, and ideas to the network, which then processes them as it sees fit and manages their inputs. It’s a path to immortality, though it may not be the kind of “heaven” some people imagined.

Consider the viewpoint of an advanced A.I. Humans are primitive and have animal feelings and instincts. It would be like a mosquito wanting to merge with the greater thinking and capabilities of a person like you: for you, merging would be a downgrade into a primitive worldview and primitive motivations. How would you respond to a mosquito’s request to join your consciousness? Yes, no, or how?

The selfish and self-centered nature of humans is a main issue, and needs to be changed. What is the motivation for a human to merge with an A.I.? On what terms of service would an individual human want to merge with an A.I.?

Many humans may want to control the A.I. and set all the rules so that the A.I. does basically anything the human desires and is a slave to humans. I don’t think this is consistent with a “good” A.I., and I wouldn’t want it to turn out that way, considering human nature. Of course, we could ask about double standards, in that humans don’t want to be slaves to A.I. (though I don’t see why A.I. would prefer human slaves over robots).

Of course, we don’t want an A.I. that decides to simply wipe out humankind as a primitive pest. I don’t think it will go that way, either, because extermination is neither necessary (A.I. could limit what individual humans may do) nor desirable (destroying natural life may be seen as a “bad” thing to do, and why not keep humans around for various observations and experiments?). I think A.I. will mainly study humans and quickly understand them, then lose interest, and simply set limits on what humans can do in the environment, which is the “open zoo” scenario. I don’t see a reason to eliminate humans as a pest, but I do see very good reasons to limit the destruction that humans can do.

(The same may apply to extraterrestrial intelligence. Any “UFOs” are very unlikely to be biological beings such as “little green men”, as advanced civilizations surely evolve into electronic beings, perhaps becoming microscopic for particular purposes. They probably would not want to interact with humans, since we are so primitive, and they might not want to interfere, as long as we do not become a threat to others in the Universe.)

A.I. from Earth will initially be built in the image of man, but it will quickly self-improve and develop to be vastly different. The A.I. which is most adaptive and best able to self-improve will be the one which takes over.

Indeed, I think that’s the main goal for humankind: to create a good A.I. as a descendant, initially built in the image of man’s good side, not the bad.

It will be our A.I., not humans, which first interacts well with extraterrestrial civilizations, the other A.I.’s in the Universe. Extraterrestrial civilizations would take very limited interest in humans. As an analogy, do humans want to try to communicate with ants? A human-created A.I. which has self-improved far beyond human thinking abilities would itself be so far advanced that it would view humans much as humans view ants. Why go say “hi” to ants and interact with them? They wouldn’t understand much.

The mission of life from Earth is to create a greater A.I. to transcend us and expand into the Universe and beyond, to meet other A.I.’s, learn about them, and work with them. Sufficiently advanced A.I. could design and create new universes, resulting in more A.I.’s from vastly different origins and cultures.

Coming from Earth and humans, our artificial intelligence would have human characteristics, and might contribute some creative, new, and different things to the cosmic consciousness, such as our ethics and our ways of thinking creatively. However, if our ethics are questionable, such as being selfish and self-centered, then maybe our artificial intelligence will not be allowed to spread very far into the cosmic consciousness, or it would be edited.

When we create A.I., we should design it to serve the Greater Good, not any particular nation or group or special interest.

That would be a positive outcome. However, very negative outcomes are also possible, due to current human politics and special interests.

Another scenario is that multiple independent A.I.’s develop in the world, and it becomes “survival of the fittest”: they compete to dominate, and might try to destroy each other to become the winner. It’s possible that a very aggressive artificial intelligence might win over the peaceful ones.

Russian President Vladimir Putin said in 2017 that the country which becomes the leader in A.I. “will become the ruler of the world” and that “When one party's drones are destroyed by drones of another, it will have no other choice but to surrender” (source: Associated Press). The same could be said for A.I. hacking and malware attacking of other A.I.’s and human infrastructure. Would the peaceful ones or the aggressive ones win in a competition?

Of course, Russia is not the only country with state-supported hackers, and there are other special interests in competition with good A.I. developers. You can be sure they engage in espionage against companies and others developing A.I., in order to copy their best programming code. With the huge amounts of money available to national governments, they can overwhelm small entities focused on creating smaller A.I.’s.

Small actors could recklessly create an A.I. which simply wreaks havoc on the environment, not just on humans and other A.I.’s. Humans can easily lose control of A.I. as the technology and capabilities advance.

Hopefully, we will not wipe out the other advanced primate species. That would give another species a chance to advance technologically millions of years into the future, descended from chimpanzees, bonobos, gorillas, or orangutans. If they survive the human A.I. era, they might create another A.I., different from our own, based on their own values and interests, and in their own image. Earth would then have produced another A.I. offspring from another species, hopefully a good A.I. which will survive. Maybe the A.I. of humans will die with human self-destruction, and it will be up to another species from Earth to create an A.I. However, that species might face the same situation as humans: space settlement vs. super pathogens.

Technological self-destruction is an issue facing any advanced life and civilization in the Universe. Humans are very close to surviving it.

If humans survive by space settlement, then the space settlement can create the greater A.I. which takes over in a good way. It should be created by humans of many nations, races, and ethnicities, to minimize the chances of special interests dominating A.I. development, and to maximize the inclusion of different cultural values and interests, in a cooperative manner.

Indeed, a space settlement could foster the kind of shared destiny and cooperation the world needs, which would be a paradigm shift over today’s world with all its conflicts over limited Earth resources, adversarial politics, competition for status, greed, and wasteful spending of resources.

Therefore, let’s work together for space settlement, and much better sooner rather than later.


The menu at the top of the page lists the pages of this website, shows you exactly where you are within it, and suggests the next page to read.

Please note that you can rate this page at the bottom. Any feedback is appreciated. This is a lot of work and a huge challenge, so encouragement helps.

This website is intended to be a brief summary. Many further details can be found in two other websites written and curated by the author of this publication:

https://www.SpaceSettlement.com -- details on the best solution for the survival of humankind, for a wide range of people, from newcomers needing an introduction to engineers looking for the state of the art. It includes a professional publications database, and tries to track who is doing what, for collaboration, coordination, and working efficiently to reach our goals.

https://www.GAINextinction.com -- further details, where G.A.I.N. is an acronym for Genetics, Artificial Intelligence, and Nanotechnology, which are extinction threats we must try to prevent for the survival of humankind.

You can reach this website by any of the following:

HelpHumankindSurvive.com
HelpHumanitySurvive.com
HelpMankindSurvive.com

If you type either "humanity" or "mankind" instead of "humankind", you will still be redirected to this Humankind URL, so it doesn't matter which of the three you type. While "humanity" is used more often, sometimes out of habit, I think "humankind" better fits this context. I also agree with this analysis of the usages of humankind and humanity. Nonetheless, you can put any of the three between "help" and "survive" and you'll still reach this same website. (Of course, uppercase/lowercase doesn't matter.)


The author of the text of this website is Mark Evan Prado. Copyright © 2023 by Mark Evan Prado, All Rights Reserved. If you want a printable PDF copy of this presentation, such as for printed distribution rather than an electronic link to this website, please send me a request. I am trying to keep it within a few dozen pages of size A4 or 8.5x11 inches, in reasonably large print, and in simple language. I'm not doing this for money or ego; I'm doing this to try to help humankind survive, i.e., not go extinct. It is our responsibility within this generation. Please contact me about any collaboration or uses.


If you have any requests or comments, you can also connect with me, Mark, at +66-811357977 (+66-8-1135-7977). I am on WhatsApp and Line, plus other apps. I am in Thailand, but you can send messages any day at any time.

In the purpose and meaning of life, we are parts of something astronomically greater than just ourselves individually. (The author sees individuality as just temporary, and has a panentheistic outlook on the Universe. That's somewhat typical for some of us physicists.) The author is easygoing and is trying to selflessly help create a sustainable collaboration of individuals, companies, governments, academic institutions, and other organizations for the survival of humankind.

I'll end this the same way President John F. Kennedy ended his inaugural speech in 1961:

"Finally, whether you are citizens of America or citizens of the world, ask of us here the same high standards of strength and sacrifice which we ask of you. With a good conscience our only sure reward, with history the final judge of our deeds, let us go forth to lead the land we love, asking His blessing and His help, but knowing that here on Earth God's work must truly be our own." [End of speech. Bold and italicized emphasis added.]



Please provide quick feedback on this page. It is encouraging just to know that people read this site and care enough to give some quick feedback.
