by Mariana Bozesan
Elon Musk is a well-known critic of unsafe and unethical AI. He argues that, if we do not pay attention, the percentage of nonhuman intelligence (i.e., AI) on our planet will continue to grow until it supersedes human intelligence, with potentially dire consequences.[1] Echoing the late Stephen Hawking, Musk warned of the dangers of AI, saying, “We need to be super careful with AI. Potentially more dangerous than nukes,”[2] while expressing his frustration at his futile efforts to get governments to regulate it.[on YouTube]

During a panel with Alibaba’s founder Jack Ma at the 2019 World Artificial Intelligence Conference, Musk stated, “Most people underestimate the capability of AI” and “the biggest mistake AI researchers are making is to assume that they are intelligent.”[on YouTube] In his view, the difference between AI and humans in the future will be like the difference between current humans and chimpanzees. Countering Musk’s assessment, Ma argued that “only college people are scared of AI, street-smart people [like him] are not,” because once people begin to understand themselves better, they can improve the world. Musk jokingly called Ma’s statement “famous last words” and argued for the importance of fighting to preserve human consciousness. He added, “If you can’t beat them, join them,” which is one of the reasons he invested in Neuralink, a company that creates brain-machine interfaces aiming to enhance the bandwidth and other capabilities of the human brain.

This is obviously an emotive topic. To understand what “join[ing] them” means, we must understand what is at stake, what we are trying to preserve, what we are fighting for, what consciousness is, and what human intelligence is. More important, we must understand what AI is, the dangers associated with its development (starting with AGI and ASI), and what the rest of us, particularly investors and company builders, can individually do to “secure the future of consciousness such that the light of consciousness is not extinguished” without going to Mars.[see on YouTube from minute 12:19]
As in so many other areas, Musk has shown us the way. In January 2015, he donated US$10 million to the Future of Life Institute, an organization founded by Max Tegmark, Jaan Tallinn, Anthony Aguirre, et al., to keep “AI beneficial for humanity,” jump-start AI safety research, and make sure AI is regulated before it spirals out of control.[here] After agreeing that superintelligence presents a clear and present danger to humanity, in January 2015 the “world’s top artificial intelligence developers sign[ed an] open letter calling for AI-safety research,” which on January 6, 2017, led to the development and adoption of the 23 Asilomar AI Principles.[here and here] These principles acknowledge the benefits of AI without being blinded by them. I have also signed them, and I encourage everyone to do the same and to adhere to them. (excerpt from my book Integral Investing)