
INTERVIEW WITH THE AUTHOR, JIAQI HU

1. You have studied human issues for almost 40 years. What motivates you to never give up?

When I was in primary and middle school, China was in the Cultural Revolution era, during which the education system was brought to a virtual halt. In addition, I grew up in a remote village. As a result, I knew little about science and technology before I went to university in 1979. There I suddenly came into contact with a great deal of knowledge I had never encountered before. At that time, I learned that the former Soviet Union had detonated a nuclear bomb that released energy roughly equivalent to 70 million tons of TNT, and that it was possible, both theoretically and technologically, to develop nuclear weapons with yields of hundreds of millions of tons or even more. That would mean a single bomb with the power of all the high explosive carried by a train circling the earth. The destruction would be unimaginable. All of this made me wonder: if science and technology continued to develop, would humans become extinct one day? It would hardly matter if science and technology took thousands or tens of thousands of years to drive humans extinct, but what if it happened in the near future? Is there any issue more important than that? I quickly realized this was a question worth studying, and I have studied it for almost 40 years since. As my research has progressed, I have become increasingly convinced that the problem is serious, vital and pressing.
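
The "train circling the earth" image can be sanity-checked with a rough calculation. The figures below (a hypothetical 300-megaton yield and the equatorial circumference) are illustrative assumptions, not values from the interview:

```python
# Back-of-envelope check of the "train circling the earth" image.
# Assumptions: a hypothetical warhead of 300 million tons of TNT
# ("hundreds of millions of tons") spread along the equator.
EARTH_CIRCUMFERENCE_M = 40_075_000   # metres, equatorial
YIELD_TONS_TNT = 300_000_000         # assumed yield, tons of TNT

tons_per_metre = YIELD_TONS_TNT / EARTH_CIRCUMFERENCE_M
print(f"{tons_per_metre:.1f} tons of TNT per metre of track")
# roughly 7.5 tons per metre, comparable to a heavily loaded freight train
```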

 

2. You think the continued development of science and technology will inevitably result in human extinction in the near future. What is the basis for this conclusion?

In fact, nuclear weapons are a terrible means of destruction, but they would not cause human extinction, because the energy of a nuclear explosion is released at a single point. Some scientists suggest that if all the nuclear weapons in the world were detonated, a nuclear winter would follow in which billions of people would die. Yet some people would survive in the end.

According to current scientific theory, weapons far more terrible than nuclear weapons can be developed. For example, biotoxins produced by transgenic technology could target humans' vital organs and would have greater destructive power than nuclear weapons. Another example is AI, which attracts considerable attention at present. If the technology is used to build intelligent killing machines, they could escape human control and independently decide whom to kill; worse still, such machines could be self-replicating. Isn't that terrifying? Moreover, AI will sooner or later acquire consciousness similar to ours; that is, it will eventually have human-like thinking ability. A machine with human thinking ability but a response speed tens of thousands or even hundreds of millions of times faster than our brains would far surpass us in handling complicated problems. As we know, higher organisms look down on lower organisms, and may even regard them as food. Once AI truly reaches that stage, it will be absolutely impossible for humans to control it. In theory, the further development of many branches of science and technology could cause human extinction. What is more alarming is how recently humanity's great enthusiasm for science and technology began: only with the Industrial Revolution just over 200 years ago. In that time we have taken science and technology from a very low level to today's incredible heights, and many more such 200-year spans lie ahead of us. Simple reasoning therefore suggests it will not take long for science and technology to drive humans extinct. My conclusion: science and technology will extinguish humanity within 200 to 300 years, or even within this century.

 

3. If science and technology really have the potential to drive humans extinct, what should we do?

The continued development of science and technology will inevitably drive humans extinct, and soon; that is my first conclusion. It also tells us what to do: if we want to avoid human extinction, we must strictly limit the development of science and technology. The logic is very simple.

 

4. Science and technology are involved in everything around us. Does your view amount to a complete rejection of science and technology?

Absolutely not. Science and technology is a double-edged sword: sometimes it benefits humans, sometimes it can destroy them, and as its power to benefit grows, so does its power to destroy. What worries me is that its limitless development will drive humanity to extinction and make humans disappear completely. When I say we must strictly limit the development of science and technology, I mean as a whole. For achievements that are definitely safe and mature, we should help them spread across the world rather than limit them, because without science and technology it would be difficult for humans not only to accomplish many things but even to survive.

 

5. So how should we limit the development of science and technology?

I don't think it is possible in today's society. The present social formation is a country-based society, in which the highest power is the state. Countries compete for all kinds of benefits, and competition between countries often ends in war, where people die. Both economic and military competition are, in essence, competitions in science and technology, because science and technology are the primary productive forces. Thus no country will limit its own scientific and technological development for the benefit of humanity, because a country that fails in the competition will perish. I believe that only when human unification is realized can the power of a world government achieve the goal of strictly limiting the development of science and technology. Countries stand in competitive relationships with one another, whereas a world government would belong to the whole world and would consider and handle the issue from the perspective of humanity as a whole.

 

6. Humans have always been governed separately. Is it possible to realize human unification?

It now seems very difficult to realize human unification, because it would affect many people's interests. But imagine an alien invasion: facing a common enemy, we would unify, knowing that without unity we would be destroyed. In fact, we already face a comparable threat of total destruction: the development of science and technology will soon drive humans extinct. Why, then, is unification so difficult? Because this threat has not yet been generally recognized. If we can spread this idea across the world and help everyone truly understand that humanity has a future only if we unify to limit the development of science and technology, and that otherwise we will be completely destroyed, then once the idea is widely accepted, human unification will become promising.

 

7. Even if human beings achieve unification, how can we guarantee, in a changeable society, that humanity will not one day divide again and resume the reckless development of science and technology?

This is a very good question, and it is also the most difficult problem we face. My answer is as follows. If human beings achieve unification, we must impose strict restrictions on science and technology, but that is only one aspect. The other, more important aspect is to permanently seal up the advanced science and technology that may be harmful to humans, especially the underlying scientific theory, until it is eventually forgotten. Then even if human society were to divide again and science and technology were developed anew, it would still take a long time to reach a dangerous stage; by the time humans awakened again to impose restrictions and seal up technology, science and technology might never reach the level at which it could drive us extinct. My research contains a series of designs for the future society. For example, the future society should be peaceful and friendly rather than highly competitive like today's. A non-competitive society is not only conducive to people's happiness; it can also restrain the dizzying development of science and technology. I have also designed the corresponding systems of the future society, including the political system. I divide political systems into two types, centralized and decentralized, and it is hard to say which is better. The advantage of a decentralized system lies in its checks and balances: power cannot expand without constraint, and the transfer of power is comparatively orderly, though its policy execution and policy continuity leave something to be desired. The advantage of a centralized system is its strong enforcement and good policy continuity, but over-centralized power can result in dictatorship and in a ruler's absurd, irrational behavior that is difficult to stop.
My design for the future political system aims to combine the advantages of the two. In this way, the future society can safeguard checks and balances and the smooth transition of power while retaining strong enforcement and policy continuity. I believe such designs are conducive to the long-term stability of the future society and the long-term unification of the human world.

 

8. Who do you think can play the biggest role in saving humanity?

Saving humanity is humanity's own great task; we have to rely on ourselves. I think the most capable people are the leaders of the major countries. I have therefore written many times to the leaders of large countries such as China, the United States and Russia to appeal to them. As a scholar, what I can do is spread my research results.

 

9. You have said that you encountered incomprehension and many obstructions while studying human issues over the years. Isn't it a contradiction that your own company works on science and technology as well?

My book Saving Humanity was published over a decade ago; after only two days it was ordered to stop publication. Over the years I have delivered speeches at schools and institutions and published many articles. I have written to many national leaders, influential research institutions and scholars to express my views, but few people agree with me. As a matter of fact, there is not much time left for humanity. Every step we take today pushes humans closer to the abyss of extinction, which is why I am so anxious.

Science and technology have developed rapidly over the past ten-plus years. Recent achievements such as AI, in particular, have made many scientists realize the extinction risk this technology might bring. Some scientists have even called for the rational development of AI, but they are a minority. People do not realize that it is not any single technology that will drive them extinct; rather, continued development as a whole will certainly do so. Without AI, some other technology would drive humans extinct. It is time for us to act.

The last thing I want to say is that big companies conduct scientific and technological research in their own fields. Even if my company did not conduct such research, other companies would. And if our company can go further in science and technology, we may be able to lend a hand when humanity encounters unexpected incidents in the future.


Second Edition Preface Revised

Hawking Raised Three Views Identical to Mine

Today marks the eleventh anniversary of the publication of the first edition of my book Saving Humanity. In these eleven years, science and technology have developed rapidly, and human society has changed drastically as well.

Thinking back eleven years to when Saving Humanity was first published, almost no one concurred with all my core views, and even those who agreed with a single core view were few and far between. However, in the short span of eleven years, a mere instant in human history, the development of science and technology has greatly changed how the world is perceived. Some of my predictions of that time are becoming reality today, and some well-known scientists and scholars have come up with views similar or even identical to my own.

I chose “Hawking Raised Three Views Identical to Mine” as the preface title for the second edition of Saving Humanity in order to use Stephen Hawking, the most famous physicist of our day, to illustrate my points. In reality, many scientists have proposed ideas similar to mine over the past eleven years; I use Hawking as the example solely because of his fame.

Stating that “Hawking Raised Three Views Identical to Mine” obviously uses Hawking's reputation to underline the importance of my research; that I do not deny. The reason I say “Hawking raised three views identical to mine,” and not the reverse, is that my views came first and Hawking's came later; indeed, I proposed mine much earlier than he did. All three viewpoints are clearly elaborated in the first edition of Saving Humanity, published in 2007.

The first point: in April 2010, Hawking stated that aliens almost certainly exist, that we should avoid them and never attempt to contact them, because contact would likely result in the destruction of humanity, human beings being no match for aliens. Hawking's reasoning and examples are exactly the same as mine, but came three years later.

The second point: in December 2014, Hawking pointed out that artificial intelligence will eventually develop self-consciousness and replace human beings, since technology develops faster than biological evolution; once artificial intelligence matures, it may cause humanity's extinction. Hawking's reasoning and examples are essentially my own, but came seven years later.

The third point: in March of this year (2017), in an interview with the London Times, Hawking observed that without proper control, artificial intelligence is likely to destroy humanity. He reasoned that we must identify such threats faster and act before losing control, and that to do so some form of world government must be established. Once again, Hawking's reasoning and examples echo mine, this time a decade later.

The above viewpoints were certainly not borrowed from Hawking; more likely he referenced me. Not only did my ideas come first, but my related works and articles had been sent to many national leaders and a number of institutions, and the electronic versions have been published in both Chinese and English on multiple sites, not only in China but also on scientific, social and political websites in the United States, Britain and other countries.

What I am trying to say is that the world's top scientists are starting to converge on my core views, and these core views are essential to the continued survival of humanity. What matters most is striving for the rest of the world to accept them and take appropriate action. At the same time, I believe Hawking's views are still not deep, comprehensive or thorough enough.

In 1979, shortly after I started college, a strange idea occurred to me. Science and technology is a double-edged sword, with the potential both to benefit and to destroy mankind; if it continued to develop at its current rate, could it eventually drive humanity to extinction? The possibility of human destruction through technology in the near future is undoubtedly a matter of enormous consequence, so is a solution possible? I quickly realized that this was a worthwhile subject, one of those rare things worth dedicating one's life to, and I soon decided it would be the sole cause of my life. I left my government job to pursue business in order to fund the research for this cause and to gain better conditions for promoting and furthering my study.

A few decades have gone by, and this cause is no longer merely a career for me but a responsibility and a mission. That is because my research has yielded truly terrifying results: humanity's development has taken a fundamentally wrong turn, one disastrous enough to end mankind and push us into extinction once and for all.

It is very challenging to rigorously prove that science and technology will destroy humanity and to find a solution to this crisis; it took me nearly twenty-eight years. I finished my book Saving Humanity in January 2007. While this 800,000-word work was still in sample form, I delivered it to 26 world leaders, among them the Chinese president, the president of the United States and the UN Secretary-General. Apart from one phone call from the Iranian Embassy, I received no feedback.

In July 2007, Saving Humanity was published in two volumes by Tongxin Publishing House, but was ordered to stop issuance after just one day.

I later published the book The Greatest Question, and two other books, On Human Extinction and Saving Humanity (Selected Works), in Hong Kong, as well as numerous articles. I also put the Chinese and English versions of my articles and Saving Humanity (Selected Works) online, on a website built specifically for this purpose under the name “Hu Jiaqi Website”. To promote my views and suggestions, I wrote numerous letters to the leaders of the major world powers, the UN Secretary-General and other relevant agencies, and I have lectured at many universities and research institutions.

Over the years I have exchanged opinions with many people, but my core views are often dismissed as groundless worry, and some people bluntly call them fallacies.

Although I am pleased that many scientists and scholars today have offered views similar to mine, it is regrettable that my core views have not been seconded or recognized by any major figures. I am deeply anxious about this, since humanity truly does not have much time left.

Through these many years of research, I have formed a few core views and a series of secondary views. In this second edition preface of Saving Humanity, I will discuss only my three core views of practical significance; the other core views are discussed further in the book. My first point of practical significance is this: the continued development of science and technology will soon destroy humanity, in two or three hundred years at best, and by the end of this century at worst; I believe the latter to be more likely.

Similar views have been broached by others. In May 2013, Oxford University's Future of Humanity Institute published research, conducted by a dedicated panel of mathematicians, philosophers and scientists, concluding that humanity could go extinct as early as the next century, with science and technology the chief culprit.

The institute's director, Nick Bostrom, commented in an interview: “Humanity's scientific capability is in a major race with humanity's wisdom in using that capability, and I fear the former may far outpace the latter.” (What Bostrom worries about is what I call the phenomenon of evolutionary imbalance, on which I have published a dedicated article.)

Bostrom further elaborated that threats like planetary collisions, super-volcanoes and nuclear explosions are not enough to imperil the survival of humanity; the biggest threat to mankind comes from the “uncertainties” brought on by technological innovations such as synthetic biology, nanotechnology and artificial intelligence, as well as scientific achievements that have not yet emerged.

When I learned that Oxford's Future of Humanity Institute had reached the same conclusion as I had, I was overjoyed. I immediately wrote a letter addressed to Professor Nick Bostrom, Director of the Future of Humanity Institute, together with an article titled “Finding a Common Voice”. I translated both into English, sent them to Bostrom and published them online. To better get his attention, I even used my title of Beijing Mentougou District CPPCC member, but I never received a reply.

In early 2016, the intelligent program AlphaGo, developed by Google, defeated the South Korean Go master Lee Sedol, shocking the world and fully demonstrating that machines possess deep learning ability, a chilling thought on further reflection. Soon after, world-renowned figures such as Hawking, Musk and Bill Gates pointed out that the development of artificial intelligence could be the biggest threat to human survival. The fact is, once artificial intelligence acquires human-level thinking ability and self-awareness, its response speed will be thousands, tens of thousands or even hundreds of millions of times faster than ours. The simple rule that higher species destroy lower species tells us that humans cannot hope to control artificial intelligence once it reaches such a stage.

In May 2016, Bill Gates pointed out in a Reddit AMA: “In a few decades, artificial intelligence will be strong enough to warrant concern… I have no idea why some people are completely indifferent to this.” This prompted me to take up the study of artificial intelligence within my own company: so many people are studying it already that only by researching it myself can I keep track of its movements, and if humanity really encounters unexpected disaster in the future, perhaps I will be able to help in some way.

My second point of practical significance is this: only through human unification, that is, the establishment of a world government and a world regime, can we firmly grasp the direction of scientific and technological development. This is the “world government” approach Hawking proposed for controlling artificial intelligence.

The reasoning for this view is that no country can wholeheartedly control and regulate science and technology, since countries are in constant competition and failure could mean national ruin. With science and technology being the primary productive force, competition between countries is essentially a technology race; who would sacrifice a national competitive advantage for the overall benefit of mankind? Not even the United Nations could bring about such change; only the unification of humanity and the establishment of a world government could. A world government considers things at the level of all mankind rather than from a national standpoint; freed from the pressure of inter-state competition, it could regulate world order through the power of a global regime.

My third point of practical significance, and the one that still lacks influential backing, is that we must strictly limit the development of science and technology in order to prevent human extinction. Because of the uncertainty inherent in technology, the more advanced a technology is, the harder it is to control and regulate. Not even the world's top scientists can accurately predict the consequences of scientific discoveries; even Einstein, Newton and Hawking have made errors of scientific judgment, and many technological developments have already brought disaster to mankind.

With science and technology already at a hazardous height and still climbing, the risks are difficult to predict. Distributing and sensibly managing the safe, mature scientific achievements we already possess, on a global level, would be more than enough to guarantee a comfortable existence for mankind. If we keep making unchecked demands on technology, human extinction will not be far off. The fact of the matter is that while some high-tech developments may be harmless to humans, not all of them are; we may be able to control the development and use of some, but there is no way to control them all. He that touches pitch shall be defiled: accidents are bound to happen, and one big scientific blunder could be the end of humanity. Hawking believes that controlling artificial intelligence is enough to solve the problem, but we can only control it for a time; is there any guarantee the control will last forever? And even if we could control one technological advance, how could we possibly control them all? Only by using the power of a world government to rigorously screen and popularize mature, established scientific achievements, to permanently seal and eventually forget all other advances along with their underlying theories, and to strictly limit scientific research, can humans ensure a continued existence. This is precisely where I find Hawking's views lacking in depth and comprehensiveness. Perhaps people will think my ideas absurd, but I firmly believe them to be the truth.

What needs to be clarified here is that restricting the development of science and technology does not deny the positive contributions science has made to humanity; it reflects only the concern that its negative effects could destroy mankind. Nor is this a call for China or the United States alone to restrict, or to lead the way in restricting, scientific development; it must be a synchronized global effort.

I would also like to touch on the issue of aliens, where Hawking and I again share similar views; this is one of my secondary points.

I believe that aliens certainly exist, as there are hundreds of billions of galaxies in the universe, each housing hundreds of millions of stars or more (the Milky Way alone contains two or three hundred billion). Only planets orbiting a star can nurture intelligent life, so the probability for any one star is very small, but the total number of candidates is so large that the overall total is still fairly high.
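
The "tiny probability, vast total" argument is just an expected-value calculation. A minimal sketch, in which every figure is an illustrative assumption rather than a measured value:

```python
# Illustrative expected-value sketch; all figures are assumptions chosen
# only to show the shape of the argument, not measured quantities.
GALAXIES = 2e11              # "hundreds of billions of galaxies"
STARS_PER_GALAXY = 2e11      # Milky-Way-scale star count per galaxy
P_LIFE_PER_STAR = 1e-15      # an assumed, extremely small probability

# Even a vanishingly small per-star probability, multiplied by an
# enormous number of stars, yields a large expected total.
expected = GALAXIES * STARS_PER_GALAXY * P_LIFE_PER_STAR
print(f"Expected civilisations: {expected:,.0f}")  # tens of millions
```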

However, it is extremely difficult for alien life to traverse interstellar distances and reach Earth. The distances between stars are measured in light-years; for example, the Sun's closest stellar neighbor, Alpha Centauri, is 4.25 light-years away. Even the fastest unmanned spacecraft we currently have would take tens of thousands of years to arrive, let alone a manned one, which would be substantially harder. Moreover, it is almost certainly impossible for intelligent life to exist near Alpha Centauri, since it is a triple star system, three stars orbiting one another, an environment not conducive to life. That is presumably why, although Earth formed 4.6 billion years ago and the earliest microbial life can be traced back 4.28 billion years, we have never found any trace of aliens on Earth.
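
The travel-time claim can be checked with simple arithmetic. The probe speed below is an assumption, chosen to be comparable to Voyager 1:

```python
# Rough travel-time check for the distances quoted above.
# Assumptions: 4.25 light-years to Alpha Centauri; probe speed 17 km/s
# (roughly Voyager-1 class). Both are illustrative round numbers.
LIGHT_YEAR_KM = 9.4607e12      # kilometres in one light-year
DISTANCE_KM = 4.25 * LIGHT_YEAR_KM
SPEED_KM_PER_S = 17.0
SECONDS_PER_YEAR = 3.156e7

years = DISTANCE_KM / SPEED_KM_PER_S / SECONDS_PER_YEAR
print(f"Travel time at probe speed: about {years:,.0f} years")
# on the order of 75,000 years, i.e. "tens of thousands of years"
```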

Conversely, if aliens really did reach Earth, it could be catastrophic. The universe formed 13.8 billion years ago, so intelligent life could have arisen four or five billion years before us. Any aliens able to cross the interstellar divide and reach Earth would be at least thousands, possibly even hundreds of millions of years ahead of us in scientific development. The laws of nature tell us that more civilized groups despise less civilized ones, and that higher species kill lower species, even treating them as food. Were highly civilized aliens to reach Earth, our fate would resemble that of the American Indians or Australian Aboriginals in the face of the colonialists, or worse. Some people today are keen to communicate with aliens; that is in fact very irrational behavior.

Finally, I believe that humans must rely on themselves to save mankind, and that the leaders of the major world powers have the greatest capacity to do so. As a scholar, I can only sound the alarm, which is why I have written so often to the leaders of powerful nations.

If one takes my three core views to heart, the conclusion that only unification can save humanity follows logically. Hugo de Garis, known as "the father of the artificial brain," believes that artificial intelligence will destroy mankind, yet he also holds that the destruction of lower species by higher species is the natural course of things; the destruction of humanity by artificial intelligence would thus be entirely reasonable, and the creators of these higher beings could be considered deities, or God himself. Let us ask, then: as members of the human species, can we bear to be destroyed?

While I was editing the second edition of Saving Humanity, cinemas were showing the movie Resident Evil: The Final Chapter, which centers on a high-tech company that develops a bio-weapon capable of destroying humanity. The company's leaders plan to spare only their own shareholders and destroy the rest of humanity with the weapon. Only truly deranged, immoral people would act this way, but with so many high-tech companies around the world, who is to say there are no such individuals among them?


 

 
