In Prospects for Human Survival, as in his earlier book Apocalypse When?, the mathematician Willard Wells frames much of his thinking in terms of probabilities. As scientifically rigorous as this may seem, there is reason to be skeptical of the approach: probabilities derived from current data struggle to account for the proliferation of unknowns.
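The objection can be put in symbols. As a rough illustration of my own (not notation from Wells' book), split the total risk into a part that existing data can inform and a part it cannot:

$$
P(\text{extinction}) \;=\; \underbrace{P(K)}_{\text{known hazards}} \;+\; \underbrace{P(U \setminus K)}_{\text{hazards not yet conceived}}
$$

where $K$ is the event of extinction by some currently known hazard and $U$ extinction by one nobody has yet imagined. Data and extrapolation can bound the first term; the second is, by construction, invisible to them, and history suggests it can dominate.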
No study of existing firepower in 1943 or 1944 would have told you that a single bomb could destroy an entire city by 1945. The humanity-killing forces of the future will be equally sudden and unexpected. They may emerge and destroy us all tomorrow, or they may never emerge. They could be developed in secrecy, as the Manhattan Project was, making any prediction based on what we do know unhelpful. Often, such forces impose themselves on civilization without any omens, invented and deployed recklessly before even the wisest and most skilled thinkers recognize them as dangerous.
In the domain of atomically precise manufacturing (APM), commonly called nanotechnology (nanotech), Wells rightly anticipates new means of assassination (pp. 67-69): tiny robots programmed to kill with poison. Remarkably, he then fails to acknowledge that governments would be the biggest abusers of such technology, instead arguing that handing states even more authoritarian powers and invasive surveillance technologies (pp. 91-92) is the only solution to such threats.
Consider the behavior of governments today. Though it is written in no law, they seem bound by an instruction to seek out, possess, and use to maximum lethality and invasiveness any technology they find. They did this with the internet. No one who made the internet or smartphones possible imagined them as a way to install a bug or a camera in everyone's home, or to quickly judge whom to detain or assassinate to protect a regime. Yet governments still managed to make this nightmare possible.
Wells gives some attention (p. 69) to the "grey goo" ecophagy (ecosphere-eating) nanotech disaster scenario described by Robert A. Freitas, in which microscopic robots capable of reproducing independently from whatever matter they encounter proceed to "eat" the world, or more specifically the biosphere, ending life as we know it on Earth. He argues, correctly, that this danger exists, however unlikely, but that it cannot be averted by any ban on nanotech. Such a ban might only encourage more dangerous activities to be undertaken covertly, without sufficient review or intervention by the scientific community.
Wells asserts that emerging nanotechnology must be regulated to prevent such a disaster or detect it early. This position can be rejected for the same reasons as the hypothetical ban: heavy regulation would likewise push risk-prone entrepreneurs into working covertly, so the danger of "irresponsible development" would proliferate just as it would under an outright ban. More probably, maximum freedom coupled with transparency in the development of nanotech would be the safest route, since everything would remain visible and the "good guys" could, as Wells encourages, create defenses in time.
The best defense against runaway nanotechnology may be the fact that there is no rationale for someone in search of profit to produce self-replicating robots, as Wells himself points out:
"No sane robot manufacturer working for profit would make a self-replicant on their own because their market vanishes the moment their customers start giving away surplus units (just as people give away surplus kittens)." (p. 70)
So corporations have no reason to make "grey goo" robots, at least insofar as the problem is one of self-replicating machines. It is possible, though, that tiny refining or mining robots could malfunction uncontrollably and begin mining or cutting up everything they come into contact with, "believing" they are collecting minerals. If deployed on a large scale by a mining company to process tons of ore, they might not need the ability to replicate in order to cause massive destruction in the surrounding environment.
Wells repeatedly imagines "terrorists" as the ultimate agents behind any possible future technological threat, but this often seems close-minded and ignores far more obvious culprits. He writes, "Terrorists want self-replicators; legitimate users want factories making factories." This assumes that "legitimate" means commercially minded, and that anything else must be irrational terrorism. But what of state agencies? The most powerful scientific and engineering corps today, those making the greatest strides in technology and paving the way for the corporations, are not profit-hungry companies but state agencies. Self-replicators would almost certainly be needed for space colonization, so NASA (not ISIS) is the most likely customer for self-replicating robots.
Genetic engineering and its more advanced cousin, synthetic biology, could present similar threats of consumption or infestation of the environment. Wells offers a fascinating hypothetical scenario in which some man-made infestation (whether biological or technological) destroys vital marine ecosystems and, with them, more than half the world's oxygen supply (pp. 74-78). Wells postulates that "conspirators" might seek to do this intentionally; the event is so specific that an accident seems unlikely to cause it. However, this belief in exceedingly nasty yet highly capable inventors ought to be rejected. It is not even clear how any terrorist would benefit. No extremist ideology exists, or has existed, that would want to destroy the world's oceans and leave everyone sluggish from lack of oxygen, so it seems strange to theorize about this scenario at all.
Much like the above unlikely scenario is the "mad scientist" germ-attack hypothesis, which holds that a mad scientist might plot to destroy humanity by engineering a virus (p. 79). It is hardly valid from any historical perspective: there is no real-life example of the evil scientist found in movies and comic books, so it makes little sense to expect one in the future.
Within Prospects for Human Survival, little attention is given to biological threats. Biological agents have already been intentionally designed to destroy entire continents' food supplies, and if ever used they could pose a very real threat to human survival, potentially rebounding to wipe out the side that deployed them in the first place. J. Craig Venter's discovery of how to artificially synthesize entire new genomes and to invent and patent new living organisms is possibly the most consequential discovery of the century, yet it is not mentioned at all.
Wells' attitude towards surviving nuclear war and disaster seems ill-considered. His talk of preserving humanity's seed in underground survival bunkers stocked with plenty of women for breeding purposes reads like something straight out of Dr. Strangelove. Wells argues that it does not matter if the wealthiest one percent (likely the very people who started the war) are the only ones who get to escape into these bunkers.
The political rationale for spending anything to save humanity's genetic future in the first place is never established by Wells. Who says anyone wants to save humanity? Most people actually have no interest in it, and would be concerned only by the more unpleasant scenarios in which they personally suffer (e.g., being shredded by a swarm of malfunctioning nanorobots). Couples voluntarily extinguish their genetic future all the time with contraceptives, out of financial worry or concern over the world's overpopulation. Wells (and, for that matter, Stephen Hawking, who also insists that humanity must avoid extinction) offers no argument for why human genes are special enough to be worth saving. For most people, whether humanity endures as a species is simply irrelevant, and Prospects for Human Survival fails to make any appeal against that philosophy.
Although I concur with Wells on a number of scientific points, I disagree with many of the book's recommendations and fail to see the rationale behind others. There is no good reason to fear the development of artificial intelligence at this stage, and Wells' kind of authoritarian artificial intelligence, appointed to watch over and farm humanity for its own safety, is not enticing and seems dystopian (pp. 91-92).
Futurism should not be about making excuses for concentrated authority, controlled scarcity, and hubs of control and supervision. It should instead make the case for total equality, total abundance, total freedom, and humanity's ultimate achievement of technological adulthood. If humanity is "irresponsible", it should not be treated like a group of children, but raised to adulthood, even at grave risk.
Related: "Existential risks don't matter to politics" — The clubof.info Blog (@ClubOfInfo), March 18, 2016: https://t.co/4JriUXBDEh