Showing posts with label Lifeboat_Foundation. Show all posts

8 April 2016

"Visions" (Lifeboat Foundation, 2015)

The Blog


Prior to Harry J. Bentham's review on this blog of the Lifeboat Foundation book Prospects for Human Survival, a review went up at h+ Magazine of another Lifeboat book, Visions of the Future.


Both books were published in 2015. Whereas Visions brings together many different futurist authors' works into a single book, Prospects was the sole work of mathematician Willard Wells. Prospects focused mainly on the idea of existential risks rather than future prosperity.

In the March 31 review of Visions at the leading transhumanist publication h+ Magazine, the verdict is positive: the book "belongs on the bookshelves of anyone trying to get acquainted with what futurism, and more so the Lifeboat Foundation, are about".

Considerable attention goes to Metcalfe's Enernet theory, already mentioned in earlier posts. Of this, the h+ Magazine review states:
Jose Cordeiro’s essay contribution to the book draws attention to the idea of the Enernet, which has been of great interest to me. Ethernet creator Robert Metcalfe’s idea, the Enernet would be part of the “Energularity”, a “global energy network” that would dispense free energy in much the way the internet dispenses free information today. It would, Cordeiro predicts, “positively transform humanity by increasing the global standard of living and connecting everybody around the planet” (p. 596). My own prediction is that providing free energy from distributed sources would be enormously empowering to impoverished communities and isolated, poor countries.
Despite this, various technological advances in energy storage and a revolution in manufacturing may need to occur to really produce such empowering results, the review speculates.


The clubof.info Blog


5 April 2016

Prospects for Human Survival (review)

Harry J. Bentham at the Blog


As a mathematician, Willard Wells frames much of his thinking in probabilities in Prospects for Human Survival, as he did in his earlier book, Apocalypse When?. As scientifically rigorous as this may seem, there is reason to be skeptical of the approach: it is hard to account for the proliferation of unknowns using probabilities based on current data.


No study of existing firepower in 1943 or 1944 would have told you that bombs would be able to blow up entire cities in a single blast by 1945. The humanity-killing forces of the future will be equally sudden and unexpected. They may emerge and destroy us all tomorrow, or they may never emerge. They could be developed in secrecy, as the Manhattan Project was, making any predictions based on what we do know unhelpful. Often, such things impose themselves on civilization without any omens, invented and used recklessly before even the wisest and most skilled thinkers know they are dangerous.

In the domain of atomically precise manufacturing (APM) or nanotechnology (nanotech) as it is commonly called, Wells correctly predicts new means of assassination (p. 67-69) by programming tiny robots to kill with poison. Remarkably, he then fails to acknowledge that governments would be the biggest abusers of such technology, instead arguing that giving even more authoritarian powers and invasive surveillance technologies to states (p. 91-92) is the only solution to such threats.

Consider the behavior of governments in the modern day. Although it is not law, they seem bound by an instruction to seek out, possess and use, to maximum lethality and invasiveness, any technology they find. They did this with the internet. No one who made the internet or smartphones possible saw them as a way of installing a bug or a camera in everyone's home, or of quickly judging whom to detain or assassinate to protect a regime. But governments still managed to make this nightmare possible.

The "grey goo" ecophagy (ecosphere-eating) nanotech disaster scenario presented by Robert A. Freitas is given some attention by Wells (p. 69). This is the scenario in which microscopic robots are capable of reproducing independently using whatever matter they encounter, and proceed to "eat" the world - or more specifically the biosphere, bringing an end to life as we know it on Earth. He argues, correctly, that this danger exists (albeit extremely unlikely) but that it cannot be averted by any ban on nanotech. Such a ban might only encourage more dangerous activities to be undertaken covertly, without sufficient review or intervention by the scientific community.
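The reason this scenario is treated as catastrophic despite its improbability is the arithmetic of exponential self-replication, which can be illustrated with a rough back-of-envelope sketch (the mass and doubling-time figures below are illustrative assumptions for this blog, not numbers taken from Wells' book):

```python
import math

# Illustrative assumptions: a 1-nanogram replicator that doubles its
# total mass every hour, consuming a biosphere of roughly 5.5e23
# nanograms (~550 gigatonnes of carbon).
START_MASS_NG = 1.0
BIOSPHERE_NG = 5.5e23
DOUBLING_HOURS = 1.0

# Number of doublings needed for the replicators' mass to match the
# biosphere's mass, and the total time that takes.
doublings = math.ceil(math.log2(BIOSPHERE_NG / START_MASS_NG))
hours = doublings * DOUBLING_HOURS
print(f"{doublings} doublings, about {hours / 24:.1f} days")
```

Under these toy assumptions, fewer than eighty doublings suffice, i.e. a little over three days, which is why even a vanishingly unlikely start is taken seriously as an existential risk.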

Wells asserts that there must be regulation of emerging nanotechnology to prevent, or detect early, the formation of such a disaster. This position can be rejected for the same reasons as the hypothetical ban. Heavy regulation would likewise push risk-prone entrepreneurs to work covertly, so the danger of "irresponsible development" would proliferate under the nose of any regulator exactly as it would under a ban. More probably, maximum freedom coupled with transparency in the development of nanotech would be the safest route, as this way everything may be seen and the "good guys" can create defenses in time, as Wells encourages.

The best defense against runaway nanotechnology may be the fact that there is no rationale for someone in search of profit to produce self-replicating robots, as Wells himself points out:

"No sane robot manufacturer working for profit would make a self-replicant on their own because their market vanishes the moment their customers start giving away surplus units (just as people give away surplus kittens)." (p. 70)

So there is no reason for corporations to make "grey goo"-creating robots, at least when we look at it as a problem of self-replicating machines. It is possible, though, that some tiny refining or mining robots could uncontrollably malfunction and begin mining or cutting up everything they come into contact with, in the belief they are collecting minerals. If they had been deployed on a large scale by a mining company to process tons of ore, they might not need the ability to replicate in order to cause massive destruction in the surrounding environment.

Wells repeatedly imagines "terrorists" as the ultimate agents behind any possible technological threat emerging in the future, but this often seems close-minded and ignores far more obvious culprits. He writes, "Terrorists want self-replicators; legitimate users want factories making factories". This rests on the assumption that "legitimate" means commercially-minded, and that anything else must be irrational terrorism. But what of state agencies? The most powerful scientific and engineering corps today, those making the greatest strides in technology and paving the way for the corporations, are state agencies, not profit-hungry corporations. Self-replicators would almost certainly be needed in space colonization, so NASA (not ISIS) is the most likely customer to place an order for self-replicating robots.

Genetic engineering and its more advanced cousin, synthetic biology, could present similar threats of consumption or infestation of the environment. Wells offers a fascinating hypothetical scenario in which some type of manmade infestation (whether biological or technological) causes the destruction of vital marine ecosystems and destroys more than half the world's oxygen supply (p. 74-78). Wells postulates "conspirators" might seek to do this intentionally. It is such a specific event that an accident seems unlikely to cause it. However, this belief in exceedingly nasty and yet highly capable inventors ought to be rejected. It is not even clear how any terrorist would benefit from doing this. No extremist ideology exists, or has existed, that would want to destroy the world's oceans and make everyone sluggish through lack of oxygen, so it seems strange to theorize about this scenario at all.

Much like the above unlikely scenario is the "mad scientist" germ attack hypothesis, which is hardly valid from any historical perspective. The idea holds that a "mad scientist" might plot to destroy humanity by engineering a virus (p. 79). However, there is no real-life example of an evil scientist of the kind found in movies and comic books, so it makes little sense to expect one in the future.

Within Prospects for Human Survival, little attention is given to biological threats. Biological agents have been intentionally designed to destroy entire continents' food supplies, and could be a very real threat to human survival if ever used, even coming back to wipe out the side that deployed the weapon in the first place. J. Craig Venter's discovery of how to artificially synthesize entire new genomes and invent and patent new living organisms is possibly the most consequential discovery of the century, and is not mentioned at all.

Wells' attitude towards surviving nuclear war and disaster seems ill-considered. The talk of preserving humanity's seed in underground survival bunkers stocked with plenty of women for breeding purposes is something right out of Dr. Strangelove. Wells argues that it doesn't matter if the wealthiest one percent (likely the ones who started the war) are the only ones who get to escape into these bunkers.

The political rationale for expenditures to save humanity's genetic future in the first place is never supplied by Wells. Who told him anyone wants to save humanity? Most people actually have no interest in it, and would only be concerned by the more unpleasant scenarios in which they would personally suffer (e.g. being shredded by a swarm of malfunctioning nanorobots). Couples voluntarily terminate their genetic future all the time using contraceptives, out of worry over finances and the world's overpopulation. Wells (and for that matter Stephen Hawking, who also insists that humanity must avoid extinction) has offered no argument for why human genes are special enough to be worth saving. For most people, whether humanity endures as a species is simply irrelevant, and Prospects for Human Survival fails to appeal against their philosophy.

Although I concur with Wells on a number of issues about science, I disagree with many of the book's recommendations and fail to see the rationale behind others. Although there is no good reason to fear the development of artificial intelligence at this stage, Wells' kind of authoritarian artificial intelligence appointed to watch over and farm humanity for its safety is not enticing and seems dystopian (p. 91-92).

Futurism should not be about making excuses for concentrated authority, controlled scarcity, and hubs of control and supervision. We should be making the case for total equality, total abundance, total freedom, and humanity's ultimate achievement of technological adulthood. If humanity is "irresponsible", it should not be treated like a group of children, but raised to adulthood, even at grave risk.


Harry J. Bentham


18 March 2016

Existential risks don't matter to politics

The Blog


Current political science and ideology concentrate on the liberties of the individual, and therefore lack any theoretical grounds for opposing "existential risks" to human progeny or civilization, a blog post asserts.


This view, based on an unpublished review of a Lifeboat Foundation book, appears in a Beliefnet post. The argument goes that there is a lack of support in existing political theory for the pleas of Stephen Hawking, the Lifeboat Foundation boards, and countless other futurists and scientists who say space colonization should be pursued to ensure the perpetuation of the species.

No one in modern politics will be moved by the notion of safeguarding human posterity. In fact, most governments and political movements do not care for the long-term survival of humankind and will never invest any effort in it as their priorities are very clearly elsewhere:
[Worrying about posterity] is contrary to existing political norms. The prevailing liberal, centrist, libertarian and even socialist philosophies in the west today mainly focus on the rights, pleasures, and just treatment of individuals. Where they are concerned, it doesn’t actually matter if no humans exist a couple of centuries from now, as long as people didn’t die painfully.
Put more consequentially, this means no electable politician or political scientist in the west would be swayed by negative-minded futurist arguments about saving humans from existential risks. The idea of posterity - of preserving future generations to inhabit this world or even worlds beyond - is simply alien to politicians and social science experts and cannot be expected to impress them.

Calling this problem a "gap on all our bookshelves", where there is simply no valid political theory and a lack of literature about why to save civilization or ensure posterity, the blog repeats its earlier value judgment that global injustice is ultimately worse than extinction in any binary choice between the two.

Maybe the political science is on the right track. If the current social system is unjust, efforts to save civilization are only about saving injustice.


The clubof.info Blog


19 February 2016

Enernet would bring "liberation"

The Blog


While continuing to bet on synthetic biology as the eventual solution to energy inequality and crises, a Beliefnet post nevertheless embraces another vision: the enernet.


Enernet is the idea of providing energy to everyone for free via a type of global power supply network constructed similarly to the internet. It would adhere to a similar philosophy to the internet too, making it almost a human right that everyone can be connected to this network of sharing and mutual survival.

The post is a partial review of a Lifeboat Foundation essay series. It suspects that energy firms and powerful states would prefer to keep control of global energy supplies in the hands of a few government and industrial elites, and that the enernet might be no exception. Although such injustice would certainly result from some futuristic schemes to replace fossil fuels with a small number of giant thermonuclear fusion reactors like ITER (the International Thermonuclear Experimental Reactor), the post sees the enernet as something different.

To cut a long story short, because the enernet would be connected to widely distributed and decentralized sources of fuel, it should not result in political or strategic imbalances, inequalities and lopsided international relations of the kind currently seen. The post reports that the enernet sounds like a fair and balanced solution to the energy needs of disparate individuals, communities and isolated states across the world. The prediction given is that, if feasible, it would bring a form of technological "liberation" to the world's impoverished and voiceless populations similar to the internet.

A complete theory of techno-liberation to follow up from the development of the internet is explained in Catalyst, the main source of inspiration for this blog.


The clubof.info Blog


29 January 2016

Global injustice as an existential threat

The Blog


A Beliefnet blog points to "injustices, prejudices and other ailments of global society" as a greater priority than "humanity-threatening disasters" of the kind addressed by the Lifeboat Foundation think tank.


Appealing to the utilitarian philosophy of Jeremy Bentham, the L'Ordre blog, also authored by another Bentham, talks of the "happiness of the greatest number" being preferable to any small minority evading fallout or potential extinction in a global war or disaster.

In the brief semi-review of the Willard Wells book Prospects for Human Survival, the blog attacked the very teleology of humanity seeking to avoid genetic-cultural extinction (as opposed to avoiding human suffering):
...if the goal in life was to avoid extinction, in a genetic sense, then it is not only impossible (because all lines eventually die out, even the entire human species), but would lead to the absurdity of encasing human DNA in probes and sending them out into space to ensure the maximum possible survival of our genetic material for the longest possible time...
If we consider that humans voluntarily go "extinct" using contraceptives and abortion every day, the call to make sure humanity still exists in the distant future appears destined to fail, because the notion of human survival already has no appeal in modern society. The only reason nuclear war and other sources of suffering are resisted by most people is that they are unpleasant, not that they erase humanity's DNA.

Even more strongly rejecting the idea of the super-rich saving humanity by saving themselves from a global disaster, the blog argues the super-rich would be to blame for any potential global nuclear war, therefore should get killed in the war rather than being tempted to retreat into bunkers. However, the blog acknowledges the negativity of such speculation, and urges more optimistic attitudes towards the future.

The small semi-review at Beliefnet appears ahead of a full-size review of the Willard Wells book, to be published separately.


The clubof.info Blog

