I just finished reading Yuval Noah Harari’s second book, Homo Deus: A Brief History of Tomorrow, and I think it confirms many of the ideas I discuss on my blog, like the importance of understanding the biases of software from an almost non-technical perspective, e.g. the political, economic, and social implications of code. Harari goes further, asking what will eventually come about as our technology begins to outstrip our ability to understand it.
Indeed, Harari, coming off the strength of his first book Sapiens, a history of the last 70,000 years (which I previously read and reviewed), explains here in Homo Deus that we are likely approaching the end of the humanist era, in which technology and data will begin to match, and eventually supersede, human capacity, and thus dispense not only with individuals’ political importance but with their very relevance to the continued unfolding of history.
It is interesting that in this great talk Harari recently gave:
He says that he himself is almost a technophobe, relying on his husband to change television channels. Nonetheless, in an answer to the moderator, he makes the point that you do not need to understand how a nuclear bomb works to be able to work out some of its implications from the fact that it can destroy cities wholesale. In this way, Harari uses his knowledge of history and social science, forays into the hard sciences, and anecdotes from our outdated religions to try to tease out some of the choices we are about to face as our technology gives us godlike powers.
I found Harari’s multi-disciplinary approach to futurism on the whole convincing. As I wrote in my review of Sapiens, Harari largely dispenses with almost all of the fictions we have told ourselves, from ancient religion to even the ideas of the individual and free will implicit in humanism (a point he expands upon in Homo Deus), making way for new possible religions. He proposes chiefly two: the milder post-humanism, and a more radical one he calls Dataism, in which humanity is subsumed by increasingly powerful algorithms piggybacking off of ever larger data sets.
In Harari’s exploration of Dataism he explains the seemingly provocative idea that algorithms will soon know us better than we know ourselves, and so we will find it downright convenient to outsource decision making from our inner selves to entities like Google and Facebook and their descendants. Perhaps we are only one quantum leap away from computers actually being able to make better decisions for us than we can make for ourselves. Harari gives a useful example from the biometrics arena: when we look at another human being unaided, we can only use traditional clues such as eye movement and body language to deduce their intentions; with biometrics we could use signals such as heart rate and even blood flow through the brain, correlated in a million and one ways with stimuli from the physical world, to understand them far better. And what happens when these powerful tools are turned upon ourselves, and perhaps on humanity as a whole?
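To make that idea a little more concrete, here is a minimal sketch, with entirely invented feature names and data, of how biometric readings might be correlated with a person’s eventual choice using an off-the-shelf classifier. This is my illustration, not anything Harari describes:

```python
# A toy sketch: predict a person's yes/no choice from biometric readings.
# The feature names and all data below are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: rows are moments in time, columns are
# [heart_rate_bpm, skin_conductance, pupil_dilation_mm].
X = np.array([
    [62, 0.10, 3.1],
    [85, 0.42, 4.0],
    [70, 0.20, 3.3],
    [95, 0.55, 4.4],
    [60, 0.08, 3.0],
    [90, 0.50, 4.2],
])
# What the person actually chose at each moment (0 = declined, 1 = accepted).
y = np.array([0, 1, 0, 1, 0, 1])

# Fit a simple classifier correlating the biometrics with the observed choice.
model = LogisticRegression().fit(X, y)

# Given a fresh biometric reading, estimate the choice before it is made.
new_reading = np.array([[88, 0.47, 4.1]])
print(model.predict_proba(new_reading))  # [P(decline), P(accept)]
```

The point is not the particular model; it is that once enough signals are logged, the correlations fall out almost for free.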
Some would argue, as in my example about Google Maps in Quantum Maybe, that we have already given Google and Facebook enough data that their suggestions, through tools such as Google Now and the Facebook News Feed, are better than what we could have come up with ourselves. In some respects, because these entities already use heavy doses of machine learning and Big Data, their suggestions are arguably far better than anything our unaided selves could produce. Therefore, while not yet a true “religion”, it seems elements of Harari’s Dataism are already a reality.
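As a loose illustration of the kind of mechanism at work (emphatically not Google’s actual system), even a trivial amount of accumulated behavior data can yield a suggestion we would not have articulated for ourselves:

```python
# A loose sketch of data-driven suggestion: guess the place a user is most
# likely to want next, based only on the visit histories of all users.
# The place names and logs are invented for illustration.
from collections import Counter

# Hypothetical visit logs: each list is one user's places, in order.
histories = [
    ["home", "cafe", "office", "gym"],
    ["home", "cafe", "office", "restaurant"],
    ["home", "office", "gym"],
    ["home", "cafe", "office", "gym"],
]

def suggest_next(current_place, histories):
    """Count what other users visited right after `current_place`."""
    followers = Counter()
    for visits in histories:
        for here, there in zip(visits, visits[1:]):
            if here == current_place:
                followers[there] += 1
    return followers.most_common(1)[0][0] if followers else None

# With enough logs, the aggregate knows our habits better than we do.
print(suggest_next("office", histories))  # -> "gym"
```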
I think a strong argument in Harari’s two books, one he makes even more explicitly in the YouTube talk cited above, is that there is almost no discussion of the grand implications of our technological development in current political debate. Harari jokes that in the perpetual elections of his country, Israel, he was never once asked about the Internet. Indeed, if civilization’s religion is transitioning from a phase of humanism that began centuries ago to something much more technological, infused with software on all sides, shouldn’t we all have a say in what this future “religion” or “ideology” is to be? Or am I already outmoded: have we moved on to post-humanism or even Dataism, where democracy is moot and small groups of enhanced humans, or the algorithms themselves, make all the key decisions?
We may already be at that point.