We may say that Bostrom’s “high-impact philosophy” is one that has been subject to and transformed by technique, which, on Ellul’s analysis, implies that it has traded its freedom for efficiency. Put differently, the virtue of perennial wisdom is replaced by a sort of smart utilitarianism.
No Free Choices?
Modern people are culturally conditioned to think they are freer than previous generations, in large part because of the blessings of science and technology in their everyday lives. Admittedly, we generally enjoy a materially more comfortable life than our forebears did, but we are also beginning to be alerted to the problems this kind of life involves. I have already mentioned climate change, but numerous examples can be drawn from the ways smart technologies and social media affect us. For instance, social media offers endless possibilities for self-expression, yet we have also become increasingly aware of being “owned” and conditioned by the big tech companies and their billionaire CEOs. But a dim awareness is not to be confused with an insight into our condition that will lead to action, at least not for most individuals. Moreover, on Ellul’s analysis, those very big tech elites are equally determined by the technology they produce. In fact, in a technological society there is no longer a controlling elite, because politicians, journalists, technicians, and philosophers (which is to say, government, media, big tech, and the academy) are also defined by—and perhaps in the end replaced by—the perfection of technique: the machine.
By extension, then, even the individual or corporate choice to enter a transhumanist existence is not a free choice. Even less free are the individual and corporate choices involved in such an existence. With the emergence of Janus-faced artificial human intelligence, at some point morphing into superintelligence, what constitutes “individuality” will be determined by a superior technological power whose goals we may not be able to predict or understand. It has been suggested that the goal of a superintelligence might turn out to be something as arbitrary as producing the greatest possible number of some random object, like paper clips. Everything would then serve that arbitrary end-as-means, humans included. (Think of humans as “batteries” in The Matrix films.) The suggestion is meant to highlight the potential dangers of letting an unharnessed artificial intelligence loose in the world. The transhumanist future hope is thus very fragile.
Despite these hypothetical risks, let us grant, for the sake of argument, that should humanity survive the supposed point of singularity, when artificial intelligence exceeds human intelligence, enhancement technologies will bring high and stable degrees of happiness, physical strength, and perceptive awareness to transhumanist individuals. Yet such enhanced features are not unequivocally identical to a heightened individuality or autonomy on Ellul’s basically personalist analysis. He remarks: “When Technique displays an interest in man, it does so by converting him into a material object,” and man will be guaranteed the kinds of “material happiness as material objects can [guarantee].… But the technical society is not, and cannot be, a genuinely humanist society, since it puts in first place not man but material things.” Ellul is convinced that human or “spiritual” excellence and progress are not reducible to technique. Conversely, material development is not identical to spiritual or intellectual maturation. (Here Ellul takes issue with both the capitalist and the Marxist visions of well-being, which were clearly on display in the postwar European scene.)