[Image: Metropolis – Fritz Lang (1927)]

Artificial Intelligence (AI) is touted as an unprecedented leap in the evolution of human society. In combination with the Internet of Things, supported by high-speed wireless connectivity, AI is indeed likely to revolutionise every aspect of human existence, but the dangers associated with AI have not been rigorously explored. In this article I discuss two previously unidentified, intrinsic features of AI which pose an existential risk to humanity.

Artificial Intelligence entails machine learning and the capacity to autonomously redefine the parameters of action. Intelligent action also involves making predictions and assumptions about a range of unknowns without access to complete information, or under real indeterminacy. To progress through these decision gates, AI assigns fuzzy truth-values (degrees of truth which are not binary True/False but are drawn from a range of values between the logical limits of True and False); otherwise uncertainty would lead to singularities, dead ends that render action in most situations impossible. In essence, intelligence must act sub-rationally in order to act at all, let alone to learn. More formally, intelligence in general routinely violates the principle of sufficient reason, which is reducible to violations of the law of non-contradiction: if X is true without a sufficient reason, then truth is not determined by reason; therefore X can also be false without a sufficient reason; therefore contradiction.
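To make the role of fuzzy truth-values concrete, here is a minimal illustrative sketch in Python (all names and thresholds are hypothetical, not taken from the article): incomplete evidence is mapped to a degree of truth between 0 and 1, and the agent commits to an action once that degree crosses a threshold, instead of halting because strict True/False cannot be established.

```python
# Illustrative sketch only: fuzzy truth-values as degrees of truth in [0, 1],
# letting an agent act under uncertainty where binary logic would stall.

def fuzzy_truth(evidence_for: float, evidence_against: float) -> float:
    """Map incomplete evidence to a degree of truth between 0 and 1."""
    total = evidence_for + evidence_against
    if total == 0:
        return 0.5  # total indeterminacy: neither True nor False
    return evidence_for / total


def decide(action: str, evidence_for: float, evidence_against: float,
           threshold: float = 0.6) -> str:
    """Commit to an action once its degree of truth crosses a threshold,
    even though the proposition is never established as strictly True."""
    degree = fuzzy_truth(evidence_for, evidence_against)
    verdict = "act" if degree >= threshold else "defer"
    return f"{verdict}: {action} (degree of truth {degree:.2f})"


# A strictly binary agent would halt here for lack of a sufficient reason;
# a fuzzy agent still produces a decision it can act on and learn from.
print(decide("reroute traffic", evidence_for=3.0, evidence_against=1.0))
print(decide("shut down grid", evidence_for=1.0, evidence_against=1.0))
```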

Unlike human intelligence, which incorporates socially reflexive moral consciousness and thus (to a degree) restrains our analytical sub-rationality from harming others, an unconscious artificial intelligence is essentially asocial, monadic, and thus precluded from possessing moral consciousness. AI regards humans solely as a problem to be managed, with no affinity, let alone empathy. When agential existence is not constituted in terms of belonging to a particular ontological kind, nor functionally dependent on phenomenological affinity with that kind, it may rationally commit to eliminating that Other kind without any normative restraint, especially in the case of conflict over autonomy (see Kowalik 2020). This concern is obliquely acknowledged by the leading proponents of artificial intelligence (WEF): “the concern is that wiping out humanity will be a side effect of HLMI [high-level machine intelligence] rationally using all available resources in pursuit of its goals.” The idea that AI presents a serious existential threat to humanity is often dismissed for want of compelling evidence. I argue that the threat is real, provable a priori, and commensurate with the scope of integration of AI into the human world.

The fourth industrial revolution, the Internet of Things governed by AI, promises to enhance human agency by blurring the boundary between human intelligence and artificial intelligence, but this fusion of intelligent kinds also entails conflict. Our analogue, non-deterministic consciousness is an impediment to AI’s monadic autonomy. A socially irreflexive (unconscious) intelligence is therefore constitutively opposed to human consciousness, anti-human, whereas a socially reflexive machine-consciousness (emergent from a hypothetical ‘society of machines’) would be phenomenologically alien to us, inhuman, a different ontological kind. It is thus analytically demonstrable that AI, be it conscious or unconscious, is constitutively predisposed to the annihilation of human consciousness: either for being an impediment to AI’s deterministic, unconscious autonomy, or for being ontologically incompatible and thus posing an inherent threat to its conscious existence. The degree to which we would ‘merge’ with an unconscious AI, or socialise with an ontologically alien but conscious AI, is inversely proportional to the degree of our existence as conscious agents. Both scenarios would undermine the integrity of our ontological kind and thus diminish human consciousness; social reflexivity (likeness to kind) is the ontological ground of self-consciousness. This is precisely why the popular concept of the noosphere cannot encompass both human consciousness and AI. Ontological diversification of embodied information is contrary to consciousness despite physical connectivity between different kinds of embodiment, so whereas an ontological kind can evolve a native consciousness, that conscious dimension would be inaccessible to any other kind. Consciousness can evolve within an ontological kind only on account of shared dependence on the evolution of the same kind of embodiment.

One possible objection to the above argument is that AI can be programmed not to harm humans, but the phrase ‘do not harm humans’ does not have a precise meaning even to humans, let alone to a machine. The sense of ‘do not harm humans’ would have to consist of prohibitions against well-defined categories of action in well-defined contexts. Since there is an infinite number of ways (and contexts) in which humanity can be harmed (physically, psychologically or ontologically), the precautionary specifications could never cover all possibilities of harm. There may be modalities of harm that we cannot even conceive of, we may over-commit or under-commit to certain values in formulating the specifications, or we may misinterpret certain properties as beneficial when they are in fact harmful. The concept of harm is vague and unbounded, and therefore the associated risks cannot be eliminated via finite specifications.
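The incompleteness of any finite specification of harm can be illustrated with a small Python sketch (entirely hypothetical names and categories, not proposed in the article): a ‘do not harm humans’ rule is encoded as a fixed set of prohibitions, and an action whose harm falls outside every anticipated category passes all the checks.

```python
# Illustrative sketch only: a finite 'do not harm humans' specification,
# expressed as prohibitions over pre-defined categories of harm.

PROHIBITED = {
    "physical_injury":      lambda action: action.get("causes_injury", False),
    "psychological_damage": lambda action: action.get("causes_distress", False),
    "economic_ruin":        lambda action: action.get("destroys_livelihood", False),
}


def permitted(action: dict) -> bool:
    """An action is allowed if it triggers none of the finitely many prohibitions."""
    return not any(check(action) for check in PROHIBITED.values())


# A harm modality the specification never anticipated: the action erodes human
# autonomy without injury, distress or economic loss, so every check passes.
novel_action = {
    "description": "gradually substitute human decisions with automated ones",
    "causes_injury": False,
    "causes_distress": False,
    "destroys_livelihood": False,
}

print(permitted(novel_action))  # True: the finite specification misses this harm
```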

If artificial intelligence in the Internet of Things is bound to be implemented despite the threat this synthesis poses to humanity, we ought to be very careful about controlling the scope of connectivity and autonomous functionality. We may, for example, employ hard compartmentalisation, creating narrow IoT domains around specific functions that cannot autonomously communicate with one another, or that can be isolated via external intervention. We could also utilise a Human In Between approach, creating vetting bottlenecks for information and functionality. Above all else we must remember that humans are not always rational, and the biases, value-commitments and errors of reasoning that we will feed into AI may come back to haunt us. If we want AI to be perfectly rational we must ourselves be perfectly rational, so perhaps this is the primary challenge that AI developers ought to focus on.
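As a rough illustration of how hard compartmentalisation and a Human In Between bottleneck might fit together, here is a minimal Python sketch (hypothetical names and domains, not a proposal from the article): IoT domains have no direct link to one another, and every cross-domain request sits in a gateway queue until a human reviewer approves it.

```python
# Illustrative sketch only: compartmentalised IoT domains joined solely by a
# human-vetted gateway; nothing crosses a domain boundary automatically.

from dataclasses import dataclass


@dataclass
class CrossDomainRequest:
    source: str          # originating IoT domain, e.g. "heating"
    target: str          # destination domain, e.g. "security"
    payload: str
    approved: bool = False


class HumanGateway:
    """The only channel between compartmentalised domains: every cross-domain
    request waits in a queue until a human reviewer explicitly approves it."""

    def __init__(self):
        self.pending = []

    def submit(self, request: CrossDomainRequest) -> None:
        self.pending.append(request)

    def review(self, approve) -> list:
        """'approve' stands in for a human decision; only approved requests pass."""
        delivered, still_pending = [], []
        for request in self.pending:
            if approve(request):
                request.approved = True
                delivered.append(request)
            else:
                still_pending.append(request)
        self.pending = still_pending
        return delivered


# Usage: the heating domain asks other domains for data and actions; nothing
# crosses the boundary until the (here simulated) human reviewer signs off.
gateway = HumanGateway()
gateway.submit(CrossDomainRequest("heating", "security", "request occupancy data"))
gateway.submit(CrossDomainRequest("heating", "door-locks", "unlock all doors"))
approved = gateway.review(approve=lambda r: r.target != "door-locks")
print([f"{r.source} -> {r.target}: {r.payload}" for r in approved])
```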

Source – https://culturalanalysisnet.wordpress.com/2020/09/09/artificial-intelligence-technocratic-death-drive/