Understanding AI from the perspective of the social sciences

Luk Van Langenhove 

Brussels School of Governance, Vrije Universiteit Brussel


The 1947 invention of the transistor at Bell Labs marked the beginning of a process of digitalization that revolutionized just about everything.[1] One of the most striking aspects of that development is the growing performative power of so-called artificial intelligence (AI), as opposed to the 'natural' intelligence of human beings. Today, we are witnessing a moral panic over AI, fueled in part by the very people who play an important role in AI's development.

I argue that this moral panic might be related to wrong or incomplete assessments of what is going on. A major source of confusion is the human tendency to anthropomorphize everything that is believed to express agency. Lack of conceptual clarity might be another reason why AI seems to be demonized. One must be as precise as possible in pointing out the nature of the dangers of AI and in comparing them adequately with the situation humanity faced before AI. This is where psychology, the social sciences and philosophy can help.

The problem is not that we should fear the tremendous capacity of AI algorithms, nor that AI risks autonomously and intentionally acting against humans. The real problem is AI's capacity to mimic personality, and the changes in conversational space that follow from it.

The computational capacity of AI 

Before the existence of digitalized computational machines, the social reality of human beings consisted of other people, material artefacts created by people, natural objects existing independently of people, and institutional facts that existed because people defined them as existing. Of these four types, people occupy a special position, as they are the only social entities showing agency and intentionality. Human beings are indeed considered to have a certain degree of autonomy while living in a society that both stimulates (learning) and contains (social control) their personal agency. Each person is equipped with one brain, but Homo sapiens also draws on the brainpower of many other people.

In terms of cognitive capacities, human beings thus punch way above their individual intellectual weight. To make this possible, the collective brainpower of humans is embedded in societies that function as complex systems of divided labor, hierarchies, regulation and governance, albeit with obvious deficits, inequality being one of them. But the gap between what we can achieve collectively and what we are capable of as individuals is spectacular.

Another way to look at this is that long before AI existed, people already used 'artificial' intelligence in the sense that they used brainpower external to their own brains to do things they could never do as isolated agents dependent only on their own brain. One does not need to know how to build a plane to be able to fly across the ocean. For a long time now, people have had access to an "extended brain" consisting of one's own brain as well as the brains of thousands and thousands of others living elsewhere or having lived in the past. We can now add AI to that extended brain. AI is a tool that people can use to grow their intelligence; the tool itself, as it stands, does not possess intelligence.

Intelligence is a catch-all concept for talking about decontextualized intellectual capacities. But the capacity for intelligence is not the same as the willingness to use that capacity, or indeed the appropriateness of using it. Intelligence is a tool that human beings can mobilize at the individual level. The beauty of the human brain is that we invented all kinds of tools to help us pursue certain goals, whether the rhyming of poems to help remember stories, or the printing of stories to remove the need to remember them. It seems to me that many of the concerns about AI center on the intentions or the autonomous behavior (agency) of AI systems, which in turn are related to their cognitive capacities, as well as to their potential to act as if they were a person in a conversation.

Changes in the role of conversations in society 

The backbone of the complex system we call society is an endless flow of conversation between people, species-wide and historical, to which each of us contributes from time to time.[2] Conversations are also the locus of something profoundly human: the utterance of speech-acts.[3] This is a linguistic device that allows us not only to do things with words, but also to make others do things. Speech-acts are in that sense units that carry a certain power when spoken in the context of a conversation.[4] They are a fundamental characteristic of human beings.

AI systems are already to a large extent part of our everyday conversations. When a simple GPS system tells you to turn left, that is a speech-act. Producing speech-acts is thus no longer the prerogative of humans. This means that the conversational space in which humans are embedded now also contains machine-generated speech-acts. In a way, this is not so different from the "conversations" between people and, for example, road signs in which an iconic image of a speech-act is represented.

The issue is that by mobilizing other human beings as part of our reservoir of tools, we become dependent on people who have their own agency, an agency that can turn against us. I am inclined to say that there is no substantial difference between a situation without AI and one with AI: we are already surrounded by potentially malignant agents, and it remains to be proven that AI systems could become such autonomous agents themselves.

Changes in the personification of actors 

Can we assume that AI systems are developing personality, or are they just good at mimicking people? The core issue here is personhood. The question of what distinguishes a person from a non-person can be broken down into four elements: agency (expressed in intentional acts), rationality (expressed in goal achievement), identity (expressed in singularity), and reciprocal achievement (expressed in being treated as a person by others).

To start with the last element: human beings can only 'become' people if they are treated by other people as if they already are people. This is what happens with babies, inasmuch as it is the baby-talk of parents that helps them develop their personality. Try the same with your pet cat and it will not happen. My take is that AI systems resemble cats more than they do babies, and that they will never develop real personhood. This does not exclude the possibility that they can act as if they possess personhood.

Policy consequences 

To paraphrase Mark Twain, the rumors of the dangers of AI are greatly exaggerated. We should stop the moral panic. This does not mean we are without problems. Every technology developed by humans can be used for good or bad; the technology itself is neutral, and what matters is why and how someone uses it. But the ultimate danger might be that although AI has no personhood, it can mimic personal agency by uttering speech-acts. As a result, the future may be one in which people live in a hybrid space of conversations with people and conversations with AI systems.

In that context, Burgelman is right to call for more regulation, as he is right to call for rules that oblige AI systems to make themselves known as AI. Here a comparison can be made with the way we deal with legal representatives: we will always need a (human) spokesperson who represents the legal entity. I am less convinced by Burgelman's second proposal to establish a global data and AI agency, analogous to the International Atomic Energy Agency. Such a single, centralized global institute risks joining the many multilateral institutes of the UN family that work in an old-fashioned bureaucratic model based on resolutions and countless amendments. The alternative is Multilateralism 2.0 as applied to internet governance through the Internet Corporation for Assigned Names and Numbers (ICANN).[5]

But even more importantly, we need to better understand how the presence of AI-generated speech-acts changes the entanglement of social structures and personal agency. The development of generative AI systems does not fundamentally change the already-existing ability of people to work with extended brains. But it is changing the conversational space in which people are embedded, such that we see a hybrid space in which speech-acts generated by people and machines interact. Time will be needed to understand how this change can be put to the service of improving our understanding of, and eventually solving, the climate crisis. Maybe generative AI will come just in time.


[1] For the story of the creation of the transistor, see Skalli and Van Langenhove (2023). De erfenis van 1947 (The Heritage of 1947). Brussels: ASP Editions.

[2] The idea of thinking of society as a history-long and species-wide web of conversations was first advanced by the philosopher of science Rom Harré (Harré, 19).

[3] The notion of speech-acts stems from John Austin and was further developed by John Searle.

[4] The link between the power of speech-acts, the rights and duties that a person has, and the conversational storyline is the basic premise of Positioning Theory as developed by Rom Harré and Van Langenhove (1998).

[5] See the Multilateralism 2.0 text.
