“Can you take AI out of the wild and should you?”

Thomas Hartung

Johns Hopkins University

Field Chief Editor, Frontiers in Artificial Intelligence


Welcome to the Wild West of AI and its ongoing gold rush. The AI industry is expected to grow more than 13-fold over the next seven years, a compound annual growth rate (CAGR) of 38%[1]. Is it time to regulate, or would regulation obstruct these exciting developments? And what are the chances of doing so? In his commentary “Getting a grip on data and Artificial Intelligence”, Jean-Claude Burgelman views data and AI as strategic societal assets and calls for a global regulatory approach, akin to an International Data and AI Agency. Burgelman advocates a use-case-based regulatory approach built on the FAIR principles[2] and an obligation for AI systems to reveal their AI nature, two very reasonable requests.

When invited to comment on this piece, I started with an experiment: I asked the latest GPT-4, released two months ago, to summarize, praise, and criticize Burgelman’s commentary, using the text as the prompt. The results are quite impressive (supplementary file) and demonstrate once again the capabilities of AI.

Like the AI experts Gary Marcus and Anka Reuel, writing in The Economist on 18 April 2023[3], Burgelman cites the International Atomic Energy Agency (IAEA) as a successful precedent. Roger Koppl, a professor of finance at the Whitman School of Management of Syracuse University and the author of the book Expert Failure, criticized this idea[4]: “Unlike nuclear technology, generative AI changes rapidly. It is used by many dispersed actors for an unspecifiable array of purposes. Many of those purposes will be formulated in the future and cannot be imagined now. Who, then, are to be the signatories, and what would an inspection regime look like? The answer, of course, is that the mass of users of generative AI will not and cannot be signatories to any supposed agreement created … Through such an organization, a highly invasive, active, and empowered government could monitor AI use and punish anyone not using it for approved purposes. … it would bring us closer to a bureaucratic hell. Their concrete proposal would create expert power that is easily abused. And they want to use that power to decide for you what is true and what is false. There is a word for governments who try to control your thoughts, and that word is tyranny. Value expertise, but fear expert power.”

However, in a blog post published on May 22[5], OpenAI co-founders Sam Altman, Greg Brockman, and Ilya Sutskever also suggest the International Atomic Energy Agency (IAEA) as a blueprint for regulating “superintelligent” AI. In response to the dilemma of nuclear power and its possibly devastating consequences, Swiss author Friedrich Dürrenmatt wrote in his monumental play The Physicists: “Nothing that has been thought can ever be taken back”. So, can we tame the AI beast? Probably even less than the nuclear one.

Burgelman compares the societal integration of AI and data to these previous technologies and highlights the concept of "technological sovereignty." While the current hype around large language models seems transatlantically dominated, AI has many less visible and even more dominant players, especially China[6],[7]: Chinese institutions have authored 4.5 times as many AI papers as American institutions since 2010; China leads AI journal citations with 28%, compared with 21% for Europe and 17% for the US; and 52% of AI patents originate from China, against only 7% from the US and 4% from Europe. These numbers should be sobering for discussions of a moratorium on AI development as well as of international regulation, and they reveal a Western bias in the debate, including in the commentary. They make taming the AI beast much less likely.

But do we want to tame its anarchical energy? Given the democratizing effect of AI in making data, information, and ultimately knowledge available, AI seems finally to bring sense to the information flood. It allows complex data to be condensed in a way that is, more often than not, reasonable. It is finally reversing T.S. Eliot’s worries: “Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?” (T.S. Eliot, The Rock). AI is finding the wisdom of the crowd, which was about to be lost in information.

Supplementary File

GPT-4 was asked to summarize, praise and criticize the commentary “Getting a grip on data and Artificial Intelligence”, published on May 8, 2023 by Prof. Jean-Claude Burgelman as Editor in Chief in Frontiers Policy Labs. The text was used as the prompt.

SUMMARY

In his commentary, Prof. Jean-Claude Burgelman argues that data and Artificial Intelligence (AI), like ChatGPT, have become strategic assets to modern-day societies. Therefore, they require global regulation and potentially the establishment of an International Data and AI agency.

The emergence of AI systems such as ChatGPT has raised a need for regulation of these systems and the data they rely on. This has resulted in a notable group of scientists and businesspeople calling for a temporary halt to research into these algorithmic technologies. However, Burgelman questions the potential effectiveness of this call, noting past concerns raised about new technologies, like cable and satellite TV, computers, the internet, and the web, which did not halt their development and integration into society.

Burgelman contends that, much like energy, data and AI tools have become the fundamental building blocks of modern societies. Any halt or absence of these tools would significantly impact how societies currently function.

He also provides a geopolitical perspective, noting that we're now in an era of "technological sovereignty," where countries strive to control the key technological assets necessary for society's functioning. This trend is seen in Europe's data policies, which aim to cut down dependency on strategic assets' imports, leading to substantial European investments in renewable energy, data, and AI tech.

The author argues for a global regulatory approach and the creation of an "International Data and AI Agency" (IDAIA), similar to the International Atomic Energy Agency (IAEA) established after World War II to manage the risks of atomic energy. The IDAIA could similarly manage the global use of data and AI, ensuring they're findable, accessible, interoperable, and reusable (FAIR), and obliging AI systems to make themselves known as AI when interacting with humans.

The commentary emphasizes that any regulation should be use-case-based and not intention-based and that self-regulation is insufficient given the high stakes and conflicting interests involved in AI and data use. The global approach proposed would require buy-in from all nations, with the United Nations being a potential host for the proposed IDAIA.

PRAISE

Prof. Jean-Claude Burgelman’s commentary offers several positive and insightful aspects:

1.      Recognizing the Value of Data and AI: The author rightly highlights the importance of data and AI as strategic assets in modern societies. His comparison of data and AI to energy reinforces their integral role in driving societal functioning and progress.

2.      Promoting Global Regulation: He advocates for a global regulatory approach to manage data and AI use, emphasizing the need for interoperability, transparency, and accountability. This perspective is important given the borderless nature of digital technologies and the global implications of their misuse.

3.      Addressing Technological Sovereignty: Burgelman introduces the concept of "technological sovereignty," underlining how countries are striving to control key technological assets necessary for societal functioning. This contributes to a deeper understanding of the geopolitical dynamics shaping the global tech landscape.

4.      Proposing an International Agency: His proposal to establish an International Data and AI Agency draws parallels to the International Atomic Energy Agency, suggesting a potential model to manage the global use and potential risks of data and AI.

5.      Emphasizing FAIR Principles: The emphasis on FAIR (findable, accessible, interoperable, and reusable) principles for data and AI systems echoes best practices in data management, highlighting the importance of making data a strategic and shared asset.

6.      Responsible AI Use: The commentary stresses the need for AI systems to reveal their AI nature when interacting with humans, advocating for more ethical and transparent AI practices.

Overall, the commentary adds valuable perspectives to the ongoing discourse about data, AI, their societal implications, and the need for effective regulation.

CRITICIZE

While Prof. Jean-Claude Burgelman’s commentary provides valuable insight into the importance of data and Artificial Intelligence (AI) as strategic assets for societies, there are several points that could be criticized.

1.      Broad Assumptions: Burgelman assumes that all societies universally value and depend on data and AI in the same way. This overlooks the diversity in technological adoption and dependency across different societies and cultures. Not all countries or regions have the same level of access or reliance on data and AI technologies.

2.      Lack of Evidence or Case Studies: The commentary would have been stronger with empirical evidence or case studies to support the author's claims. For example, when asserting the importance of data and AI as strategic assets or when suggesting the creation of an International Data and AI Agency, it would be beneficial to provide concrete examples or precedents.

3.      Practicality of a Global AI Agency: While the idea of an International Data and AI Agency is interesting, the author does not adequately address the practical challenges this might entail. Differences in national interests, cultures, and policies could significantly hinder the establishment and operation of such an agency. The success of the International Atomic Energy Agency (IAEA) doesn't necessarily imply a similar outcome for an AI-focused agency.

4.      Bias Toward Western Societies: Burgelman’s commentary heavily leans on the perspectives and experiences of Western societies, particularly in discussing historical tech-related fears and the recent call to halt AI research. He largely excludes non-Western perspectives on these issues, which could offer a more balanced and globally representative viewpoint.

5.      The Impact of AI and Data on Jobs: Burgelman dismisses the potential negative impact of AI and data technologies on jobs. While it's true that past technological advances did not lead to a complete loss of jobs, they did lead to substantial job displacement and changes in job nature. With AI and data technologies becoming more sophisticated, the fear of job loss or drastic changes in job nature is not entirely unfounded.

6.      Simplification of Technological Sovereignty: The author simplifies the concept of "technological sovereignty," treating it as a universally agreed-upon concept. However, this concept is complex and can be interpreted differently depending on the political, economic, and social contexts of a country or region. He should have provided a more nuanced discussion on the topic.

7.      Overreliance on FAIR Principles: While the FAIR principles are valuable, they may not be sufficient or universally applicable for all data and AI contexts. Different types of data and AI use-cases may require additional or different principles. The commentary does not acknowledge this complexity.


[1] https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market

[2] https://www.go-fair.org/fair-principles/

[3] https://www.economist.com/by-invitation/2023/04/18/the-world-needs-an-international-agency-for-artificial-intelligence-say-two-ai-experts

[4] https://reason.org/commentary/an-ai-regulatory-agency-would-politicize-artificial-intelligence/

[5] https://openai.com/blog/governance-of-superintelligence

[6] https://www.stateof.ai

[7] https://hai.stanford.edu/ai-index-2022
