Key takeaways from the AI and Academic Research panel

In May, Frontiers Policy Labs published a commentary by its editor-in-chief, Jean-Claude Burgelman, titled “Getting a grip on data and Artificial Intelligence.” In the piece, he proposed the creation of a global regulatory body, an “International Data and AI Agency,” to regulate the development of AI systems and their underlying data. The rationale behind such an agency mirrors that of the International Atomic Energy Agency: to address fears about a powerful technology, in this case AI rather than atomic energy, and its potential uses and abuses.

Burgelman is not alone in his calls for regulation, and the topic remains high on the policy and public agenda. Earlier in the year, a significant group of scientists and businesspeople called for a pause in the training of powerful AI systems, arguing that such technologies “should be developed only once we are confident that their effects will be positive, and their risks will be manageable.”

During this time, data, and the AI tools used to interpret it, have become fundamental building blocks of modern-day society, making them a key asset for any country or human activity, much like energy production and use. The same observation applies to science. Today’s science is data-driven, and the potential benefits that AI offers to science are immense. It is therefore understandable that the scientific community is equally concerned about regulating AI.

To foster the discussion, Frontiers Policy Labs and Frontiers’ publishing development department organized a live multidisciplinary panel session in October. The event featured distinguished academics from various fields of research, recognized for their expertise in applying AI systems in their respective areas. Mathieu Denis, head of the Centre for Science Futures at the International Science Council, moderated the discussion.

Below is a summary of the expertise showcased during the panel session, including a closer look at how AI is currently influencing diverse fields of research and transforming our approach to science. The panelists’ perspectives also highlight the opportunities and challenges involved in employing and regulating AI to ensure it best serves the scientific community and the work of researchers.

Ruth Morgan

Editorial Board Member, Frontiers Policy Labs

Professor of Crime and Forensic Science and Vice Dean (Interdisciplinarity Entrepreneurship), University College London, UK

“Technologically, AI is opening up many opportunities for broader, more lateral thinking and societally, it's opening up conversations, which take us into a very interesting place as we shape the future.” 

As an interdisciplinary researcher, Professor Morgan offered a unique outlook on the intersections between disciplines and the opportunities they hold for both science and society. Reflecting on the use of AI in forensic science, she noted that the field has long relied on AI for assistance in pattern recognition, such as fingerprint comparison. Over the last several years, the continued development of AI has made it possible to interrogate larger datasets from various sources, further aiding facial recognition or the reconstruction of digital fingerprints. While these developments greatly benefit the field itself, the lessons learned from the way AI has been used in forensic science have broader implications for science as a whole. This is best seen in AI’s ability to remove limitations on the breadth and depth of knowledge that can be assessed: researchers can evaluate larger amounts of data from different disciplines, industries, geographies, and generations in a holistic manner.

Professor Morgan also highlighted the need for regulation that does not stifle innovation, adding that current thinking about regulation largely addresses the intended consequences of good actors, not unintended consequences or bad actors.

Barend Mons

Professor of Biosemantics, Human Genetics Department, Leiden University Medical Center, Netherlands 

“More importantly, what we should do is rather than feeding any AI type models with text which is highly ambiguous, is feed AI algorithms with what we call ‘fully AI ready, fair data.’ That is machine readable data knowledge graphs with very strong provenance, so you know exactly where every triple in a graph comes from.” 

If ‘intelligence’ is defined as the ability to see problems, work through them, and find a solution, then artificial intelligence cannot be considered intelligent. Biosemantics professor Barend Mons explained that AI’s so-called ‘intelligence’ is in fact advanced pattern recognition and statistics: AI systems achieve their end goal because they simply follow the ‘rules’ laid out by carefully designed algorithms. The clearer the rules and the higher the quality of the data fed to AI systems, the better the outputs they will produce. The same logic can be applied to the regulation of AI. Rather than trying to limit the use of AI, it would be far more beneficial to feed AI systems AI-ready data with strong provenance and to provide clear conceptual models that give context to how pieces of information relate. Improved provenance will not only improve the results AI systems produce; it will also benefit researchers by giving them more transparency into where data came from, the quality of that data, and how it was used with certain models, as well as the ability to replicate results. Even with these improvements, Professor Mons emphasized the importance of human review.
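To make the idea of ‘fully AI-ready, FAIR data’ concrete, here is a minimal sketch in plain Python, not any particular knowledge-graph library (in practice this role is played by mechanisms such as RDF named graphs or the W3C PROV ontology). All names and example data below are illustrative assumptions, not drawn from the panel. Each statement in the toy graph is a triple that carries its own provenance, so a downstream AI pipeline, or a human reviewer, always knows where it came from:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenancedTriple:
    """A subject-predicate-object statement plus its provenance."""
    subject: str    # e.g. a gene
    predicate: str  # the relationship being asserted
    obj: str        # e.g. a disease
    source: str     # where the assertion comes from (DOI, dataset ID, ...)
    method: str     # how it was derived (curated, text-mined, inferred, ...)

# A tiny, purely illustrative knowledge graph: every triple is traceable.
graph = [
    ProvenancedTriple("GeneX", "associated_with", "DiseaseY",
                      source="doi:10.0000/example-study", method="curated"),
    ProvenancedTriple("DrugZ", "inhibits", "GeneX",
                      source="dataset:example-assay-42", method="text-mined"),
]

# Because provenance travels with the data, a pipeline (or a human
# reviewer) can filter on it, e.g. keep only manually curated assertions.
for t in (t for t in graph if t.method == "curated"):
    print(f"{t.subject} {t.predicate} {t.obj}  [from {t.source}]")
```

The design point is the one Professor Mons makes: the source travels with every statement, so both the quality of an AI system’s inputs and the traceability of its outputs improve together.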

Chaomei Chen

Field Chief Editor of Frontiers in Research Metrics and Analytics

Professor of Information Science in the College of Computing and Informatics at Drexel University, USA 

"AI, or similar powerful tools, are going to make transformative changes. Not only do we want to think about to what extent the human researchers should stay in the loop, the question is [also] how the current scientific literature or scholarly communication may change altogether fundamentally.” 

AI tools are used in almost every part of the scientific research and communication process, from generating abstracts and hypotheses to analyzing data and writing articles, noted Professor Chen. On one hand, this can free up time and resources for researchers to spend on other, less automated work. On the other hand, as AI evolves it can further blur the lines, making it difficult to distinguish what is produced by a human from what is produced by a machine. This is a key reason regulation is needed: to provide transparency about how AI was used and what it was used to create, and to ensure that others can replicate results. While there is still much work to be done, this meeting of human and machine does present opportunities to strengthen our scientific base. With a focus on the quality, value, and use of information, Professor Chen explained how AI can bridge different areas of science, allowing for more knowledge exchange and the kind of interdisciplinary breakthroughs seen in many Nobel Prize-winning discoveries. With a stronger shared knowledge base and the automation of writing-related tasks, the question remains how the communication of research and findings might change. Could we see the length and format of research articles change?

Izuru Takewaki

Field Chief Editor of Frontiers in Built Environment

Professor of Structural Engineering, Kyoto Arts and Crafts University, Japan 

“The metaverse is a combination of virtual reality and augmented reality. So, we can experience a virtual space using the metaverse and to realize original and creative actions. The combination of the metaverse with the real world is not easy, but we have to do that in the near future.” 

Artificial intelligence has already shown its potential in the field of structural engineering. It has made it possible to simulate the artistic, structural, and environmental aspects of a building before the construction process begins, allowing a more holistic exploration of different decisions in a time- and cost-efficient manner. The long-term impact of AI is promising, and the benefits it has brought in the relatively short period it has been in use make Professor Izuru Takewaki optimistic about what is to come. In the future, AI could be used alongside virtual reality (VR) and augmented reality (AR) to accelerate the metaverse and create immersive 3D environments. As with any transformation, it will take time to adapt and change mindsets, which is why the long-term use of AI needs to be considered alongside the short-term decisions being made today, in order to lay the right foundation and be better prepared.

Nova Ahmed

Editorial Board Member, Frontiers Policy Labs

Professor of Computer Science, North South University, Bangladesh 

“It's on us, how much we are training the data, how much we should include it, how we should make sure there is enough representation, or, if there's not enough representation, whether or not to use that data for making a critical decision.”

Professor Ahmed underscored the priority that should be given to recognizing and removing bias in AI systems and the data they rely on. She cautioned that not all benefits of AI may be as they appear. For instance, while AI can help non-native English speakers improve their written papers and increase their chances of publication, this does not solve the underlying problem. Rather, it upholds the traditional mindset and bias that English is the language of science and that researchers must have native-level abilities to be successful. There is also shared concern that the data used to train AI does not accurately represent the relevant communities. Professor Ahmed referred to early generative models that perpetuated negative stereotypes and, in some cases, harassment. To counteract this, she stressed the importance of human inspection in accurately representing populations, ensuring the participation of marginalized groups, and reviewing outputs produced by AI.

Leslie Paul Thiele

Specialty Chief Editor, Frontiers in Political Science

Professor of Political Theory, University of Florida, USA

“A lot of what's in scientific papers isn't just the reporting of the findings, but it has to do with the implications, the discussion, and the application. And I think there's a danger in letting AI write up our reports for us in that regard.”  

In acknowledging the many capabilities and benefits AI has brought to his field of political science, Professor Thiele also addressed a potential unintended consequence: deskilling. The widespread use of AI could lead to a significant reduction in skill levels within academia. For example, the lower cost of AI may reduce the use of research assistants, who traditionally develop their skills through apprenticeship. Additionally, as AI is increasingly employed to generate hypotheses and draft scientific articles, it could further erode cognitive skills, much as the widespread adoption of GPS eroded the average person's wayfinding skills. The scientific community's overreliance on such technology may not only result in incorrect or misleading outcomes, as many have pointed out. It may also remove the explorative or reflective component that comes with writing one's ideas down, a process often necessary to understand the importance and ramifications of one's work. In response, Professor Thiele is among those already incorporating AI into undergraduate and graduate courses so that students learn to use AI as a tool to assist with their work, rather than a tool to do the work for them. Informed use, with appropriate human oversight and participation, can counter the danger of relying too heavily on AI and depleting skillsets to the point that humans resemble AI in ways that are not productive.

The session, which offers a more in-depth conversation on the themes captured above, is available to watch in full below.
