Commentaries Guest User

Toward a polycentric or distributed approach to artificial intelligence & science

Even as enthusiasm grows over the potential of artificial intelligence (AI), concerns have arisen in equal measure about a possible domination of the field by Big Tech. Such an outcome would replicate many of the mistakes of preceding decades, when a handful of companies accumulated unprecedented market power and often acted as de facto regulators in the global digital ecosystem.

Guest User

Defining PHOSITA: Access to AI tools and patentability standards

To receive patent protection for an invention, inventors are required to describe it in such “full, clear, concise, and exact terms” that “one skilled in the art” can make and use the claimed invention. Further, inventions are not patentable “if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious … to a person having ordinary skill in the art to which the claimed invention pertains.” Both of these standards involve evaluations in light of a fictitious person engaged in the art – referred to herein as a person having ordinary skill in the art (PHOSITA). They also inherently require attention to the definition of the art: the specific technology field to which the invention pertains.

Guest User

Does history rhyme? Supercomputing, AI, and the US government’s support for a research data infrastructure

The author Mark Twain supposedly said that “history does not repeat itself but it rhymes.” And with respect to support for AI research, a number of recent actions by the US government appear to rhyme with similar actions it took in the 1980s, when it recommended (and ultimately implemented) significant support for supercomputing-based research.

Conversation George Thomas

Key takeaways from AI and Academic Research panel

To foster the discussion on AI and academic research, Frontiers Policy Labs and Frontiers’ publishing development department organized a live multidisciplinary panel session in October. The event featured distinguished academics from various fields of research, who are recognized for their knowledge and implementation of AI systems in their respective areas. Mathieu Denis, head of the Centre for Science Futures at the International Science Council, moderated the discussion.

George Thomas

Does the hype of Generative AI need top-down regulation, or will it implode?

The large language model (LLM) tools we see today essentially perform a form of advanced autocompletion based on massive input that is itself potentially of questionable validity. The infamous ‘hallucinations’ they produce are at least in part a result of poor inputs, as well as a lack of validated conceptual models to constrain the LLM’s algorithms and output. Attempts to regulate these tools, and the concomitant hype, may only play into the commercial interests of their creators.

The ‘blind’ use of computational models to analyze anything (data or information), without the proper underpinning of conceptual modelling (data and algorithms), is dangerous and leads to all kinds of meaningless extrapolations, including the famous ‘hallucinations’ of LLM outputs.

Commentary Guest User

Strategic autonomy in the digital world

Over 65% of the European cloud market is in the hands of US companies. There are no significant social media platforms in European hands. Although Europe was a global leader in the 1990s, its share of semiconductor production has fallen to just 10% of the global market. Risk-capital investment is US-dominated. These are just a few indications of how the EU is losing its strategic digital autonomy.
