AI must prove itself in 2025

A critical look at the prospects of AI in Society 

The year 2025 may be the year we become more realistic about artificial intelligence. The big "wow effect" from November 2022 is history. Instead, attention will shift to what results the technology can deliver in the near future.

Portrait: Eric Monteiro
- It is crucial to expand what is understood as AI research, says Eric Monteiro about the challenges facing the new technologies.

Professor Eric Monteiro, who has joined NorwAI's work package "AI in Society", says that operationalizing AI is a central challenge in the work ahead. For NorwAI, an important part of this operationalization is getting AI methods and models out of the lab and into practical work routines in businesses.

- We can assume that "Society" is in a way everything and nothing. It is abstract, all-encompassing and airy. When we are to work further with "AI in Society", it is crucial to expand what is understood as AI research, says Eric Monteiro.

A broader concept

Established fields within computer science have partly consisted of developing technology and hoping that someone will use it. This underlines the need to bring learning, both iterative and interactive forms of learning, into the AI research field:

- We have developed AI tools, and for many that is where it ends. Now we must include implementation processes so that our research is put into use; without that, it cannot have social consequences. The concept of AI has become broader in recent years; it includes not only the technology, but also the ecological processes around it and its impact on society.

Professor Eric Monteiro says there is an interesting debate going on about whether AI will really have as great an effect on society as the earliest predictions at AI's breakthrough in 2022 indicated. The British magazine "The Economist" asks whether the AI bubble will burst in 2025 or whether it will start to deliver, writing in a commentary article ahead of the new year:

"Today’s mania for artificial intelligence (AI) began with the launch of ChatGPT at the end of November 2022. OpenAI’s chatbot attracted 100m users within weeks, faster than any product in history. Investors also piled in. Spending on AI data centers between 2024 and 2027 is expected to exceed $1.4trn; the market value of Nvidia, the leading maker of AI chips, has increased eight-fold, to more than $3trn.

And yet most companies are still not sure what the technology can or cannot do, or how best to use it. Across the economy, only 5% of American businesses say they are using AI in their products and services. Few AI startups are turning a profit. And the energy and data constraints on AI model-making are becoming steadily more painful."

There is still a long way to go before convincing results appear in practical use in businesses, both qualitatively and quantitatively, it is claimed in the debate that has arisen around AI. For example, it is rhetorically pointed out that the enormous investments are not neutral but are driven by actors who have an interest in them.

A bubble? 

- Some say that what looks like a bubble probably is a bubble.

- It is difficult to make a sober assessment of what is happening in and around AI right now. The use of resources around AI is enormous, and there are clear signs of hyped expectations. Those of us who have been around for a while know that the field is colored by a mismatch: computer science moves in waves, cycling between inflated expectations and results that are long in coming, says Eric Monteiro, who remembers a similar upheaval from the mid-80s into the 90s.

- It was like a hot summer with sensational new results in computer science, followed by a long winter, to use that image. It may well turn out that we are facing something similar now. It takes time to create results, says Eric Monteiro.

- How do we approach this?

- I think we have to be explicit about our expectations. Not everything in our field has been followed by tangible results, especially not in the short term (while we simultaneously tend to underestimate long-term, cumulative effects). We have to consider whether our investments are in proportion to the results, have a sober conversation about the long lines and call for sobriety, understand that we are in a phase of gaining insight into what we are facing, and by no means let stock prices be our witness to the truth, says Eric Monteiro.

Professional profile

He has been a professor at NTNU since 1992, with a professional profile in the digitalization of important national sectors such as health, energy, and oil and gas. The impact of AI, within the framework of NorwAI, is now his next project:

- We must push to follow up on, for example, the language models we have developed. We must know what is happening out there with everyone who has downloaded our models. We need to bring more interactive thinking into the continuation of our research, he says.

- I also think we need to see NorwAI's partnership as a cluster community where companies and organizations play different, complementary roles. We need to look for both short-term and long-term effects; the latter can be gradual processes. We also need to develop how we measure the effect of the services that are developed. In this game, you have to be patient, says Eric Monteiro.

In the work package "AI in Society" he will collaborate with the University of Oslo, which has already contributed to the field. In the coming period, NorwAI will add capacity to strengthen the work package with an additional research fellow alongside Eric Monteiro himself.


2024-12-17

By Rolf D. Svendsen