Researchers Lemei and Peng to share their AI competence
Writing for Springer Nature: Tutorials to become a textbook on multimodal foundation models
Lemei Zhang and Peng Liu, both prominent researchers at the Department of Computer Science at NTNU, have been invited to write for the German publisher Springer. The ambition is a state-of-the-art textbook on large language models (LLMs) — or rather on the "Next gen of personalization with multimodal foundation models", as the working title reads.

The book will be published by Springer Nature, one of the world's largest academic publishers, serving the global research community.
While generative LLMs are architectures built from scratch and trained for specific purposes, foundation models (FMs) are pretrained on large, high-quality datasets and require less fine-tuning. In addition, FMs are multimodal.
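The pattern described above — keep a large pretrained encoder frozen and fine-tune only a small task-specific part — can be illustrated with a toy sketch. This is not material from the book: the "encoder" here is a hypothetical stand-in (a fixed hash embedding) for a real foundation model's learned representations, and only a tiny logistic head is trained on a handful of examples.

```python
import hashlib
import math

def pretrained_features(text, dim=32):
    # Stand-in for a frozen foundation-model encoder: a fixed,
    # deterministic bag-of-words hash embedding. A real FM would
    # supply learned (possibly multimodal) representations instead.
    vec = [0.0] * dim
    for tok in text.lower().split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    return vec

def train_head(examples, dim=32, epochs=300, lr=0.5):
    # "Fine-tune" only a tiny logistic-regression head on a few
    # labeled examples; the encoder above stays frozen.
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in examples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, text):
    x = pretrained_features(text)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy task: classify sentences as sports (1) or cooking (0).
DATA = [
    ("the match was great", 1),
    ("a thrilling football game", 1),
    ("bake the bread slowly", 0),
    ("add salt to the soup", 0),
]
w, b = train_head([(pretrained_features(t), y) for t, y in DATA])
```

The point of the sketch is the division of labor: all the representational work sits in the pretrained encoder, so adaptation to a new task needs only a small trainable component and little data.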
- Peng and I and our three co-authors are very pleased to be given this assignment by such a respected publisher. Springer approached us after we gave tutorials on the topic at the 27th European Conference on Artificial Intelligence (ECAI'24), held in Santiago de Compostela in Spain this autumn, says postdoctoral fellow Lemei.
She and Peng have worked intensively with LLMs in recent years and have played a pivotal role in developing the NorLLM portfolio of the Norwegian Research Center for AI Innovation (NorwAI).
The co-authors are Prof. Jon Atle Gulla of NTNU, head of NorwAI, who has supervised both Lemei and Peng throughout their academic careers and who will write the book's concluding chapter, together with Yong Zheng, Associate Professor at Illinois Institute of Technology, and Yashar Deldjoo, Assistant Professor at Polytechnic University of Bari, who will each write one chapter.
The starting point for the project was the tutorial in Spain in 2024. Lemei and Peng will write one chapter each and one together.
- Kickoff is in February, and the contract says we must be finished by the end of June. We are now planning the writing process with the three other authors. There is a lot of work ahead, says Peng Liu.
Their primary goal is for the book to lay out the basic concepts of using FMs for personalization tasks. The target groups are master's and other graduate students, readers with some programming skills, and of course other researchers and industry professionals working on foundation models and personalization.
As researchers, Lemei and Peng have worked with different datasets and architectures and tested which are useful and which have advantages over others. When NorwAI launched its six Norwegian NorLLM models in May last year, the couple trained the models and gained valuable experience, including insight into which data influence model performance the most. This experience forms much of the knowledge base for the upcoming book. And by the way, to date, more than 10 000 users have downloaded these models.
- We also want to use the book for educational purposes and to create material for upcoming courses. As we write, we will share some code and possibly some exercises, so readers will be able to learn on their own, say the authors.
The book is divided into six chapters:
- The Evolution of Personalization in the Age of AI (Yong)
- Foundation Models: The Basics (Lemei)
- Adaptive Foundation Models for Personalization (Lemei, Peng)
- Benchmarking and Evaluation (Peng)
- Ethics, Privacy, and Security (Yashar)
- Prospects and Challenges (Jon Atle)
Language models are now at the peak of technological innovation, and new models are launched continuously. Both Lemei and Peng worked on the Mimir project, in which researchers from the National Library, the Language Technology Group (LTG) at the University of Oslo, NorwAI, and Sigma2 created new training datasets based on the Library's collection and data harvested from the Internet. New evaluation data were developed for the Norwegian language and Norwegian contexts, a total of 17 large language models were trained and evaluated using the same methodology, and the project was summarized and documented by the National Library, the University of Oslo and NorwAI.
- Do we have an established methodology for evaluating foundation models, Peng?
- There are some existing evaluation frameworks, such as EleutherAI's lm-evaluation-harness, and generic benchmarks for assessing LLMs on text+image multimodal tasks. However, given the potential societal harms and risks posed by LLMs, holistic assessment frameworks are needed to capture their multifaceted implications—balancing rigorous measurement with social considerations in real-world contexts.
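Frameworks such as lm-evaluation-harness typically score multiple-choice benchmarks by asking the model to score each candidate answer and then comparing the highest-scoring one to the gold label. The sketch below shows that evaluation loop in miniature; `toy_score` is a hypothetical stand-in for a real model's log-likelihood, and the two items are invented examples.

```python
def toy_score(context, candidate):
    # Hypothetical scorer standing in for a model log-likelihood:
    # counts how many candidate words also appear in the context.
    ctx = set(context.lower().split())
    return sum(1 for w in candidate.lower().split() if w in ctx)

def evaluate(items, score_fn):
    # Harness-style multiple-choice loop: score every candidate,
    # predict the argmax, and report accuracy against gold labels.
    correct = 0
    for context, candidates, gold in items:
        scores = [score_fn(context, c) for c in candidates]
        pred = scores.index(max(scores))
        correct += (pred == gold)
    return correct / len(items)

# Two invented multiple-choice items (context, candidates, gold index).
ITEMS = [
    ("the capital of norway is", ["oslo is the capital", "bread is tasty"], 0),
    ("water freezes at", ["zero degrees water freezes", "the moon"], 0),
]
```

Swapping `toy_score` for a real model's scoring function is all that separates this loop from the standard harness setup; the holistic, society-aware assessment Peng refers to goes well beyond such accuracy numbers.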
A date for publishing is not set yet. The writers hope that the book will be available the coming autumn.
By Rolf D. Svendsen
2025-02-27