May 14, 2025

Research team presents an AI agent article at a workshop held in conjunction with NAACL 2025, a leading conference on natural language processing


A research team led by a Specially Appointed Associate Professor from Rikkyo University has presented an article related to artificial intelligence (AI) agents at a workshop held in conjunction with the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL 2025). The presentation took place on May 3, 2025, in Albuquerque, New Mexico.

The team, led by Shin-nosuke Ishikawa from the Graduate School of Artificial Intelligence and Science, presented its paper, which uses AI agents to explore emotional expressions in large language models (LLMs). The paper was accepted at the workshop, the 5th International Conference on Natural Language Processing for Digital Humanities (NLP4DH).

NAACL, one of the most authoritative conferences in the field of natural language processing, is regularly attended by researchers from around the world.

Research overview

Thanks to the development of LLMs, AI programs designed to recognize and generate text based on the data they were trained on, humans are increasingly delegating tasks previously handled only by people to AI. In certain areas, AI's capabilities surpass those of humans. However, AI that can feel, think, and act independently has not yet been developed. LLMs lack personalities and emotions, and thus have no sense of purpose, which limits their ability to behave like humans.
One approach that might address LLMs' inability to express emotions is the use of AI agents: autonomous systems or programs that assign a specific goal to an LLM, have it interpret its environment, and repeatedly select actions toward that pre-set goal. Although an LLM has no innate goals, it can role-play a state in which it has a purpose, broadening the range of tasks it can handle.
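The goal-assignment idea above can be illustrated with a minimal sketch. This is not the authors' code; the function name, prompt wording, and fields are assumptions chosen for illustration. The idea is simply that an agent framework hands the LLM a persona, a goal, and an emotional state via its prompt, so the model can role-play a purposeful, emotional character.

```python
# Illustrative sketch (not the paper's implementation): composing a
# role-play prompt that assigns a persona, a goal, and an emotional
# state to an LLM. All names and wording here are assumptions.

def build_roleplay_prompt(persona: str, goal: str, emotion: str) -> str:
    """Return a system prompt asking the LLM to act in character."""
    return (
        f"You are {persona}. Your goal is to {goal}. "
        f"Respond as if you are feeling {emotion}, and let that emotion "
        f"color your word choice and tone."
    )

prompt = build_roleplay_prompt(
    persona="a museum tour guide",
    goal="help a visitor find an exhibit",
    emotion="joy",
)
print(prompt)
```

In an actual agent loop, a prompt like this would be sent to an LLM repeatedly as the agent observes its environment and selects the next action.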

When role-play is practiced by simulating conversations or scenarios in which AI agents adopt specific roles or personas, the relationship between LLMs and AI agents is analogous to that between the brain and the personality. Just as a personality appears to have goals even though the brain that underlies it (its backend) does not, AI agents can be seen as having goals even though the LLMs beneath them have none. (See the illustration)
In this study, conducted under the concept of having AI agents assume certain roles, the team explored the ability of LLMs to express emotions in their outputs. The evaluation focused on whether the emotional states inferred from the LLMs' responses, after the AI agents were prompted to behave as if they had specific emotions, were consistent with the intended emotional states. The results demonstrated that LLMs have a sufficient capacity to express emotions within role-playing scenarios. As LLMs become capable of more human-like behavior through emotional expression, it should become possible to develop more user-friendly systems.
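The evaluation described above can be sketched as a simple consistency check: compare the emotion the agent was asked to express with the emotion inferred from each response, and measure how often they agree. The keyword-based classifier below is a stand-in assumption; the paper's actual inference method may differ.

```python
# Illustrative sketch (not the paper's evaluation code): measuring
# agreement between intended and inferred emotional states.
# The keyword classifier is a toy stand-in for a real emotion model.

EMOTION_KEYWORDS = {
    "joy": {"wonderful", "delighted", "great"},
    "sadness": {"unfortunately", "sorry", "miss"},
}

def infer_emotion(response: str) -> str:
    """Guess the expressed emotion by counting keyword hits."""
    words = set(response.lower().split())
    scores = {emo: len(words & kws) for emo, kws in EMOTION_KEYWORDS.items()}
    return max(scores, key=scores.get)

def consistency_rate(pairs) -> float:
    """Fraction of (intended, response) pairs whose inferred emotion matches."""
    hits = sum(1 for intended, resp in pairs if infer_emotion(resp) == intended)
    return hits / len(pairs)

samples = [
    ("joy", "What a wonderful day, I am delighted to help!"),
    ("sadness", "Unfortunately the exhibit is closed, sorry about that."),
]
print(consistency_rate(samples))  # 1.0 on this toy data
```

A high consistency rate would indicate, as the study found, that the model can reliably express the emotion it was prompted to adopt.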

It is expected that future research on AI agent applications will extend beyond emotions to include capabilities such as possessing a form of free will.

Article information

Shin-nosuke Ishikawa and Atsushi Yoshino. 2025. "AI with Emotions: Exploring Emotional Expressions in Large Language Models." In Proceedings of the 5th International Conference on Natural Language Processing for Digital Humanities, pages 614–627, Albuquerque, USA. Association for Computational Linguistics.
