EVALITA 2023: Overview of the 8th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian / Lai, Mirko; Menini, Stefano; Polignano, Marco; Russo, Valentina; Sprugnoli, Rachele; Venturi, Giulia. - ELECTRONIC. - (2023), pp. 3-9. (Paper presented at the Eighth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian (EVALITA 2023), held in Parma, 7-8 September 2023).
EVALITA 2023: Overview of the 8th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian
Rachele Sprugnoli
2023-01-01
Abstract
EVALITA provides a shared framework for evaluating and comparing Natural Language Processing (NLP) and speech systems across various tasks proposed and organized by the Italian research community. These tasks represent scientific challenges and allow methods, resources, and systems to be tested on shared benchmarks related to open linguistic issues and real-world applications, including multilingual and/or multi-modal perspectives. The EVALITA 2023 edition consisted of 13 different tasks grouped into four research areas: Affect, Authorship Analysis, Computational Ethics, and New Challenges in Long-standing Tasks. Forty-two groups from 12 different countries participated, indicating increasing international interest, partly due to the proposal of multilingual tasks. The final workshop showcases the results obtained and highlights a new trend: the growing use of deep learning techniques based on Large Language Models. Overall, EVALITA serves as a valuable platform for Italian and international researchers to explore NLP-related challenges, develop solutions, and foster discussions within the community.