Towards explainable data-to-text generation

Dal Palù A.; Dovier A.; Formisano A.
2023-01-01

Abstract

In recent years there has been a renewed burst of interest in systems able to textually summarize data, producing natural language text that describes an input data series. Many of the recently proposed approaches to the data-to-text task are based on Machine Learning (ML) and ultimately rely on Deep Learning (DL) techniques. This technological choice often prevents such systems from offering explainability properties. In this paper we outline our ongoing research and present a framework that is ML/DL-free and is conceived to comply with xAI requirements. In particular, we design ASP/Python programs that enable explicit control of the abstraction process, of the accuracy and relevance of descriptions, and of the amount of synthesis. We provide a critical analysis of the xAI features that should be implemented and a working proof of concept that addresses crucial aspects of data abstraction. In particular, we discuss: how to model and report the abstraction accuracy of a concept w.r.t. the data; how to identify what to say at a controlled synthesis level, i.e., the key descriptive elements to be addressed in the data; and how to represent abstracted information by means of visual annotations on charts. The main advantages of such an approach are trustworthy and reliable descriptions, a transparent methodology, logically provable output, and a measured accuracy that can control the natural-language modulation of descriptions.
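As a minimal illustration of the abstraction-accuracy idea mentioned in the abstract (this is not the authors' ASP/Python program; the function names increasing_accuracy and hedge, and the threshold values, are hypothetical), the following Python sketch scores how well the concept "increasing" fits a numeric series and uses that measured accuracy to modulate the wording of the generated description:

# Illustrative sketch only: measure the accuracy of the abstract concept
# "increasing" w.r.t. a data series and let the score drive the phrasing.

def increasing_accuracy(series):
    """Fraction of consecutive steps in the series that strictly increase."""
    steps = list(zip(series, series[1:]))
    if not steps:
        return 0.0
    rising = sum(1 for prev, cur in steps if cur > prev)
    return rising / len(steps)

def hedge(accuracy):
    """Map a measured accuracy onto a (hypothetical) linguistic hedge."""
    if accuracy >= 0.95:
        return "steadily increases"
    if accuracy >= 0.75:
        return "generally increases"
    if accuracy >= 0.5:
        return "tends to increase"
    return "does not clearly increase"

if __name__ == "__main__":
    data = [3.1, 3.4, 3.3, 3.9, 4.2, 4.8, 4.7, 5.1]
    acc = increasing_accuracy(data)
    # e.g. "The series tends to increase (accuracy 0.71)."
    print(f"The series {hedge(acc)} (accuracy {acc:.2f}).")

In the paper's setting this kind of accuracy measure is what allows the output to remain logically traceable to the data while still being expressed in modulated natural language.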
2023
Towards explainable data-to-text generation / Dal Palù, A.; Dovier, A.; Formisano, A. - 3428:(2023). (Paper presented at the 38th Italian Conference on Computational Logic, CILC 2023, held in Italy in 2023).


Use this identifier to cite or link to this document: https://hdl.handle.net/11381/3008357