Secure, Trusted and Privacy-Aware Interactions in Large-Scale Multiagent Systems / Bergenti, Federico. - Print. - 6:(2005), pp. 144-150. (Paper presented at the WOA 2005 conference "Dagli Oggetti agli Agenti").
Secure, Trusted and Privacy-Aware Interactions in Large-Scale Multiagent Systems
BERGENTI, Federico
2005-01-01
Abstract
One of the inherent problems of large-scale, open multiagent systems is the lack of mechanisms and tools to guarantee legally valid interactions. Agents are supposed to perform crucial tasks autonomously and on behalf of humans; however, (i) they are not legal persons in their own right, and (ii) a full legal corpus for the virtual world and its inhabitants is yet to come. Therefore, the party ultimately responsible for the actions of an agent is its developer. In this paper we present an innovative model of interaction between agents that increases the level of security and trust in privacy-aware, interaction-intensive multiagent systems. In particular, after a brief introduction, we focus in Section II on some common problems related to trust and security in real-world, liable interactions. In Section III, we address these problems and outline some abstractions that we use to guarantee a sound level of security and privacy-awareness in interactions with third-party (possibly unknown) agents, whether human or not. Then, in Section IV we describe the design of an API that we implemented to provide developers with a general-purpose, reusable means to realize secure, trusted and privacy-aware multiagent systems. To conclude, in Section V we briefly discuss our model and outline directions of future development.