Our technological landscape is ever-changing. Interconnected devices interact with other devices as well as people
in an increasingly autonomous fashion. This core idea manifests itself in several emerging areas of technology – from the "Internet of Things"
to "Industry 4.0". It seems all too obvious that these entities cannot be controlled by individuals or even organisations
but rather require sophisticated self-organisation mechanisms to implement various self-* properties without centralised control.
This scientific challenge led to initiatives such as Autonomic Computing or Organic Computing that
proposed important basic architectures, models, and algorithms. Particularly in terms of robustness to failures, these systems show the potential
of outperforming conventional, rigid systems. As the scope of self-organising
systems widens to critical domains that are more open and consist of heterogeneous participants, an essential question accompanies their more widespread adoption: How can we make these systems trustworthy?
More specifically, in 2009 the DFG (German Research Foundation) research unit "Trustworthiness of Organic Computing Systems" (OC-Trust) set out to develop methods to
construct self-organising multi-agent systems
that are deemed trustworthy by their users, by other systems
interacting with them, and by authorities and even organisations that certify and deploy
systems in safety- or mission-critical environments. Positive aspects of self-organisation, such as
increased robustness and other beneficial emergent effects, must not be sacrificed in the process. The common
denominator of the bundled research efforts is the scientific treatment of various
facets of trust in technical systems. Trust manifests itself in the system design, e.g. by
countermeasures against ill-behaving or poorly predictable agents, and helps to reduce the impact of
such entities on the overall system performance. Among technical systems benefiting from
trust management, one particular system class has been selected in the OC-Trust project to serve as a prominent representative: open, heterogeneous, self-organising multi-agent systems.
Systems in this class share several features, each of which requires individual attention.
These diverse challenges have been investigated in the OC-Trust project by five research groups.
In the Special Track on Trustworthy Open Self-Organising Systems, these groups present their results of six years of research in the form of interactive presentations in the application domains of smart grids, multi-user multi-display environments, desktop grid computing, and middlewares.
Moreover, the special track will feature invited talks by Jeremy Pitt, Cristiano Castelfranchi and Rino Falcone, as well as Natasha Dwyer and Stephen Marsh.
We are very happy to share and discuss our outcomes with the community, and look forward to your visit in Augsburg.
At this point, I would like to take the opportunity to thank all present and former members of the involved research groups.
Wolfgang Reif (head of OC-Trust)
The results of six years of research conducted in the course of the OC-Trust project have been summarised in the book "Trustworthy Open Self-Organising Systems". Furthermore, the book includes three invited contributions by selected experts from the domain of trust in multi-agent systems. Jeremy Pitt discusses formal models of several social processes for open distributed systems and, in a sense, lifts the restriction to the social concept of trust otherwise so prominent in this book. Cristiano Castelfranchi and Rino Falcone add various other factors to the discussion on trust in self-organising, socio-technical systems. Natasha Dwyer and Stephen Marsh conclude the book by asking the interesting and relevant question of whether a digital environment empowers users to proceed on their own terms.
The Special Track on Trustworthy Open Self-Organising Systems offers an innovative and varied programme. In the morning, we start with invited talks by Jeremy Pitt, Cristiano Castelfranchi and Rino Falcone, and Natasha Dwyer and Stephen Marsh. Starting at 02:00 pm, we present the outcome of the research unit OC-Trust in the form of interactive presentations.
Please feel free to stop by at our demonstrators and posters during the coffee break in the afternoon.
The special track takes place in room 1005T.
Unfortunately, Cristiano Castelfranchi and Rino Falcone had to cancel their invited talk about "Trust & Self-Organising Socio-Technical Systems", which is why the special track now starts at 10:00 am.
10:00 am | Wolfgang Reif Opening Statement and Presentation of the Research Unit "Trustworthiness of Organic Computing Systems" (OC-Trust) |
10:30 am | Coffee Break |
11:00 am | Invited Talk by Jeremy Pitt: "From Trust and Forgiveness to Social Capital and Justice: Formal Models of Social Processes in Open Distributed Systems" |
11:45 am | Invited Talk by Natasha Dwyer and Stephen Marsh: "Is This Thing Working? A look at trust environments from the person's perspective." |
12:30 pm | Lunch Break |
02:00 pm | Interactive Presentation 1: "Modelling Trust and Trust-Based Decision-Making in Ubiquitous and Self-Adaptive Environments" – Presenters: Stephan Hammer, Sarah Edenhofer, and Alexander Schiendorfer |
02:30 pm | Interactive Presentation 2: "Building Robust Open Distributed Systems" – Presenters: Jan Kantert and Christian Müller-Schloer |
03:00 pm | Interactive Presentation 3: "Middleware Support for Trust-Based Self-Organisation in Open Distributed Environments" – Presenter: Nizar Msadek |
03:30 pm | Coffee Break – *** Please feel free to stop by at our demonstrators and posters. *** |
04:00 pm | Interactive Presentation 4: "Smarter Grid – Self-Organised Robust Power Management" – Presenters: Gerrit Anders and Florian Siefert |
04:30 pm | Closing Statement |
Open systems typically occur in a wide range of applications, from virtual organisations and vehicular networks to cloud/grid computing and reconfigurable manufacturing. All these applications encounter a similar problem: how does a system component reliably complete its own tasks, when successful task completion depends on interaction and interoperation with other, potentially unreliable and conflicting, components? One solution to this problem is trust: depending on a second party requires a willingness to expose oneself to risk, and to the extent that this 'willingness' can be quantified or qualified, it can be used to inform a binary trust decision. Therefore, a formal model of the social relationship underpinning such trust decisions is essential for conditioning bipartite interactions between components in an open system. However, there are a number of issues that follow from this. For example: what is to be done when the outcome of the trust decision is contrary to expectation? Are there positive externalities that can be derived from a successful trust decision? And how can we ensure that outcomes of collective decision-making in such circumstances are, in some sense, 'correct' and/or 'fair'? Our answers to these questions have been found in the formalisation of other social relations, respectively forgiveness, social capital and justice. In this talk, we will present a survey of this development of formal models of social relations, from trust to justice via forgiveness and social capital, all of which collectively approach the fundamental issue of reliable interoperation in open systems.
We present our socio-cognitive theory of what Trust is, its components and dimensions, and apply it to Trust in complex dynamic socio-technical systems and their self-organising emergent results – in particular to ICT-based systems where the “Social Order” is no longer fully “spontaneous” (due to the “invisible hand” impinging on individual and selfish decisions) but will be based on programmed interactions, algorithmic procedures, and big data. So the issue is: How can we build Trust in this complex dynamic system and its outcomes, since it cannot be fully programmable and predictable? Some of our questions will be: Is it necessary that folks “understand” the underlying mechanism they are relying on? Which kind of information about forecasts or “future projections” should be provided, and adjusted? And what about simulation and serious games for learning to understand and predict? There will be algorithms working on the micro-processes and producing the emergent organisation; how effective and reliable are they? There are at least two different levels of Trust in complex systems and their “working”: Trust in the emergent order, and Trust in the micro-layer rules. Are the system's rules and the resulting equilibria fair and equity-inspired, in relation to the interests of the involved groups/subjects? A complex and cognitive model of Trust is needed for this analysis.
There are a great many trust models, not a few of them deployed in various environments, from dating to (other forms of) commerce. Much of the time models are deployed and we are content – after all, trust is a Good Thing, so clearly the trust model embedded in an environment to 'help' people is likewise. But how do we know, who should we ask, and how? More to the point, is it working, why, and what can we do to make it 'better'? In this talk we'll examine some of the ways in which this can be done and what we have done with Device Comfort and trust in the foreground to that end.
In the following interactive presentations, the research groups involved in the OC-Trust project give an overview of selected research results.
Presenters: Stephan Hammer, Sarah Edenhofer, and Alexander Schiendorfer
Ubiquitous environments are able to support users during their daily life by appropriately adapting themselves to changing contexts. However, allowing autonomous system behaviour might lead to a decrease in transparency and controllability. This affects the interaction of system units as well as the users’ experience with such an environment. Thus, the aim is to find an adequate balance between autonomy and trustworthiness to ensure desirable system behaviour. This can be achieved by enhancing the system’s decision-making processes with the concept of trust.
To demonstrate trust-enhanced approaches that were developed in the OC-Trust project, we will present a public display environment, which is able to adapt appropriately to the current context. It tailors the presented content on specific displays to the spectators, taking into account their respective position, interests, viewing history, and more.
To address the users’ experience within this environment, a generic User Trust Model (UTM) is utilised which selects system actions based on their utility concerning the users’ trust. In order to enable the selection of the most suitable content for a specific display, the UTM was combined with a constraint-based approach.
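The idea of selecting system actions by their utility with respect to the users' trust can be illustrated with a minimal sketch. The trust facets, weights, and action names below are our own illustrative assumptions, not the project's actual User Trust Model:

```python
# Hypothetical UTM-style action selection: each candidate system action
# has estimated effects on a few trust facets, and the action with the
# highest weighted trust utility is chosen. Facet names and weights are
# illustrative only.

WEIGHTS = {"transparency": 0.4, "controllability": 0.3, "usefulness": 0.3}

def trust_utility(action_effects):
    """action_effects: {facet: score in [0, 1]} -> weighted utility."""
    return sum(WEIGHTS[f] * action_effects.get(f, 0.0) for f in WEIGHTS)

def select_action(candidates):
    """candidates: {action name: {facet: score}} -> best action."""
    return max(candidates, key=lambda a: trust_utility(candidates[a]))

candidates = {
    "show_detailed_content": {"transparency": 0.9, "controllability": 0.5, "usefulness": 0.8},
    "show_teaser_only":      {"transparency": 0.6, "controllability": 0.9, "usefulness": 0.4},
}
print(select_action(candidates))  # → show_detailed_content
```

A constraint-based variant, as mentioned above, would additionally filter the candidate set before this maximisation step.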
To guarantee robust and fast tracking of the users’ positions, the environment is equipped with cameras that are connected to calculation units which form a distributed grid system. In this grid, units with a lot of data to process can distribute their work among others. Since the units prefer to delegate their work to other, trustworthy entities, so-called trust communities can form emergently.
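The delegation preference described above can be sketched as a simple trust rating that is updated from interaction outcomes; the update rule, threshold, and names here are our own simplified assumptions, not the actual grid implementation:

```python
# Minimal sketch of trust-based work delegation: each unit keeps a
# trust rating per peer (exponential moving average of job outcomes)
# and only delegates to peers above a threshold. Values are illustrative.

class Unit:
    def __init__(self, name, threshold=0.6):
        self.name = name
        self.threshold = threshold
        self.trust = {}  # peer name -> trust rating in [0, 1]

    def observe(self, peer, success, alpha=0.3):
        """Update the trust rating for a peer after a finished job."""
        old = self.trust.get(peer, 0.5)  # neutral initial trust
        self.trust[peer] = (1 - alpha) * old + alpha * (1.0 if success else 0.0)

    def trusted_peers(self):
        """Peers this unit would currently delegate work to."""
        return {p for p, t in self.trust.items() if t >= self.threshold}

u = Unit("cam-1")
for _ in range(5):
    u.observe("cam-2", success=True)   # reliable worker
    u.observe("cam-3", success=False)  # unreliable worker
print(u.trusted_peers())  # → {'cam-2'}
```

When many units converge on the same set of trusted peers, the delegation links themselves outline an emergent trust community.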
Presenters: Jan Kantert and Christian Müller-Schloer
Future technical systems will be increasingly characterised by openness and heterogeneity of participating elements. Within the OC-Trust project we have developed solutions to handle anomalies, disturbances, and malicious behaviour of single agents by making use of trust and reliability metrics. Based on mutual trust and reliability ratings, agents can form self-organised communities, which allow for more effective cooperation within the community while keeping out non-trustworthy agents. The overall goal is to increase the robustness of open distributed systems with low overhead.
We have introduced a novel self-organised multi-agent organisation – the Trust Community – in several variants: as an implicit and as an explicit self-structuring society of semi-autonomous agents. Finally, we have extended this mechanism by the introduction of social norms in order to prevent these systems from getting caught in so-called Negative Emergent Behaviour states.
For evaluation purposes, we use an open Desktop Computing Grid, a distributed rendering farm for movies, and wireless sensor networks.
In the demonstration, we show the benefits of Trust Communities, which result in significantly lower overhead, higher robustness, and more reliable relations among agents compared to other forms of agent societies. We use the three application scenarios and their corresponding utility functions, apply attacks, measure their effect on the utility, and determine the robustness of the different self-organisation control mechanisms. In the sensor network scenario, we present implicit trust communities. In the trusted desktop grid, we show the self-organisation effects of explicit trust communities. Finally, we demonstrate normative control in a distributed rendering scenario.
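The evaluation procedure (apply an attack, measure its effect on the utility, derive a robustness figure) can be sketched as follows; the utility dynamics, attack model, and robustness metric are deliberately simplified stand-ins, not the project's actual benchmark:

```python
# Toy robustness measurement: run a system with and without an injected
# attack and quantify robustness as the share of utility retained.

def run(steps, attack_at=None, attack_strength=0.0):
    """Return the utility trace of a toy system: utility recovers
    towards 1.0 each step; an attack knocks it down once."""
    utility, trace = 1.0, []
    for t in range(steps):
        if t == attack_at:
            utility -= attack_strength       # attack degrades utility
        utility += 0.2 * (1.0 - utility)     # self-organised recovery
        trace.append(utility)
    return trace

baseline = run(20)
attacked = run(20, attack_at=5, attack_strength=0.6)

# Robustness: accumulated utility under attack relative to baseline.
robustness = sum(attacked) / sum(baseline)
print(round(robustness, 3))
```

A stronger self-organisation mechanism would show up in this metric as a faster recovery and hence a ratio closer to 1.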
Presenter: Nizar Msadek
Self-* systems are becoming increasingly complex in their organisational structures, especially when unknown heterogeneous entities may arbitrarily enter and leave the network at any time. Therefore, new ways have to be found to develop and manage them. One way to address this issue is trust. Using appropriate trust mechanisms, entities in the system can determine which other entities to cooperate with. This is very important for improving the robustness of self-* systems, which depends on the cooperation of autonomous entities. The contributions of our work are trustworthy concepts and generic self-* algorithms with the ability to self-configure, self-optimise, and self-heal that work in a distributed manner, without central control, to ensure robustness. In this demonstration, we report some results obtained experimentally with our TEM middleware. We address the feasibility of applying trust to the self-configuration property. This aims, on the one hand, to distribute the load of services equally across nodes, as in a typical load-balancing scenario, and, on the other hand, to assign services of different importance levels to nodes such that more important services are assigned to more trustworthy nodes. We also discuss future applications of our TEM middleware.
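The combined goal of load balancing and importance-aware placement can be illustrated with a small greedy sketch. The service names, trust values, and the greedy strategy itself are our own illustrative assumptions; the TEM middleware's actual algorithm is distributed and more elaborate:

```python
# Sketch of trust-aware service placement: important services are
# placed first, load is balanced across nodes, and among equally
# loaded nodes the more trustworthy one wins. Data is hypothetical.

def assign(services, nodes):
    """services: {name: importance}, nodes: {name: trust}.
    Returns {service: node}."""
    ranked_nodes = sorted(nodes, key=nodes.get, reverse=True)
    load = {n: 0 for n in nodes}
    placement = {}
    # Place the most important services first ...
    for svc in sorted(services, key=services.get, reverse=True):
        # ... on the least-loaded node, breaking ties by trust rank.
        best = min(ranked_nodes, key=lambda n: (load[n], ranked_nodes.index(n)))
        placement[svc] = best
        load[best] += 1
    return placement

services = {"billing": 0.9, "logging": 0.2, "search": 0.6}
nodes = {"node-a": 0.95, "node-b": 0.4}
print(assign(services, nodes))
```

Because the most important service is placed while all nodes are still empty, it always lands on the most trustworthy node, while the tie-breaking keeps the overall load even.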
Presenters: Gerrit Anders and Florian Siefert
In open multi-agent systems, we can make only weak assumptions about the system's scale, the behaviour of participating agents, and its environment.
Especially with regard to mission-critical systems, the ability to deal with a large number of heterogeneous agents that are exposed to an uncertain environment becomes a major concern.
A prominent example is power management systems, whose paramount goal is to balance production and consumption in the presence of uncertain weather forecasts and variable demand.
In this context, we tackled challenges comprising how to specify and design these systems to allow for their efficient and robust operation. At the heart of our approach, we introduced constraint-based specification techniques to tackle heterogeneity and trust models that allow the agents to measure, anticipate, and deal with uncertainties. On this basis, we presented algorithms for self-organisation and self-optimisation that enable the formation of scalable system structures at runtime and allow for efficient and robust resource allocation under adverse conditions.
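One way trust models can make uncertainty explicit is by discounting a producer's forecast according to the trust placed in its past forecast accuracy. The numbers, producer names, and discounting rule below are our own simplified illustration, not the project's actual uncertainty model:

```python
# Toy sketch of trust-based uncertainty handling in power balancing:
# forecasts are discounted by forecast trust, and the expected
# shortfall against demand is covered by scheduled reserves.

def expected_output(forecast_kw, trust):
    """Discount a production forecast by forecast trust in [0, 1]."""
    return forecast_kw * trust

def schedule(demand_kw, producers):
    """producers: {name: (forecast_kw, trust)}.
    Returns the reserve power needed to safely meet demand."""
    expected = sum(expected_output(f, t) for f, t in producers.values())
    return max(0.0, demand_kw - expected)

producers = {
    "solar-1": (100.0, 0.6),   # volatile, low forecast trust
    "biogas-1": (50.0, 0.95),  # controllable, high forecast trust
}
reserve = schedule(demand_kw=120.0, producers=producers)
print(round(reserve, 1))  # → 12.5
```

The same discounting makes untrustworthy forecasts expensive: the lower the forecast trust, the more reserve capacity must be scheduled to keep the balance robust.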
This interactive presentation will highlight the core results combining uncertainty modelling and self-organisation using a physical demonstrator that is connected to a larger, virtual system running in an agent-based simulation environment for smart grids that we developed in the project. Attendees may experience how interaction with the physical environment (e.g. simulating a sudden change in solar exposure) triggers self-organisation processes that guide the overall system into a more robust state.
For all enquiries about the special track, please write an e-mail to saso2016@isse.de.