Public defense: Koen van de Merwe

Koen van de Merwe will defend his thesis for the PhD degree. The dissertation explores the role of transparency in making autonomous systems understandable and predictable for humans.


Practical information

  • Date: 28 August 2024
  • Time: 15.00 - 20.00
  • Location: DNV – Veritasveien 1, Høvik
  • Teams link for digital participation.

For physical participation, please contact PhD coordinator Lene Vallestad in advance.

Program

15.00 Trial lecture: “Role of explainable artificial intelligence (AI) for situational awareness of autonomous ships with varying degree of autonomy”

16.40 Public defense: Agent Transparency and Human Performance in Supervisory Control

Evaluation committee

  • First opponent: Mica Endsley, president, SA Technologies, USA
  • Second opponent: Ingrid Utne, professor, NTNU
  • Administrator: Ziaul Haque Munim, associate professor, University of South-Eastern Norway

Supervisors

  • Main supervisor: Salman Nazir, professor, University of South-Eastern Norway
  • Co-supervisor: Steven Mallam, associate professor, University of South-Eastern Norway and Memorial University of Newfoundland, Canada
  • Co-supervisor: Øystein Engelhardtsen, DNV, Norway

Koen van de Merwe is defending his thesis for the degree philosophiae doctor (PhD) at the University of South-Eastern Norway.

The doctoral work has been carried out at the Faculty of Technology, Natural Sciences and Maritime Sciences, in the PhD program in nautical operations.

Portrait of doctoral candidate Koen van de Merwe

You are invited to follow the trial lecture and the public defense.
The event can also be attended digitally via the Teams link for digital participation.

Summary

By making autonomous systems transparent, humans become better at understanding what the system is doing, why it is doing it, and what it will do next. This supports operators in supervising autonomous systems and makes it easier for them to decide if manual intervention is needed. These are the results of a PhD dissertation exploring how maritime autonomous collision and grounding avoidance systems can be made “transparent” to their users. 

As Artificial Intelligence (AI)-enabled systems are expected to play a central role in supporting ship navigators in critical decision making, the novelty of this technology calls for a careful approach to deploying these in high-risk domains. This means that, until it is proven that such systems are sufficiently reliable and robust, the presence of human operators is needed to oversee the operations of the ship.

Unfortunately, humans are not very good at monitoring automated systems. We tend to become complacent, biased, or bored, and to place too much or too little faith in the system. This is especially relevant for AI-enabled systems, which, by their nature, are difficult to interpret and predict. Therefore, novel ideas are needed to overcome these issues and to support human operators in their new role as supervisors of such systems.

The main contribution of this PhD is the evidence that transparency is a promising design principle for systems requiring human supervision. That is, by providing insight into the system’s analyses, decisions, and planned actions, the operator’s understanding and awareness of the system are enhanced. This means that developers have an incentive to create transparent designs for their users, knowing that, by following a set of human-centred design principles and processes, their efforts will have a positive effect on human supervision. Furthermore, this dissertation has made explicit the change in the operator’s role that may be anticipated when introducing collision and grounding avoidance systems.

Based on this information, ship owners may better understand what may be expected, in terms of change in mental activities, when introducing such systems to their ships. Finally, the results from this dissertation provide insights into the anticipated human performance effects of transparency when applied to autonomous systems.

With these new insights, meaningful human work may be created in which the combined capabilities of human-autonomy teams can be optimised. Ultimately, this dissertation advocates the relevance of affording human operators insight into autonomous systems and establishes transparency as an important prerequisite on the path towards safe and effective human-supervisory control.