Neural networks in context: challenges and opportunities: a critical inquiry into prerequisites for user trust in decisions promoted by neural networks

Abstract: Artificial intelligence, and machine learning (ML) in particular, increasingly impacts human life by creating value from collected data. This assetisation affects all aspects of human life, from choosing a significant other to recommending a product for us to consume. ML-based systems of this type thrive because they predict human behaviour based on average-case performance metrics (such as accuracy). However, their usefulness is more limited when it comes to being transparent about their internal knowledge representations for singular decisions; for example, they are not good at explaining why they have suggested a particular decision in a specific context.

The goal of this work is to let end users be in command of how ML systems are used and thereby combine the strengths of humans and machines – machines which can propose transparent decisions. Artificial neural networks are an interesting candidate for a setting of this type, given that the technology has been successful in building knowledge representations from raw data. A neural network can be trained by exposing it to data from the target domain; it can then internalise knowledge representations from the domain and perform contextual tasks. In these situations, the fragment of the actual world internalised in an ML system has to be contextualised by a human to be useful and trustworthy in non-static settings.

This setting is explored through the overarching research question: What challenges and opportunities can emerge when an end user uses neural networks in context to support singular decision-making? To address this question, Research through Design is used as the central methodology, as this research approach matches the openness of the research question. Through six design experiments, I explore and expand on challenges and opportunities in settings where singular contextual decisions matter. The initial design experiments focus on opportunities in settings that augment human cognitive abilities.
Thereafter, the experiments explore challenges related to settings where neural networks can enhance human cognitive abilities; this part concerns approaches intended to explain promoted decisions.

This work contributes in three ways: 1) exploring learning related to neural networks in context in order to put forward a core terminology for contextual decision-making using ML systems, a terminology that includes the generative notions of true-to-the-domain, concept, out-of-distribution and generalisation; 2) presenting a number of design guidelines; and 3) showing the need to align internal knowledge representations with concepts if neural networks are to produce explainable decisions. I also argue that training neural networks to generalise basic concepts like shapes and colours, concepts easily understandable by humans, is a path forward. This research direction leads towards neural network-based systems that can produce more complex explanations that build on basic generalisable concepts.
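The notion of out-of-distribution used in the terminology above can be illustrated with a minimal, hypothetical sketch (the dissertation does not prescribe this method): a system summarises the training data it has seen and flags inputs that fall far outside that summary, i.e. inputs that are not true to the domain. All names and the simple z-score criterion here are illustrative assumptions.

```python
import random
import statistics

def fit_domain(train_x):
    # Summarise the training domain by per-feature mean and standard deviation.
    features = list(zip(*train_x))
    means = [statistics.fmean(f) for f in features]
    stds = [statistics.pstdev(f) + 1e-8 for f in features]
    return means, stds

def is_out_of_distribution(x, means, stds, threshold=3.0):
    # Flag an input if any feature lies more than `threshold` standard
    # deviations from the training mean -- a crude "true-to-the-domain" check.
    return any(abs(v - m) / s > threshold for v, m, s in zip(x, means, stds))

random.seed(0)
# Synthetic in-domain training data: 1000 samples with 4 Gaussian features.
train = [[random.gauss(0.0, 1.0) for _ in range(4)] for _ in range(1000)]
means, stds = fit_domain(train)

print(is_out_of_distribution([0.0, 0.0, 0.0, 0.0], means, stds))      # False
print(is_out_of_distribution([10.0, 10.0, 10.0, 10.0], means, stds))  # True
```

A deployed system would use a richer density estimate than per-feature z-scores, but the design point is the same: a decision promoted for an out-of-distribution input should be treated with less trust by the end user.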
