Eliciting Knowledge from Experts in Modeling of Complex Systems: Managing Variation and Interactions

Abstract: The thematic core of this thesis is how to manage modeling procedures in real settings. The view taken here is that modeling is a heuristic tool for outlining a problem, often conducted in the context of a larger development process. Examples of applications in which modeling is used include the development of software and business solutions, design of experiments, and so on. Because modeling is often used in the initial phase of such processes, there is a considerable risk of failure if the initial models are false or inaccurate. Modeling often calls for eliciting knowledge from experts. Access to relevant expertise is limited, and consequently efficient use of time and sampling of experts is crucial. The process is highly interactive, and data are often qualitative rather than quantitative. Data from different experts often vary, even when the task is to describe the same phenomenon. As with quantitative data, this variation between data sources can be treated as a source of error as well as a source of information. Irrespective of the specific modeling technique, it should be possible to characterize variation and interaction during the model development process in order to estimate the correctness and comprehensiveness of the elicited knowledge. The aim of the thesis is to explore a methodological approach to managing such variations and interactions. Analytical methods tailored for this purpose have the potential to improve the quality of modeling in the fields of application.

Three studies were conducted, in which principles for eliciting, controlling, and judging the modeling procedures were explored. The first addressed the problem of how to characterize and handle qualitative variations between different experts describing the same modeling object. The judgment approach, based on a subjective comparison between different expert descriptions, was contrasted with a criterion-based approach, which uses a predefined structure to explicitly estimate the degree of agreement. The results showed that much of the basis for the amalgamation of models used in the judgment approach remained concealed, even when a structured method was used to elicit the criteria for the independent experts' judgment. In contrast, with the criterion-based approach the nature of the variation could be characterized explicitly.

In the second study, the same approach was used to characterize variation between, as well as within, different modeling objects, analogous to a one-way statistical analysis of variance. The results of the criterion-based approach indicated a substantial difference between the two modeling objects. Variances within each of the modeling tasks were about the same and lower than the variance between modeling tasks. This result supports the findings from the first study and indicates that the approach can be generalized as a way of comparing modeling tasks.

The third study addressed the problem of how to manage the interaction between experts in team modeling. The aim was to explore the usability of an analytical method with on-line monitoring of the team communication: could the basic factors of task, participants, knowledge domains, communication form, and time be used to characterize and manipulate team modeling? Two contrasting case studies of team modeling were conducted. The results indicated that the taxonomy of the suggested analytical method was sensitive enough to capture the distinctive communication patterns for the given task conditions. The results also indicated that an analytical approach can be based on the relatively straightforward task of counting occurrences, rather than the more complex task of establishing sequences of occurrence.
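As a purely illustrative sketch (not taken from the thesis), the following Python fragment shows what an occurrence-counting analysis of a coded team-communication log might look like. The category names and log entries are invented for the example; the point is only that characterizing a session by counting occurrences per category is computationally simpler than reconstructing sequences of utterances.

from collections import Counter

# Hypothetical coded communication log from a team-modeling session:
# each utterance is tagged with categories resembling the factors
# named in the abstract (participant, knowledge domain, communication form).
coded_log = [
    {"participant": "A", "domain": "process",  "form": "question"},
    {"participant": "B", "domain": "process",  "form": "answer"},
    {"participant": "A", "domain": "software", "form": "statement"},
    {"participant": "C", "domain": "process",  "form": "question"},
]

# Characterize the session by counting occurrences per category,
# rather than by establishing the sequence in which they occurred.
def occurrence_profile(log, category):
    return Counter(entry[category] for entry in log)

for category in ("participant", "domain", "form"):
    print(category, dict(occurrence_profile(coded_log, category)))

Profiles like these could then be compared across task conditions to see whether distinctive communication patterns emerge, without any sequence analysis.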
