Combining User Feedback and Monitoring Data to Support Evidence-based Software Evolution

Abstract:

Context. Companies continuously explore their software systems to acquire evidence for software evolution, such as bugs in the system and new functional or quality requirements. So far, managers have made decisions about software evolution based on evidence gathered by interpreting user feedback and monitoring data that are collected separately from software in use. These evidence-collection processes are usually unmethodical, lack systematic guidance, and suffer from practical issues. The absence of a systematic approach leaves opportunities for detecting evidence for system evolution unexploited.

Objective. The main research objective is to improve evidence collection from software in use and to guide software practitioners in decision-making about system evolution. Further objectives are to understand useful approaches for collecting user feedback and monitoring data, two important sources of evidence, and for combining them.

Method. We proposed a method for gathering evidence from software in use (GESU) using design-science research. We designed the method over three iterations and validated it in the European case studies FI-Start, Supersede, and Wise-IoT. To acquire knowledge for the design, we conducted further research using survey and systematic mapping methods.

Results. The results show that GESU is not only successful in industrial environments but also yields new evidence for software evolution by bringing user feedback and monitoring data together. This combination helps software practitioners improve their understanding of end-user needs and system drawbacks, ultimately supporting continuous requirements elicitation and product evolution. GESU suggests monitoring a software system based on its goals to filter relevant data (i.e., goal-driven monitoring) and gathering user feedback when the system itself requests feedback about the software in use (i.e., system-triggered user feedback). The system identifies interesting situations of system use and issues automated requests for user feedback to interpret the evidence from the users' perspective. We justified the use of goal-driven monitoring and system-triggered user feedback with complementary findings of the thesis, which showed that the goals and characteristics of software systems constrain the monitoring data. We thus narrowed the monitoring and observational focus to data aligned with the system's goals rather than a massive amount of potentially useless data. Finally, we found that requesting feedback from users with a simple feedback form is a useful approach for motivating users to provide feedback.

Conclusion. Combining user feedback and monitoring data helps to acquire insights into the success of a software system and to guide decision-making regarding its evolution. This work can be extended in the future by implementing an adaptive system that gathers evidence from combined user feedback and monitoring data.
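
The idea of goal-driven monitoring combined with system-triggered user feedback can be illustrated with a minimal Python sketch. This is not the dissertation's implementation; all names (Goal, EvidenceStore, handle_event, request_feedback) and the example goal are illustrative assumptions showing how goal-relevant monitoring events could be filtered and, for interesting situations of use, paired with an automated feedback request.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional


@dataclass
class Goal:
    # A system goal plus predicates that (a) filter goal-relevant monitoring
    # events and (b) flag "interesting" situations worth asking users about.
    name: str
    is_relevant: Callable[[Dict], bool]
    is_interesting: Callable[[Dict], bool]


@dataclass
class EvidenceStore:
    # Keeps monitoring events together with the user feedback that interprets them.
    records: List[Dict] = field(default_factory=list)

    def add(self, goal: Goal, event: Dict, feedback: Optional[str]) -> None:
        self.records.append({"goal": goal.name, "event": event, "feedback": feedback})


def handle_event(event: Dict, goals: List[Goal], store: EvidenceStore,
                 request_feedback: Callable[[str], str]) -> None:
    # Goal-driven filtering: drop data not aligned with any goal, and trigger
    # an automated feedback request only for interesting situations of use.
    for goal in goals:
        if not goal.is_relevant(event):
            continue
        feedback = None
        if goal.is_interesting(event):
            feedback = request_feedback(
                f"We noticed a possible problem with '{goal.name}'. How was your experience?")
        store.add(goal, event, feedback)


if __name__ == "__main__":
    checkout_goal = Goal(
        name="complete checkout quickly",
        is_relevant=lambda e: e.get("feature") == "checkout",
        is_interesting=lambda e: e.get("duration_s", 0) > 30 or e.get("error", False),
    )
    store = EvidenceStore()
    # Stand-in for a lightweight in-app feedback form shown to the user.
    ask_user = lambda prompt: "It took too long to pay."
    handle_event({"feature": "checkout", "duration_s": 42}, [checkout_goal], store, ask_user)
    print(store.records)

In this sketch the combination is achieved simply by storing each goal-relevant event next to the feedback that explains it, mirroring the abstract's idea of interpreting monitoring evidence from the users' perspective.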
