Design and Analysis of Self-Protection: Adaptive Security for Software-Intensive Systems
Abstract: Today’s software landscape features a high degree of complexity, frequently changing requirements and stakeholder goals, and pervasive uncertainty. This combination of uncertainty and complexity creates a threat landscape in which cybersecurity attacks are a common occurrence and their consequences are often severe. Self-adaptive systems have been proposed to cope with complexity and frequent change by adapting at run time to situations not known at design time. They are, however, not immune to attacks, as they themselves suffer from high degrees of complexity and uncertainty. Systems that can dynamically defend themselves against adversaries are therefore required. Such systems, called self-protecting systems, aim to identify, analyse, and mitigate threats autonomously. This thesis contributes two approaches towards the goal of providing systems with self-protection capabilities.

The first approach aims to enhance the security of architecture-based self-adaptive systems and to equip them with (proactive) self-protection capabilities that reduce the exposed attack surface. We target systems where information about the system components and their adaptation decisions is available, and where control over adaptation is also possible. We formally model the security of the system and provide two analysis methods that rank adaptations by their security level: one based on quantitative risk assessment and one based on probabilistic verification. The results indicate an improvement in system security when either solution is employed; however, only the second method can provide self-protecting capabilities. We have also identified a direct trade-off between security and performance: stronger security guarantees impose correspondingly higher performance overhead.

The second approach targets open decentralized systems where we have limited information about, and control over, the system entities.
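To illustrate the first method in miniature, the following sketch ranks candidate adaptations by a simple quantitative risk score (likelihood times impact, summed over the attack surface each adaptation exposes). This is an assumption-laden toy, not the thesis's actual model: the adaptation names, exposed interfaces, and all numbers are invented for illustration.

```python
# Hypothetical sketch of risk-based ranking of adaptations.
# All adaptation names, interfaces, and numbers are invented.

def risk_score(adaptation):
    """Sum of likelihood * impact over the attack surface this adaptation exposes."""
    return sum(v["likelihood"] * v["impact"] for v in adaptation["exposed"].values())

# Two candidate adaptations; A2 exposes a smaller attack surface than A1.
adaptations = {
    "A1": {"exposed": {"http": {"likelihood": 0.3, "impact": 5},
                       "ssh":  {"likelihood": 0.1, "impact": 8}}},
    "A2": {"exposed": {"http": {"likelihood": 0.3, "impact": 5}}},
}

# Rank adaptations from lowest to highest risk.
ranked = sorted(adaptations, key=lambda name: risk_score(adaptations[name]))
print(ranked)  # lowest-risk adaptation first
```

In a real deployment the likelihood and impact figures would come from a risk-assessment process rather than being hard-coded, and the probabilistic-verification method would replace this score with properties checked against a formal model.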
Therefore, we employ decentralized information flow control mechanisms to enforce security by controlling interactions among the system elements. We extend a classical decentralized information flow control model by incorporating trust and by adding adaptation capabilities that allow the system to identify security threats and self-organize to maximize the average trust among the system entities. We arrange the entities of the system in trust hierarchies that enforce security policies among their elements and can mitigate security issues raised by the openness and uncertainty of the context and environment, without the need for a trusted central controller. The experimental results show that a reasonable level of trust can be achieved while confidentiality and integrity are enforced, with a low impact on the throughput and latency of messages exchanged in the system.
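A minimal sketch of the idea behind the second approach: a simplified decentralized information flow control check in the style of label models, extended with a pairwise trust threshold. The label structure, the `can_flow` rule, and the threshold value are illustrative assumptions, not the thesis's actual model.

```python
# Illustrative DIFC-with-trust sketch; labels and threshold are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    secrecy: frozenset    # tags restricting who may read the data
    integrity: frozenset  # tags vouching for the data's quality

def can_flow(src: Label, dst: Label, trust: float, threshold: float = 0.5) -> bool:
    """A flow src -> dst is allowed if the receiver is at least as secret,
    the sender carries at least the receiver's integrity tags, and the
    pairwise trust between the entities meets the threshold."""
    return (src.secrecy <= dst.secrecy
            and dst.integrity <= src.integrity
            and trust >= threshold)

alice = Label(secrecy=frozenset({"alice"}), integrity=frozenset({"audited"}))
bob   = Label(secrecy=frozenset({"alice", "bob"}), integrity=frozenset())

print(can_flow(alice, bob, trust=0.8))  # allowed: labels compatible, trust high
print(can_flow(alice, bob, trust=0.2))  # denied: insufficient trust
```

In the thesis's setting, trust values would be maintained and adapted at run time as entities self-organize into hierarchies, rather than being passed in as constants.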
Download the full dissertation (PDF).