On the Metric-based Approach to Supervised Concept Learning

University dissertation from Ronneby: Blekinge Institute of Technology

Abstract: A classifier is a piece of software that is able to categorize objects for which the class is unknown. The task of automatically generating classifiers by generalizing from examples is an important problem in many practical applications. This problem is often referred to as supervised concept learning, and has been shown to be relevant in e.g. medical diagnosis, speech and handwriting recognition, stock market analysis, and other data mining applications. The main purpose of this thesis is to analyze current approaches to evaluating classifiers and supervised concept learners, and to explore possible improvements in terms of alternative or complementary approaches. In particular, we investigate the metric-based approach to evaluation and how it can be used during learning. Any supervised concept learning algorithm can be viewed as trying to generate a classifier that optimizes a specific, often implicit, metric (this is sometimes also referred to as the inductive bias of the algorithm). In addition, different metrics are suitable for different learning tasks, i.e., the requirements vary between application domains. The idea of metric-based learning is to make the metric explicit and let it be defined by the user based on the learning task at hand.

The thesis contains seven studies, each with its own focus and scope. First, we present an analysis of current evaluation methods and contribute a formalization of the problems of learning, classification and evaluation. We then present two quality attributes, sensitivity and classification performance, that can be used to evaluate learning algorithms. To demonstrate their usefulness, we define two metrics for these attributes and use them to quantify the impact of parameter tuning and overall performance. Next, we refine an approach to multi-criteria classifier evaluation, based on the combination of three metrics, and present algorithms for calculating these metrics. In the fourth study, we present a new method for multi-criteria evaluation, which is generic in the sense that it only dictates how to combine metrics; the actual choice of metrics is application-specific. The fifth study investigates whether performance according to an arbitrary application-specific metric can be improved by making that metric the one the learning algorithm aims to optimize. The subsequent study presents a novel data mining application for preventing spyware by classifying End User License Agreements; a number of state-of-the-art learning algorithms are compared using the generic multi-criteria method. Finally, we describe how methods from the area of software engineering can be used to select relevant evaluation metrics for the application at hand.
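
To make the multi-criteria idea concrete, the following is a minimal sketch (not the thesis's actual algorithms or metrics) of generic multi-criteria evaluation: several candidate learners are scored on user-chosen metrics, which are combined with application-specific weights before a classifier is selected. The candidate learners, the two metrics (cross-validated accuracy and a speed score), and the weights are illustrative assumptions.

```python
# Illustrative sketch of generic multi-criteria classifier evaluation.
# The metrics, weights and candidate learners are assumptions for the example.
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

candidates = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(),
}

# Application-specific weights: the user decides how much each metric matters.
weights = {"accuracy": 0.7, "speed": 0.3}

def evaluate(clf):
    """Return raw (higher-is-better) metric values for one candidate."""
    start = time.perf_counter()
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    elapsed = time.perf_counter() - start
    return {"accuracy": acc, "speed": 1.0 / (1.0 + elapsed)}

def combined(metrics):
    """Weighted sum of metric values, i.e. one way to combine criteria."""
    return sum(weights[m] * v for m, v in metrics.items())

scores = {name: evaluate(clf) for name, clf in candidates.items()}
best = max(scores, key=lambda name: combined(scores[name]))

for name, metrics in scores.items():
    print(name, {m: round(v, 3) for m, v in metrics.items()},
          "combined:", round(combined(metrics), 3))
print("selected:", best)
```

The weighted sum is only one possible way to combine criteria; the point of the generic method described in the abstract is that the choice and weighting of metrics is left to the user and the application domain.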
