Parallelism in Constraint Programming

Popular Abstract in English

At our workplace, at home, and on the road, we rely on software. It has become one of the central technologies on which we base our society. Yet software is perhaps the major technology that is least understood by the general population. Most software today is written in languages that require the programmer to specify not only what is to be achieved, but also how. This is a major limitation, since the best way to do things depends on the hardware, and the older the code is, the more the computer architectures are likely to have changed. Many companies still run software written in the seventies. While this code is likely to be all but bug-free, the performance of these systems is lacking.

In the past decade, hardware development has moved more and more towards parallelism, because processors can no longer become faster while remaining reasonably easy to cool. This situation will continue unless some drastically different materials or technologies emerge. Every day, we become more reliant on constant access to high-performance software and software-based services, such as online banking. This increases the performance requirements of almost all software. Previously this was not a major problem, as faster processors would automatically run all programs faster. Today, however, the increase in hardware capability comes in the form of more potential parallelism, so the performance needs of tomorrow's users can only be satisfied by embracing parallelism. Writing efficient parallel programs is the biggest challenge facing the software industry for the foreseeable future.

We are currently in a time when parallel computers are the norm, not the exception; soon, parallel processors will be standard in cell phones. Without drastic changes in hardware development, all software must be parallelized to its fullest extent. Parallelism can increase performance and reduce power consumption at the same time.
Many programs will execute faster on a dual-core processor than on a single-core processor running at twice the speed. Halving the speed of a processor can reduce its power consumption by up to a factor of four, so parallelism can give more performance per unit of power. In the high-performance computing industry, energy efficiency is a primary concern: over the lifetime of a supercomputer today, the majority of the total cost often comes from cooling and power consumption. More parallelism can reduce both the cost and the environmental impact of the software services we rely on.

In order to make use of parallel hardware, we need to overcome the difficulties of parallel programming. For many programmers, it is easier to learn a handful of small domain-specific programming languages than to learn efficient parallel programming; the frameworks for these languages can then automatically parallelize the program, whereas automatically parallelizing traditional programs is usually much more difficult. There are many programming paradigms in use today for solving difficult problems such as scheduling, for instance constraint programming (CP), integer programming (IP), and Boolean satisfiability (SAT). Of these paradigms, CP is usually considered the closest to the holy grail of programming: the user states the problem, and the computer solves it.

In this thesis, we study and present parallelism in constraint programming. We have developed the first constraint framework that automatically parallelizes both the consistency and the search phases of the solving process, allowing programmers to avoid the difficult issues of parallel programming. We also study distributed CP with independent agents and propose solutions to this problem. Our results show that automatic parallelism in CP can provide very good performance. Our parallel consistency scales very well for problems with many large constraints.
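The claim above, that halving a processor's speed can quarter its power draw, can be illustrated with the standard CMOS dynamic-power approximation P ∝ α·C·V²·f (a textbook model, not taken from the thesis). A lower clock frequency permits a lower supply voltage, and the quadratic voltage term is what delivers the large savings:

```python
# Textbook CMOS dynamic-power model (illustrative sketch, not from the
# thesis): P = a * C * V^2 * f, with activity factor a, switched
# capacitance C, supply voltage V, and clock frequency f.
def dynamic_power(voltage, frequency, activity=1.0, capacitance=1.0):
    """Relative dynamic power dissipation under the model above."""
    return activity * capacitance * voltage**2 * frequency

base = dynamic_power(voltage=1.0, frequency=1.0)

# Halving only the frequency merely halves the power...
half_f = dynamic_power(voltage=1.0, frequency=0.5)
print(half_f / base)   # 0.5

# ...but a lower frequency also permits a lower supply voltage.
# If voltage scales down with frequency, the V^2 term alone gives the
# factor-of-four reduction; the f factor reduces power further still.
half_fv = dynamic_power(voltage=0.5, frequency=0.5)
print(half_fv / base)  # 0.125

# Two half-speed cores can then match the original throughput while
# drawing a quarter of the power: 2 * 0.125 = 0.25.
print(2 * half_fv / base)  # 0.25
```

This is why, under this model, two slow cores can be more power-efficient than one fast core, which is the economic argument for parallel hardware made in the text.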
We also manage to combine parallel consistency and parallel search with a performance increase. The communication and load-balancing schemes we developed increase the scalability of parallel search. Our model for distributed CP is orders of magnitude faster than traditional approaches; as far as we know, it is the first to solve standard benchmark scheduling problems. One of the main uses of CP today is air-crew scheduling, that is, creating work schedules for pilots and cabin crew. Providing even slightly better schedules can lead to savings on the order of millions of dollars annually. By using parallel solving, better schedules can be found for many problems. Depending on the industry, this can lead to major reductions in fuel use, increased production speed, or less over-production, all of which are important for competitiveness and the environment.
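As a toy illustration of the declarative style described above, where the user states the problem and a generic engine finds a solution, here is a minimal sketch in Python. It is purely illustrative: the problem (colouring four map regions) and all names are invented for this example, and a real CP solver uses constraint propagation and intelligent search rather than brute-force enumeration.

```python
from itertools import product

# Variables and their domains: assign a colour to four map regions.
variables = ["A", "B", "C", "D"]
domains = {v: ["red", "green", "blue"] for v in variables}

# Constraints are *stated*, not solved by hand: adjacent regions differ.
adjacent = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
constraints = [lambda s, x=x, y=y: s[x] != s[y] for x, y in adjacent]

def solve():
    """Generic engine: enumerate assignments, return one satisfying
    every stated constraint (brute force, for illustration only)."""
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        if all(check(assignment) for check in constraints):
            return assignment
    return None

print(solve())  # {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}
```

The appeal of CP, and the reason parallelizing the solver benefits every model for free, is that the user only writes the declarative part (variables, domains, constraints); the search loop belongs to the framework and can be replaced by a parallel one without changing the model.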
