
Parallelism in Constraint Programming

Rolf, Carl Christian LU (2011)
Popular Abstract in English

At our workplace, at home, and on the road, we rely on software. It has become one of the central technologies on which we base our society. Yet, software is perhaps the major technology that is least understood by the general population.



Most software today is written in languages that require the programmer to specify not only what is to be achieved, but also how. This is a major limitation, since the best way to do things depends on the hardware.



The older a piece of software is, the more the underlying computer architectures are likely to have changed. Today, many companies still run software written in the seventies. While this code is likely to be all but bug-free, the performance of these systems is lacking.



In the past decade, hardware development has moved more and more towards parallelism. The reason is that processors can no longer become faster while still remaining reasonably easy to cool. This situation will continue unless some drastically different materials or technologies emerge.



Every day, we become more reliant on constant access to high-performance software and software-based services, such as online banks. This increases the performance requirements of almost all software. Previously, this was not a major problem, as faster processors would automatically run all programs faster. Today, however, the increase in hardware capability comes in the form of more potential parallelism.



Thus, the performance needs of tomorrow's users can only be satisfied by embracing parallelism.



Writing efficient parallel programs is the biggest challenge of the software industry for the foreseeable future. We are currently in a time when parallel computers are the norm, not the exception. Soon, parallel processors will be standard in cell phones. Without drastic changes in hardware development, all software must be parallelized to its fullest extent.



Parallelism can increase performance and reduce power consumption at the same time. Many programs will execute faster on a dual-core processor than on a single-core processor running at twice the speed. Halving the speed of a processor can reduce its power consumption by up to a factor of four. Hence, parallelism can give more performance per unit of power.



In the high-performance computing industry, energy efficiency is a primary concern. Over its lifetime, the majority of a supercomputer's total cost often comes from cooling and power consumption. More parallelism can reduce both the cost and the environmental impact of the software services we rely on.



In order to make use of parallel hardware, we need to overcome the difficulties of parallel programming. To many programmers, it is easier to learn a handful of small domain-specific programming languages than to learn efficient parallel programming. The frameworks for these languages can then automatically parallelize the program. Automatically parallelizing traditional programs is usually much more difficult.



There are many programming paradigms in use today for solving difficult problems, such as scheduling. Examples include constraint programming (CP), integer programming (IP), and Boolean satisfiability solving (SAT). Of these paradigms, CP is usually considered the closest to the holy grail of programming: the user states the problem, and the computer solves it.
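To make this declarative style concrete, here is a toy sketch, invented purely for illustration (it is not the thesis's framework, and real CP solvers are far more sophisticated): the user only states variables, domains, and constraints, and a generic backtracking routine does the solving.

```python
def solve(domains, constraints, assignment=None):
    """domains: {var: iterable of values}; constraints: [((var, ...), predicate)].

    Returns a dict assigning every variable a value that satisfies all
    constraints, or None if no such assignment exists.
    """
    assignment = dict(assignment or {})
    if len(assignment) == len(domains):
        return assignment
    # Pick the next unassigned variable and try each value in its domain.
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # Only check constraints whose variables are all assigned so far.
        consistent = all(
            pred(*(assignment[v] for v in scope))
            for scope, pred in constraints
            if all(v in assignment for v in scope)
        )
        if consistent:
            result = solve(domains, constraints, assignment)
            if result is not None:
                return result
    return None

# "State the problem": x < y, y < z, x + y == z, each variable in 1..5.
domains = {"x": range(1, 6), "y": range(1, 6), "z": range(1, 6)}
constraints = [
    (("x", "y"), lambda a, b: a < b),
    (("y", "z"), lambda a, b: a < b),
    (("x", "y", "z"), lambda a, b, c: a + b == c),
]
print(solve(domains, constraints))  # {'x': 1, 'y': 2, 'z': 3}
```

Note that the model never says *how* to search: the same three constraint declarations would work unchanged with a smarter, or indeed a parallel, solving engine underneath.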



In this thesis, we study and present parallelism in constraint programming. We have developed the first constraint framework that automatically parallelizes both the consistency and the search of the solving process. This allows programmers to avoid the difficult issues of parallel programming. We also study distributed CP with independent agents and propose solutions to this problem.
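The two phases named here can be pictured as a loop: consistency prunes domain values that cannot appear in any solution, and search branches on a variable when pruning stalls. Below is a minimal sequential sketch of that loop, invented for illustration; the thesis's contribution is parallelizing both phases, which this toy code does not attempt.

```python
def propagate(domains, constraints):
    """Naive arc consistency over binary constraints: repeatedly remove
    values that have no supporting value on the other side. Returns False
    if some domain becomes empty (the subproblem is inconsistent)."""
    changed = True
    while changed:
        changed = False
        for (a, b), pred in constraints:
            for va in list(domains[a]):
                if not any(pred(va, vb) for vb in domains[b]):
                    domains[a].remove(va)
                    changed = True
            for vb in list(domains[b]):
                if not any(pred(va, vb) for va in domains[a]):
                    domains[b].remove(vb)
                    changed = True
    return all(domains[v] for v in domains)

def search(domains, constraints):
    """Alternate consistency and branching until all domains are singletons."""
    if not propagate(domains, constraints):
        return None  # pruned to inconsistency
    if all(len(d) == 1 for d in domains.values()):
        return {v: d[0] for v, d in domains.items()}
    # Branch on a smallest non-singleton domain; each branch gets a copy.
    var = min((v for v in domains if len(domains[v]) > 1),
              key=lambda v: len(domains[v]))
    for value in list(domains[var]):
        child = {v: list(d) for v, d in domains.items()}
        child[var] = [value]
        result = search(child, constraints)
        if result is not None:
            return result
    return None

# All-different over three variables: propagation alone cannot decide,
# so search branches on x, after which consistency prunes the rest.
ne = lambda a, b: a != b
doms = {"x": [1, 2, 3], "y": [1, 2, 3], "z": [1, 2, 3]}
cons = [(("x", "y"), ne), (("x", "z"), ne), (("y", "z"), ne)]
print(search(doms, cons))  # {'x': 1, 'y': 2, 'z': 3}
```

In this sketch both phases are obvious parallelization targets: the per-constraint pruning in `propagate` and the independent branches in `search`, which is exactly the division the thesis explores.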



Our results show that automatic parallelism in CP can provide very good performance. Our parallel consistency scales very well for problems with many large constraints. We also manage to combine parallel consistency and parallel search with a performance increase. The communication and load-balancing schemes we developed increase the scalability of parallel search. Our model for distributed CP is orders of magnitude faster than traditional approaches. As far as we know, it is the first to solve standard benchmark scheduling problems.



One of the main uses of CP today is air-crew scheduling, that is, creating work schedules for pilots and cabin crew. Providing even slightly better schedules can lead to savings on the order of millions of dollars annually.



By using parallel solving, better schedules can be found for many problems. Depending on the industry, this can lead to major reductions in fuel use, increased production speed, or less over-production, all of which are important for competitiveness and the environment.
Abstract
Writing efficient parallel programs is the biggest challenge of the software industry for the foreseeable future. We are currently in a time when parallel computers are the norm, not the exception. Soon, parallel processors will be standard even in cell phones. Without drastic changes in hardware development, all software must be parallelized to its fullest extent.



Parallelism can increase performance and reduce power consumption at the same time. Many programs will execute faster on a dual-core processor than on a single-core processor running at twice the speed. Halving the speed of a processor can reduce its power consumption by up to a factor of four. Hence, parallelism gives more performance per unit of power to efficient programs.



In order to make use of parallel hardware, we need to overcome the difficulties of parallel programming. To many programmers, it is easier to learn a handful of small domain-specific programming languages than to learn efficient parallel programming. The frameworks for these languages can then automatically parallelize the program. Automatically parallelizing traditional programs is usually much more difficult.



In this thesis, we study and present parallelism in constraint programming (CP). We have developed the first constraint framework that automatically parallelizes both the consistency and the search of the solving process. This allows programmers to avoid the difficult issues of parallel programming. We also study distributed CP with independent agents and propose solutions to this problem.



Our results show that automatic parallelism in CP can provide very good performance. Our parallel consistency scales very well for problems with many large constraints. We also manage to combine parallel consistency and parallel search with a performance increase. The communication and load-balancing schemes we developed increase the scalability of parallel search. Our model for distributed CP is orders of magnitude faster than traditional approaches. As far as we know, it is the first to solve standard benchmark scheduling problems.
Please use this URL to cite or link to this publication:
author
supervisor
opponent
  • Professor Schulte, Christian, School of Information and Communication Technology, KTH, Stockholm
organization
publishing date
type
Thesis
publication status
published
subject
keywords
Parallelism, Constraint Programming, Parallel Consistency, Parallel Search, Distributed Constraint Programming
pages
202 pages
defense location
Lecture hall E:1406, E-building, Ole Römers väg 3, Lund University Faculty of Engineering
defense date
2011-10-03 13:00
ISBN
978-91-7473-154-5
language
English
LU publication?
yes
id
8ec4717e-5583-4c0d-bc2d-3dd919ceebe2 (old id 2152966)
date added to LUP
2011-09-06 08:56:39
date last changed
2016-09-19 08:45:15
@phdthesis{8ec4717e-5583-4c0d-bc2d-3dd919ceebe2,
  author       = {Rolf, Carl Christian},
  isbn         = {978-91-7473-154-5},
  keyword      = {Parallelism,Constraint Programming,Parallel Consistency,Parallel Search,Distributed Constraint Programming},
  language     = {eng},
  pages        = {202},
  school       = {Lund University},
  title        = {Parallelism in Constraint Programming},
  year         = {2011},
}