Seminars & Colloquia
"Algorithm Composition using Domain-Specific Libraries and Program Decomposition for Thread-Level Speculation"
Monday, April 9, 2007, 9:30 AM
Location: Room 3211, EB2, NCSU Centennial Campus
Abstract: In the first part of my talk, I present an overview of algorithm composition from my paper at the 2006 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI). I explain how a compiler for a domain-specific language or library (DSL) can use a domain-independent AI planner to implement algorithm composition as a language feature. Composition addresses a common DSL problem: good library designers tend to minimize redundancy by including only fundamental procedures, which users must chain together into call sequences. Novice users are often confounded because they do not know an appropriate sequence to achieve their goal. Composition allows the programmer to define and call an abstract algorithm (AA) like a procedure; the compiler replaces an AA call with a sequence of library calls, taking the calling context into account.

In the second part of my talk, I cover highlights of my research on program decomposition for thread-level speculation. Chip multiprocessors (CMPs), or multi-core processors, have become a common way of reducing chip complexity and power consumption while maintaining high performance. Speculative CMPs use hardware to enforce data dependences, allowing a parallelizing compiler to generate multithreaded code without having to prove independence. In these systems, a sequential program is decomposed into threads that execute in parallel; dependent threads degrade performance but do not affect correctness. Static (compile-time) decomposition attempts to reduce the run-time overheads of data dependence, thread sequence misprediction, and load imbalance. I cover two static techniques: one from my 2004 PLDI paper based on the min-cut algorithm, and one from my paper at the 2007 ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP) that embeds a search algorithm into the program to identify a good decomposition during a profile run.
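The idea of using a domain-independent planner to replace an abstract algorithm call with a sequence of library calls can be sketched in miniature as follows. This is an illustrative toy, not code from the talk or the PLDI paper: the library "operators," their names, and their precondition/effect facts are all invented, and a real system would use a full planner rather than this breadth-first search.

```python
from collections import deque

# Hypothetical library procedures modeled as planning operators:
# each maps a name to (preconditions, effects), where both are sets
# of abstract facts about program state. All names are illustrative.
OPERATORS = {
    "load_matrix": ({"file_on_disk"}, {"matrix_in_memory"}),
    "factorize":   ({"matrix_in_memory"}, {"factors"}),
    "solve":       ({"factors", "rhs_in_memory"}, {"solution"}),
    "load_rhs":    ({"file_on_disk"}, {"rhs_in_memory"}),
}

def compose(initial, goal):
    """Breadth-first search for a library-call sequence whose
    cumulative effects satisfy `goal`, starting from the facts in
    `initial` (the calling context the compiler observes)."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for name, (pre, eff) in OPERATORS.items():
            if pre <= state:
                nxt = frozenset(state | eff)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None  # no call sequence achieves the goal

plan = compose({"file_on_disk"}, {"solution"})
```

Here the abstract goal "solution" is compiled into a concrete call sequence ending in "solve", with the planner supplying the intermediate calls the novice user would otherwise have to know.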
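The min-cut-based decomposition can be pictured on a toy dependence graph: statements are nodes, data dependences are weighted edges, and a dependence that crosses the thread boundary costs roughly its weight in communication or misspeculation. The graph and weights below are invented for illustration, and exhaustive enumeration stands in for the min-cut computation a real compiler would run on a flow network.

```python
from itertools import combinations

# Toy dependence graph: nodes are program statements; each weighted
# edge is a data dependence, with the weight approximating the
# run-time cost if that dependence crosses the thread boundary.
EDGES = {("a", "b"): 4, ("b", "c"): 1, ("c", "d"): 3,
         ("a", "c"): 1, ("b", "d"): 1}
NODES = ["a", "b", "c", "d"]

def cut_cost(part1):
    """Total weight of dependences crossing a two-thread partition."""
    p1 = set(part1)
    return sum(w for (u, v), w in EDGES.items() if (u in p1) != (v in p1))

def best_split():
    """Exhaustively find the two-thread partition with minimum
    cross-thread dependence cost (a stand-in for min-cut)."""
    best = None
    for k in range(1, len(NODES)):
        for part in combinations(NODES, k):
            c = cut_cost(part)
            if best is None or c < best[0]:
                best = (c, set(part), set(NODES) - set(part))
    return best

cost, thread1, thread2 = best_split()
```

On this graph the cheapest split keeps the heavy a-b and c-d dependences inside their threads and pays only for the three weight-1 edges that cross, which is the behavior the min-cut formulation is designed to produce.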
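The profile-run idea, choosing among candidate decompositions by observing their measured cost during a profiling execution, can also be sketched. Everything below is a made-up stand-in: the candidate split points, the cost model, and the exhaustive trial loop are illustrative only, whereas the PPoPP work embeds an actual search algorithm into the program and measures real executions.

```python
import random

# Sketch: candidate decompositions (here, loop split points) are each
# tried during a profile run, their observed cost is recorded, and the
# best one is kept for the production build. The cost model is a
# fabricated stand-in for measured cycles.
random.seed(0)

CANDIDATES = list(range(1, 8))  # possible thread boundary positions

def profiled_cost(split):
    # Stand-in for timing one speculative execution of this
    # decomposition: load imbalance grows away from the middle of the
    # loop, plus a little measurement noise.
    return abs(split - 4) * 10 + random.randint(0, 3)

observed = {s: profiled_cost(s) for s in CANDIDATES}
chosen_split = min(observed, key=observed.get)
```

Under this cost model the balanced split point wins, since its imbalance term is zero and the noise is smaller than the penalty at any other position.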
Short Bio: Troy A. Johnson will receive his PhD in Electrical and Computer Engineering from Purdue University in May 2007 with a specialization in computational science and engineering. He received his Master of Science degree in Electrical and Computer Engineering from Purdue University and his Bachelor of Science degree in Computer Engineering summa cum laude from the University of Pittsburgh. Troy's research covers several aspects of compilers and computer systems, including designing and implementing new language features, optimizing programs for thread-level speculation on multi-core architectures, extending the Network File System (NFS) with features of peer-to-peer file systems, and designing and implementing source-to-source compiler infrastructures. His most recent compiler research overlaps with the areas of artificial intelligence, computer architecture, and parallel computing.
Host: Frank Mueller, Computer Science, NCSU