Seminars & Colloquia
"Parallel Programming with (almost) No Parallel Programming"
Friday February 13, 2009 02:20 PM
Location: 1231, EB2 NCSU Centennial Campus
This talk is part of the System Research Seminar series
Abstract: Multicore processors have fundamentally changed the way software is developed. In particular, no longer can features be added to software with increases in processor clock speeds covering their overheads, and no longer do increases in clock speed allow increasingly larger problems to be solved for free. Rather, programs must be parallel to see performance gains with new processors. This forces the average programmer to perform duties previously expected only of heroic programmers. A partial solution to this problem is domain specific languages. Two examples will be discussed: (1) a compiler for polymer chemistry, and (2) Aspen, a language for network services. We show these languages raise the level of abstraction high enough that the respective domain experts can achieve good performance on parallel machines while remaining oblivious to the parallel structure of their application. Moreover, we show that domain specific compilers must be concerned about more than syntax -- optimizations tuned to the domain specific code are sometimes necessary. Finally, we briefly discuss the economics of domain specific languages.
Short Bio: Samuel Midkiff has been an Associate Professor of Computer and Electrical Engineering at Purdue University since 2002. He received his PhD degree from the University of Illinois at Urbana-Champaign in 1992, where he was a member of the Cedar project. In 1991 he became a Research Staff Member at the IBM T.J. Watson Research Center, where he was a key member of the xlHPF compiler team and the Ninja project. His research has focused on parallelism and high performance computing, and in particular on compiler and language support for the development of correct and efficient programs. To this end his research has covered dependence analysis and automatic synchronization of explicitly parallel programs, compilation under different memory models, automatic parallelization, high performance computing in Java and other high-level languages, and tools to help in the detection and localization of program errors.
Host: Frank Mueller, Computer Science, NCSU