Seminars & Colloquia
Computer Science, U. Illinois at Urbana-Champaign
"Rethinking Parallel Languages and Hardware"
Monday, April 11, 2011, 4:00 PM
Location: Room 3211, EB-2, NCSU Centennial Campus
This talk is part of the Triangle Computer Science Distinguished Lecturer Series
The era of parallel computing for the masses is here, but writing correct parallel programs remains difficult. Aside from a few domains, most parallel programs are written using shared memory. The memory model, which specifies the meaning of shared variables, is at the heart of this programming model. Unfortunately, it has involved a tradeoff between programmability and performance, and has arguably been one of the most challenging and contentious areas in both hardware architecture and programming language specification. Recent broad, community-scale efforts have finally led to a convergence in this debate, with popular languages such as Java and C++ and most hardware vendors publishing compatible memory model specifications. Although this convergence is a dramatic improvement, it has exposed fundamental shortcomings in current popular languages and systems that thwart safe and efficient parallel computing.
I will discuss the path to the above convergence, the hard lessons learned, and their implications. A cornerstone of this convergence has been the view that the memory model should be a contract between the programmer and the system: if the programmer writes disciplined (data-race-free) programs, the system will provide high programmability (sequential consistency) and performance. I will discuss why this view is the best we can do with current popular languages, and why it is inadequate moving forward, requiring rethinking popular parallel languages and hardware. In particular, I will argue that (1) parallel languages should not only promote high-level disciplined models, but should also enforce the discipline, and (2) for scalable and efficient performance, hardware should be co-designed to take advantage of and support such disciplined models. I will describe the Deterministic Parallel Java language and DeNovo hardware projects at Illinois as examples of such an approach.
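The data-race-free contract described above can be illustrated with a small Java sketch (not taken from the talk; class and variable names are illustrative). If all conflicting accesses are ordered by synchronization, here a volatile flag, the Java memory model guarantees sequentially consistent behavior; if `volatile` were removed, the program would contain a data race and the reader could legally observe a stale value of `data`.

```java
// A minimal sketch of the data-race-free contract in Java.
// The volatile write to 'ready' publishes the earlier ordinary write
// to 'data'; because the program is data-race-free, the Java memory
// model guarantees the reader sees data == 42.
public class DrfExample {
    static int data = 0;
    static volatile boolean ready = false;  // synchronization point

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 42;      // ordinary write...
            ready = true;   // ...published by the volatile write
        });
        Thread reader = new Thread(() -> {
            while (!ready) { }          // volatile read: spin until published
            System.out.println(data);   // prints 42
        });
        writer.start();
        reader.start();
        writer.join();
        reader.join();
    }
}
```

Dropping the `volatile` qualifier yields a racy program, for which current language specifications permit surprising outcomes; enforcing the discipline at the language level, as in Deterministic Parallel Java, rules out such programs by construction.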
This talk draws on collaborations with many colleagues over the last two decades on memory models (in particular, a CACM'10 paper with Hans-J. Boehm) and with faculty, researchers, and students from the DPJ and DeNovo projects.
Sarita Adve is Professor of Computer Science at the University of Illinois at Urbana-Champaign. Her research interests are in computer architecture and systems, parallel computing, and power- and reliability-aware systems. Most recently, she co-developed the memory models for the C++ and Java programming languages based on her early work on data-race-free models, and co-invented the concept of lifetime-reliability-aware processors and dynamic reliability management. She was named an ACM Fellow in 2010, received the ACM SIGARCH Maurice Wilkes Award in 2008, was named a University Scholar by the University of Illinois in 2004, and received an Alfred P. Sloan Research Fellowship in 1998. She serves on the boards of the Computing Research Association and ACM SIGARCH. She received her Ph.D. in Computer Science from the University of Wisconsin in 1993.
Host: Alvin Lebeck, Computer Science, Duke U.