Seminars & Colloquia
Computer Science, University of Wisconsin
"Binary Code Patching: An Ancient Art Refined for the 21st Century"
Monday, October 30, 2006, 4:00 PM
Location: Room 313, MRC, NCSU Centennial Campus
This talk is part of the Triangle Computer Science Distinguished Lecturer Series
Abstract: Patching binary code dates back to some of the earliest computer systems. Binary code patching allows access to a program without having access to the source code, obviating the need to recompile, re-link, and, in the dynamic case, re-execute.
In the early days, it was a bold technique used by serious programmers to avoid the long recompile, reassemble, and link steps. Code patching required an intimate knowledge of the instruction set and its binary representation. Great advances have since simplified the use of code patching, making it less error prone and more flexible. "Binary rewriters", which modify a binary before its execution, were a major advance in the technology. Tools such as OM, EEL, and Vulcan have enabled the building of tools for tracing, simulation, testing, and sandboxing.
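To make the idea of static rewriting concrete, here is a toy sketch in the spirit of OM, EEL, and Vulcan: the program is transformed before execution by inserting a counting probe ahead of every call site. The "binary" here is a list of symbolic instructions, not real machine code, and the opcode names are invented for illustration.

```python
# Toy static binary rewriter: insert a TICK probe before every CALL.
# Real rewriters operate on machine code and must fix up addresses;
# this sketch only shows the insert-before-instruction pattern.

def rewrite(program):
    """Return a new program with instrumentation inserted."""
    out = []
    for ins in program:
        if ins.startswith("CALL"):
            out.append("TICK")   # inserted instrumentation
        out.append(ins)
    return out

def count_ticks(program):
    """Stand-in for executing the program: count probe firings."""
    return sum(1 for ins in program if ins == "TICK")

original = ["LOAD r1", "CALL f", "ADD r1, r2", "CALL g", "RET"]
patched = rewrite(original)   # two CALLs, so two probes inserted
```

The essential property of the static approach is visible even in this sketch: all instrumentation decisions are made before the program runs, so every call site pays the probe cost whether or not it is of interest.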
Moving beyond static patching, we developed "dynamic instrumentation," the ability to patch code into a running program. Dynamic instrumentation makes it possible to adapt the code to the immediate need and to dynamically control overhead costs. This technology has been applied to both user programs and operating system kernels. Examples of dynamic instrumenters include Dyninst and Kerninst. This technology forms the foundation of the Paradyn Performance Tools.
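The distinguishing lifecycle of dynamic instrumentation — attach a probe while the program runs, then detach it to eliminate the overhead — can be mimicked with Python's tracing hook. This is only an analogue: real dynamic instrumenters such as Dyninst patch machine code with trampolines rather than using an interpreter hook, and all names below are invented for the sketch.

```python
import sys
from collections import Counter

# Toy analogue of dynamic instrumentation: a probe is "patched in" to a
# running program, counts function entries, and is later "patched out"
# so the program runs at full speed again.

entry_counts = Counter()

def probe(frame, event, arg):
    if event == "call":
        entry_counts[frame.f_code.co_name] += 1
    return None  # no per-line tracing: keep overhead low

def instrument():
    sys.settrace(probe)   # attach instrumentation to the running program

def remove():
    sys.settrace(None)    # detach: remaining execution pays no overhead

def work(n):
    return sum(range(n))

instrument()
work(10)
work(20)
remove()
work(30)  # not counted: instrumentation already removed
```

The ability to remove the probe mid-run is what lets a tool adapt its overhead to the immediate measurement need, rather than committing to it before execution as a static rewriter must.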
Dynamic code patching continues to get more aggressive. The latest work in the area, which we call "self-propelled instrumentation," inserts instrumentation code that propagates itself along the program's control flow as the program executes. At its best, this technique can provide detailed instrumentation at very low overhead. Examples of such systems include spTrace, DIOTA, FIT, and DynamoRIO.
Key to both static and dynamic patching are the interfaces. There is a difficult balance between an interface that abstracts the details of the code, often using control- and data-flow graphs and instruction categories, and one that exposes the details of the instruction set.
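The two ends of that spectrum can be sketched as follows. The names, symbol table, and opcode are invented for illustration and belong to no real rewriter's API: the high-level call names a function and an abstract point ("entry"), while the low-level call pokes raw bytes at an address the caller must already know.

```python
# Sketch of the two interface extremes for a patching tool.

SYMBOLS = {"work": 0x40}        # hypothetical symbol table
image = bytearray(0x100)        # hypothetical code image

def patch_bytes(addr, opcode):
    """Low-level interface: caller supplies the exact address and bytes."""
    image[addr] = opcode

def insert_at_entry(func, opcode):
    """Abstract interface: the tool maps a name to its patch location."""
    patch_bytes(SYMBOLS[func], opcode)

insert_at_entry("work", 0xCC)   # e.g. a breakpoint opcode
```

The abstract interface is portable and hard to misuse, but hides details some tools need; the low-level one exposes everything at the cost of tying the client to one instruction set.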
In this talk, I will discuss the development of code patching over the years, with examples from the various technologies (including our tools) and present results from our latest work in self-propelled instrumentation. I will also discuss interface abstractions and our work towards the goal of multi-platform interfaces and tools.
Some related readings from our project, including the original Dyninst paper:
Instrumenting threaded programs: ftp://ftp.cs.wisc.edu/paradyn/papers/Xu99Dynamic.pdf
Details of dynamic instrumentation:
Dynamic kernel instrumentation (Kerninst):
Short Bio: Barton Miller is Professor of Computer Sciences at the University of Wisconsin, Madison. He directs the Paradyn Parallel Performance Tool project, which is investigating performance and instrumentation technologies for parallel and distributed applications and systems. He also co-directs the WiSA security project. His research interests include tools for high-performance computing systems, binary code analysis and instrumentation, computer security, and scalable distributed systems.
Miller co-chaired the SC2003 Technical Papers program, was Program co-Chair of the 1998 ACM/SIGMETRICS Symposium on Parallel and Distributed Tools, and was General Chair of the 1996 ACM/SIGMETRICS Symposium on Parallel and Distributed Tools. He also twice chaired the ACM/ONR Workshop on Parallel and Distributed Debugging. Miller has served on the editorial boards of IEEE Transactions on Parallel and Distributed Systems and the International Journal of Parallel Processing, and is currently on the editorial boards of Concurrency and Computation: Practice and Experience and Computing Systems. Miller has chaired numerous workshops and has been on numerous conference program committees. He is also a member of the IEEE Technical Committee on Parallel Processing.
Miller is a member of the Los Alamos National Laboratory Computing, Communications and Networking Division Review Committee, IDA Center for Computing Sciences Program Review Committee, and has been on the U.S. Secret Service Electronic Crimes Task Force (Chicago Area), the Advisory Committee for Tuskegee University's High Performance Computing Program, and the Advisory Board for the International Summer Institute on Parallel Computer Architectures, Languages, and Algorithms in Prague. Miller is an active participant in the European Union APART performance tools initiative.
Miller received his Ph.D. degree in Computer Science from the University of California, Berkeley in 1984. He is a Fellow of the ACM.
Host: Frank Mueller, Computer Science, NCSU