Seminars & Colloquia
"Human Graphics: Imagery That Works For Its Users"
Thursday, March 10, 2005, 3:30 PM
Location: 402-A, Withers NCSU Historical Campus
The first component is a new graphics renderer that is highly adaptive to the user's view. While previous renderers were spatially adaptive, our renderer is both spatially and temporally adaptive. Closed-loop feedback guides sampling to image regions that change significantly over space or time. Adaptive reconstruction emphasizes older samples in static settings, producing sharper images, and newer samples in dynamic settings, producing images that may be blurred but are up to date. Compared to standard full-resolution 60-Hz rendering, our renderer's output is of similar quality (as measured by root-mean-squared error) but is generated using an order of magnitude fewer samples. The renderer already adapts immediately to view and object motion, and it is an ideal platform for other perceptual adaptations such as fidelity reduction in the view periphery.
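The closed-loop idea above can be illustrated with a toy sketch: probe samples estimate per-pixel temporal change, the sample budget is spent where change is large or data is stale, and reconstruction weights new samples heavily in dynamic regions while averaging in older samples where the scene is static. This is a minimal illustration under assumed parameters (the `scene` callable, the change threshold, the age weighting), not the renderer described in the talk.

```python
import numpy as np

def adaptive_step(scene, prev_img, age, budget, rng):
    """One frame of a toy spatially and temporally adaptive renderer.

    scene    : callable pixel_index -> value (stands in for tracing a ray)
    prev_img : reconstruction from the previous frame
    age      : per-pixel count of frames since each pixel was last sampled
    budget   : number of pixels we may sample this frame (<< image size)
    """
    n = prev_img.size
    # Closed-loop feedback: probe a few pixels to estimate temporal change.
    probe = rng.choice(n, size=budget // 4, replace=False)
    change = np.zeros(n)
    change[probe] = np.abs(
        np.array([scene(i) for i in probe]) - prev_img[probe])
    # Priority: sample where change is large or where samples have gone stale.
    priority = change + 0.01 * age
    picks = np.argsort(priority)[-budget:]
    # Reconstruction: dynamic pixels trust the new sample outright (up to date,
    # possibly noisy); static pixels keep averaging old samples (sharper).
    img = prev_img.copy()
    for i in picks:
        new = scene(i)
        w = 1.0 if change[i] > 0.05 else 1.0 / (age[i] + 1)
        img[i] = w * new + (1 - w) * prev_img[i]
    age += 1
    age[picks] = 0
    return img, age
```

With a static scene, repeated calls converge toward the true image while spending only a fraction of the per-frame samples a full render would use.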
The second component is a new system for automatically modeling urban land use. A wide range of systems exist for modeling natural objects and phenomena, but very few model human artifacts, despite the great need for such content in digital entertainment and simulation. We begin meeting this demand with a system that models cities, the largest and most complex of human artifacts. Our system uses agent-based simulation to model residential, commercial, industrial, and transportation land use, producing 2D maps that automate the placement of buildings for artists creating virtual cities. Each city we create is unique but exhibits real-world development patterns, according to standard measures used by urban geographers. This work has been developed in collaboration with Electronic Arts, publisher of SimCity.
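Agent-based land-use simulation of this kind can be sketched in a few lines: agents of different use types arrive one at a time and develop the empty cell they find most desirable under simple neighborhood rules. The rules and constants below are assumptions chosen for illustration, not the speaker's actual model.

```python
import numpy as np

# Cell codes for the toy land-use grid (assumed for this sketch).
EMPTY, RES, COM, IND = 0, 1, 2, 3

def neighbor_counts(grid, kind):
    """Count 8-neighbors of the given land-use kind at every cell
    (edges wrap, which is acceptable for a toy model)."""
    m = (grid == kind).astype(int)
    total = np.zeros_like(m)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            total += np.roll(np.roll(m, dy, axis=0), dx, axis=1)
    return total

def simulate(size=32, steps=400, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=int)
    grid[size // 2, size // 2] = COM  # seed a town center
    for _ in range(steps):
        kind = rng.choice([RES, COM, IND], p=[0.6, 0.25, 0.15])
        res, com, ind = (neighbor_counts(grid, k) for k in (RES, COM, IND))
        # Assumed desirability rules: residents like commerce and each other
        # but avoid industry; commerce follows residents; industry clusters
        # with itself and avoids residents.
        if kind == RES:
            score = com + res - 2 * ind
        elif kind == COM:
            score = 2 * res + com
        else:
            score = ind - res
        score = score + rng.normal(0, 0.5, grid.shape)  # agent idiosyncrasy
        score[grid != EMPTY] = -np.inf                  # build on empty land only
        y, x = np.unravel_index(np.argmax(score), grid.shape)
        grid[y, x] = kind
    return grid
```

Each run yields a unique map, but the neighborhood rules push it toward clustered residential, commercial, and industrial districts, the qualitative pattern the abstract describes.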
Host: Dr. Christopher Healey, Associate Professor, Computer Science Dept, NCSU