The goal of this project is to design an intelligent assistant that collaborates with a user to partially automate the construction of visualizations that support rapid and accurate exploration and analysis.
Visualization is the conversion of collections of strings and numbers into pictures that a viewer can use to "see" the values, relationships, and structure inherent in a dataset. Multidimensional data visualization presents the dual problems of size and dimensionality: the number of sample points in the dataset is very large, and each sample point contains multiple independent readings, or attributes. Viewers want to visualize some or all of this information simultaneously in a single display.
A separate project in our laboratory is studying how the low-level human visual system perceives fundamental visual features during visualization. In this approach, multidimensional data is visualized with simple visual elements, or glyphs, each of which encodes multiple attribute values at once. A glyph maximizes its information content by harnessing the perceptual abilities of the human visual system.
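To make the idea concrete, the sketch below shows one way a glyph encoding might be implemented; the weather attributes, value ranges, and feature assignments are illustrative assumptions, not taken from the project itself.

```python
# A minimal sketch of glyph-based encoding, assuming a hypothetical
# weather dataset; the attribute names, value ranges, and feature
# assignments below are illustrative, not part of the project.
from dataclasses import dataclass

@dataclass
class Glyph:
    hue: float      # color, in degrees on a 0-360 hue wheel
    size: float     # texture element size, normalized to [0, 1]
    density: float  # texture density, normalized to [0, 1]

def normalize(value: float, lo: float, hi: float) -> float:
    """Map a raw attribute value into [0, 1]."""
    return (value - lo) / (hi - lo)

def encode(sample: dict) -> Glyph:
    """Encode one multidimensional sample point as a single glyph.

    Each attribute drives a separate visual feature, so a viewer can
    read several data values from one on-screen element at once.
    """
    return Glyph(
        hue=normalize(sample["temperature"], -30.0, 50.0) * 240.0,
        size=normalize(sample["pressure"], 950.0, 1050.0),
        density=normalize(sample["humidity"], 0.0, 100.0),
    )

print(encode({"temperature": 21.5, "pressure": 1013.0, "humidity": 40.0}))
```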
Results from our psychophysical studies form a collection of perceptual guidelines that describe how to combine color, texture, and motion patterns to represent information in an underlying dataset. A "visualization assistant" (ViA), based on mixed-initiative search algorithms from artificial intelligence, will be constructed on top of these guidelines. This assistant will collaborate with viewers to choose perceptually optimal methods of converting their data into effective visualizations.
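As a rough illustration of how a mixed-initiative search over such guidelines might operate, the sketch below scores candidate attribute-to-feature mappings and lets the viewer lock choices the assistant must honor; the salience table, attribute names, and brute-force search strategy are all assumptions for illustration, not ViA's actual guidelines or algorithm.

```python
# A hedged sketch of mixed-initiative search over attribute-to-feature
# mappings; the salience scores and exhaustive search are illustrative
# assumptions, not ViA's actual perceptual guidelines.
import itertools

FEATURES = ["hue", "luminance", "texture size", "motion"]

# Toy guideline table: how perceptually salient each visual feature is
# (real values would come from the psychophysical studies above).
SALIENCE = {"hue": 0.9, "luminance": 0.8, "texture size": 0.6, "motion": 0.7}

def score(mapping: dict, importance: dict) -> float:
    """Score a mapping {attribute: feature}: more important attributes
    should receive the more perceptually salient features."""
    return sum(importance[a] * SALIENCE[f] for a, f in mapping.items())

def best_mapping(attributes, importance, locked=None):
    """Search all feature assignments, honoring any viewer-locked
    choices -- the "mixed-initiative" part of the collaboration."""
    locked = dict(locked or {})
    free_attrs = [a for a in attributes if a not in locked]
    free_feats = [f for f in FEATURES if f not in locked.values()]
    best, best_score = None, float("-inf")
    for perm in itertools.permutations(free_feats, len(free_attrs)):
        candidate = dict(locked, **dict(zip(free_attrs, perm)))
        s = score(candidate, importance)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

# The viewer insists temperature stays on hue; the assistant fills in
# the remaining assignments to maximize the guideline score.
attrs = ["temperature", "pressure", "humidity"]
importance = {"temperature": 1.0, "pressure": 0.5, "humidity": 0.7}
print(best_mapping(attrs, importance, locked={"temperature": "hue"}))
```

A deployed assistant would replace the brute-force loop with a heuristic search, since the number of candidate mappings grows factorially with the number of attributes.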
Data will be displayed in ways that harness the strengths and avoid the limitations of the viewer's visual system. The result will be a set of images that allows viewers to perform rapid, accurate, and effective exploration and analysis.
Jiae Chang (MS, 2001)