CSC News

October 13, 2020

Machine Learning Predicts How Long Museum Visitors Will Engage With Exhibits

For Immediate Release

Matt Shipman | News Services | matt_shipman@ncsu.edu

Jonathan Rowe | jprowe@ncsu.edu

Andrew Emerson | ajemerso@ncsu.edu

In a proof-of-concept study, education and artificial intelligence researchers have demonstrated the use of a machine-learning model to predict how long individual museum visitors will engage with a given exhibit. The finding opens the door to a host of new work on improving user engagement with informal learning tools.

“Education is an important part of the mission statement for most museums,” says Jonathan Rowe, co-author of the study and a research scientist in North Carolina State University’s Center for Educational Informatics (CEI). “The amount of time people spend engaging with an exhibit is used as a proxy for engagement and helps us assess the quality of learning experiences in a museum setting. It’s not like school – you can’t make visitors take a test.”

“If we can determine how long people will spend at an exhibit, or when an exhibit begins to lose their attention, we can use that information to develop and implement adaptive exhibits that respond to user behavior in order to keep visitors engaged,” says Andrew Emerson, first author of the study and a Ph.D. student at NC State.

“We could also feed relevant data to museum staff on what is working and what people aren’t responding to,” Rowe says. “That can help them allocate personnel or other resources to shape the museum experience based on which visitors are on the floor at any given time.”

To determine how machine-learning models might predict user interaction times, the researchers closely monitored 85 museum visitors as they engaged with an interactive exhibit on environmental science. Specifically, they collected data on study participants’ facial expressions, posture, where they looked on the exhibit’s screen and which parts of the screen they touched.

The data were fed into five different machine-learning models to determine which combinations of data and models resulted in the most accurate predictions.

“We found that a particular machine-learning method called ‘random forests’ worked quite well, even using only posture and facial expression data,” Emerson says.

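The model comparison described above can be sketched in a few lines. The five techniques are the ones named in the paper’s abstract, but the features and dwell times below are synthetic placeholders for illustration, not the study’s actual multimodal recordings.

```python
# Sketch of comparing five regression techniques for predicting dwell time
# (seconds) from posture/facial-expression features. Data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_visitors = 85  # matches the study's sample size

# Hypothetical features (e.g., lean angle, head tilt, smile, brow raise)
X = rng.normal(size=(n_visitors, 4))
# Hypothetical dwell times, capped at the study's ~12-minute (720 s) maximum
y = np.clip(300 + 150 * X[:, 0] + 80 * X[:, 2]
            + rng.normal(scale=60, size=n_visitors), 30, 720)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "random forest": RandomForestRegressor(random_state=0),
    "support vector machine": SVR(),
    "lasso regression": Lasso(),
    "gradient boosting trees": GradientBoostingRegressor(random_state=0),
    "multi-layer perceptron": MLPRegressor(max_iter=2000, random_state=0),
}
# Mean absolute error (in seconds) of each model on held-out visitors
errors = {name: mean_absolute_error(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
```

On real data the study reports random forests performing well even with only posture and facial-expression features; this sketch only shows the shape of such a comparison.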
The researchers also found that the models worked better the longer people interacted with the exhibit, since that gave them more data to work with. For example, a prediction made after a few minutes would be more accurate than a prediction made after 30 seconds. For context, user interactions with the exhibit lasted as long as 12 minutes.

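The early-prediction effect described above, where forecasts of total dwell time improve as more of the interaction has been observed, can be illustrated with a toy experiment. The per-second engagement signal and dwell times below are simulated under stated assumptions, not drawn from the study.

```python
# Toy illustration: a model predicting total dwell time (seconds) gets more
# accurate as its observation window grows. All data here are simulated.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_visitors, total_seconds = 85, 720  # sample size and ~12-minute cap, per the study

latent = rng.normal(size=n_visitors)  # each visitor's underlying engagement level
# Noisy per-second behavioral signal; true dwell time is driven by engagement
obs = latent[:, None] + rng.normal(scale=3.0, size=(n_visitors, total_seconds))
dwell = np.clip(360.0 + 150.0 * latent, 30.0, 720.0)

errors = {}
for window in (30, 180, 600):  # seconds of evidence available when predicting
    # Only the average signal over the first `window` seconds is observable
    X = obs[:, :window].mean(axis=1, keepdims=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, dwell, random_state=0)
    model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
    errors[window] = mean_absolute_error(y_te, model.predict(X_te))
```

Because the window average estimates a visitor’s true engagement more precisely as the window grows, the prediction error shrinks with longer observation, mirroring the pattern the researchers report.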
“We’re excited about this, because it paves the way for new approaches to study how visitors learn in museums,” says Rowe. “Ultimately, we want to use technology to make learning more effective and more engaging.”

The paper, “Early Prediction of Visitor Engagement in Science Museums with Multimodal Learning Analytics,” will be presented at the 22nd ACM International Conference on Multimodal Interaction (ICMI ’20), being held online Oct. 25-29. The paper was co-authored by Nathan Henderson, a Ph.D. student at NC State; Wookhee Min and Seung Lee, research scientists at NC State’s CEI; James Minogue, an associate professor of teacher education and learning sciences at NC State; and James Lester, Distinguished University Professor of Computer Science and the director of CEI at NC State.

The work was done with support from the National Science Foundation under grant 1713545.

-shipman-

Note to Editors: The study abstract follows.

“Early Prediction of Visitor Engagement in Science Museums with Multimodal Learning Analytics”

Authors: Andrew Emerson, Nathan Henderson, Jonathan Rowe, Wookhee Min, Seung Lee, James Minogue and James Lester, North Carolina State University

Presented: Oct. 25-29, 22nd ACM International Conference on Multimodal Interaction (online)

DOI: 10.1145/3382507.3418890

Abstract: Modeling visitor engagement is a key challenge in informal learning environments, such as museums and science centers. Devising predictive models of visitor engagement that accurately forecast salient features of visitor behavior, such as dwell time, holds significant potential for enabling adaptive learning environments and visitor analytics for museums and science centers. In this paper, we introduce a multimodal early prediction approach to modeling visitor engagement with interactive science museum exhibits. We utilize multimodal sensor data—including eye gaze, facial expression, posture, and interaction log data—captured during visitor interactions with an interactive museum exhibit for environmental science education, to induce predictive models of visitor dwell time. We investigate machine learning techniques (random forest, support vector machine, Lasso regression, gradient boosting trees, and multi-layer perceptron) to induce multimodal predictive models of visitor engagement with data from 85 museum visitors. Results from a series of ablation experiments suggest that incorporating additional modalities into predictive models of visitor engagement improves model accuracy. In addition, the models show improved predictive performance over time, demonstrating that increasingly accurate predictions of visitor dwell time can be achieved as more evidence becomes available from visitor interactions with interactive science museum exhibits. These findings highlight the efficacy of multimodal data for modeling museum exhibit visitor engagement.

