CSC News

October 11, 2022

Adaptive Experimentation Accelerator Named a Finalist in $1M XPRIZE Digital Learning Challenge

Adaptive Experimentation Accelerator, a partnership of researchers from NC State University, the University of Toronto and Carnegie Mellon University, is one of three teams selected as finalists in the XPRIZE Digital Learning Challenge, a global competition to modernize, accelerate and improve the technology and processes for studying effective learning and education.

With teams competing for a $1 million grand prize, the Digital Learning Challenge incentivizes the use of AI methods, big data, and machine learning to better understand practices that support educators, parents, policymakers, researchers, and the tens of millions of Americans enrolled in formal education every year. Competitors are in pursuit of a deeper understanding of which educational processes are working well and which should be improved to achieve better outcomes. The competition is sponsored by the Institute of Education Sciences (IES), the independent and nonpartisan statistics, research, and evaluation arm of the U.S. Department of Education.  

For the pilot study phase, ten selected teams had six months to demonstrate the capabilities of their systems at an accredited educational institution and then 30 days to launch a replication study with at least one unique learner demographic. Teams built systems that demonstrate educational infrastructure by conducting rapid, reproducible experiments in a formal learning context.

A judging panel of education and technology experts reviewed the pilot study technical submissions and, based on the submissions and data, selected three teams to move on to the final round of the competition.

The three finalist teams will share $250,000.

Adaptive Experimentation Accelerator is a collaborative effort among Dr. Thomas Price, Assistant Professor in Computer Science and Director of the HINTS lab at NC State University; Joseph Jay Williams, Assistant Professor in the Computer Science Department at the University of Toronto; Norman Bier, Director of the Open Learning Initiative (OLI) and Executive Director of the Simon Initiative at Carnegie Mellon University; and John Stamper, Associate Professor at the Human-Computer Interaction Institute at Carnegie Mellon University.

Price said his team addresses the disadvantages of using a traditional experiment to test the effectiveness of an instructional practice. In a traditional experiment, researchers would apply the teaching strategy to one group, while the other group, the control group, would not experience the strategy at all. However, Price said, this structure isn't always ideal in a classroom because, if the method proves favorable, the control group never receives its benefits. His team instead uses adaptive experimentation to enhance and personalize learning for students.

“What we want to do is get the help to as many students as possible while still learning from the experiment,” Price said. “An adaptive experiment uses various methods, often based on machine learning, to actually move students to different conditions depending on how the experimental study is playing out.”

An adaptive experiment is advantageous because it tailors students' educational experience over the course of the experiment based on evidence, Price said.
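
To make that concrete, here is a minimal sketch in Python of one common adaptive-assignment method, Thompson sampling. It is an illustration only, not the team's actual system, and every name in it is hypothetical.

```python
import random

class AdaptiveAssigner:
    """Minimal Thompson-sampling assigner for a two-condition experiment.

    Each condition keeps a Beta(successes + 1, failures + 1) posterior over
    its success rate; each new student is routed to whichever condition
    looks best in a random draw from those posteriors.
    """

    def __init__(self, conditions):
        self.stats = {c: {"successes": 0, "failures": 0} for c in conditions}

    def assign(self):
        # Sample a plausible success rate for each condition; pick the max.
        draws = {
            c: random.betavariate(s["successes"] + 1, s["failures"] + 1)
            for c, s in self.stats.items()
        }
        return max(draws, key=draws.get)

    def record(self, condition, succeeded):
        # Update the chosen condition's posterior with the observed outcome.
        key = "successes" if succeeded else "failures"
        self.stats[condition][key] += 1

# Hypothetical usage: as evidence accumulates that one condition helps,
# more students are steered toward it, while the random posterior draws
# keep some exploration going so the experiment can still learn.
assigner = AdaptiveAssigner(["self_explain", "no_self_explain"])
condition = assigner.assign()
assigner.record(condition, succeeded=True)
```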

“One example is self-explaining — trying to formulate an explanation of why something worked,” Price said. “If you're really confident in the subject matter, then an explanation might help you reflect more deeply on what you already know. But if you aren't really sure you've mastered the material in the first place, trying to create that explanation might actually make you feel worse; it might make you feel nervous that you're not able to articulate it, it might make you feel frustrated because you don't actually have an answer. 

“In an adaptive experiment, the machine learning model that's assigning students to conditions can also be aware of those contextual factors, and it might pick up on that, especially with larger experiments where we have more data. So it might start to assign students who don't have as much experience not to self-explain, and students who have more experience to self-explain.”
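
What Price describes here is, in essence, a contextual bandit: the assignment policy conditions on student features such as prior experience. A toy sketch under the same assumptions as above (hypothetical names, not the team's system):

```python
import random

def assign_with_context(stats, context):
    """Toy contextual Thompson sampling: a separate posterior is kept for
    each (student context, condition) pair, so the policy can learn, e.g.,
    that self-explaining helps experienced students more than novices."""
    draws = {
        condition: random.betavariate(successes + 1, failures + 1)
        for condition, (successes, failures) in stats[context].items()
    }
    return max(draws, key=draws.get)

# stats[context][condition] = (successes, failures) observed so far.
stats = {
    "experienced": {"self_explain": (8, 2), "no_self_explain": (5, 5)},
    "novice":      {"self_explain": (2, 8), "no_self_explain": (6, 4)},
}
print(assign_with_context(stats, "novice"))  # usually picks "no_self_explain"
```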

The team employed a software architecture called MOOClet, which supports adaptive experimentation on educational platforms and uses machine learning to develop data-driven solutions for personalized learning. Price said this idea has the long-term potential to help build a learning experience for students that best suits their needs.

“One vision of this is the idea of a continually improving classroom: The course that you are taking today is collecting data that will inform what that course looks like, not just for the next semester, but even in the next lesson,” Price said. “So if we have tools that allow instructors to adapt their course as the semester progresses based on what they're learning from their students, to gain insights about what is working and what is not, for teachers to feel some freedom to try different things and choose the one that's working best or even assign the one that's working best to different groups depending on their needs, I think that's a vision that is really powerful; it's a vision of personalization that people have been talking about for many years.”

For the next phase of the competition, the final experiments must be replicated at least five times with three or more distinct learner demographics. The XPRIZE Digital Learning Challenge Grand Prize Winner will be announced in March 2023.

“This new phase of the competition will further evaluate each team’s technology, data, and user experience, but also review the strength of their business plans,” said IES Director Dr. Mark Schneider. “At this juncture, the ability to test teams’ effectiveness will offer valuable insights into how to help close the learning gap and offer a more in-depth look at how technology is used in measuring educational methods.”

“We’re thrilled to be in a position to continue to demonstrate our experimental design in a formal education setting,” stated XPRIZE Digital Learning Challenge Technical Lead Dr. Monique Golden. “Since the pilot, we’ve identified three teams that will move on to the finals in the hopes of better understanding how we can harness big data to impact the education experience for millions of Americans across the country.” 

For more information on the Digital Learning Challenge and the teams involved, please visit xprize.org/challenge/digitallearning.  

~vespa~

