CSC News

January 26, 2022

Labcorp Speaker Series Proudly Presents Software Security Expert Gary McGraw

Please join us on February 24th at 6:30 p.m. in room 1231 in Engineering Building 2 for the second lecture in the Spring Labcorp “Leadership in Technology” Speakers Series.


Our special guest speaker will be Dr. Gary McGraw, co-founder of the Berryville Institute of Machine Learning. McGraw’s topic will be “Security Engineering for Machine Learning.”


McGraw is a globally recognized authority on software security and the author of eight best-selling books on the topic. His titles include Software Security, Exploiting Software, Building Secure Software, Java Security, Exploiting Online Games, and six other books; he is also editor of the Addison-Wesley Software Security series.

McGraw has written over 100 peer-reviewed scientific publications, and he serves on the Advisory Boards of Irius Risk, Maxmyinterest, Runsafe Security, and Secure Code Warrior. He has served as a Board member of Cigital and Codiscope (both acquired by Synopsys) and as an Advisor to CodeDX (acquired by Synopsys), Black Duck (acquired by Synopsys), Dasient (acquired by Twitter), Fortify Software (acquired by HP), and Invotas (acquired by FireEye). He produced the monthly Silver Bullet Security Podcast for IEEE Security & Privacy magazine for thirteen years. He holds a dual PhD in Cognitive Science and Computer Science from Indiana University, where he serves on the Dean’s Advisory Council for the Luddy School of Informatics, Computing, and Engineering.


Abstract: Machine Learning appears to have made impressive progress on many tasks, including image classification, machine translation, autonomous vehicle control, and playing complex games such as chess, Go, and Atari video games. This has led to much breathless popular press coverage of Artificial Intelligence and has elevated deep learning to an almost magical status in the eyes of the public. ML, especially of the deep learning sort, is not magic, however. ML has become so popular that its application, though often poorly understood and partially motivated by hype, is exploding. In my view, this is not necessarily a good thing. I am concerned with the systemic risk incurred by adopting ML in a haphazard fashion. Our research at the Berryville Institute of Machine Learning (BIML) is focused on understanding and categorizing security engineering risks introduced by ML at the design level. Though the idea of addressing security risk in ML is not a new one, most previous work has focused either on particular attacks against running ML systems (a kind of dynamic analysis) or on operational security issues surrounding ML. This talk focuses on the results of an architectural risk analysis (sometimes called a threat model) of ML systems in general. A list of the top five (of 78 known) ML security risks will be presented.


The event is free and open to the public. Ample free parking is available on Centennial Campus after 5 p.m. For directions and more information, click here.


***Masks are required for the in-person talk. Consult University COVID guidelines for the current directives. If you are unable or uncomfortable joining us in person, the talk will be broadcast live at https://go.ncsu.edu/labcorpspeakerseries. As always, talks will be recorded and made available (pending speaker approval) on a dedicated YouTube channel.***


These lectures have been approved by the CSC Graduate Oversight Committee to count toward the required lectures for graduate students.


~coates~


