
[Undergraduate] November 24 Regular Wednesday Seminar (Online): Building Secure and Reliable Deep Learning Systems…

Author: Administrator | Posted: 2021-11-19 18:37

Body

Announcement details
Title: Building Secure and Reliable Deep Learning Systems from a Systems Security Perspective

Speaker: Prof. Sanghyun Hong (Oregon State University)

Date and time: Wednesday, November 24, 2021, 5:00 PM

Link: https://yonsei.zoom.us/j/82585746182?pwd=TWpxaG44RklGUkZDWmZLOVlKMVlsUT09


Abstract

As deep learning becomes a key component in many business and safety-critical systems, e.g., self-driving cars or AI-assisted robotic surgery, adversaries have started placing these systems on their radar. To understand the potential threats, recent work has studied the worst-case behaviors of deep neural networks (DNNs), such as mispredictions caused by adversarial examples or models altered by data poisoning. However, most of this prior work narrowly considers DNNs as an isolated mathematical concept and overlooks the holistic picture, leaving out the security threats posed by practical hardware- or system-level attacks.


In this talk, I will present my research, across three separate projects, on how deep learning systems, owing to the computational properties of DNNs, are particularly vulnerable to existing, well-studied attacks. First, I will show how over-parameterization hurts a system's resilience to fault-injection attacks. Even a single bit-flip, when chosen carefully, can inflict an accuracy drop of up to 100%, and half of a DNN's parameters contain at least one bit that, when flipped, degrades its accuracy by over 10%. An adversary who wields Rowhammer, a fault attack that flips random or targeted bits in physical memory (DRAM), can exploit this graceless degradation in practice. Second, I will show how computational regularities can compromise the confidentiality of a system. Leveraging the information leaked while a DNN processes a single sample, an adversary can steal the DNN's often proprietary architecture. An attacker armed with Flush+Reload, a remote side-channel attack, can accurately perform this reconstruction against a DNN deployed in the cloud. Third, I will show how input-adaptive DNNs, e.g., multi-exit networks, fail to deliver their promised computational efficiency in an adversarial setting. By adding imperceptible input perturbations, an attacker can significantly increase the computation a multi-exit network needs to produce a prediction on an input. This vulnerability can also be exploited in resource-constrained settings, such as IoT scenarios, where input-adaptive networks are gaining traction. Finally, building on the lessons learned from these projects, I will conclude my talk by outlining future research directions for designing secure and reliable deep learning systems.
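
As a rough illustration of the first result (not part of the talk itself), the short Python sketch below flips one exponent bit in the IEEE-754 float32 encoding of a small weight; the flip_bit helper is a hypothetical name introduced here, and only NumPy is assumed. It shows why a single hardware fault, e.g., one induced by Rowhammer, can blow a benign parameter up by many orders of magnitude and wreck the predictions that depend on it.

import numpy as np

def flip_bit(value: float, bit: int) -> float:
    """Return `value` with bit `bit` (0 = least significant) of its
    IEEE-754 float32 encoding flipped."""
    buf = np.array([value], dtype=np.float32)
    ints = buf.view(np.uint32)          # reinterpret the same 4 bytes as uint32
    ints[0] ^= np.uint32(1 << bit)      # flip the requested bit in place
    return float(buf[0])

w = 0.05                                # a typical small, trained DNN weight
print(w, "->", flip_bit(w, 30))
# Flipping bit 30 (the top exponent bit) scales the weight by 2**128,
# so 0.05 becomes roughly 1.7e+37: one well-chosen fault is enough to
# corrupt a single parameter and, with it, the model's accuracy.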


Bio:

Sanghyun Hong is an Assistant Professor of Computer Science at Oregon State University. His research interests lie at the intersection of computer security and machine learning. His current research focus is studying the computational properties of DNNs from a systems security perspective. He also works on identifying distinct computational behaviors of DNNs, such as network confusion or gradient-level disparity, whose quantification has led to defenses against backdooring and data poisoning. He was an invited speaker at USENIX Enigma '21, where he talked about practical hardware attacks on deep learning, and he is a recipient of the Ann G. Wylie Dissertation Fellowship. He received his PhD from the University of Maryland, College Park, and his BS in EECS from Seoul National University in South Korea.


You can find more about Sanghyun at https://sanghyun-hong.com.
