Publications


* indicates equal contribution


2022 IEEE International Symposium on Measurements & Networking (M&N)

Won Park*, Nicolas Ferland, Wenting Sun

In modern network and telecommunication systems, hundreds of thousands of nodes are interconnected by telecommunication links to exchange information. The complexity of these systems and the stringent requirements of service level agreements make it necessary to monitor network performance intelligently and enable preventative measures to ensure network performance. Anomaly detection (AD) - the task of identifying events that deviate from normal behavior - continues to be an important problem. However, techniques traditionally employed by industry on real-world data - DBSCAN and MAD - have severe limitations, such as the need to manually tune and calibrate the algorithms frequently and a limited capacity to capture past history in the model. Lately, there has been much progress in applying machine learning techniques, specifically autoencoders, to AD. However, thus far, few of these techniques have been tested on the kind of multivariate time-series data faced by telecommunication companies. We propose a novel autoencoder-based deep learning framework called ERICA, including a new pipeline, to address these shortcomings. Our approach has been demonstrated to achieve better performance (an increase in F-score of over 10%) and significantly enhanced scalability.
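
As a rough illustration of the general approach this work builds on (not ERICA itself, whose architecture and pipeline are not described here), an autoencoder-based anomaly detector for multivariate time-series windows trains on presumed-normal traffic and flags windows with high reconstruction error. A minimal PyTorch sketch; the layer sizes, window shape, and threshold are all assumptions:

```python
import torch
import torch.nn as nn

class WindowAutoencoder(nn.Module):
    """Compress a flattened window of multivariate KPI readings and reconstruct it."""
    def __init__(self, window: int, n_features: int, latent: int = 16):
        super().__init__()
        d = window * n_features
        self.encoder = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, d))

    def forward(self, x):
        flat = x.reshape(x.shape[0], -1)
        return self.decoder(self.encoder(flat)).reshape(x.shape)

def fit(model, normal_windows, epochs=20, lr=1e-3):
    """Train on presumed-normal data only; the model learns to reconstruct normal behavior."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(normal_windows), normal_windows)
        loss.backward()
        opt.step()

def flag_anomalies(model, windows, threshold):
    """A window is anomalous if its reconstruction error exceeds a calibrated threshold."""
    with torch.no_grad():
        err = ((model(windows) - windows) ** 2).mean(dim=(1, 2))
    return err > threshold

# Usage: 32 synthetic windows of 30 timesteps x 8 KPIs.
x = torch.randn(32, 30, 8)
model = WindowAutoencoder(window=30, n_features=8)
fit(model, x)
print(flag_anomalies(model, x, threshold=1.0))
```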


NOMS 2022 (IEEE/IFIP Network Operations and Management Symposium)

Chia-Cheng Yen, Wenting Sun, Hakimeh Purmehdi, Won Park*, Kunal Rajan Deshmukh, Nishank Thakrar, Omar Nassef, Adam Jacobs

Due to the rapid adoption of 5G networks and the increasing number of devices and base stations (gNBs) connected to them, manually identifying malfunctioning machines or devices that cause part of a network to fail is becoming more challenging. Furthermore, data collected from the networks are not always sufficient. To overcome these two issues, we propose a novel root cause analysis (RCA) framework that integrates graph neural networks (GNNs) with graph structure learning (GSL) to infer hidden dependencies from the available data. The learned dependencies form the graph structure used to predict the root cause machines or devices. We found that even though the data is often incomplete, the GSL model can infer fairly accurate hidden dependencies from data with a large number of nodes and generate informative graph representations for GNNs to identify the root cause. Our experimental results showed that higher accuracy in identifying root cause and victim nodes can be achieved as the number of nodes in an environment increases.
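
The core idea, learning an adjacency matrix from node features and feeding it to a graph convolution that scores root-cause candidates, can be sketched generically. This is not the paper's architecture; the embedding sizes, normalization, and scoring head below are assumptions:

```python
import torch
import torch.nn as nn

class GraphStructureLearner(nn.Module):
    """Infer a dense soft adjacency matrix from per-node feature vectors."""
    def __init__(self, in_dim, emb_dim=32):
        super().__init__()
        self.embed = nn.Linear(in_dim, emb_dim)

    def forward(self, x):                      # x: (n_nodes, in_dim)
        z = torch.tanh(self.embed(x))          # node embeddings
        adj = torch.relu(z @ z.t())            # pairwise affinity -> soft edges
        return torch.softmax(adj, dim=-1)      # row-normalized dependencies

class GCNLayer(nn.Module):
    """One message-passing step over the learned graph."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        return torch.relu(self.lin(adj @ x))   # aggregate neighbors, then transform

class RootCauseScorer(nn.Module):
    """Score each node's likelihood of being the root cause."""
    def __init__(self, in_dim):
        super().__init__()
        self.gsl = GraphStructureLearner(in_dim)
        self.gcn = GCNLayer(in_dim, 16)
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        adj = self.gsl(x)                      # hidden dependencies, learned end to end
        h = self.gcn(x, adj)
        return self.head(h).squeeze(-1)        # one logit per node

scores = RootCauseScorer(in_dim=10)(torch.randn(50, 10))  # 50 nodes, 10 KPIs each
print(scores.topk(3).indices)                  # top-3 root-cause candidates
```

Because the adjacency is produced by a differentiable module rather than fixed up front, incomplete dependency data can be compensated for during training, which is the property the abstract highlights.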


ICLR 2022

Yi Zeng, Si Chen, Won Park*, Z. Morley Mao, Ming Jin, Ruoxi Jia

We propose a minimax formulation for removing backdoors from a given poisoned model based on a small set of clean data. This formulation encompasses much of prior work on backdoor removal. We propose the Implicit Backdoor Adversarial Unlearning (I-BAU) algorithm to solve the minimax. Unlike previous work, which breaks down the minimax into separate inner and outer problems, our algorithm utilizes the implicit hypergradient to account for the interdependence between inner and outer optimization. We theoretically analyze its convergence and the generalizability of the robustness gained by solving the minimax on clean data to unseen test data. In our evaluation, we compare I-BAU with six state-of-the-art backdoor defenses on eleven backdoor attacks over two datasets and various attack settings, including the common setting where the attacker targets one class as well as important but underexplored settings where multiple classes are targeted. I-BAU's performance is comparable to, and most often significantly better than, the best baseline. In particular, its performance is more robust to variation in triggers, attack settings, poison ratio, and clean data size. Moreover, I-BAU requires less computation to take effect; in particular, it is more than 13x faster than the most efficient baseline in the single-target attack setting. Furthermore, it remains effective in the extreme case where the defender can only access 100 clean samples, a setting in which all the baselines fail to produce acceptable results.
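
Roughly, the minimax the abstract describes can be written as follows, with theta the model weights, delta a candidate trigger perturbation bounded in norm, and (x_i, y_i) the n clean samples; the notation here is a hedged paraphrase, not copied from the paper:

```latex
\min_{\theta} \; \max_{\|\delta\| \le \epsilon} \;
  \frac{1}{n} \sum_{i=1}^{n} L\!\left(f_{\theta}(x_i + \delta),\, y_i\right)
```

Alternating schemes solve the inner maximization and then update theta as if the inner solution were fixed; I-BAU instead differentiates through the inner solution's dependence on theta, which is where the implicit hypergradient comes from.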


ICCV 2021

Yi Zeng*, Won Park*, Z. Morley Mao, Ruoxi Jia

Backdoor attacks have been considered a severe security threat to deep learning. Such attacks can make models perform abnormally on inputs with predefined triggers and still retain state-of-the-art performance on clean data. While backdoor attacks have been thoroughly investigated in the image domain from both attackers' and defenders' sides, an analysis in the frequency domain has been missing thus far. This paper first revisits existing backdoor triggers from a frequency perspective and performs a comprehensive analysis. Our results show that many current backdoor attacks exhibit severe high-frequency artifacts, which persist across different datasets and resolutions. We further demonstrate that these high-frequency artifacts enable a simple way to detect existing backdoor triggers, achieving a detection rate of 98.50% without prior knowledge of the attack details or the target model. Acknowledging previous attacks' weaknesses, we propose a practical way to create smooth backdoor triggers without high-frequency artifacts and study their detectability. We show that existing defenses can benefit from incorporating these smooth triggers into their design considerations. Moreover, we show that a detector tuned on stronger smooth triggers generalizes well to unseen weak smooth triggers. In short, our work emphasizes the importance of frequency analysis when designing both backdoor attacks and defenses in deep learning.
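
The kind of frequency-domain inspection described here can be approximated in a few lines: transform an image to the frequency domain and compare the share of energy outside the low-frequency band before and after a trigger is stamped. A toy numpy sketch, where the smooth "clean" image, the 4x4 trigger, and the 0.25 cutoff are all made up for illustration:

```python
import numpy as np

def high_freq_energy(img: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a low-frequency square around DC."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    ch, cw = int(h * cutoff), int(w * cutoff)
    low = spec[h//2 - ch : h//2 + ch, w//2 - cw : w//2 + cw].sum()
    return 1.0 - low / spec.sum()

# A smooth, low-frequency stand-in for a natural image channel.
xx, yy = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
clean = 0.5 + 0.5 * np.sin(2 * np.pi * xx) * np.sin(2 * np.pi * yy)

patched = clean.copy()
patched[-4:, -4:] = np.eye(4)   # a crude, sharp-edged 4x4 corner trigger

print(f"clean:   {high_freq_energy(clean):.3f}")
print(f"patched: {high_freq_energy(patched):.3f}")  # noticeably higher
```

A detector in this spirit would simply threshold the high-frequency energy; smooth triggers of the kind the paper proposes are designed so that this gap disappears.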


ICIP 2021

Won Park, Nan Liu, Qi Alfred Chen, Z. Morley Mao

A critical aspect of autonomous vehicles (AVs) is the object detection stage, which is increasingly being performed with sensor fusion models: multimodal 3D object detection models that utilize both 2D RGB image data and 3D data from a LIDAR sensor as inputs. In this work, we perform the first study to analyze the robustness of a high-performance, open-source sensor fusion model architecture against adversarial attacks, and we challenge the popular belief that the use of additional sensors automatically mitigates the risk of adversarial attacks. We find that, despite the use of a LIDAR sensor, the model is vulnerable to our purposefully crafted image-based adversarial attacks, including disappearance, universal patch, and spoofing attacks. After identifying the underlying reason, we explore potential defenses and provide recommendations for improved sensor fusion models.
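
The principle that perturbing only the camera branch can sway a fusion model can be shown with a generic gradient-sign attack on a made-up two-branch network. This is a mechanics-only sketch, not the paper's attacks or the open-source model it studies, and with untrained toy weights the prediction flip is not guaranteed:

```python
import torch
import torch.nn as nn

class TinyFusionNet(nn.Module):
    """Made-up stand-in for a camera+LiDAR fusion detector: two branches, fused by concatenation."""
    def __init__(self):
        super().__init__()
        self.img_branch = nn.Sequential(nn.Flatten(), nn.Linear(3 * 16 * 16, 32), nn.ReLU())
        self.lidar_branch = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
        self.head = nn.Linear(64, 2)           # object present / absent

    def forward(self, img, lidar):
        return self.head(torch.cat([self.img_branch(img), self.lidar_branch(lidar)], dim=-1))

def fgsm_image_only(model, img, lidar, label, eps=0.03):
    """Perturb only the camera input; the LiDAR features are left untouched."""
    img = img.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(img, lidar), label)
    loss.backward()
    return (img + eps * img.grad.sign()).clamp(0, 1).detach()

model = TinyFusionNet()
img, lidar = torch.rand(1, 3, 16, 16), torch.rand(1, 64)
label = torch.tensor([1])                      # "object present"
adv = fgsm_image_only(model, img, lidar, label)
print(model(img, lidar).argmax(-1), model(adv, lidar).argmax(-1))
```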


ACNS 2020

Michael McCoyd, Won Park, Steven Chen, Neil Shah, Ryan Roggenkemper, Minjune Hwang, Jason Xinyu Liu, David Wagner

Deep learning image classification is vulnerable to adversarial attack, even if the attacker changes just a small patch of the image. We propose a defense against patch attacks based on partially occluding the image around each candidate patch location, so that a few occlusions each completely hide the patch. We demonstrate on CIFAR-10, Fashion MNIST, and MNIST that our defense provides certified security against patch attacks of a certain size.
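
A sketch of the occlusion-voting idea (not the paper's exact certification procedure): classify the image under a grid of occlusions sized so that at least one occlusion fully hides any patch of the defended size, then inspect the votes. The stub classifier, occlusion size, and stride below are illustrative only:

```python
import numpy as np

def occlusion_vote(classify, img, occ_size=8, stride=8):
    """Classify the image under a grid of occlusions, each blanking one region.
    If a patch attack is fully hidden by at least one occlusion, that vote is clean."""
    h, w = img.shape[:2]
    votes = []
    for top in range(0, h - occ_size + 1, stride):
        for left in range(0, w - occ_size + 1, stride):
            occluded = img.copy()
            occluded[top:top + occ_size, left:left + occ_size] = 0.0
            votes.append(classify(occluded))
    labels, counts = np.unique(votes, return_counts=True)
    majority = labels[np.argmax(counts)]
    dissent = len(votes) - counts.max()
    # A small dissenting minority is expected when one occlusion hides the patch;
    # large disagreement suggests the input cannot be certified.
    return majority, dissent

# Usage with a stub classifier that just thresholds mean brightness.
stub = lambda x: int(x.mean() > 0.5)
img = np.random.default_rng(1).uniform(size=(32, 32))
print(occlusion_vote(stub, img))
```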


CCS 2019 (ACM Conference on Computer and Communications Security)

Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, Z. Morley Mao

In Autonomous Vehicles (AVs), one fundamental pillar is perception, which leverages sensors like cameras and LiDARs (Light Detection and Ranging) to understand the driving environment. Due to its direct impact on road safety, multiple prior efforts have been made to study the security of perception systems. In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, which is highly important but unexplored. We consider LiDAR spoofing attacks as the threat model and set the attack goal as spoofing obstacles close to the front of a victim AV. We find that blindly applying LiDAR spoofing is insufficient to achieve this goal due to the machine learning-based object detection process. Thus, we then explore the possibility of strategically controlling the spoofed attack to fool the machine learning model. We formulate this task as an optimization problem and design modeling methods for the input perturbation function and the objective function. We also identify the inherent limitations of directly solving the problem using optimization and design an algorithm that combines optimization and global sampling, which improves the attack success rates to around 75%. As a case study to understand the attack impact at the AV driving decision level, we construct and evaluate two attack scenarios that may damage road safety and mobility. We also discuss defense directions at the AV system, sensor, and machine learning model levels.
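
The "optimization plus global sampling" strategy can be sketched generically: sample many starting perturbations across the search space, locally refine each by gradient descent on a differentiable attack objective, and keep the best result. Everything below is a placeholder, including the toy non-convex objective, not the paper's perturbation model or detector:

```python
import torch

def attack_objective(perturb):
    """Placeholder for an attack objective (lower = more successful spoofing).
    A toy non-convex surrogate with many local minima."""
    return -torch.sin(3 * perturb).sum() + 0.1 * (perturb ** 2).sum()

def optimize_with_sampling(n_restarts=20, steps=100, lr=0.05, dim=8):
    """Global sampling over starting points + local gradient refinement of each."""
    best_val, best_p = float("inf"), None
    for _ in range(n_restarts):
        p = (torch.rand(dim) * 4 - 2).requires_grad_(True)   # sample a start in [-2, 2]
        opt = torch.optim.Adam([p], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = attack_objective(p)
            loss.backward()
            opt.step()
        with torch.no_grad():
            val = attack_objective(p).item()
        if val < best_val:
            best_val, best_p = val, p.detach()
    return best_p, best_val

p, v = optimize_with_sampling()
print(f"best objective: {v:.3f}")
```

The sampling step compensates for the limitation the abstract notes with direct optimization: gradient descent alone stalls in poor local optima of a highly non-convex objective.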


STWiMob 2018

Steven Chen*, Won Park*, Joanna Yang*, David Wagner

Smartphone sensors are becoming more universal and more accurate. In this paper, we aim to distinguish between four common positions or states a phone can be in: in the hand, in a pocket, in a backpack, or on a table. Using a uniquely designed neural network and data from the accelerometer and the screen state, we achieve 92% accuracy when training and testing on the same phone. We also explore extending this approach to different phones and propose an acceleration calibration technique to do so.
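
The input/output shape of such a classifier, a short accelerometer window plus a screen-state flag mapped to four position classes, can be sketched as follows. This is a toy stand-in, not the paper's network; the window length, layer sizes, and class order are assumptions:

```python
import torch
import torch.nn as nn

POSITIONS = ["hand", "pocket", "backpack", "table"]

class PositionNet(nn.Module):
    """Flatten a short accelerometer window (x, y, z) plus a screen-on flag
    into a small MLP with four position classes."""
    def __init__(self, window=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window * 3 + 1, 64), nn.ReLU(),
            nn.Linear(64, len(POSITIONS)),
        )

    def forward(self, accel, screen_on):       # accel: (b, window, 3); screen_on: (b, 1)
        x = torch.cat([accel.flatten(1), screen_on], dim=-1)
        return self.net(x)

model = PositionNet()
accel = torch.randn(4, 50, 3)                  # four synthetic 50-sample windows
screen = torch.ones(4, 1)                      # screen on
print([POSITIONS[i] for i in model(accel, screen).argmax(-1)])
```

For the cross-phone setting the abstract mentions, a calibration step would normalize each device's accelerometer bias and scale before the window is fed to the network.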