Session 4A

ML and security (III)

10:30 AM — 11:50 AM HKT
Jun 8 Tue, 10:30 PM — 11:50 PM EDT

REFIT: a Unified Watermark Removal Framework for Deep Learning Systems with Limited Data

Xinyun Chen (University of California, Berkeley, USA), Wenxiao Wang (Tsinghua University, China), Chris Bender (University of California, Berkeley, USA), Yiming Ding (University of California, Berkeley, USA), Ruoxi Jia (Virginia Tech, USA), Bo Li (University of Illinois at Urbana-Champaign, USA), Dawn Song (University of California, Berkeley, USA)

Training deep neural networks from scratch can be computationally expensive and requires large amounts of training data. Recent work has explored different watermarking techniques to protect pre-trained deep neural networks from potential copyright infringement. However, these techniques can be vulnerable to watermark removal attacks. In this work, we propose REFIT, a unified watermark removal framework based on fine-tuning, which does not rely on knowledge of the watermarks and is effective against a wide range of watermarking schemes. In particular, we conduct a comprehensive study of a realistic attack scenario where the adversary has limited training data, which has not been emphasized in prior work on attacks against watermarking schemes. To effectively remove the watermarks without compromising model functionality under this weak threat model, we propose two techniques that are incorporated into our fine-tuning framework: (1) an adaptation of the elastic weight consolidation (EWC) algorithm, originally proposed for mitigating the catastrophic forgetting phenomenon; and (2) unlabeled data augmentation (AU), where we leverage auxiliary unlabeled data from other sources. Our extensive evaluation shows the effectiveness of REFIT against diverse watermark embedding schemes. The experimental results demonstrate that our fine-tuning-based watermark removal attacks could pose real threats to the copyright of pre-trained models, and thus highlight the importance of further investigating the watermarking problem and proposing more robust watermark embedding schemes against such attacks.
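The EWC regularizer that REFIT adapts can be sketched in a few lines. This is an illustrative plain-NumPy formulation under assumed names (`ewc_penalty`, `total_loss`), not the paper's implementation: the penalty anchors each parameter to its pre-trained value, weighted by an estimated Fisher importance, so that fine-tuning on limited data can change watermark-related behavior without destroying the main task.

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """EWC regularizer: penalize parameter movement away from the
    pre-trained values theta_star, weighted by Fisher importance."""
    theta = np.asarray(theta, dtype=float)
    theta_star = np.asarray(theta_star, dtype=float)
    fisher = np.asarray(fisher, dtype=float)
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

def total_loss(task_loss, theta, theta_star, fisher, lam=1.0):
    # Fine-tuning objective: fit the (small) clean dataset while the EWC
    # term keeps the most important weights near their pre-trained values,
    # preserving model functionality as the watermark is unlearned.
    return task_loss + ewc_penalty(theta, theta_star, fisher, lam)
```

In REFIT's weak threat model the attacker would estimate the Fisher terms from their own limited data; here they are simply given.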

Recompose Event Sequences vs. Predict Next Events: A Novel Anomaly Detection Approach for Discrete Event Logs

Lun-Pin Yuan (Penn State University, USA), Peng Liu (Information Sciences and Technology, Pennsylvania State University, USA), Sencun Zhu (The Pennsylvania State University, USA)

One of the most challenging problems in the field of intrusion detection is anomaly detection for discrete event logs. While most earlier work focused on applying unsupervised learning to engineered features, recent work has started to address this challenge by applying deep learning to abstractions of discrete event entries. Inspired by natural language processing, LSTM-based anomaly detection models were proposed. They try to predict upcoming events and raise an anomaly alert when a prediction fails to meet a certain criterion. However, such a predict-next-event methodology has a fundamental limitation: event predictions may not be able to fully exploit the distinctive characteristics of sequences. This limitation leads to high false-positive (FP) rates. It is also critical to examine the structure of sequences and the bi-directional causality among individual events. To this end, we propose a new methodology: recomposing event sequences for anomaly detection. We propose DabLog, an LSTM-based Deep Autoencoder-Based anomaly detection method for discrete event Logs. The fundamental difference is that, rather than predicting upcoming events, our approach determines whether a sequence is normal or abnormal by analyzing (encoding) and reconstructing (decoding) the given sequence. Our evaluation results show that our new methodology can significantly reduce the number of FPs, hence achieving a higher F1 score.
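The recompose-versus-predict distinction boils down to the decision rule: score a whole sequence by how well it can be reconstructed, rather than checking each next-event prediction. A minimal sketch, with a toy stand-in for the trained autoencoder (all names here are illustrative, not DabLog's API):

```python
def is_anomalous(sequence, reconstruct, threshold):
    """Reconstruction-based detection: flag a sequence when the
    autoencoder's reconstruction error exceeds a threshold, instead of
    alerting whenever a single next-event prediction fails."""
    recon = reconstruct(sequence)
    # Fraction of event positions the model fails to reconstruct.
    errors = sum(a != b for a, b in zip(sequence, recon))
    return errors / len(sequence) > threshold

# Toy stand-in for a trained LSTM autoencoder: it can only reproduce
# events from the "normal" vocabulary it saw during training.
normal_events = {"open", "read", "write", "close"}
def toy_reconstruct(seq):
    return [e if e in normal_events else "<unk>" for e in seq]
```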

Robust Roadside Physical Adversarial Attack Against Deep Learning in Lidar Perception Modules

Kaichen Yang (University of Florida, USA), Tzungyu Tsai (National Tsing Hua University, Taiwan), Honggang Yu (University of Florida, USA), Max Panoff (University of Florida, USA), Tsung-Yi Ho (National Tsing Hua University, Taiwan), Yier Jin (University of Florida, USA)

As Autonomous Vehicles (AVs) mature into viable transportation solutions, mitigating potential vehicle control security risks becomes increasingly important. Perception modules in AVs combine multiple sensors to perceive the surrounding environment. As such, they have been the focus of efforts to exploit the aforementioned risks due to their critical role in controlling autonomous driving technology. Despite extensive and thorough research into the vulnerability of camera-based sensors, vulnerabilities originating from Lidar sensors and their corresponding deep learning models in AVs remain comparatively untouched. Observing that small roadside objects can occasionally be misidentified as vehicles by on-board deep learning models, we propose a novel adversarial attack inspired by this phenomenon in both white-box and black-box scenarios. The adversarial attacks proposed in this paper are launched against deep learning models that perform object detection on raw 3D points collected by a Lidar sensor in an autonomous driving scenario. In comparison to existing work, our attack creates not only adversarial point clouds in simulated environments, but also robust physical adversarial objects that can cause behavioral reactions in state-of-the-art autonomous driving systems. Defense methods are then proposed and evaluated against such adversarial objects.

DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation

Yi Zeng (University of California San Diego, USA), Han Qiu (Telecom Paris, France), Tianwei Zhang (Nanyang Technological University, Singapore), Shangwei Guo (Nanyang Technological University, Singapore), Meikang Qiu (Texas A&M University Commerce, USA), Bhavani Thuraisingham (University of Texas Dallas, USA)

Public resources and services (e.g., datasets, training platforms, pre-trained models) have been widely adopted to ease the development of Deep Learning-based applications. However, if the third-party providers are untrusted, they can inject poisoned samples into the datasets or embed backdoors into those models. Such an integrity breach can cause severe consequences, especially in safety- and security-critical applications. Various backdoor attack techniques have been proposed for higher effectiveness and stealthiness. Unfortunately, existing defense solutions are not practical enough to thwart those attacks comprehensively. In this paper, we investigate the effectiveness of data augmentation techniques in mitigating backdoor attacks and enhancing DL models' robustness. An evaluation framework is introduced to achieve this goal. Specifically, we consider a unified defense solution, which (1) adopts a data augmentation policy to fine-tune the infected model and eliminate the effects of the embedded backdoor; (2) uses another augmentation policy to preprocess input samples and invalidate the triggers during inference. We propose a systematic approach to discover the optimal policies for defending against different backdoor attacks by comprehensively evaluating 71 state-of-the-art data augmentation functions. Extensive experiments show that our identified policy can effectively mitigate eight different kinds of backdoor attacks and outperform five existing defense methods. We envision that this framework can serve as a benchmark tool to advance future DNN backdoor studies.
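The inference-time half of the two-policy defense can be illustrated with a couple of toy augmentation functions; these NumPy transforms are stand-ins, not any of the 71 functions the paper evaluates. The idea is that spatial transforms applied before the sample reaches the (possibly backdoored) model can misalign a small pixel-pattern trigger so it no longer activates the backdoor:

```python
import numpy as np

def horizontal_flip(img):
    # Mirror the image left-right.
    return img[:, ::-1]

def shrink_and_pad(img, scale=0.8):
    """Downscale by nearest-neighbor sampling and zero-pad back to the
    original size: a cheap transform that displaces a corner trigger."""
    h, w = img.shape[:2]
    nh, nw = int(h * scale), int(w * scale)
    rows = (np.arange(nh) / scale).astype(int)
    cols = (np.arange(nw) / scale).astype(int)
    small = img[np.ix_(rows, cols)]
    out = np.zeros_like(img)
    top, left = (h - nh) // 2, (w - nw) // 2
    out[top:top + nh, left:left + nw] = small
    return out

def preprocess(img, policy):
    # Inference-time policy: apply each augmentation in order before the
    # sample is fed to the model.
    for fn in policy:
        img = fn(img)
    return img
```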

Session Chair

Neil Gong

Session 5A

Network and Web Security (II)

2:00 PM — 3:20 PM HKT
Jun 9 Wed, 2:00 AM — 3:20 AM EDT

Filtering DDoS Attacks from Unlabeled Network Traffic Data Using Online Deep Learning

Wesley Joon-Wie Tann (National University of Singapore, Singapore), Jackie Tan Jin Wei (National University of Singapore, Singapore), Joanna Purba (National University of Singapore, Singapore), Ee-Chien Chang (National University of Singapore, Singapore)

DDoS attacks are simple, effective, and still pose a significant threat even after more than two decades. Given the recent success of machine learning, it is interesting to investigate how we can leverage deep learning to filter out application-layer attack requests. There are challenges in adopting deep learning solutions due to ever-changing traffic profiles, the lack of labeled data, and constraints of the online setting. Offline unsupervised learning methods can sidestep these hurdles by learning an anomaly detector from the normal-day traffic N. However, anomaly detection does not exploit information acquired during attacks, and its performance is typically unsatisfactory. In this paper, we propose two approaches that utilize both the historic traffic N and the mixture traffic M obtained during attacks, consisting of unlabeled requests. First, our proposed approach, inspired by statistical methods, extends an unsupervised anomaly detector to solve the problem using estimated conditional probability distributions. We adopt transfer learning to apply the detector to N and M separately and efficiently, combining the results to obtain an online learner. Second, we formulate a specific loss function more suited to deep learning and use iterative training to solve it in the online setting. On publicly available datasets such as CICIDS2017, our online learners achieve an average accuracy of 90.6%, compared to around 60.0% for the baseline detection method. In the offline setting, our approaches on unlabeled data achieve competitive accuracy compared to classifiers trained on labeled data.
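One simple way to exploit the mixture traffic, in the spirit of the statistical approach above (a hedged sketch with assumed names, not the paper's estimator): infer the attack fraction from the inflation of the request rate during the attack, then drop the requests that a normal-day detector scores as least normal.

```python
import numpy as np

def estimate_attack_fraction(normal_rate, mixture_rate):
    """Rough estimate of the attack share in mixture traffic M from how
    much the request rate exceeds the normal-day baseline."""
    return max(0.0, 1.0 - normal_rate / mixture_rate)

def filter_requests(scores_mixture, attack_frac):
    """Keep the requests that look most like normal-day traffic.

    scores_mixture: anomaly scores (higher = less normal) that a detector
    trained on N assigns to the mixture traffic M. The (1 - attack_frac)
    fraction of requests with the lowest scores is kept.
    """
    cutoff = np.quantile(scores_mixture, 1.0 - attack_frac)
    return scores_mixture <= cutoff
```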

Bypassing Push-based Second Factor and Passwordless Authentication with Human-Indistinguishable Notifications

Mohammed Jubur (The University of Alabama at Birmingham, USA), Prakash Shrestha (University of Florida, USA), Nitesh Saxena (The University of Alabama at Birmingham, USA), Jay Prakash (SUTD, Singapore)

Second-factor (2FA) or passwordless authentication based on notifications pushed to a user's personal device (e.g., a phone), which the user can simply approve (or deny), has become widely popular due to its convenience. In this paper, we show that the effortlessness of this approach gives rise to a fundamental design vulnerability. The vulnerability stems from the fact that the notification, as shown to the user, is not uniquely bound to the user's login session running through the browser; thus, if two notifications are sent around the same time (one for the user's session and one for an attacker's session), the user may not be able to distinguish between the two, likely ending up accepting the notification of the attacker's session. Exploiting this vulnerability, we present HIENA, a simple yet devastating attack against such "one-push" 2FA or passwordless schemes, which allows a malicious actor to log in soon after the victim attempts to log in, triggering multiple near-concurrent notifications that seem indistinguishable to the user. To further deceive the user into accepting the attacker-triggered notification, HIENA can optionally spoof/mimic the victim's client machine information (e.g., the city from which the victim logs in, by being in the same city) and even issue other third-party notifications (e.g., email or social media) for obfuscation purposes. In the case of 2FA schemes, we assume that the attacker knows the victim's password (e.g., obtained via breached password databases), a standard methodology for evaluating the security of any 2FA scheme. To evaluate the effectiveness of HIENA, we carefully designed and ran a human-factors lab study where we tested benign and adversarial settings mimicking the user interface designs of well-known one-push 2FA and passwordless schemes. Our results show that users are prone to accepting the attacker's notification in HIENA at high rates: about 83% overall and about 99% when spoofed information is used, which is nearly identical to the rates of acceptance of benign login sessions. Even for non-spoofed sessions (our primary attack), the attack success rates are about 68%, rising to about 90–97% if the attack attempt is repeated 2–3 times. While we did not see a statistically significant effect of third-party notifications on attack success rate, in real life such obfuscation can be quite effective, as users may see only a single 2FA notification (corresponding to the attacker's session) at the top of the notification list, which is most likely to be accepted. We have verified that many widely deployed one-push 2FA schemes (e.g., Duo Push, Authy OneTouch, LastPass, Facebook's, and OpenOTP) appear directly vulnerable to our attack.

Click This, Not That: Extending Web Authentication with Deception

Timothy Barron (Yale University, USA), Johnny So (Stony Brook University, USA), Nick Nikiforakis (Stony Brook University, USA)

With phishing attacks, password breaches, and brute-force login attacks presenting constant threats, it is clear that passwords alone are inadequate for protecting the web applications entrusted with our personal data. Instead, web applications should practice defense in depth and give users multiple ways to secure their accounts. In this paper we propose login rituals, which define actions that a user must take to authenticate, and web tripwires, which define actions that a user must not take to remain authenticated. These actions outline the expected behavior of users familiar with their individual setups on applications they use often. We show how we can detect and prevent intrusions from web attackers lacking this familiarity with their victim's behavior. We design a modular and application-agnostic system that incorporates these two mechanisms, allowing us to add an additional layer of deception-based security to existing web applications without modifying the applications themselves. In addition to testing our system and evaluating its performance when applied to five popular open-source web applications, we demonstrate the promising nature of these mechanisms through a user study. Specifically, we evaluate the detection rate of tripwires against simulated attackers, 88% of whom clicked on at least one tripwire. We also observe web users' creation of personalized login rituals and evaluate the practicality and memorability of these rituals over time. All 39 user-created rituals were unique, and 79% of users were able to reproduce their rituals even a week after creation.
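The two mechanisms compose into a simple session check. A minimal framework-independent sketch with a hypothetical name (`check_session`), not the paper's system: a tripwire hit ends the session immediately, while the ritual must appear, in order, among the observed actions.

```python
def check_session(actions, ritual, tripwires):
    """Deception-based session check (illustrative, not the paper's API).

    actions:   ordered actions observed in the session after login
    ritual:    ordered actions the legitimate user always performs
    tripwires: decoy actions (e.g., fake links) a familiar user never takes
    Returns True if the session should remain authenticated.
    """
    # Any tripwire hit flags the session as a likely intrusion.
    if any(a in tripwires for a in actions):
        return False
    # The login ritual must occur as an in-order subsequence of actions.
    it = iter(actions)
    return all(step in it for step in ritual)
```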

Analyzing Spatial Differences in the TLS Security of Delegated Web Services

Joonhee Lee (Seoul National University, South Korea), Hyunwoo Lee (Purdue University, USA), Jongheon Jeong (Seoul National University, South Korea), Doowon Kim (University of Tennessee, Knoxville, USA), Taekyoung "Ted" Kwon (Seoul National University, South Korea)

To provide secure content delivery, Transport Layer Security (TLS) has become the de facto standard over the past two decades. However, TLS has a long history of security weaknesses and drawbacks, and its security has been enhanced by addressing these problems through continuous version upgrades. Meanwhile, to provide fast content delivery globally, websites (or origin web servers) need to deploy and administer many machines in globally distributed environments. They often delegate the management of these machines to web hosting services or content delivery networks (CDNs), where the security configurations of distributed servers may vary spatially depending on the managing entities or locations. Based on these spatial differences in TLS security, we find that the security level of TLS connections (and their web services) can be lowered. After collecting information on (web) domains that exhibit different TLS versions and cryptographic options depending on clients' locations, we show that it is possible to redirect TLS handshake messages to weak TLS servers, of which both the origin server and the client may be unaware. We investigate 7M domains with these spatial differences in security levels in the wild and conduct analyses to better understand the root causes of this phenomenon. We also measure redirection delays at various locations around the world to see whether redirections introduce noticeable delays.
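The core measurement, detecting a spatial difference in negotiated TLS versions for a single domain, can be sketched as a small helper; the function name and data shape are assumptions for illustration, not the paper's tooling.

```python
def spatial_downgrade(observations):
    """Check one domain's TLS measurements across vantage points.

    observations: mapping of vantage-point name -> negotiated TLS version.
    Returns (differs, weakest_version): whether the delegated servers
    negotiate different protocol versions by location, and the weakest
    version observed (the one an attacker could steer clients toward).
    Only the four version strings below are recognized in this sketch.
    """
    order = ["TLSv1", "TLSv1.1", "TLSv1.2", "TLSv1.3"]
    ranks = {order.index(v) for v in observations.values()}
    return (len(ranks) > 1, order[min(ranks)])
```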

Session Chair

Ding Wang

Session 6A

Software Security and Vulnerability Analysis (I)

3:40 PM — 5:00 PM HKT
Jun 9 Wed, 3:40 AM — 5:00 AM EDT

How to Take Over Drones

Sebastian Plotz (University of Applied Sciences Stralsund, Germany), Frederik Armknecht (University of Mannheim, Germany), Christian Bunse (University of Applied Sciences Stralsund, Germany)

The number of unmanned aerial vehicles (UAVs, hereinafter referred to as drones) is rising in both private and commercial applications. This makes it essential that a drone remain under the full control of its owner at all times. Most drones are controlled wirelessly by protocols in the 2.4 GHz band. The most commonly used protocols are DSMX (Spektrum), ACCST D16 EU-LBT (FrSky), DEVO (Walkera) and S-FHSS (Futaba). While it has been known that the DSMX protocol is vulnerable to attacks, the security of the other protocols was an open question. In this paper, we give a negative answer: all these protocols are insecure as well. More precisely, we show that it is practically possible to seize control of the drone in all cases. All presented attacks were implemented and validated under real conditions.

Localizing Vulnerabilities Statistically From One Exploit

Shiqi Shen (National University of Singapore, Singapore), Aashish Kolluri (National University of Singapore, Singapore), Zhen Dong (National University of Singapore, Singapore), Prateek Saxena (National University of Singapore, Singapore), Abhik Roychoudhury (National University of Singapore, Singapore)

Automatic vulnerability diagnosis can help security analysts identify and, therefore, quickly patch disclosed vulnerabilities. The vulnerability localization problem is to automatically find a program point at which the “root cause” of the bug can be fixed. This paper employs a statistical localization approach to analyze a given exploit. Our main technical contribution is a novel procedure to systematically construct a test suite that enables high-fidelity localization. We build our techniques into a tool called VulnLoc, which automatically pinpoints vulnerability locations with high accuracy, given just one exploit. VulnLoc makes no assumptions about the availability of source code, test suites, or specialized knowledge of the type of vulnerability. It identifies actionable locations in its top-5 outputs, where a correct patch can be applied, for about 88% of the 43 CVEs we study, which arise in large real-world applications and span 6 different classes of security flaws. Our results highlight the under-explored power of statistical analyses when combined with suitable test-generation techniques.
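As a concrete stand-in for the statistical step, the classic Ochiai spectrum-based score ranks program locations by how strongly their execution correlates with the exploit-triggering runs in the constructed test suite. VulnLoc's actual statistic differs; this sketch is only illustrative of the family of analyses involved.

```python
from math import sqrt

def ochiai_scores(coverage, outcomes):
    """Spectrum-based suspiciousness (Ochiai).

    coverage: list of sets of program locations executed per test run
    outcomes: parallel list, True if the run triggered the vulnerability
    Returns a dict mapping each location to its suspiciousness score.
    """
    total_fail = sum(outcomes)
    locs = set().union(*coverage)
    scores = {}
    for loc in locs:
        # ef: triggering runs covering loc; ep: benign runs covering loc.
        ef = sum(1 for cov, bad in zip(coverage, outcomes) if bad and loc in cov)
        ep = sum(1 for cov, bad in zip(coverage, outcomes) if not bad and loc in cov)
        denom = sqrt(total_fail * (ef + ep))
        scores[loc] = ef / denom if denom else 0.0
    return scores
```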

Cali: Compiler-Assisted Library Isolation

Markus Bauer (CISPA – Helmholtz Center for Information Security, Germany), Christian Rossow (CISPA – Helmholtz Center for Information Security, Germany)

Software libraries can freely access the program’s entire address space and also inherit its system-level privileges. This lack of separation regularly leads to security-critical incidents once libraries contain vulnerabilities or turn rogue. We present Cali, a compiler-assisted library isolation system that fully automatically shields a program from a given library. Cali is fully compatible with mainline Linux and does not require supervisor privileges to execute. We compartmentalize libraries into their own process with well-defined security policies. To preserve the functionality of the interactions between program and library, Cali uses a Program Dependence Graph to track data flow between the program and the library at link time. We evaluate our open-source prototype against three popular libraries: Ghostscript, OpenSSL, and SQLite. Cali successfully reduced the amount of memory shared between the program and library to between 0.08% (ImageMagick) and 0.4% (Socat), while retaining acceptable program performance.
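The compartmentalization idea, the library living in its own process behind a narrow IPC interface, can be sketched with Python's multiprocessing. This is only an analogy under assumed names: Cali itself works at the compiler level on C/C++ programs, and the fork start method assumed here is Unix-only.

```python
import multiprocessing as mp

def _library_worker(conn):
    """Runs in a separate process: only the whitelisted functions below
    are reachable through the IPC boundary, and a compromise here cannot
    read the caller's address space."""
    allowed = {"parse": lambda s: s.strip().split(",")}  # stand-in library API
    for name, arg in iter(conn.recv, None):  # None is the shutdown sentinel
        fn = allowed.get(name)
        conn.send(fn(arg) if fn else RuntimeError("call not permitted"))
    conn.close()

class IsolatedLibrary:
    """Minimal compartmentalization sketch: calls are marshalled over a
    pipe to the library's own process instead of a shared address space."""
    def __init__(self):
        ctx = mp.get_context("fork")  # Unix-only; keeps the sketch simple
        self._conn, child = ctx.Pipe()
        self._proc = ctx.Process(target=_library_worker, args=(child,))
        self._proc.start()
        child.close()

    def call(self, name, arg):
        self._conn.send((name, arg))
        return self._conn.recv()

    def close(self):
        self._conn.send(None)
        self._proc.join()
```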

Privilege-Escalation Vulnerability Discovery for Large-scale RPC Services: Principle, Design, and Deployment

Zhuotao Liu (Tsinghua University, China), Hao Zhao (Ant Group, China), Sainan Li (Tsinghua University, China), Qi Li (Tsinghua University, China), Tao Wei (Ant Group, China), Yu Wang (Ant Group, China)

RPCs are fundamental to our large-scale distributed system. From a security perspective, the blast radius of RPCs is worryingly big, since each RPC often interacts with tens of internal system components. Thus, discovering RPC vulnerabilities is often a top priority in the software quality assurance process for production systems. In this paper, we present the design, implementation, and deployment experiences of PAIR, a fully automated system for privilege-escalation vulnerability discovery in Ant Group’s large-scale RPC system. The design of PAIR centers around the live-replay design principle, where vulnerability discovery is driven by live RPC requests collected from production rather than by engineered testing requests. This ensures that PAIR is able to provide complete coverage of our production RPC requests in a privacy-preserving manner, despite their scale (billions of daily requests), complexity (hundreds of system services involved) and heterogeneity (RPC protocols are highly customized). However, the live-replay design principle is not a panacea. We made two critical design decisions (and addressed their corresponding challenges) along the way to realize the principle in production. First, to avoid inspecting the responses of user-facing RPCs (due to privacy concerns), PAIR designs a universal and privacy-preserving mechanism, via profiling the end-to-end system invocation, to represent the RPC handling logic. Second, to ensure that PAIR provides proactive defense (rather than reactive defense that is often limited to known vulnerabilities), PAIR designs an empirical vulnerability labeling mechanism to effectively identify a group of potentially insecure RPCs while safely excluding other RPCs. Over the course of three years of production deployment, PAIR has helped locate 133 truly insecure RPCs from billions of requests, while maintaining a zero false-negative rate per our production observations.
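A heavily simplified, hypothetical rendering of the profiling idea (all names and the labeling rule are this sketch's assumptions, not PAIR's actual mechanism): represent each RPC's handling by its invocation profile, the sequence of internal services touched, never the response payload, and flag an RPC when a replay under an unauthorized principal produces the same profile as the owner's original request.

```python
def invocation_profile(trace):
    """Privacy-preserving representation of RPC handling: the ordered
    sequence of internal services invoked, not the response payload."""
    return tuple(span["service"] for span in trace)

def is_potentially_insecure(owner_trace, replay_trace):
    # Hypothetical labeling rule: if replaying the request under an
    # unauthorized principal yields the same end-to-end invocation
    # profile as the owner's request, the RPC likely skipped an access
    # check; a denied replay would diverge (e.g., stop at the ACL service).
    return invocation_profile(owner_trace) == invocation_profile(replay_trace)
```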

Session Chair

Kehuan Zhang
