Session Opening

Opening

Conference
9:00 AM — 9:30 AM HKT
Local
Jun 7 Mon, 9:00 PM — 9:30 PM EDT

Opening Remarks and Announcement of Best Paper Award Winner

Conference Chairs

This talk does not have an abstract.

Session Chair

Conference Chairs

Session Keynote-1

Keynote Session 1

Conference
9:30 AM — 10:30 AM HKT
Local
Jun 7 Mon, 9:30 PM — 10:30 PM EDT

Trustworthy Machine Learning: Past, Present, and Future

Somesh Jha (Lubar Professor, Computer Sciences Department, University of Wisconsin, Madison, USA)

Fueled by massive amounts of data, models produced by machine-learning (ML) algorithms, especially deep neural networks (DNNs), are being used in diverse domains where trustworthiness is a concern, including automotive systems, finance, healthcare, natural language processing, and malware detection. Of particular concern is the use of ML algorithms in cyber-physical systems (CPS), such as self-driving cars and aviation, where an adversary can cause serious consequences. Interest in this area of research has simply exploded. In this talk, we will survey the state of the art in trustworthy machine learning and then discuss some interesting future trends.

Session Chair

Man Ho Au

Session 1A

ML and Security (I)

Conference
11:00 AM — 12:20 PM HKT
Local
Jun 7 Mon, 11:00 PM — Jun 8 Tue, 12:20 AM EDT

Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes

Jinyuan Jia (Duke University, USA), Binghui Wang (Duke University, USA), Neil Gong (Duke University, USA)

In the era of deep learning, a user often leverages a third-party machine learning tool to train a deep neural network (DNN) classifier and then deploys the classifier as an end-user software product (e.g., a mobile app) or a cloud service. In an information embedding attack, an attacker is the provider of a malicious third-party machine learning tool. The attacker embeds a message into the DNN classifier during training and recovers the message via querying the API of the black-box classifier after the user deploys it. Information embedding attacks have attracted growing attention because of various applications such as watermarking DNN classifiers and compromising user privacy. State-of-the-art information embedding attacks have two key limitations: 1) they cannot verify the correctness of the recovered message, and 2) they are not robust against post-processing (e.g., compression) of the classifier.

In this work, we aim to design information embedding attacks that are verifiable and robust against popular post-processing methods. Specifically, we leverage a Cyclic Redundancy Check (CRC) to verify the correctness of the recovered message. Moreover, to be robust against post-processing, we leverage Turbo codes, a type of error-correcting code, to encode the message before embedding it into the DNN classifier. To save queries to the deployed classifier, we propose to recover the message by adaptively querying the classifier. Our adaptive recovery strategy leverages the property of Turbo codes that supports error correction with a partial code. We evaluate our information embedding attacks using simulated messages and apply them to three applications (i.e., training data inference, property inference, and DNN architecture inference), where the messages have semantic interpretations. We consider 8 popular methods to post-process the classifier. Our results show that our attacks can accurately and verifiably recover the messages in all considered scenarios, while state-of-the-art attacks cannot accurately recover the messages in many scenarios.
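As a hedged illustration of the verification step (not the authors' implementation), a CRC can be appended to the message before embedding and checked after recovery; the example message below is made up:

```python
import zlib

def encode_with_crc(message: bytes) -> bytes:
    """Append a CRC-32 checksum so recovery can later be verified."""
    crc = zlib.crc32(message).to_bytes(4, "big")
    return message + crc

def verify_recovered(payload: bytes) -> bool:
    """Check whether the recovered payload's CRC matches its message part."""
    message, crc = payload[:-4], payload[-4:]
    return zlib.crc32(message).to_bytes(4, "big") == crc

payload = encode_with_crc(b"architecture=ResNet18")  # hypothetical message
assert verify_recovered(payload)                     # intact recovery verifies
corrupted = bytes([payload[0] ^ 0xFF]) + payload[1:]
assert not verify_recovered(corrupted)               # a flipped bit is caught
```

In the attack itself, the CRC-protected message would additionally be Turbo-encoded for robustness against post-processing; the CRC only answers whether the final recovered bits are correct.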

IPGuard: Protecting Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary

Xiaoyu Cao (Duke University, USA), Jinyuan Jia (Duke University, USA), Neil Gong (Duke University, USA)

A deep neural network (DNN) classifier represents a model owner’s intellectual property, as training a DNN classifier often requires a lot of resources. Watermarking was recently proposed to protect the intellectual property of DNN classifiers. However, watermarking suffers from a key limitation: it sacrifices the utility/accuracy of the model owner’s classifier because it tampers with the classifier’s training or fine-tuning process. In this work, we propose IPGuard, the first method to protect the intellectual property of DNN classifiers that provably incurs no accuracy loss for the classifiers. Our key observation is that a DNN classifier can be uniquely represented by its classification boundary. Based on this observation, IPGuard extracts some data points near the classification boundary of the model owner’s classifier and uses them to fingerprint the classifier. A DNN classifier is said to be a pirated version of the model owner’s classifier if the two predict the same labels for most fingerprinting data points. IPGuard is qualitatively different from watermarking: IPGuard extracts fingerprinting data points near the classification boundary of a classifier that is already trained, while watermarking embeds watermarks into a classifier during its training or fine-tuning process. We extensively evaluate IPGuard on the CIFAR-10, CIFAR-100, and ImageNet datasets. Our results show that IPGuard can robustly identify post-processed versions of the model owner’s classifier as pirated versions, and can identify classifiers that are neither the model owner’s classifier nor its post-processed versions as non-pirated.
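The piracy decision can be sketched as follows; the stub classifiers and the 0.9 agreement threshold are illustrative assumptions, not values from the paper:

```python
def fingerprint_match_rate(owner, suspect, fingerprints):
    """Fraction of fingerprinting points on which the two classifiers
    predict the same label."""
    agree = sum(1 for x in fingerprints if owner(x) == suspect(x))
    return agree / len(fingerprints)

def is_pirated(owner, suspect, fingerprints, threshold=0.9):
    """Declare the suspect a pirated copy if it agrees with the owner's
    model on most points near the owner's classification boundary."""
    return fingerprint_match_rate(owner, suspect, fingerprints) >= threshold

# Stub "classifiers": a pirated copy mimics the owner's boundary,
# an independently trained model does not.
owner = lambda x: x % 3
pirated_copy = lambda x: x % 3
independent = lambda x: (x + 1) % 3
points = list(range(30))
assert is_pirated(owner, pirated_copy, points)
assert not is_pirated(owner, independent, points)
```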

A Diversity Index based Scoring Framework for Identifying Smart Meters Launching Stealthy Data Falsification Attacks

Shameek Bhattacharjee (Western Michigan University, USA), Praveen Madhavarapu (Missouri University of Science and Technology, USA), Sajal K. Das (Missouri University of Science and Technology, USA)

A challenging problem in the Advanced Metering Infrastructure (AMI) of smart grids is the identification of smart meters under the control of a stealthy adversary that injects very low margins of falsified data. The problem is challenging because of the wide legitimate variation in both individual and aggregate trends in real-world power consumption data, which makes such stealthy attacks unrecognizable by existing approaches. In this paper, we propose a novel information-theory-inspired, data-driven device anomaly classification framework, built around a modified diversity index scoring metric, to identify compromised meters launching stealthy data falsification attacks with low margins. Specifically, we draw a parallel between the effects of data falsification attacks and disruptions of ecological balance, and identify the required mathematical modifications to the existing Renyi entropy and Hill’s diversity entropy measures. These modifications, such as expected self-similarity with weighted abundance shifts across various temporal scales and the diversity order, are appropriately embedded in our resulting framework. The resulting diversity index score is used to classify smart meters launching additive, deductive, and alternating switching attack types with high sensitivity (as low as 100W), compared to existing works that perform poorly at margins of false data below 400W. Our proposed theory is validated with two different real smart meter datasets from the USA and Ireland. Experimental results demonstrate successful detection sensitivity from very low to high margins of false data, thus reducing the undetectable attack strategy space in AMI for an adversary with complete knowledge of our method.
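For reference, the unmodified textbook definitions underlying the framework (the paper's contribution lies in its modifications to these measures) are the Renyi entropy of order q and the corresponding Hill diversity number, its exponential:

```python
import math

def renyi_entropy(probs, q):
    """Renyi entropy of order q (q != 1) over a probability vector."""
    return math.log(sum(p ** q for p in probs)) / (1 - q)

def hill_diversity(probs, q):
    """Hill number of order q: the exponential of the Renyi entropy.
    It gives the 'effective number of categories' in the distribution."""
    return math.exp(renyi_entropy(probs, q))

# A uniform distribution over n categories has Hill diversity n at any order;
# skewed distributions (as induced by data falsification) score lower.
uniform = [0.25] * 4
assert abs(hill_diversity(uniform, 2) - 4.0) < 1e-9
```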

Exploiting the Sensitivity of L2 Adversarial Examples to Erase and Restore

Fei Zuo (University of South Carolina, USA), Qiang Zeng (University of South Carolina, USA)

By adding carefully crafted perturbations to input images, adversarial examples (AEs) can be generated to mislead neural-network-based image classifiers. L2 adversarial perturbations by Carlini and Wagner (CW) are among the most effective but difficult-to-detect attacks. While many countermeasures against AEs have been proposed, detection of adaptive CW-L2 AEs is still an open question. We find that, by randomly erasing some pixels in an L2 AE and then restoring it with an inpainting technique, the AE, before and after these steps, tends to have different classification results, while a benign sample does not show this symptom. We thus propose a novel AE detection technique, Erase-and-Restore (E&R), that exploits the intriguing sensitivity of L2 attacks. Experiments conducted on two popular image datasets, CIFAR-10 and ImageNet, show that the proposed technique is able to detect over 98% of L2 AEs and has a very low false positive rate on benign images. The detection technique exhibits high transferability: a detection system trained using CW-L2 AEs can accurately detect AEs generated using another L2 attack method. More importantly, our approach demonstrates strong resilience to adaptive L2 attacks, filling a critical gap in AE detection. Finally, we interpret the detection technique through both visualization and quantification.
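The detection idea can be sketched as follows, with a flat pixel list, a stub classifier, and a crude mean-fill standing in for a real inpainting model; all names and parameters here are illustrative, not the authors' implementation:

```python
import random

def erase_and_restore(image, erase_frac=0.3, seed=0):
    """Randomly erase a fraction of pixels, then 'inpaint' them with the
    mean of the surviving pixels (a crude stand-in for a real inpainter)."""
    rng = random.Random(seed)
    idx = list(range(len(image)))
    rng.shuffle(idx)
    erased = set(idx[: int(len(image) * erase_frac)])
    kept = [v for i, v in enumerate(image) if i not in erased]
    fill = sum(kept) / len(kept)
    return [fill if i in erased else v for i, v in enumerate(image)]

def flags_as_adversarial(classifier, image):
    """Flag the input if its label changes after erase-and-restore:
    L2 AEs tend to flip, benign samples tend not to."""
    return classifier(image) != classifier(erase_and_restore(image))

# A benign, uniform image survives the round trip with the same label.
benign = [0.5] * 10
clf = lambda im: int(sum(im) / len(im) > 0.4)  # stub classifier
assert not flags_as_adversarial(clf, benign)
```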

Session Chair

Tianwei Zhang

Session 1B

Cyber-Physical Systems

Conference
11:00 AM — 12:20 PM HKT
Local
Jun 7 Mon, 11:00 PM — Jun 8 Tue, 12:20 AM EDT

ConAML: Constrained Adversarial Machine Learning for Cyber-Physical Systems

Jiangnan Li (University of Tennessee, Knoxville, USA), Yingyuan Yang (University of Illinois Springfield, USA), Jinyuan Sun (University of Tennessee, Knoxville, USA), Kevin Tomsovic (University of Tennessee, Knoxville, USA), Hairong Qi (University of Tennessee, Knoxville, USA)

Recent research has demonstrated that even seemingly well-trained machine learning (ML) models are highly vulnerable to adversarial examples. As ML techniques become a popular solution for cyber-physical system (CPS) applications in the research literature, the security of these applications is of concern. However, current studies on adversarial machine learning (AML) mainly focus on pure cyberspace domains. The risks that adversarial examples can bring to CPS applications have not been well investigated. In particular, due to the distributed nature of data sources and the inherent physical constraints imposed by CPSs, the widely used threat models and the state-of-the-art AML algorithms from previous cyberspace research become infeasible. We study the potential vulnerabilities of ML applied in CPSs by proposing Constrained Adversarial Machine Learning (ConAML), which generates adversarial examples that satisfy the intrinsic constraints of the physical systems. We first summarize the differences between AML in CPSs and AML in existing cyberspace systems and propose a general threat model for ConAML. We then design a best-effort search algorithm to iteratively generate adversarial examples subject to linear physical constraints. We evaluate our algorithms with simulations of two typical CPSs, the power grid and a water treatment system. The results show that our ConAML algorithms can effectively generate adversarial examples which significantly decrease the performance of the ML models even under practical constraints.
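One iteration of constraint-respecting example generation can be sketched as follows; the zero-sum constraint is a toy stand-in for a CPS conservation law (e.g., measurements that must balance), not the paper's actual algorithm:

```python
def project_zero_sum(delta):
    """Project a perturbation onto the hyperplane sum(delta) = 0, a toy
    linear physical constraint (e.g., conserved aggregate measurements)."""
    mean = sum(delta) / len(delta)
    return [d - mean for d in delta]

def constrained_step(x, grad, eps):
    """One best-effort search step: move along the gradient sign, then
    project the perturbation back into the linear constraint set."""
    step = project_zero_sum([eps if g > 0 else -eps for g in grad])
    return [xi + s for xi, s in zip(x, step)]

p = project_zero_sum([1.0, 2.0, 3.0])
assert abs(sum(p)) < 1e-9          # constraint holds after projection
```

Iterating `constrained_step` (as in projected gradient methods) yields adversarial measurements that still look physically consistent to the system.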

EchoVib: Exploring Voice Authentication via Unique Non-Linear Vibrations of Short Replayed Speech

S Abhishek Anand (The University of Alabama at Birmingham, USA), Jian Liu (University of Tennessee, Knoxville, USA), Chen Wang (Louisiana State University, USA), Maliheh Shirvanian (Visa Research, USA), Nitesh Saxena (The University of Alabama at Birmingham, USA), Yingying Chen (Rutgers University, USA)

Recent advances in speaker verification and speech processing technology have seen voice authentication adopted on a wide scale in commercial applications like online banking and customer care support, and on devices such as smartphones and IoT voice assistant systems. However, it has been shown that current voice authentication systems can be ineffective against voice synthesis attacks that mimic a user’s voice to high precision. In this work, we suggest a paradigm shift away from traditional voice authentication systems, which operate in the audio domain and are thus susceptible to speech synthesis attacks (in the same audio domain). We leverage a motion sensor’s capability to pick up phonatory vibrations, which can help to uniquely identify a user via voice signatures in the vibration domain. The user’s speech is played/echoed back by a device’s speaker for a short duration (hence our method is termed EchoVib), and the resulting non-linear phonatory vibrations are picked up by the motion sensor for speaker recognition. The uniqueness of the device’s speaker and its accelerometer results in a device-specific fingerprint in response to the echoed speech. The use of the vibration domain and its non-linear relationship with audio allows EchoVib to resist state-of-the-art voice synthesis attacks that have been shown to be successful in the audio domain. We develop an instance of EchoVib, using the onboard loudspeaker and the accelerometer embedded in smartphones as the authenticator, based on machine learning techniques.
Our evaluation shows that even with a low-quality loudspeaker and the low sampling rate of accelerometer recordings, EchoVib can identify users with an accuracy of over 90%. We also analyze our system against state-of-the-art voice synthesis attacks and show that it can distinguish between the morphed and the original speaker’s voice samples, correctly rejecting the morphed samples with a success rate of 85% for voice conversion and voice modeling attacks. We believe that using the vibration domain to detect synthesized speech attacks is effective because the unique phonatory vibration signatures are hard to preserve and difficult to mimic, owing to the non-linear mapping from the voice in the audio domain to the unique speaker and accelerometer response in the vibration domain.

HVAC: Evading Classifier-based Defenses in Hidden Voice Attacks

Yi Wu (University of Tennessee, Knoxville, USA), Xiangyu Xu (Shanghai Jiao Tong University, China), Payton R. Walker (University of Alabama at Birmingham, USA), Jian Liu (University of Tennessee, Knoxville, USA), Nitesh Saxena (University of Alabama at Birmingham, USA), Yingying Chen (Rutgers University, USA), Jiadi Yu (Shanghai Jiao Tong University, China)

Recent years have witnessed the rapid development of automatic speech recognition (ASR) systems, which provide a practical voice-user interface for widely deployed smart devices. With the ever-growing deployment of such interfaces, several voice-based attack schemes have been proposed against current ASR systems to exploit certain vulnerabilities. Posing one of the more serious threats, the hidden voice attack uses the human-machine perception gap to generate obfuscated/hidden voice commands that are unintelligible to human listeners but can be interpreted as commands by machines. However, due to the nature of hidden voice commands (i.e., normal and obfuscated samples exhibit a significant difference in their acoustic features), recent studies show that they can be easily detected and defended against by a pre-trained classifier, making them less threatening. In this paper, we validate that such a defense strategy can be circumvented with a more advanced type of hidden voice attack called HVAC. Our proposed HVAC attack can easily bypass existing learning-based defense classifiers while preserving all the essential characteristics of hidden voice attacks (i.e., unintelligible to humans and recognizable to machines). Specifically, we find that all classifier-based defenses build on classification models trained with acoustic features extracted from the entire audio of normal and obfuscated samples. However, only the speech parts (i.e., human voice parts) of these samples contain the linguistic information needed for machine transcription. We thus propose a fusion-based method to combine a normal sample and the corresponding obfuscated sample into a hybrid HVAC command, which can effectively cheat the defense classifiers. Moreover, to make the command more unintelligible to humans, we tune the speed and pitch of the sample, making it even more distorted in the time domain while ensuring it can still be recognized by machines.
Extensive physical over-the-air experiments demonstrate the robustness and generalizability of our HVAC attack under different realistic attack scenarios. Results show that our HVAC commands can achieve an average success rate of 94.1% in bypassing machine-learning-based defense approaches under various realistic settings.

Conware: Automated Modeling of Hardware Peripherals

Chad Spensky (University of California, Santa Barbara, USA), Aravind Machiry (University of California, Santa Barbara, USA), Nilo Redini (University of California, Santa Barbara, USA), Colin Unger (University of California, Santa Barbara, USA), Graham Foster (University of California, Santa Barbara, USA), Evan Blasband (University of California, Santa Barbara, USA), Hamed Okhravi (MIT Lincoln Laboratory, USA), Christopher Kruegel (University of California, Santa Barbara, USA), Giovanni Vigna (University of California, Santa Barbara, USA)

Emulation is at the core of many security analyses. However, emulating embedded systems is still not possible in most cases. To facilitate this critical analysis, we present Conware, a hardware emulation framework that can automatically generate models for hardware peripherals, which alleviates one of the major challenges currently hindering embedded systems emulation. Conware enables individual peripherals to be modeled, exported, and combined with other peripherals in a pluggable fashion. Conware achieves this by first obtaining a recording of the low-level hardware interactions between the firmware and the peripheral, using either existing methods or our source-code instrumentation technique. These recordings are then used to create high-fidelity automata representations of the peripheral using novel automata-generation techniques. The various models can then be merged to facilitate full-system emulation of any embedded firmware that uses any of the modeled peripherals, even if that specific firmware or its target hardware was never directly instrumented. Indeed, we demonstrate that Conware is able to successfully emulate a peripheral-heavy firmware binary that was never instrumented, by merging the models of six unique peripherals that were trained on a development board using only the vendor-provided example code.

Session Chair

Mu Zhang

Session 2A

Network and Web Security (I)

Conference
2:00 PM — 3:20 PM HKT
Local
Jun 8 Tue, 2:00 AM — 3:20 AM EDT

Careful Who You Trust: Studying the Pitfalls of Cross-Origin Communication

Gordon Meiser (CISPA Helmholtz Center for Information Security, Germany), Pierre Laperdrix (CNRS, Univ Lille, Inria Lille, France), Ben Stock (CISPA Helmholtz Center for Information Security, Germany)

In the past, Web applications were mostly static, and most of the content was provided by the site itself. Nowadays, they have turned into rich client-side experiences customized for the user, where third parties supply a considerable amount of content, e.g., analytics, advertisements, or integration with social media platforms and external services. By default, any exchange of data between documents is governed by the Same-Origin Policy, which only permits exchanging data with documents sharing the same protocol, host, and port. Given the move to a more interconnected Web, standards bodies and browser vendors have added new mechanisms to enable cross-origin communication, primarily domain relaxation, postMessages, and CORS. While prior work has already shown the pitfalls of not using these mechanisms securely (e.g., omitting origin checks for incoming postMessages), we instead focus on the increased attack surface created by the trust that is necessarily put into the communication partners. We report on a study of the Tranco Top 5,000 to measure the prevalence of cross-origin communication. By analyzing the interactions between sites, we build an interconnected graph of the trust relations necessary to run the Web. Subsequently, based on this graph, we estimate the damage that can be caused through exploitation of existing XSS flaws on trusted sites.

Oversharing Is Not Caring: How CNAME Cloaking Can Expose Your Session Cookies

Assel Aliyeva (Boston University, USA), Manuel Egele (Boston University, USA)

In the modern web ecosystem, online businesses often leverage third-party web analytics services to gain insights into the behavior of their users. Due to recent privacy enhancements in major browsers that restrict third-party cookie usage for tracking, these businesses were urged to disguise third-party analytics infrastructure as regular subdomains of their websites [3]. This integration technique, referred to as CNAME cloaking, allows the businesses to continue monitoring user activity on their websites. However, it also opens up the possibility of severe security infractions, as the businesses often share their session cookies with the analytics providers, thus putting online user accounts in danger. Previous work has raised privacy concerns with regard to subdomain tracking and extensively studied the drawbacks of widely used privacy-enhancing browser extensions. In this work, we demonstrate the impact of deploying CNAME cloaking along with lax cookie access control settings on web user security. To this end, we built a system that automatically detects the presence of disguised third-party domains as well as the leakage of first-party cookies. Using our system, we identified 2,139 web analytics domains that can be conveniently added to commonly deployed host-based blacklists. Concerningly, we also found that 27 out of 90 highly sensitive web services (e.g., banks) that we analyzed expose session cookies to the web analytics services.
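The core detection idea can be sketched as follows; the resolver table and all domain names are hypothetical stand-ins for live DNS lookups, not the authors' system:

```python
# Hypothetical CNAME records standing in for live DNS resolution.
CNAME_RECORDS = {
    "metrics.shop.example": "shop.example.tracker-analytics.net",
}
# Hypothetical blocklist of known analytics providers.
KNOWN_ANALYTICS = {"tracker-analytics.net"}

def is_cname_cloaked(subdomain: str) -> bool:
    """Flag a first-party-looking subdomain whose CNAME target belongs
    to a known analytics provider (CNAME cloaking)."""
    target = CNAME_RECORDS.get(subdomain)
    if target is None:
        return False
    return any(target == d or target.endswith("." + d)
               for d in KNOWN_ANALYTICS)

assert is_cname_cloaked("metrics.shop.example")
assert not is_cname_cloaked("www.shop.example")
```

Because the cloaked subdomain shares the first party's registrable domain, broadly scoped session cookies (e.g., `Domain=.shop.example`) are sent to the analytics host automatically, which is the leakage the paper measures.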

P2DPI: Practical and Privacy-Preserving Deep Packet Inspection

Jongkil Kim (University of Wollongong, Australia), Seyit Camtepe (CSIRO Data61, Australia), Joonsang Baek (University of Wollongong, Australia), Willy Susilo (University of Wollongong, Australia), Josef Pieprzyk (CSIRO Data61, Australia), Nepal Surya (CSIRO Data61, Australia)

The amount of encrypted Internet traffic almost doubles every year thanks to the wide adoption of end-to-end traffic encryption solutions such as IPsec, TLS, and SSH. Despite all the user-privacy benefits that end-to-end encryption provides, encrypted Internet traffic blinds intrusion detection systems (IDS) and makes detecting malicious traffic hugely difficult. The resulting conflict between the user’s privacy and security has demanded solutions for deep packet inspection (DPI) over encrypted traffic. The solutions proposed to date are still restricted in that they require intensive computations during connection setup or detection. For example, BlindBox, introduced by Sherry et al. (SIGCOMM 2015), enables inspection over TLS-encrypted traffic without compromising users’ privacy, but its usage is limited due to a significant delay in establishing an inspected channel. PrivDPI, proposed more recently by Ning et al. (ACM CCS 2019), improves the overall efficiency of BlindBox and makes the inspection scenario more viable. Despite the improvement, we show in this paper that the user privacy of Ning et al.’s PrivDPI can be compromised entirely by the rule generator, without involving any other parties, including the middlebox. Having observed the difficulties of realizing efficiency and security in previous work, we propose a new DPI system for encrypted traffic, named “Practical and Privacy-Preserving Deep Packet Inspection (P2DPI)”. P2DPI enjoys the same level of security and privacy that BlindBox provides. At the same time, P2DPI offers fast setup and encryption and outperforms PrivDPI. Our results are supported by formal security analysis. We implemented P2DPI as well as PrivDPI for comparison and performed extensive experiments for performance analysis and comparison.
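The family of systems discussed here shares one core idea, which can be sketched in simplified form (this is the BlindBox-style token-matching spirit, not P2DPI's actual protocol; the key and strings are made up): endpoints derive keyed tokens from traffic keywords, and the middlebox matches opaque tokens without ever seeing plaintext:

```python
import hashlib
import hmac

def token(key: bytes, word: bytes) -> bytes:
    """Deterministic keyed token for a rule or traffic keyword."""
    return hmac.new(key, word, hashlib.sha256).digest()

def middlebox_match(rule_tokens, traffic_tokens):
    """The middlebox compares opaque tokens only, and reports which
    traffic positions hit an inspection rule."""
    rules = set(rule_tokens)
    return [i for i, t in enumerate(traffic_tokens) if t in rules]

key = b"session-key"  # hypothetical per-session key material
rules = [token(key, b"malicious-sig")]
traffic = [token(key, w) for w in (b"hello", b"malicious-sig", b"world")]
assert middlebox_match(rules, traffic) == [1]
```

The hard part, which P2DPI addresses, is generating such tokens efficiently at setup time while preventing any single party (including the rule generator) from recovering the user's plaintext keywords.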

Camoufler: Accessing The Censored Web By Utilizing Instant Messaging Channels

Piyush Kumar Sharma (IIIT-Delhi, India), Devashish Gosain (IIIT-Delhi, India), Sambuddho Chakraborty (IIIT-Delhi, India)

Free and open communication over the Internet is considered a fundamental human right, essential to prevent repressive regimes from silencing voices of dissent. This has led to the development of various anti-censorship systems. Recent systems have relied on a common blocking-resistance strategy, i.e., incurring collateral damage on censoring regimes if they attempt to restrict such systems. However, despite being promising, systems built on such strategies pose additional challenges, viz., deployment limitations, poor QoS, etc. These challenges prevent their wide-scale adoption. Thus, we propose a new anti-censorship system, Camoufler, that overcomes the aforementioned challenges while still maintaining similar blocking resistance. Camoufler leverages Instant Messaging (IM) platforms to tunnel the client’s censored content. This content (encapsulated inside IM traffic) is transported to the Camoufler server (hosted in a free country), which proxies it to the censored website. Meanwhile, an eavesdropping censor observes only regular IM traffic being exchanged between the IM peers. Thus, utilizing IM channels as-is for transporting traffic provides unobservability, while also ensuring good QoS due to their inherent properties, such as low-latency message transport. Moreover, it poses no new deployment challenges. Performance evaluation of Camoufler, implemented on five popular IM apps, indicates that it provides sufficient QoS for web browsing. E.g., the median time to render the homepages of Alexa top-1k sites was recorded to be about 3.6s when using Camoufler implemented over the Signal IM application.
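The encapsulation step can be sketched minimally as chunking proxied web traffic into IM-sized messages and reassembling them at the far end; this is an illustrative sketch under assumed message-size limits, not Camoufler's actual transport, which additionally has to look like normal IM usage:

```python
def to_im_messages(payload: bytes, max_len: int = 512):
    """Split proxied web traffic into IM-sized chunks, each tagged with a
    sequence number so the far end can reorder and reassemble."""
    return [
        (seq, payload[i:i + max_len])
        for seq, i in enumerate(range(0, len(payload), max_len))
    ]

def reassemble(messages) -> bytes:
    """Rebuild the payload from possibly out-of-order IM messages."""
    return b"".join(chunk for _, chunk in sorted(messages))

# censored.example is a hypothetical host for illustration.
data = b"GET / HTTP/1.1\r\nHost: censored.example\r\n\r\n" * 20
msgs = to_im_messages(data, max_len=100)
assert reassemble(list(reversed(msgs))) == data   # order-independent
```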

Session Chair

Xavier de Carné de Carnavalet

Session 2B

Hardware Security (I)

Conference
2:00 PM — 3:20 PM HKT
Local
Jun 8 Tue, 2:00 AM — 3:20 AM EDT

Red Alert for Power Leakage: Exploiting Intel RAPL-Induced Side Channels

Zhenkai Zhang (Texas Tech University, USA), Sisheng Liang (Texas Tech University, USA), Fan Yao (University of Central Florida, USA), Xing Gao (University of Delaware, USA)

RAPL (Running Average Power Limit) is a hardware feature introduced by Intel to facilitate power management. Even though RAPL and its supporting software interfaces can benefit power management significantly, they were unfortunately designed without taking certain security issues into careful consideration. In this paper, we demonstrate that information leaked through RAPL-induced side channels can be exploited to mount realistic attacks. Specifically, we have constructed a new RAPL-based covert channel using a single AVX instruction, which can exfiltrate data across different boundaries (e.g., those established by containers in software or even CPUs in hardware); and we have investigated the first RAPL-based website fingerprinting technique, which can identify visited webpages with high accuracy (up to 99% on a regular network using a browser like Chrome or Safari, and up to 81% on the Tor anonymity network). These two studies form a preliminary examination of the security implications imposed by RAPL. In addition, we discuss some possible countermeasures.
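As a hedged sketch of the receiver side of such a covert channel (not the authors' attack code), assume the sender signals one bit per sampling window by running or idling a power-hungry instruction; real readings would come from an interface such as `/sys/class/powercap/intel-rapl:0/energy_uj`, but the samples below are synthetic:

```python
def decode_bits(energy_samples, threshold):
    """Decode one covert bit per sampling window from a cumulative energy
    counter: a large per-window delta suggests the sender executed the
    power-hungry (e.g., AVX) instruction, a small delta means idle."""
    deltas = [b - a for a, b in zip(energy_samples, energy_samples[1:])]
    return [1 if d > threshold else 0 for d in deltas]

# Synthetic cumulative energy readings (microjoules), one per window.
samples = [0, 50, 60, 200, 210]
assert decode_bits(samples, threshold=100) == [0, 0, 1, 0]
```

The threshold would in practice be calibrated against the machine's idle power draw; the same delta stream, observed unprivileged, is what enables the website-fingerprinting classifier.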

PLI-TDC: Super Fine Delay-Time Based Physical-Layer Identification with Time-to-Digital Converter for In-Vehicle Networks

Shuji Ohira (Nara Institute of Science and Technology, Japan), Araya Kibrom Desta (Nara Institute of Science and Technology, Japan), Ismail Arai (Nara Institute of Science and Technology, Japan), Kazutoshi Fujikawa (Nara Institute of Science and Technology, Japan)

Recently, cyberattacks on the Controller Area Network (CAN), one of the main automotive networks, have become a severe problem. CAN is a protocol for communication among Electronic Control Units (ECUs) and is the de-facto standard for automotive networks. Security researchers have pointed out several vulnerabilities in CAN, such as the inability to distinguish spoofed messages, due to the lack of authentication and sender identification. To prevent malicious message injection, we should at least identify malicious senders by analyzing live messages. In previous work, a delay-time based method called Divider was proposed to identify the sender node. However, Divider could not distinguish ECUs with similar delay variations because its measurement clock has coarse time resolution. In addition, Divider cannot adapt to the drift in delay-time caused by temperature drift at the ambient buses. In this paper, we propose a super-fine delay-time based sender identification method using a Time-to-Digital Converter (TDC). The proposed method achieves an accuracy of 99.67% on a CAN bus prototype and 97.04% in a real vehicle. Moreover, in an environment with drifting temperature, the proposed method achieves a mean accuracy of over 99%.
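The identification principle can be sketched with a nearest-centroid classifier over per-ECU delay-time measurements; the ECU names and delay values are made up, and the paper's TDC-based method is far finer-grained and additionally compensates for temperature drift:

```python
def train_centroids(labelled_delays):
    """Mean measured delay-time per ECU from labelled training frames."""
    return {ecu: sum(d) / len(d) for ecu, d in labelled_delays.items()}

def identify_sender(centroids, delay):
    """Attribute a frame to the ECU whose mean delay is closest to the
    frame's measured delay-time."""
    return min(centroids, key=lambda ecu: abs(centroids[ecu] - delay))

# Hypothetical delay-time measurements (arbitrary units) for two ECUs.
centroids = train_centroids({"ECU_A": [10.0, 10.2], "ECU_B": [12.0, 12.2]})
assert identify_sender(centroids, 10.05) == "ECU_A"
assert identify_sender(centroids, 12.30) == "ECU_B"
```

The coarse-clock limitation of Divider corresponds to these delay values being quantized so heavily that the two centroids become indistinguishable; the TDC restores the needed resolution.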

HECTOR-V: A Heterogeneous CPU Architecture for a Secure RISC-V Execution Environment

Pascal Nasahl (Graz University of Technology, Austria), Robert Schilling (Graz University of Technology, Austria), Mario Werner (Graz University of Technology, Austria), Stefan Mangard (Graz University of Technology, Austria)

To ensure secure and trustworthy execution of applications in potentially insecure environments, vendors frequently embed trusted execution environments (TEEs) into their systems. Applications executed in this safe, isolated space are protected from adversaries, including a malicious operating system. TEEs are usually built by integrating protection mechanisms directly into the processor or by using dedicated external secure elements. However, both of these approaches cover only a narrow threat model, resulting in limited security guarantees. Enclaves nested in the application processor typically provide weak isolation between the secure and non-secure domains, especially when considering side-channel attacks. Although external secure elements do provide strong isolation, the slow communication interface to the application processor is exposed to adversaries and restricts the use cases. Independently of the approach used, TEEs often lack the ability to establish secure communication with peripherals, and most operating systems executed inside TEEs do not provide state-of-the-art defense strategies, making them vulnerable to various attacks. We argue that TEEs implemented on the main application processor, such as Intel SGX or ARM TrustZone, are insecure, especially when considering side-channel attacks. In this paper, we demonstrate how a heterogeneous multicore architecture can be utilized to realize a secure TEE design. We directly embed a secure processor into our HECTOR-V architecture to provide strong isolation between the secure and non-secure domains. The tight coupling of the TEE and the application processor enables HECTOR-V to provide mechanisms for establishing secure communication channels between different devices. We further introduce the RISC-V Secure Co-Processor (RVSCP), a security-hardened processor tailored for TEEs.
To secure applications executed inside the TEE, RVSCP provides hardware-enforced control-flow integrity and rigorously restricts I/O accesses to certain execution states. RVSCP reduces the trusted computing base to a minimum by providing operating system services directly in hardware.

CrypTag: Thwarting Physical and Logical Memory Vulnerabilities using Cryptographically Colored Memory

Pascal Nasahl (Graz University of Technology, Austria), Robert Schilling (Graz University of Technology, Austria), Mario Werner (Graz University of Technology, Austria), Jan Hoogerbrugge (NXP Semiconductors Eindhoven, Netherlands), Marcel Medwed (NXP Semiconductors, Austria), Stefan Mangard (Graz University of Technology, Austria)

Memory vulnerabilities are a major threat to many computing systems. To effectively thwart spatial and temporal memory vulnerabilities, full logical memory safety is required. However, current mitigation techniques for memory safety are either too expensive or trade security against efficiency. One promising attempt to detect memory safety vulnerabilities in hardware is memory coloring, a security policy deployed on top of tagged memory architectures. However, due to the memory storage and bandwidth overhead of large tags, commodity tagged memory architectures usually only provide small tag sizes, thus limiting their use for security applications. Irrespective of logical memory safety, physical memory safety is a necessity in hostile environments prevalent for modern cloud computing and IoT devices. Architectures from Intel and AMD already implement transparent memory encryption to maintain confidentiality and integrity of all off-chip data. Surprisingly, the combination of both, logical and physical memory safety, has not yet been extensively studied in previous research, and a naïve combination of both security strategies would accumulate both overheads. In this paper, we propose CrypTag, an efficient hardware/software co-design mitigating a large class of logical memory safety issues and providing full physical memory safety. At its core, CrypTag utilizes a transparent memory encryption engine not only for physical memory safety, but also for memory coloring at hardly any additional costs. The design avoids any overhead for tag storage by embedding memory colors in the upper bits of a pointer and using these bits as an additional input for the memory encryption. A custom compiler extension automatically leverages CrypTag to detect logical memory safety issues for commodity programs and is fully backward compatible. For evaluating the design, we extended a RISC-V processor with memory encryption with CrypTag. 
Furthermore, we developed an LLVM-based toolchain that automatically protects all dynamic, local, and global data. Our evaluation shows a hardware overhead of less than 1 % and an average runtime overhead between 1.5 % and 6.1 % for thwarting logical memory safety vulnerabilities on a system already featuring memory encryption. Enhancing a system with memory encryption typically induces a runtime overhead between 5 % and 109.8 % for commercial and open-source encryption units.
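The core color-embedding idea can be sketched in Python (an illustrative toy, not the authors' RISC-V/hardware design; the 8-bit tag width, the hash-based stand-in for the encryption engine, and all names are assumptions): a memory "color" is stored in unused upper pointer bits and mixed into the encryption tweak, so a pointer with the wrong color decrypts to garbage rather than needing extra tag storage.

```python
import hashlib

TAG_SHIFT = 56          # assume a 64-bit pointer with 8 unused upper bits
TAG_MASK = 0xFF << TAG_SHIFT

def color_pointer(addr: int, color: int) -> int:
    """Embed an 8-bit color tag in the upper pointer bits (no extra storage)."""
    assert 0 <= color < 256
    return (addr & ~TAG_MASK) | (color << TAG_SHIFT)

def split_pointer(ptr: int) -> tuple:
    """Recover (address, color) from a tagged pointer."""
    return ptr & ~TAG_MASK, (ptr & TAG_MASK) >> TAG_SHIFT

def keystream(addr: int, color: int) -> int:
    # Stand-in for the hardware encryption engine: the color is part of the
    # tweak, so a wrong color yields a different keystream and thus garbage.
    return int.from_bytes(
        hashlib.sha256(f"{addr}:{color}".encode()).digest()[:8], "big")

memory = {}

def store(ptr: int, value: int) -> None:
    addr, color = split_pointer(ptr)
    memory[addr] = value ^ keystream(addr, color)

def load(ptr: int) -> int:
    addr, color = split_pointer(ptr)
    return memory[addr] ^ keystream(addr, color)
```

A load through a pointer carrying the original color returns the stored value; a load through the same address with a different color returns an unrelated value, which is what lets the hardware flag a memory safety violation.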

Session Chair

Fengwei Zhang

Session 3A

Applied Cryptography (I)

Conference
3:40 PM — 5:00 PM HKT
Local
Jun 8 Tue, 3:40 AM — 5:00 AM EDT

Secure Role and Rights Management for Automotive Access and Feature Activation

Christian Plappert (Fraunhofer-Institut für Sichere Informationstechnologie, Germany), Lukas Jäger (Fraunhofer-Institut für Sichere Informationstechnologie, Germany), Andreas Fuchs (Fraunhofer-Institut für Sichere Informationstechnologie, Germany)

0
The trend towards fully autonomous vehicles is changing the concept of car ownership drastically, as purchasing a personal car becomes obsolete. Thus, business models based on feature activation are gaining even higher importance for car manufacturers seeking to retain their customers. However, various recent security incidents have demonstrated that vehicles are a valuable attack target, with threats ranging from illegal access to car features to theft of the whole vehicle. In this paper, we present a secure access and feature activation system for automotive scenarios that uses a TPM 2.0 as a trust anchor within the vehicle to mitigate potential security threats. Our system enables a fine-granular authorization mechanism by utilizing TPM 2.0 enhanced authorization constructs to implement usage restrictions and revocation policies as well as offline rights delegation. The TPM 2.0 acts as a communication endpoint to the vehicle's environment and integrates seamlessly with already deployed security features of the in-vehicle network. We implemented our concept on a Raspberry Pi as a lightweight equivalent to hardware used in the automotive domain and evaluated our solution through performance measurements.

Pipa: Privacy-preserving Password Checkup via Homomorphic Encryption

Jie Li (Huawei Technologies, China), Yamin Liu (Huawei Technologies, China), Shuang Wu (Huawei Technologies, China)

0
Data breaches are not rare on the Internet, and once one happens, web users may suffer privacy leakage and property loss. To enable web users to conveniently check whether their confidential information has been compromised in a data breach while preserving their privacy, we design Pipa, which is essentially a special case of a private set intersection protocol instantiated with homomorphic encryption. We choose the password checkup scenario as an entry point. In the architecture of Pipa, a server maintains the database of leaked accounts, namely usernames and passwords, and a homomorphic encryption (HE) module is deployed at the user end. Once the user issues a checkup query, the HE module encrypts the hash of the user's account information and sends the ciphertexts to the server. The server then evaluates a compare-add-multiply circuit on the ciphertexts and the database, and sends a result ciphertext back to the user end. Finally, the HE module decrypts the result and informs the user whether the account information is leaked. The server never learns the username or password, or whether the user's information matched some entry in the database. We have tested the prototype implementation of Pipa with the Brakerski-Fan-Vercauteren (BFV) HE scheme. With properly chosen parameters, the implementation is practical on a PC. For the most lightweight parameter settings in the paper, the total communication volume can be as low as about 2.2 MB, and it takes the server only 0.17 seconds to finish the homomorphic computation on encrypted data.
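The logic the server's circuit computes can be sketched in plaintext Python (the real system evaluates this homomorphically under BFV; the toy 32-bit hash space and the random-mask construction here are illustrative assumptions, not the paper's exact circuit): the product of differences between the query hash and every database entry is zero exactly when the account appears in the breach, and a random mask hides everything else from the user.

```python
import hashlib
import secrets

def account_hash(account: str) -> int:
    """Hash account info into a toy 32-bit plaintext space."""
    return int.from_bytes(hashlib.sha256(account.encode()).digest()[:4], "big")

def server_evaluate(query_hash: int, leaked_hashes: list) -> int:
    # Compare-add-multiply: each subtraction "compares" the query against one
    # database entry; the running product is zero iff some entry matches.
    result = 1
    for d in leaked_hashes:
        result *= (query_hash - d)
    mask = secrets.randbelow(2**32) + 1   # nonzero mask hides non-matches
    return result * mask

def user_check(decrypted_result: int) -> bool:
    """The user learns only one bit: leaked (zero result) or not."""
    return decrypted_result == 0
```

Under encryption the server sees only ciphertexts, so it learns neither the queried credential nor whether it matched, while the masked result reveals nothing to the user beyond the match bit.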

Multi-User Collusion-Resistant Searchable Encryption with Optimal Search Time

Yun Wang (Hong Kong University of Science and Technology, Hong Kong), Dimitrios Papadopoulos (Hong Kong University of Science and Technology, Hong Kong)

0
The continued development of cloud computing requires technologies that protect users’ data privacy even from the cloud providers themselves. Multi-user searchable encryption is one such technology. It allows a data owner to selectively enable users to perform keyword searches over her encrypted documents stored at a cloud server. For privacy purposes, it is important to limit what an adversarial server can infer about the encrypted documents, even if it colludes with some of the users. Clearly, in this case the server can learn the content of documents shared with this subset of “corrupted” users; however, it is important to ensure that the collusion does not reveal information, via cross-user leakage, about parts of the dataset that are only shared with the remaining “uncorrupted” users. In this work, we propose three novel multi-user searchable encryption schemes for this setting that achieve different trade-offs between performance and leakage. Compared to previous ones, our first two schemes are the first to achieve asymptotically optimal search time. Our third scheme achieves minimal user storage and forward privacy with respect to document sharing, at the cost of slightly slower search performance. We formally prove the security of our schemes under reasonable assumptions. Moreover, we implement and evaluate their performance both on a single machine and over WAN. Our experimental results are encouraging, e.g., the search computation time is on the order of a few milliseconds.

Efficient Verifiable Image Redacting based on zk-SNARKs

Hankyung Ko (Hanyang University, South Korea), Ingeun Lee (Kookmin Universitiy, South Korea), Seunghwa Lee (Kookmin Universitiy, South Korea), Jihye Kim (Kookmin Universitiy, South Korea), Hyunok Oh (Hanyang University, South Korea)

0
An image is a visual representation of a certain fact and can be used as proof of events. As the utilization of images increases, it becomes necessary to prove their authenticity while protecting the sensitive personal information they contain. In this paper, we propose a new efficient verifiable image redacting scheme based on zk-SNARKs, a commitment scheme, and a digital signature scheme. We adopt a commit-and-prove SNARK scheme that takes commitments as inputs, so that authenticity can be quickly verified outside the circuit. We also specify relations between the original and redacted images to guarantee redacting correctness. Our experimental results show that the proposed scheme is superior to existing works in terms of key size and proving time without sacrificing the other parameters. The security of the proposed scheme is proven formally.
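A simplified commit-then-redact flow can be sketched as follows (hash commitments and a hash digest stand in for the paper's zk-SNARK and signature machinery, which additionally proves redacting correctness in zero knowledge; all names are illustrative): the signer commits to each image block, and a redacted image stays verifiable because the commitments of removed blocks survive even though their contents do not.

```python
import hashlib
import secrets

def commit(block: bytes, nonce: bytes) -> bytes:
    """A hiding hash commitment to one image block."""
    return hashlib.sha256(nonce + block).digest()

def sign_image(blocks: list):
    nonces = [secrets.token_bytes(16) for _ in blocks]
    commitments = [commit(b, n) for b, n in zip(blocks, nonces)]
    # Stand-in "signature": a digest over all per-block commitments.
    digest = hashlib.sha256(b"".join(commitments)).hexdigest()
    return commitments, nonces, digest

def redact(blocks, nonces, indices):
    """Blank the given blocks; keep openings only for surviving ones."""
    return [(None, None) if i in indices else (b, n)
            for i, (b, n) in enumerate(zip(blocks, nonces))]

def verify(redacted, commitments, digest) -> bool:
    # Check the signed commitment list, then each revealed block's opening.
    if hashlib.sha256(b"".join(commitments)).hexdigest() != digest:
        return False
    return all(b is None or commit(b, n) == c
               for (b, n), c in zip(redacted, commitments))
```

Verification succeeds for any honest redaction, while any substituted block fails its commitment check, which is the authenticity property the zk-SNARK version provides with the added guarantee that redacted blocks remain hidden.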

Session Chair

Sherman S. M. Chow

Session 3B

ML and Security (II)

Conference
3:40 PM — 5:00 PM HKT
Local
Jun 8 Tue, 3:40 AM — 5:00 AM EDT

HoneyGen: Generating Honeywords Using Representation Learning

Antreas Dionysiou (University of Cyprus, Cyprus), Vassilis Vassiliades (Research Centre on Interactive Media, Smart Systems and Emerging Technologies, Cyprus), Elias Athanasopoulos (University of Cyprus, Cyprus)

3
Honeywords are false passwords injected into a database to detect password leakage. Generating honeywords is a challenging problem due to the various assumptions about the adversary’s knowledge as well as users’ password-selection behaviour. The success of a Honeywords Generation Technique (HGT) lies in the resulting honeywords; the method fails if an adversary can easily distinguish the real password. In this paper, we propose HoneyGen, a practical and highly robust HGT that produces realistic-looking honeywords. We do this by leveraging representation learning techniques to learn useful and explanatory representations from a massive collection of unstructured data, i.e., each operator’s password database. We perform both a quantitative and a qualitative evaluation of our framework using state-of-the-art metrics. Our results suggest that HoneyGen generates high-quality honeywords that cause sophisticated attackers to achieve low distinguishing success rates.
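For context, the mechanism an HGT like HoneyGen feeds can be sketched as follows (following the classic Juels-Rivest honeywords design, not anything specific to this paper; class and function names are illustrative): the credential database stores several "sweetwords" per user, only one of which is real, while a separate honeychecker stores the index of the real one and raises an alarm when a honeyword is submitted.

```python
import secrets

class Honeychecker:
    """Minimal honeychecker: maps each user to the index of the real password."""
    def __init__(self):
        self._index = {}

    def register(self, user: str, index: int) -> None:
        self._index[user] = index

    def check(self, user: str, index: int) -> bool:
        if index == self._index[user]:
            return True        # real password entered
        # A honeyword was submitted: the database has likely been leaked.
        raise RuntimeError("honeyword hit: credential database likely leaked")

def enroll(user: str, password: str, honeywords: list, checker: Honeychecker):
    # Shuffle the real password in among the generated honeywords; only the
    # honeychecker learns which index is real.
    sweetwords = honeywords + [password]
    secrets.SystemRandom().shuffle(sweetwords)
    checker.register(user, sweetwords.index(password))
    return sweetwords          # this list is what the login server stores
```

An HGT's job is precisely to make the entries of `sweetwords` indistinguishable from the real password, which is the property HoneyGen's representation-learning approach targets.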

On Detecting Deception in Space Situational Awareness

James Pavur (Oxford University, United Kingdom), Ivan Martinovic (Oxford University, United Kingdom)

2
Space Situational Awareness (SSA) data is critical to the safe piloting of satellites through an ever-growing field of orbital debris. However, measurement complexity means that most satellite operators cannot independently acquire SSA data and must rely on a handful of centralized repositories operated by major space powers. As interstate competition in orbit increases, so does the threat of attacks abusing these information-sharing relationships. This paper offers one of the first considerations of defense techniques against SSA deceptions. Building on historical precedent and real-world SSA data, we simulate an attack whereby an SSA operator seeks to disguise spy satellites as pieces of debris. We further develop and evaluate a machine-learning-based anomaly detection tool that allows defenders to detect 90-98% of deception attempts with little to no in-house astrometry hardware. Beyond the direct contribution of this system, the paper takes a unique interdisciplinary approach, drawing connections between cyber-security, astrophysics, and international security studies. It presents the general case that systems security methods can tackle many novel and complex problems in a historically neglected domain and provides methods and techniques for doing so.

AMEBA: An Adaptive Approach to the Black-Box Evasion of Machine Learning Models

Stefano Calzavara (Università Ca' Foscari Venezia, Italy), Lorenzo Cazzaro (Università Ca' Foscari Venezia, Italy), Claudio Lucchese (Università Ca' Foscari Venezia, Italy)

2
Machine learning models are vulnerable to evasion attacks, where the attacker starts from a correctly classified instance and perturbs it so as to induce a misclassification. In the black-box setting where the attacker only has query access to the target model, traditional attack strategies exploit a property known as transferability, i.e., the empirical observation that evasion attacks often generalize across different models. The attacker can thus rely on the following two-step attack strategy: (i) query the target model to learn how to train a surrogate model approximating it; and (ii) craft evasion attacks against the surrogate model, hoping that they “transfer” to the target model. This attack strategy is sub-optimal, because it assumes a strict separation of the two steps and under-approximates the possible actions that a real attacker might take. In this work we propose AMEBA, the first adaptive approach to the black-box evasion of machine learning models. AMEBA builds on a well-known optimization problem, the Multi-Armed Bandit, to infer the best alternation of actions spent on surrogate model training and evasion attack crafting. We experimentally show on public datasets that AMEBA outperforms traditional two-step attack strategies.
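The multi-armed-bandit framing can be sketched as follows (epsilon-greedy is an illustrative choice; AMEBA's actual bandit strategy and reward definition may differ): each query is spent on one of two "arms", surrogate training or attack crafting, and the attacker adaptively favors whichever arm has paid off more so far instead of fixing a strict two-step schedule.

```python
import random

def bandit_attack(reward_of, arms=("train_surrogate", "craft_attack"),
                  steps=1000, epsilon=0.1, seed=0):
    """Epsilon-greedy alternation between surrogate training and crafting."""
    rng = random.Random(seed)
    counts = {a: 0 for a in arms}     # times each arm was pulled
    values = {a: 0.0 for a in arms}   # running mean reward per arm
    history = []
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.choice(arms)                      # explore
        else:
            arm = max(arms, key=values.get)             # exploit best estimate
        r = reward_of(arm)            # e.g., evasion success or surrogate gain
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]  # incremental mean
        history.append(arm)
    return values, history
```

With a toy reward where crafting pays more than training, the bandit quickly concentrates its query budget on crafting, which illustrates why the adaptive alternation dominates a fixed train-then-attack split.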

Stealing Deep Reinforcement Learning Models for Fun and Profit

Kangjie Chen (Nanyang Technological University, Singapore), Shangwei Guo (Nanyang Technological University, Singapore), Tianwei Zhang (Nanyang Technological University, Singapore), Xiaofei Xie (Nanyang Technological University, Singapore), Yang Liu (Nanyang Technological University, Singapore)

1
This paper presents the first model extraction attack against Deep Reinforcement Learning (DRL), which enables an adversary to precisely recover a black-box DRL model only from its interaction with the environment. Model extraction attacks against supervised Deep Learning models have been widely studied. However, those techniques cannot be applied to the reinforcement learning scenario due to DRL models’ high complexity, stochasticity and limited observable information. We propose a novel methodology to overcome the above challenges. The key insight of our approach is that the process of DRL model extraction is equivalent to imitation learning, a well-established solution for learning sequential decision-making policies. Based on this observation, our method first builds a classifier to reveal the training algorithm family of the targeted DRL model only from its predicted actions, and then leverages state-of-the-art imitation learning techniques to replicate the model from the identified algorithm family. Experimental results indicate that our methodology can effectively recover DRL models with high fidelity and accuracy. We also demonstrate two use cases to show that our model extraction attack can (1) significantly improve the success rate of adversarial attacks, and (2) stealthily steal DRL models even when they are protected by DNN watermarks. These pose a severe threat to the intellectual property protection of DRL applications.
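The reduction of extraction to imitation learning can be sketched as follows (behavioral cloning via nearest neighbor is a deliberately simple stand-in for the paper's state-of-the-art imitation learners; all names are illustrative): query the black-box policy for its actions along rollouts, then fit a clone on the collected (state, action) pairs and measure how often it agrees with the target.

```python
def collect_demonstrations(black_box_policy, states):
    """Step 1: the only observable information is the policy's chosen action."""
    return [(s, black_box_policy(s)) for s in states]

def clone_policy(demonstrations):
    """Step 2: behavioral cloning -- here a 1-nearest-neighbor policy."""
    def policy(state):
        nearest = min(
            demonstrations,
            key=lambda d: sum((a - b) ** 2 for a, b in zip(d[0], state)))
        return nearest[1]
    return policy

def fidelity(target, clone, test_states):
    """Fraction of states on which the clone matches the target's action."""
    agree = sum(target(s) == clone(s) for s in test_states)
    return agree / len(test_states)
```

On a toy two-dimensional policy queried over a grid of states, the clone reproduces the target's actions on held-out states, which is the fidelity notion the extraction attack optimizes.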

Session Chair

Pino Caballero-Gil

Session Poster-1

Social Event + Poster Session 1

Conference
5:00 PM — 7:00 PM HKT
Local
Jun 8 Tue, 5:00 AM — 7:00 AM EDT

Please follow the link to join the virtual social event

0
This talk does not have an abstract.

Session Chair

Poster Chair
