Session 1A

ML and Security (I)

11:00 AM — 12:20 PM HKT
Jun 7 Mon, 11:00 PM — 12:20 AM EDT

Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes

Jinyuan Jia (Duke University, USA), Binghui Wang (Duke University, USA), Neil Gong (Duke University, USA)

In the era of deep learning, a user often leverages a third-party machine learning tool to train a deep neural network (DNN) classifier and then deploys the classifier as an end-user software product (e.g., a mobile app) or a cloud service. In an information embedding attack, an attacker is the provider of a malicious third-party machine learning tool. The attacker embeds a message into the DNN classifier during training and recovers the message via querying the API of the black-box classifier after the user deploys it. Information embedding attacks have attracted growing attention because of various applications such as watermarking DNN classifiers and compromising user privacy. State-of-the-art information embedding attacks have two key limitations: 1) they cannot verify the correctness of the recovered message, and 2) they are not robust against post-processing (e.g., compression) of the classifier.

In this work, we aim to design information embedding attacks that are verifiable and robust against popular post-processing methods. Specifically, we leverage Cyclic Redundancy Check (CRC) to verify the correctness of the recovered message. Moreover, to be robust against post-processing, we leverage Turbo codes, a type of error-correcting code, to encode the message before embedding it into the DNN classifier. To save queries to the deployed classifier, we propose to recover the message by adaptively querying the classifier. Our adaptive recovery strategy leverages the property of Turbo codes that supports error correction with a partial code. We evaluate our information embedding attacks using simulated messages and apply them to three applications (i.e., training data inference, property inference, and DNN architecture inference), where the messages have semantic interpretations. We consider 8 popular methods to post-process the classifier. Our results show that our attacks can accurately and verifiably recover the messages in all considered scenarios, while state-of-the-art attacks cannot accurately recover the messages in many scenarios.
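The verifiability idea can be sketched in a few lines: append a checksum to the message before encoding so that any recovered candidate can be checked locally. This is a minimal illustration using Python's `binascii.crc32` as a stand-in CRC; the paper additionally protects the payload with Turbo codes before embedding it into the classifier's parameters, which is not reproduced here.

```python
import binascii

def embed_payload(message: bytes) -> bytes:
    """Append a CRC-32 checksum so a recovered message can be verified.
    (Sketch only: the paper also Turbo-encodes this payload before
    embedding it into the DNN classifier during training.)"""
    crc = binascii.crc32(message).to_bytes(4, "big")
    return message + crc

def verify_recovered(payload: bytes) -> bool:
    """Check whether a recovered payload passes the CRC, i.e., whether
    the message survived post-processing of the classifier."""
    message, crc = payload[:-4], payload[-4:]
    return binascii.crc32(message).to_bytes(4, "big") == crc

payload = embed_payload(b"training-set: CIFAR-10")
assert verify_recovered(payload)           # intact recovery verifies
corrupted = bytes([payload[0] ^ 1]) + payload[1:]
assert not verify_recovered(corrupted)     # a single bit flip is caught
```

In the attack, the error-correcting layer repairs most bit errors introduced by post-processing, and the CRC then certifies whether the repaired message is exactly correct.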

IPGuard: Protecting Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary

Xiaoyu Cao (Duke University, USA), Jinyuan Jia (Duke University, USA), Neil Gong (Duke University, USA)

A deep neural network (DNN) classifier represents a model owner’s intellectual property, as training a DNN classifier often requires substantial resources. Watermarking was recently proposed to protect the intellectual property of DNN classifiers. However, watermarking suffers from a key limitation: it sacrifices the utility/accuracy of the model owner’s classifier because it tampers with the classifier’s training or fine-tuning process. In this work, we propose IPGuard, the first method to protect the intellectual property of DNN classifiers that provably incurs no accuracy loss for the classifiers. Our key observation is that a DNN classifier can be uniquely represented by its classification boundary. Based on this observation, IPGuard extracts data points near the classification boundary of the model owner’s classifier and uses them to fingerprint the classifier. A DNN classifier is said to be a pirated version of the model owner’s classifier if the two predict the same labels for most fingerprinting data points. IPGuard is qualitatively different from watermarking. Specifically, IPGuard extracts fingerprinting data points near the classification boundary of a classifier that is already trained, while watermarking embeds watermarks into a classifier during its training or fine-tuning process. We extensively evaluate IPGuard on the CIFAR-10, CIFAR-100, and ImageNet datasets. Our results show that IPGuard can robustly identify post-processed versions of the model owner’s classifier as pirated versions, and can identify classifiers that are neither the model owner’s classifier nor its post-processed versions as non-pirated.
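The piracy decision described above reduces to a label-agreement test on the fingerprinting points. A minimal sketch, with an illustrative threshold of 0.9 that is an assumption of this example rather than the paper's value:

```python
def is_pirated(owner_labels, suspect_labels, threshold=0.9):
    """Decide piracy by fingerprint agreement: flag the suspect classifier
    if it predicts the same label as the owner's classifier on at least
    `threshold` of the fingerprinting data points.
    (`threshold=0.9` is illustrative, not the paper's calibrated value.)"""
    matches = sum(a == b for a, b in zip(owner_labels, suspect_labels))
    return matches / len(owner_labels) >= threshold

# Labels each classifier assigns to the extracted near-boundary points:
owner  = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
pruned = [3, 1, 4, 1, 5, 9, 2, 6, 5, 2]   # post-processed copy: 9/10 agree
other  = [3, 0, 2, 1, 7, 9, 0, 6, 1, 2]   # independently trained model
assert is_pirated(owner, pruned)
assert not is_pirated(owner, other)
```

Because near-boundary points are where independently trained classifiers disagree most, they discriminate well between a post-processed copy and a genuinely different model.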

A Diversity Index based Scoring Framework for Identifying Smart Meters Launching Stealthy Data Falsification Attacks

Shameek Bhattacharjee (Western Michigan University, USA), Praveen Madhavarapu (Missouri University of Science and Technology, USA), Sajal K. Das (Missouri University of Science and Technology, USA)

A challenging problem in the Advanced Metering Infrastructure (AMI) of smart grids is identifying smart meters under the control of a stealthy adversary that injects very low margins of falsified data. The problem is challenging due to wide legitimate variation in both individual and aggregate trends in real-world power consumption data, making such stealthy attacks unrecognizable by existing approaches. In this paper, we propose a novel information-theory-inspired, data-driven device anomaly classification framework, built on a modified diversity index scoring metric, to identify compromised meters launching stealthy data falsification attacks with low margins. Specifically, we draw a parallel between the effects of data falsification attacks and disruptions of ecological balance, and identify the mathematical modifications required in existing Rényi entropy and Hill’s diversity entropy measures. These modifications, such as expected self-similarity with weighted abundance shifts across various temporal scales, and the diversity order, are appropriately embedded in our resulting framework. The resulting diversity index score is used to classify smart meters launching additive, deductive, and alternating switching attack types with high sensitivity (as low as 100W), compared to existing works that perform poorly at margins of false data below 400W. Our proposed theory is validated with two different real smart meter datasets from the USA and Ireland. Experimental results demonstrate successful detection sensitivity from very low to high margins of false data, thus reducing the undetectable attack strategy space in AMI for an adversary with complete knowledge of our method.
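For reference, the textbook forms of the two measures the paper modifies can be computed directly. This sketch implements the standard Rényi entropy and Hill diversity (the "effective number of categories") of order q; the paper's modifications, such as weighted abundance shifts across temporal scales, are not reproduced here.

```python
import math

def renyi_entropy(p, q):
    """Textbook Rényi entropy of order q (q != 1) for a distribution p."""
    return math.log(sum(pi ** q for pi in p)) / (1 - q)

def hill_diversity(p, q):
    """Hill number of order q: the effective number of equally likely
    categories. Equals exp(Rényi entropy of order q)."""
    if q == 1:  # limit case: exponential of Shannon entropy
        return math.exp(-sum(pi * math.log(pi) for pi in p if pi > 0))
    return sum(pi ** q for pi in p) ** (1 / (1 - q))

# A uniform distribution over 4 consumption bins behaves like exactly
# 4 categories; falsification skews the distribution and lowers the index.
uniform = [0.25] * 4
assert abs(hill_diversity(uniform, 2) - 4.0) < 1e-9
skewed = [0.7, 0.1, 0.1, 0.1]
assert hill_diversity(skewed, 2) < 4.0
```

The diversity order q controls how strongly the index weights dominant categories, which is one of the tuning knobs the framework exploits.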

Exploiting the Sensitivity of L2 Adversarial Examples to Erase and Restore

Fei Zuo (University of South Carolina, USA), Qiang Zeng (University of South Carolina, USA)

By adding carefully crafted perturbations to input images, adversarial examples (AEs) can be generated to mislead neural-network-based image classifiers. L2 adversarial perturbations by Carlini and Wagner (CW) are among the most effective but difficult-to-detect attacks. While many countermeasures against AEs have been proposed, detection of adaptive CW-L2 AEs is still an open question. We find that, by randomly erasing some pixels in an L2 AE and then restoring it with an inpainting technique, the AE tends to have different classification results before and after these steps, while a benign sample does not show this symptom. We thus propose a novel AE detection technique, Erase-and-Restore (E&R), that exploits the intriguing sensitivity of L2 attacks. Experiments conducted on two popular image datasets, CIFAR-10 and ImageNet, show that the proposed technique is able to detect over 98% of L2 AEs and has a very low false positive rate on benign images. The detection technique exhibits high transferability: a detection system trained using CW-L2 AEs can accurately detect AEs generated using another L2 attack method. More importantly, our approach demonstrates strong resilience to adaptive L2 attacks, filling a critical gap in AE detection. Finally, we interpret the detection technique through both visualization and quantification.
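The erase-then-restore-then-compare loop can be sketched on a toy 1-D "image" with a toy classifier. Everything here is a drastic simplification and an assumption of this example: the fill-with-mean step stands in for the learned inpainting model, and the thresholding lambda stands in for a real DNN; only the flip-detection idea matches the paper.

```python
import random

def erase_and_restore(image, erase_frac=0.2, seed=0):
    """Randomly erase a fraction of pixels, then 'inpaint' them with the
    mean of the surviving pixels. A crude stand-in for the image-inpainting
    models used in the paper."""
    rng = random.Random(seed)
    n = len(image)
    erased = set(rng.sample(range(n), int(n * erase_frac)))
    kept = [v for i, v in enumerate(image) if i not in erased]
    fill = sum(kept) / len(kept)
    return [fill if i in erased else v for i, v in enumerate(image)]

def detect_ae(image, classify):
    """Flag the input as adversarial if its label flips after E&R."""
    return classify(image) != classify(erase_and_restore(image))

# Toy 'classifier': thresholds the mean pixel value of a flat 1-D image.
classify = lambda img: int(sum(img) / len(img) > 0.5)
benign = [0.9] * 100   # confidently classified; E&R cannot flip it
assert detect_ae(benign, classify) is False
```

A benign, confidently classified input survives the perturbation of E&R, whereas an L2 AE sits so close to the decision boundary that erasing part of the carefully optimized perturbation tends to flip its label.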

Session Chair

Tianwei Zhang

Session 2A

Network and Web Security (I)

2:00 PM — 3:20 PM HKT
Jun 8 Tue, 2:00 AM — 3:20 AM EDT

Careful Who You Trust: Studying the Pitfalls of Cross-Origin Communication

Gordon Meiser (CISPA Helmholtz Center for Information Security, Germany), Pierre Laperdrix (CNRS, Univ Lille, Inria Lille, France), Ben Stock (CISPA Helmholtz Center for Information Security, Germany)

In the past, Web applications were mostly static and most of the content was provided by the site itself. Nowadays, they have turned into rich client-side experiences customized for the user, where third parties supply a considerable amount of content, e.g., analytics, advertisements, or integration with social media platforms and external services. By default, any exchange of data between documents is governed by the Same-Origin Policy, which only permits exchanging data with other documents sharing the same protocol, host, and port. Given the move to a more interconnected Web, standards bodies and browser vendors have added new mechanisms to enable cross-origin communication, primarily domain relaxation, postMessages, and CORS. While prior work has already shown the pitfalls of not using these mechanisms securely (e.g., omitting origin checks for incoming postMessages), we instead focus on the increased attack surface created by the trust that is necessarily put into the communication partners. We report on a study of the Tranco Top 5,000 to measure the prevalence of cross-origin communication. By analyzing the interactions between sites, we build an interconnected graph of the trust relations necessary to run the Web. Subsequently, based on this graph, we estimate the damage that can be caused through exploitation of existing XSS flaws on trusted sites.
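The Same-Origin Policy's equality test mentioned above is easy to state precisely: two URLs share an origin iff their scheme, host, and port all match. A minimal sketch using Python's standard `urllib.parse`:

```python
from urllib.parse import urlsplit

def same_origin(a: str, b: str) -> bool:
    """Two URLs share an origin iff (scheme, host, port) are identical,
    which is the Same-Origin Policy's definition. Default ports are
    normalized so 'https://x/' and 'https://x:443/' compare equal."""
    ua, ub = urlsplit(a), urlsplit(b)
    default = {"http": 80, "https": 443}
    port = lambda u: u.port or default.get(u.scheme)
    return (ua.scheme, ua.hostname, port(ua)) == (ub.scheme, ub.hostname, port(ub))

assert same_origin("https://example.com/a", "https://example.com:443/b")
assert not same_origin("https://example.com/", "http://example.com/")
assert not same_origin("https://example.com/", "https://sub.example.com/")
```

Domain relaxation, postMessage, and CORS each punch deliberate holes in exactly this boundary, which is why the trust placed in the party on the other side of each hole matters.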

Oversharing Is Not Caring: How CNAME Cloaking Can Expose Your Session Cookies

Assel Aliyeva (Boston University, USA), Manuel Egele (Boston University, USA)

In the modern web ecosystem, online businesses often leverage third-party web analytics services to gain insights into the behavior of their users. Due to recent privacy enhancements in major browsers that restrict third-party cookie usage for tracking, these businesses were urged to disguise third-party analytics infrastructure as regular subdomains of their websites [3]. This integration technique, referred to as CNAME cloaking, allows the businesses to continue monitoring user activity on their websites. However, it also opens up the possibility of severe security infractions, as the businesses often share their session cookies with the analytics providers, thus putting online user accounts in danger. Previous work has raised privacy concerns with regard to subdomain tracking and extensively studied the drawbacks of widely used privacy-enhancing browser extensions. In this work, we demonstrate the impact of deploying CNAME cloaking along with lax cookie access control settings on web user security. To this end, we built a system that automatically detects the presence of disguised third-party domains as well as the leakage of first-party cookies. Using our system, we identified 2,139 web analytics domains that can be conveniently added to commonly deployed host-based blacklists. Concerningly, we also found that 27 out of 90 highly sensitive web services (e.g., banks) that we analyzed expose session cookies to the web analytics services.
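The core detection signal is simple to sketch: a first-party subdomain is suspicious when its CNAME record points outside the site's own registrable domain and into a known analytics domain. The sketch below assumes the DNS answers are already available as strings, uses a naive last-two-labels stand-in for proper eTLD+1 computation (real deployments should use the Public Suffix List), and a hypothetical tracker name:

```python
def registrable(domain: str) -> str:
    """Naive eTLD+1: the last two labels. A real detector should use the
    Public Suffix List instead of this shortcut."""
    return ".".join(domain.lower().rstrip(".").split(".")[-2:])

def is_cname_cloaked(subdomain: str, cname_target: str, tracker_domains) -> bool:
    """Flag a first-party subdomain whose CNAME resolves into a known
    third-party analytics domain."""
    return (registrable(subdomain) != registrable(cname_target)
            and registrable(cname_target) in tracker_domains)

trackers = {"trackerco.net"}  # hypothetical analytics provider
assert is_cname_cloaked("metrics.shop.example", "c.trackerco.net", trackers)
assert not is_cname_cloaked("www.shop.example", "shop.example", trackers)
```

The security problem then follows from cookie scoping: a cookie set with `Domain=shop.example` is sent to `metrics.shop.example` too, so the cloaked analytics endpoint receives the first-party session cookie.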

P2DPI: Practical and Privacy-Preserving Deep Packet Inspection

Jongkil Kim (University of Wollongong, Australia), Seyit Camtepe (CSIRO Data61, Australia), Joonsang Baek (University of Wollongong, Australia), Willy Susilo (University of Wollongong, Australia), Josef Pieprzyk (CSIRO Data61, Australia), Nepal Surya (CSIRO Data61, Australia)

The amount of encrypted Internet traffic almost doubles every year thanks to the wide adoption of end-to-end traffic encryption solutions such as IPsec, TLS, and SSH. Despite all the user-privacy benefits that end-to-end encryption provides, encrypted Internet traffic blinds intrusion detection systems (IDS) and makes detecting malicious traffic hugely difficult. The resulting conflict between the user’s privacy and security has demanded solutions for deep packet inspection (DPI) over encrypted traffic. The solutions proposed to date are still restricted in that they require intensive computations during connection setup or detection. For example, BlindBox, introduced by Sherry et al. (SIGCOMM 2015), enables inspection over TLS-encrypted traffic without compromising users’ privacy, but its usage is limited due to a significant delay in establishing an inspected channel. PrivDPI, proposed more recently by Ning et al. (ACM CCS 2019), improves the overall efficiency of BlindBox and makes the inspection scenario more viable. Despite the improvement, we show in this paper that the user privacy of Ning et al.’s PrivDPI can be compromised entirely by the rule generator without involving any other parties, including the middlebox. Having observed the difficulties of realizing efficiency and security in the previous work, we propose a new DPI system for encrypted traffic, named “Practical and Privacy-Preserving Deep Packet Inspection (P2DPI)”. P2DPI enjoys the same level of security and privacy that BlindBox provides. At the same time, P2DPI offers fast setup and encryption and outperforms PrivDPI. Our results are supported by formal security analysis. We implemented P2DPI and, for comparison, PrivDPI, and performed extensive experiments for performance analysis and comparison.
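The family of systems discussed here (BlindBox, PrivDPI, P2DPI) all build on the same basic idea: both endpoints derive keyed tokens from traffic substrings, and the middlebox matches rule tokens against them without seeing plaintext. A drastically simplified sketch of that token-matching idea, using HMAC as the keyed function and a fixed window size (both assumptions of this example, not the papers' constructions):

```python
import hmac
import hashlib

def tokens(text: bytes, key: bytes, window: int = 8):
    """Slide a fixed window over the traffic and HMAC each substring,
    yielding the token set a middlebox could match rules against.
    A drastic simplification of the BlindBox/PrivDPI/P2DPI schemes."""
    return {hmac.new(key, text[i:i + window], hashlib.sha256).digest()
            for i in range(len(text) - window + 1)}

key = b"session-key"  # in the real protocols, derived during setup
rule_token = hmac.new(key, b"evil.exe", hashlib.sha256).digest()
assert rule_token in tokens(b"GET /evil.exe HTTP/1.1", key)
```

The hard problems the papers actually solve sit around this core: generating rule tokens without revealing the rules to the endpoints or the traffic to the rule generator, and making that setup fast, which is where PrivDPI's privacy flaw and P2DPI's improvement lie.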

Camoufler: Accessing The Censored Web By Utilizing Instant Messaging Channels

Piyush Kumar Sharma (IIIT-Delhi, India), Devashish Gosain (IIIT-Delhi, India), Sambuddho Chakraborty (IIIT-Delhi, India)

Free and open communication over the Internet is considered a fundamental human right, essential to prevent repressive regimes from silencing voices of dissent. This has led to the development of various anti-censorship systems. Recent systems have relied on a common blocking-resistance strategy: incurring collateral damage on censoring regimes if they attempt to restrict such systems. However, despite being promising, systems built on such strategies pose additional challenges, viz., deployment limitations, poor QoS, etc. These challenges prevent their wide-scale adoption. Thus, we propose a new anti-censorship system, Camoufler, that overcomes the aforementioned challenges while still maintaining similar blocking resistance. Camoufler leverages Instant Messaging (IM) platforms to tunnel the client’s censored content. This content (encapsulated inside IM traffic) is transported to the Camoufler server (hosted in a free country), which proxies it to the censored website. The eavesdropping censor, however, observes only regular IM traffic being exchanged between the IM peers. Thus, utilizing IM channels as-is for transporting traffic provides unobservability, while also ensuring good QoS due to inherent properties such as low-latency message transport. Moreover, it does not pose new deployment challenges. Performance evaluation of Camoufler, implemented on five popular IM apps, indicates that it provides sufficient QoS for web browsing. E.g., the median time to render the homepages of the Alexa top-1k sites was about 3.6s when using Camoufler implemented over the Signal IM application.
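The tunneling step can be illustrated with a minimal encode/reassemble round trip: the censored request is serialized into chunks small enough to ride inside ordinary IM messages, and the server reassembles them. The per-message size limit and base64 framing below are assumptions of this sketch, not details of Camoufler's actual encapsulation:

```python
import base64

MSG_LIMIT = 64  # hypothetical per-message size limit of an IM platform

def to_im_messages(request: bytes):
    """Encode a censored HTTP request as a sequence of IM-sized text
    messages (base64 framing is this sketch's choice, not the paper's)."""
    text = base64.b64encode(request).decode()
    return [text[i:i + MSG_LIMIT] for i in range(0, len(text), MSG_LIMIT)]

def from_im_messages(messages):
    """Reassemble the request at the Camoufler server side."""
    return base64.b64decode("".join(messages))

req = b"GET https://blocked.example/ HTTP/1.1\r\n\r\n"
assert from_im_messages(to_im_messages(req)) == req
assert all(len(m) <= MSG_LIMIT for m in to_im_messages(req))
```

Because the chunks travel over the IM platform's own encrypted channel, the censor sees only ordinary IM traffic between two accounts, which is the source of the system's unobservability.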

Session Chair

Xavier de Carné de Carnavalet

Session 3A

Applied Cryptography (I)

3:40 PM — 5:00 PM HKT
Jun 8 Tue, 3:40 AM — 5:00 AM EDT

Secure Role and Rights Management for Automotive Access and Feature Activation

Christian Plappert (Fraunhofer-Institut für Sichere Informationstechnologie, Germany), Lukas Jäger (Fraunhofer-Institut für Sichere Informationstechnologie, Germany), Andreas Fuchs (Fraunhofer-Institut für Sichere Informationstechnologie, Germany)

The trend towards fully autonomous vehicles is changing the concept of car ownership drastically: purchasing a personal car may become obsolete. Thus, business models related to feature activation are gaining even higher importance for car manufacturers seeking to retain their customers. However, various recent security incidents have demonstrated that vehicles are a valuable attack target, with attacks ranging from illegal access to car features to the theft of entire vehicles. In this paper, we present a secure access and feature activation system for automotive scenarios that uses a TPM 2.0 as a trust anchor within the vehicle to mitigate potential security threats. Our system enables a fine-granular authorization mechanism by utilizing TPM 2.0 enhanced authorization constructs to implement usage restrictions and revocation policies as well as offline rights delegation. The TPM 2.0 acts as a communication endpoint to the vehicle’s environment and integrates seamlessly with already-deployed security features of the in-vehicle network. We implemented our concept on a Raspberry Pi as a lightweight equivalent of hardware used in the automotive domain and evaluate our solution through performance measurements.
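TPM 2.0 enhanced authorization works by accumulating policy assertions into a single digest: each policy command extends a running hash, and a key can only be used when the session's digest equals the digest the key was sealed to. A loose software analogue of that digest-extension mechanism (the labels and argument encodings below are simplifications of this sketch; real TPMs use fixed-width command codes and typed parameters):

```python
import hashlib

DIGEST_LEN = hashlib.sha256().digest_size

def extend(policy: bytes, command: bytes, args: bytes = b"") -> bytes:
    """Extend a policy digest with one assertion, mimicking how a TPM 2.0
    policy session accumulates policyDigest = H(old || command || params)."""
    return hashlib.sha256(policy + command + args).digest()

# A feature-activation key sealed to this digest unlocks only if the same
# assertions are satisfied in the same order at use time, e.g. "request is
# signed by the fleet owner" and "revocation counter below a bound".
policy = b"\x00" * DIGEST_LEN               # sessions start from zeros
policy = extend(policy, b"PolicySigned", b"fleet-owner-key")
policy = extend(policy, b"PolicyCounter", b"revocation<42")
assert len(policy) == DIGEST_LEN
```

Order matters by construction: extending with the same assertions in a different order yields a different digest, which is what lets distinct restriction/revocation/delegation policies coexist as distinct sealed digests.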

Pipa: Privacy-preserving Password Checkup via Homomorphic Encryption

Jie Li (Huawei Technologies, China), Yamin Liu (Huawei Technologies, China), Shuang Wu (Huawei Technologies, China)

Data breaches are not rare on the Internet, and once they happen, web users may suffer privacy leakage and property loss. To enable web users to conveniently check whether their confidential information has been compromised in a data breach while preserving their privacy, we design Pipa, which is essentially a special case of a private set intersection protocol instantiated with homomorphic encryption. We choose the password checkup scenario as an entry point. In the architecture of Pipa, a server maintains the database of leaked accounts, namely usernames and passwords, and a homomorphic encryption (HE) module is deployed at the user end. When the user issues a checkup query to the server, the HE module encrypts the hash of the user’s account information and sends the ciphertexts to the server. The server then evaluates a compare-add-multiply circuit on the ciphertexts and the database, and sends a result ciphertext back to the user end. Finally, the HE module decrypts the result and informs the user whether the account information has been leaked. The server never learns the username or password, or whether the user’s information matched some entry in the database. We have tested a prototype implementation of Pipa with the Brakerski-Fan-Vercauteren (BFV) HE scheme. By choosing proper parameters, the implementation is practical on a PC. For the most lightweight parameter settings in the paper, the total communication volume can be as low as about 2.2MB, and it takes the server only 0.17 seconds to finish the homomorphic computation on encrypted data.
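The arithmetic behind a compare-add-multiply checkup circuit can be shown in the clear: the server computes a randomized product of differences between the user's hash and every database entry, which is zero iff some entry matches and a uniformly random nonzero value otherwise. This is a plaintext analogue only; in Pipa the same product is evaluated homomorphically on the encrypted user hash under BFV, and the modulus and hashing below are assumptions of this sketch:

```python
import hashlib
import random

P = 2**61 - 1  # a prime modulus; stand-in for the HE plaintext space

def h(account: str) -> int:
    """Hash 'username:password' into the plaintext space."""
    return int.from_bytes(hashlib.sha256(account.encode()).digest(), "big") % P

def checkup(user_hash: int, leaked_hashes, rng=random.Random(1)):
    """Plaintext analogue of the compare-add-multiply circuit: returns 0
    iff the user's hash matches some database entry; a random blinding
    factor makes any non-match result carry no other information."""
    acc = rng.randrange(1, P)  # blinding factor
    for d in leaked_hashes:
        acc = acc * ((user_hash - d) % P) % P
    return acc

db = [h("alice:hunter2"), h("bob:letmein")]
assert checkup(h("alice:hunter2"), db) == 0   # leaked: result decrypts to 0
assert checkup(h("carol:S3cure!"), db) != 0   # not leaked: random nonzero
```

Because the user only ever sees "zero" or "random nonzero" after decryption, and the server only ever sees ciphertexts, neither side learns more than the one-bit answer.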

Multi-User Collusion-Resistant Searchable Encryption with Optimal Search Time

Yun Wang (Hong Kong University of Science and Technology, Hong Kong), Dimitrios Papadopoulos (Hong Kong University of Science and Technology, Hong Kong)

The continued development of cloud computing requires technologies that protect users’ data privacy even from the cloud providers themselves. Multi-user searchable encryption is one such technology. It allows a data owner to selectively enable users to perform keyword searches over her encrypted documents stored at a cloud server. For privacy purposes, it is important to limit what an adversarial server can infer about the encrypted documents, even if it colludes with some of the users. Clearly, in this case it can learn the content of documents shared with this subset of “corrupted” users; however, it is important to ensure that this collusion does not reveal, via cross-user leakage, information about parts of the dataset that are only shared with the remaining “uncorrupted” users. In this work, we propose three novel multi-user searchable encryption schemes for this setting that achieve different trade-offs between performance and leakage. Our first two schemes are the first to achieve asymptotically optimal search time. Our third scheme achieves minimal user storage and forward privacy with respect to document sharing, at the cost of slightly slower search performance. We formally prove the security of our schemes under reasonable assumptions. Moreover, we implement and evaluate their performance both on a single machine and over a WAN. Our experimental results are encouraging; e.g., the search computation time is on the order of a few milliseconds.
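The single-user blueprint that these schemes extend can be sketched compactly: the server holds an inverted index keyed by PRF outputs of keywords, and searching means handing over one PRF token (a "trapdoor"). This toy omits everything the paper contributes, such as multi-user sharing, collusion resistance, and the optimal-search-time constructions, and uses HMAC as the PRF:

```python
import hmac
import hashlib

def prf(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

class TinySSE:
    """Toy single-user searchable index: the server stores an inverted
    index keyed by PRF(keyword), so stored keys reveal no keywords."""
    def __init__(self):
        self.index = {}  # server-side state: token -> set of doc ids

    def add(self, key: bytes, keyword: str, doc_id: int):
        self.index.setdefault(prf(key, keyword.encode()), set()).add(doc_id)

    def search(self, trapdoor: bytes):
        # a single dictionary lookup; cost is proportional to result size
        return self.index.get(trapdoor, set())

k = b"owner-secret"
sse = TinySSE()
sse.add(k, "invoice", 1); sse.add(k, "invoice", 7); sse.add(k, "memo", 2)
assert sse.search(prf(k, b"invoice")) == {1, 7}
```

The multi-user difficulty is visible even here: naively giving every user the same PRF key would let one corrupted user derive trapdoors for keywords in documents shared only with others, which is exactly the cross-user leakage the paper's schemes are designed to prevent.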

Efficient Verifiable Image Redacting based on zk-SNARKs

Hankyung Ko (Hanyang University, South Korea), Ingeun Lee (Kookmin University, South Korea), Seunghwa Lee (Kookmin University, South Korea), Jihye Kim (Kookmin University, South Korea), Hyunok Oh (Hanyang University, South Korea)

An image is a visual representation of a fact and can be used as proof of events. As the use of images increases, it becomes necessary to prove their authenticity while protecting the sensitive personal information they contain. In this paper, we propose a new efficient verifiable image redacting scheme based on zk-SNARKs, a commitment scheme, and a digital signature scheme. We adopt a commit-and-prove SNARK scheme that takes commitments as inputs, so that authenticity can be quickly verified outside the circuit. We also specify relations between the original and redacted images to guarantee the correctness of the redaction. Our experimental results show that the proposed scheme is superior to existing works in terms of key size and proving time without sacrificing the other parameters. The security of the proposed scheme is proven formally.
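The commitment half of such a design can be illustrated without any SNARK machinery: commit to each image block separately, sign the list of commitments once, and redact by simply withholding a block's opening. This sketch shows only that baseline; the paper's contribution is replacing the "reveal the opening" step with a zk-SNARK that proves the redacted image is consistent with the signed commitments without revealing the hidden blocks.

```python
import hashlib
import os

def commit(block: bytes):
    """Hash commitment to one image block; the random `r` keeps withheld
    blocks hiding. Returns (commitment, opening randomness)."""
    r = os.urandom(16)
    return hashlib.sha256(r + block).digest(), r

def verify(commitment: bytes, block: bytes, r: bytes) -> bool:
    return hashlib.sha256(r + block).digest() == commitment

# The camera would sign the list of commitments once at capture time.
blocks = [b"face", b"street", b"sky"]
openings = [commit(b) for b in blocks]
# Redact block 0: publish only blocks 1 and 2 together with their openings.
assert all(verify(openings[i][0], blocks[i], openings[i][1]) for i in (1, 2))
```

A verifier of this baseline checks the signature over all commitments and the openings of the revealed blocks; the zk-SNARK version additionally convinces the verifier that the redacted regions were derived correctly from the committed original.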

Session Chair

Sherman S. M. Chow
