Session Keynote-3

Keynote Session 3

Conference
9:00 AM — 10:00 AM HKT
Local
Jun 9 Wed, 9:00 PM — 10:00 PM EDT

Encrypted Databases: Progresses and Challenges

Kui Ren (Professor and Associate Dean, College of Computer Science and Technology, Zhejiang University, Hangzhou, CHINA)

In recent years, we have witnessed an upsurge in cyber-attacks and data-breach incidents that put tremendous amounts of data at risk, affect millions of users, and cause severe economic losses. As a defence in depth against these persistent and pervasive security threats, keeping data in always-encrypted form is becoming a trend and even a regulatory requirement. Satisfying this demand is particularly challenging in the context of databases, which, as a pillar of modern computing infrastructure, provide indispensable means to organize, store, and retrieve data at different scales. The difficulty lies in performing database query processing over encrypted data while meeting the requirements of security, performance, and complex query functionality. The field has grown tremendously over the past two decades, though no dominant solution is universally applicable. Solutions based on cryptographic techniques, e.g., searchable encryption or property-preserving encryption, can efficiently provide certain primitive operations for database queries, but studies have shown that their allowed leakage profiles can be (sometimes highly) exploitable. The recent advent of secure hardware enclaves opens up new opportunities, yet the first few enclave-based proposals mostly explore extreme design points that rest on strong assumptions (e.g., a huge enclave) or result in weak security (e.g., leaking relations among ciphertexts). In this talk, we will overview these latest advancements and their respective challenges, and discuss a possible roadmap towards encrypted databases that are more secure, efficient, and functional in practice.

Session Chair

Zhiqiang Lin

Session 7A

Privacy (II)

Conference
10:30 AM — 11:50 AM HKT
Local
Jun 9 Wed, 10:30 PM — 11:50 PM EDT

Cryptographic Key Derivation from Biometric Inferences for Remote Authentication

Erkam Uzun (Georgia Institute of Technology, USA), Carter Yagemann (Georgia Institute of Technology, USA), Simon Chung (Georgia Institute of Technology, USA), Vladimir Kolesnikov (Georgia Institute of Technology, USA), Wenke Lee (Georgia Institute of Technology, USA)

Biometric authentication is becoming increasingly popular because of its appealing usability and improvements in biometric sensors. At the same time, it raises serious privacy concerns, since the common deployment involves storing bio-templates on remote servers. Current solutions propose to keep these templates on the client's device, outside the server's reach, but this binds the client to the initial device. A more attractive solution is to have the server authenticate the client, thereby decoupling authentication from the device. Unfortunately, existing biometric template protection schemes suffer from either practicality or accuracy problems. State-of-the-art deep learning (DL) solutions solve the accuracy problem in face- and voice-based verification. However, existing privacy-preserving methods do not accommodate DL methods, as they are generally tailored to the hand-crafted feature spaces of specific modalities. In this work, we propose a novel pipeline, Justitia, that makes DL inferences over face and voice biometrics compatible with standard privacy-preserving primitives such as fuzzy extractors (FE). To this end, we first form a bridge between the Euclidean (or cosine) space of DL and the Hamming space of FE, while maintaining the accuracy and privacy of the underlying schemes. We also introduce efficient noise-handling methods to keep the FE scheme practically applicable. We implement an end-to-end prototype to evaluate our design, then show how to improve security for sensitive authentications and usability for non-sensitive, day-to-day authentications. Justitia achieves the same error rates as the plaintext baseline (0.33% false rejection at zero false acceptance) on the YouTube Faces benchmark. Moreover, combining face and voice achieves 1.32% false rejection at zero false acceptance. According to our systematic security assessments, conducted with both prior approaches and our novel black-box method, Justitia achieves ~25 bits and ~33 bits of security for the face- and face&voice-based pipelines, respectively.
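The bridge between the real-valued embedding space of DL models and the Hamming space of fuzzy extractors can be illustrated with random-hyperplane hashing (SimHash), a standard locality-sensitive transform; this is a generic sketch, not Justitia's published construction, and the dimensions and noise level below are made up:

```python
import random

random.seed(0)
DIM, BITS = 128, 256
# One random hyperplane per output bit.
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def binarize(embedding):
    """One bit per hyperplane: which side of it the embedding falls on."""
    return [1 if sum(p * e for p, e in zip(plane, embedding)) > 0 else 0
            for plane in planes]

a = [random.gauss(0, 1) for _ in range(DIM)]      # enrollment embedding
b = [ai + 0.1 * random.gauss(0, 1) for ai in a]   # noisy re-capture, same subject
c = [random.gauss(0, 1) for _ in range(DIM)]      # different subject

ham = lambda x, y: sum(xi != yi for xi, yi in zip(x, y))
print(ham(binarize(a), binarize(b)))  # small: close vectors share most bits
print(ham(binarize(a), binarize(c)))  # large: roughly half of the 256 bits differ
```

Because the flip probability of each bit is proportional to the angle between vectors, a fuzzy extractor tolerating a small Hamming distance accepts the noisy re-capture while rejecting other subjects.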

Understanding the Privacy Implications of Adblock Plus’s Acceptable Ads

Ahsan Zafar (North Carolina State University, USA), Aafaq Sabir (North Carolina State University, USA), Dilawer Ahmed (North Carolina State University, USA), Anupam Das (North Carolina State University, USA)

Targeted advertising is prevalent on the Web, and many privacy-enhancing tools have been developed to thwart it. Adblock Plus is one such popular tool, used by millions of users on a daily basis to block unwanted ads and trackers. Adblock Plus uses EasyList and EasyPrivacy, the most prominent and widely used open-source filter lists, to block unwanted web content. However, Adblock Plus, by default, also enables an exception list to unblock web requests that comply with specific guidelines defined by the Acceptable Ads Committee. Any publisher can enroll in the Acceptable Ads initiative to request the unblocking of web content. Adblock Plus in return charges a licensing fee from large entities that gain a significant number of ad impressions per month through their participation in the Acceptable Ads initiative. However, the privacy implications of the default inclusion of the exception list have not been well studied, especially as it can unblock not only ads, but also trackers (e.g., content otherwise blocked by EasyPrivacy). In this paper, we take a data-driven approach: we collect historical updates made to Adblock Plus's exception list as well as real-world web traffic by visiting the top 10k websites listed by Tranco. Using these data, we analyze not only how the exception list has evolved over the years, in terms of both the content unblocked and the partners/entities enrolled in the Acceptable Ads initiative, but also the privacy implications of enabling the exception list by default. We found that Google not only unblocks the largest number of unique domains, but is also unblocked by the largest number of unique partners. From our traffic analysis, we see that of the 42,210 Google-bound web requests originally blocked by EasyPrivacy, around 80% are unblocked by the exception list. More worryingly, many of these requests carry 1x1 tracking-pixel images. We therefore question exception rules that negate EasyPrivacy filtering rules by default and advocate for a better vetting process.
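The precedence the abstract describes, exception rules overriding block rules, can be modeled in a few lines. The rule syntax below is a drastically simplified version of real Adblock Plus filters (which also support options, wildcards, and resource types), and the domains are only illustrative:

```python
# EasyPrivacy-style block rules vs. Acceptable Ads-style exception ("@@") rules,
# in a toy domain-anchor syntax.
block_rules = {"||tracker.example^", "||analytics.example^"}
exception_rules = {"@@||analytics.example^"}

def is_blocked(domain):
    blocked = f"||{domain}^" in block_rules
    excepted = f"@@||{domain}^" in exception_rules
    return blocked and not excepted  # exception rules always win

print(is_blocked("tracker.example"))    # True: blocked by the filter list
print(is_blocked("analytics.example"))  # False: unblocked by the exception list
```

This is exactly the mechanism the paper studies: a request a user expects EasyPrivacy to block is silently let through because a matching exception rule is enabled by default.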

Privacy-preserving Density-based Clustering

Beyza Bozdemir (Eurecom, France), Sébastien Canard (Orange Labs, France), Orhan Ermis (Eurecom, France), Helen Möllering (Technical University of Darmstadt, Germany), Melek Önen (Eurecom, France), Thomas Schneider (Technical University of Darmstadt, Germany)

Clustering is an unsupervised machine learning technique that outputs clusters containing similar data items. In this work, we investigate privacy-preserving density-based clustering, which is used, for example, in financial analytics and medical diagnosis. When (multiple) data owners collaborate or outsource the computation, privacy concerns arise. To address this problem, we design, implement, and evaluate the first practical and fully private density-based clustering scheme based on secure two-party computation. Our protocol privately executes the DBSCAN algorithm without disclosing any information (including the number and size of clusters). It can be used for private clustering between two parties as well as for private outsourcing by an arbitrary number of data owners to two non-colluding servers. Our implementation of the DBSCAN algorithm privately clusters data sets with 400 elements in 7 minutes on commodity hardware. It flexibly determines the required number of clusters and is insensitive to outliers, while being only a factor of 19x slower than today's fastest private K-means protocol (Mohassel et al., PETS'20), which can only be used for specific data sets. We then show how to transfer our newly designed protocol to related clustering algorithms by introducing a private approximation of the TRACLUS algorithm for trajectory clustering, which has interesting real-world applications such as financial time-series forecasting and investigating the spread of a disease like COVID-19.
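As background for what the protocol computes privately, here is a minimal plaintext DBSCAN; the paper's contribution is executing logic like this under secure two-party computation without revealing intermediate values such as neighbor counts or cluster sizes:

```python
import math

def dbscan(points, eps, min_pts):
    """Plaintext DBSCAN: returns one cluster id per point, -1 marking noise."""
    n = len(points)
    labels = [None] * n
    neighbors = lambda i: [j for j in range(n)
                           if math.dist(points[i], points[j]) <= eps]
    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # noise (may become a border point later)
            continue
        cluster += 1                # i is a core point: start a new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:                # expand the cluster from its core points
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reclassified as a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nj = neighbors(j)
            if len(nj) >= min_pts:   # j is itself a core point: keep expanding
                queue.extend(nj)
    return labels

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (50, 50)]
print(dbscan(pts, eps=2.0, min_pts=2))  # [0, 0, 0, 1, 1, -1]
```

Note how the cluster count emerges from the data rather than being fixed in advance, and how the isolated point is labeled noise; these are the two properties (flexible cluster count, outlier insensitivity) the abstract highlights over K-means.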

DySan: Dynamically sanitizing motion sensor data against sensitive inferences through adversarial networks

Theo Jourdan (Insa-Lyon, CITI, Inria, France), Antoine Boutet (Insa-Lyon, CITI, Inria, France), Carole Frindel (Insa-Lyon, Creatis, Inserm, France), Claude Rosin Ngueveu (UQAM, Canada), Sebastien Gambs (UQAM, Canada)

With the widespread development of the quantified-self movement, an increasing number of users rely on mobile applications to monitor their physical activity through their smartphones. However, granting applications direct access to sensor data exposes users to privacy risks. In particular, motion sensor data are usually transmitted to analytics applications hosted in the cloud, which leverage machine learning models to give users feedback on their activity status. In this setting, nothing prevents the service provider from inferring private and sensitive information about a user, such as health or demographic attributes. To address this issue, we propose DySan, a privacy-preserving framework that sanitizes motion sensor data against unwanted sensitive inferences (i.e., improving privacy) while limiting the loss of accuracy in physical activity monitoring (i.e., maintaining data utility). Our approach is inspired by the framework of Generative Adversarial Networks: the sensor data are sanitized to ensure a good trade-off between utility and privacy. More precisely, by training several networks in a competitive manner, DySan builds models that sanitize motion data against inferences on a specified sensitive attribute (e.g., gender) while maintaining accurate activity recognition. DySan builds various sanitizing models, characterized by different sets of hyperparameters in the global loss function, and proposes a transfer learning scheme over time by dynamically selecting the model that provides the best utility/privacy trade-off for the incoming data. Experiments conducted on real datasets demonstrate that DySan can drastically limit gender inference, by up to 41% (from 98% with raw data to 57% with sanitized data), while only reducing activity recognition accuracy by 3% (from 95% with raw data to 92% with sanitized data).
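The shape of the "global loss function" mentioned above can be sketched as a weighted objective; the exact terms and weights below are illustrative assumptions, not DySan's published hyperparameters:

```python
def sanitizer_loss(activity_ce, gender_ce, distortion, lam=0.5, alpha=0.1):
    """Toy sanitizer objective: minimize activity-recognition loss and data
    distortion while maximizing the adversary's gender-inference loss
    (hence the minus sign). lam and alpha are the trade-off weights that
    DySan varies across its family of sanitizing models."""
    return activity_ce + alpha * distortion - lam * gender_ce

# Lower is better for the sanitizer: good activity accuracy (low CE),
# small distortion, and a confused gender adversary (high CE).
print(sanitizer_loss(activity_ce=0.2, gender_ce=0.7, distortion=0.1))
```

Training several sanitizers with different (lam, alpha) pairs and then picking the best one per incoming data window is what the abstract calls dynamic model selection.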

Session Chair

Sherman S. M. Chow

Session 7B

Software Security and Vulnerability Analysis (II)

Conference
10:30 AM — 11:50 AM HKT
Local
Jun 9 Wed, 10:30 PM — 11:50 PM EDT

SoK: Enabling Security Analyses of Embedded Systems via Rehosting

Andrew Fasano (Northeastern University, USA), Tiemoko Ballo (MIT Lincoln Laboratory, USA), Marius Muench (Vrije Universiteit Amsterdam, Netherlands), Tim Leek (MIT Lincoln Laboratory, USA), Alexander Oleinik (Boston University, USA), Brendan Dolan-Gavitt (New York University, USA), Manuel Egele (Boston University, USA), Aurélien Francillon (EURECOM, France), Long Lu (Northeastern University, USA), Nick Gregory (New York University, USA), Davide Balzarotti (EURECOM, France), William Robertson (Northeastern University, USA)

Closely monitoring the behavior of a software system during its execution enables developers and analysts to observe, and ultimately understand, how it works. This kind of dynamic analysis can be instrumental to reverse engineering, vulnerability discovery, exploit development, and debugging. While these analyses are typically well-supported on homogeneous desktop platforms (e.g., x86 desktop PCs), they can rarely be applied in the heterogeneous world of embedded systems. One approach to enable dynamic analyses of embedded systems is to move software stacks from physical systems into virtual environments that sufficiently model hardware behavior. This process, which we call “rehosting”, poses a significant research challenge with major implications for security analyses. Although rehosting has traditionally been an unscientific and ad hoc endeavor undertaken by domain experts with varying time and resources at their disposal, researchers are beginning to address rehosting challenges systematically and in earnest. In this paper, we establish that emulation is insufficient to conduct large-scale dynamic analysis of real-world hardware systems and present rehosting as a firmware-centric alternative. Furthermore, we taxonomize preliminary rehosting efforts, identify the fundamental components of the rehosting process, and propose directions for future research.

BugGraph: Differentiating Source-Binary Code Similarity with Graph Triplet-Loss Network

Yuede Ji (George Washington University, USA), Lei Cui (George Washington University, USA), H. Howie Huang (George Washington University, USA)

Binary code similarity detection, which answers whether two pieces of binary code are similar, has been used in a number of applications, such as vulnerability detection and automatic patching. Existing approaches face two hurdles in their efforts to achieve high accuracy and coverage: (1) the problem of source-binary code similarity detection, where the target code to be analyzed is in binary format while the comparison code (with ground truth) is in source format. The source code may be compiled to the comparison binary with either a random or a fixed configuration (e.g., architecture, compiler family, compiler version, and optimization level), which significantly increases the difficulty of code similarity detection; and (2) the existence of different degrees of code similarity. Less similar code is known to be more, if not equally, important in various applications such as binary vulnerability studies. To address these challenges, we design BugGraph, which performs source-binary code similarity detection in two steps. First, BugGraph identifies the compilation provenance of the target binary and compiles the comparison source code to a binary with the same provenance. Second, BugGraph utilizes a new graph triplet-loss network on the attributed control flow graph to produce a similarity ranking. Experiments on four real-world datasets show that BugGraph achieves 90% and 75% true positive rates for syntax-equivalent and similar code, respectively, an improvement of 16% and 24% over state-of-the-art methods. Moreover, BugGraph is able to identify 140 vulnerabilities in six commercial firmware images.
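The triplet-loss objective behind a ranking network like this can be shown on toy embeddings; the hinge form below is the standard triplet loss, with the margin and vectors chosen purely for illustration (BugGraph learns such embeddings from attributed control flow graphs):

```python
import math

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Hinge-style triplet loss: pull the anchor toward the similar (positive)
    embedding and push it at least `margin` farther from the negative one."""
    d_pos = math.dist(anchor, positive)
    d_neg = math.dist(anchor, negative)
    return max(0.0, d_pos - d_neg + margin)

a = [1.0, 0.0]    # embedding of the target binary function
p = [0.9, 0.1]    # embedding of a similar function (same source, new compiler)
n = [-1.0, 0.0]   # embedding of an unrelated function
print(triplet_loss(a, p, n))  # 0.0: the desired ordering already holds
print(triplet_loss(a, n, p))  # positive: the network would be penalized here
```

Training on many such triplets makes Euclidean distance between embeddings act as a similarity ranking, which is exactly what the second step of the pipeline outputs.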

Evaluating Synthetic Bugs

Joshua Bundt (Northeastern University, USA), Andrew Fasano (Northeastern University, USA), Brendan Dolan-Gavitt (NYU, USA), William Robertson (Northeastern University, USA), Tim Leek (MIT Lincoln Laboratory, USA)

Fuzz testing has been used to find bugs in programs since the 1990s, but despite decades of dedicated research, there is still no consensus on which fuzzing techniques work best. One reason for this is the paucity of ground truth: bugs in real programs with known root causes and triggering inputs are difficult to collect at a meaningful scale. Bug injection technologies that add synthetic bugs to real programs seem to offer a solution, but the differences between finding these synthetic bugs and finding organic bugs have not previously been explored at a large scale. Using over 80 years of CPU time, we ran eight fuzzers across 20 targets from the Rode0day bug-finding competition and the LAVA-M corpus. Experiments were standardized with respect to compute resources and metrics gathered. These experiments show differences in fuzzer performance as well as the impact of various configuration options. For instance, it is clear that integrating symbolic execution with mutational fuzzing is very effective and that using dictionaries improves performance. Other conclusions are less clear-cut; for example, no single fuzzer beat all others on all tests. It is noteworthy that no fuzzer found any organic bugs (i.e., ones reported in CVEs), despite 50 such bugs being available for discovery in the fuzzing corpus. A close analysis of results revealed a possible explanation: a dramatic difference between where synthetic and organic bugs live with respect to the “main path” discovered by fuzzers. We find that recent updates to bug injection systems have made synthetic bugs more difficult to discover, but they are still significantly easier to find than organic bugs in our target programs. Finally, this study identifies flaws in bug injection techniques and suggests a number of axes along which synthetic bugs should be improved.

Bran: Reduce Vulnerability Search Space in Large Open Source Repositories by Learning Bug Symptoms

Dongyu Meng (University of California, Santa Barbara, USA), Michele Guerriero (Politecnico di Milano, Italy), Aravind Machiry (University of California, Santa Barbara, USA), Hojjat Aghakhani (University of California, Santa Barbara, USA), Priyanka Bose (University of California, Santa Barbara, USA), Andrea Continella (University of California, Santa Barbara, USA/University of Twente, Netherlands), Christopher Kruegel (University of California, Santa Barbara, USA), Giovanni Vigna (University of California, Santa Barbara, USA)

Software is continually increasing in size and complexity; vulnerability discovery would therefore benefit from techniques that identify potentially vulnerable regions within large code bases, easing detection by reducing the search space. Previous work has explored the use of conventional code-quality and complexity metrics to highlight suspicious sections of (source) code. Recently, researchers have also proposed reducing the vulnerability search space by studying code properties with neural networks. However, previous work has generally failed to leverage the rich metadata available for long-running, large code repositories. In this paper, we present an approach, named Bran, that reduces the vulnerability search space by combining conventional code metrics with fine-grained repository metadata. Bran locates code sections that are more likely to contain vulnerabilities in large code bases, potentially improving the efficiency of both manual and automatic code audits. In our experiments on four large code bases, Bran successfully highlights potentially vulnerable functions, outperforming several baselines, including state-of-the-art vulnerability prediction tools. We also assess Bran's effectiveness in assisting automated testing tools: we use Bran to guide syzkaller, a well-known kernel fuzzer, in fuzzing a recent version of the Linux kernel. The guided fuzzer identifies 26 bugs (10 of them zero-day flaws), including arbitrary writes and reads.

Session Chair

Shuai Wang

Session 8A

Malware and Cybercrime (I)

Conference
2:00 PM — 4:00 PM HKT
Local
Jun 10 Thu, 2:00 AM — 4:00 AM EDT

Malware Makeover: Breaking ML-based Static Analysis by Modifying Executable Bytes

Keane Lucas (Carnegie Mellon University, USA), Mahmood Sharif (VMware and TAU, Israel), Lujo Bauer (Carnegie Mellon University, USA), Michael K. Reiter (Duke University, USA), Saurabh Shintre (NortonLifeLock Research Group, USA)

Motivated by the transformative impact of deep neural networks (DNNs) in various domains, researchers and anti-virus vendors have proposed DNNs for malware detection from raw bytes that do not require manual feature engineering. In this work, we propose an attack that interweaves binary-diversification techniques and optimization frameworks to mislead such DNNs while preserving the functionality of binaries. Unlike prior attacks, ours manipulates instructions that are a functional part of the binary, which makes it particularly challenging to defend against. We evaluated our attack against three DNNs in white- and black-box settings, and found that it often achieved success rates near 100%. Moreover, we found that our attack can fool some commercial anti-virus products, in certain cases with a success rate of 85%. We explored several defenses, both new and old, and identified some that can foil over 80% of our evasion attempts. However, these defenses may still be susceptible to evasion, and so we advocate for augmenting malware-detection systems with methods that do not rely on machine learning.

Identifying Behavior Dispatchers for Malware Analysis

Kyuhong Park (Georgia Institute of Technology, USA), Burak Sahin (Georgia Institute of Technology, USA), Yongheng Chen (Georgia Institute of Technology, USA), Jisheng Zhao (Rice University, USA), Evan Downing (Georgia Institute of Technology, USA), Hong Hu (The Pennsylvania State University, USA), Wenke Lee (Georgia Institute of Technology, USA)

Malware is a major threat to modern computer systems. Malicious behaviors are hidden by a variety of techniques: code obfuscation, message encoding and encryption, etc. Countermeasures have been developed to thwart these techniques in order to expose malicious behaviors. However, these countermeasures rely heavily on identifying specific API calls, which has significant limitations, as these calls can be misleading or hidden from the analyst. In this paper, we show that malicious programs share a key component, which we call a behavior dispatcher: a code structure that sits between various condition checks and the corresponding malicious actions. By identifying these behavior dispatchers, a malware analysis can be guided to them and activate hidden malicious actions more easily. We propose BDHunter, a system that automatically identifies behavior dispatchers to assist in triggering malicious behaviors. BDHunter takes advantage of the observation that a dispatcher compares an input with a set of expected values to determine which malicious behaviors to execute next. We evaluate BDHunter on recent malware samples and show that the identified dispatchers can help trigger more (otherwise hidden) malicious behaviors. Our experimental results show that BDHunter identifies 77.4% of dispatchers within the top 20 candidates discovered. Furthermore, BDHunter-guided concolic execution successfully triggers 13.0× and 2.6× more malicious behaviors, compared to unguided symbolic and concolic execution, respectively. These results demonstrate that BDHunter effectively identifies behavior dispatchers, which are useful for exposing malicious behaviors.
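The dispatcher structure the paper targets can be caricatured as follows; the command values and behavior names are entirely hypothetical, and real dispatchers are compiled switch/compare chains rather than Python, but the shape (input compared against expected values, each match routing to a distinct action) is the same:

```python
def dispatch(command):
    """Hypothetical behavior dispatcher: the input is compared against a set
    of expected values, and each match routes control to a distinct
    (here merely named) malicious behavior."""
    behaviors = {
        0x10: "exfiltrate_files",
        0x20: "log_keystrokes",
        0x30: "spread_laterally",
    }
    return behaviors.get(command, "sleep")  # unmatched input: stay dormant

print(dispatch(0x20))  # log_keystrokes
print(dispatch(0xFF))  # sleep
```

An analysis that locates this comparison site can enumerate the expected values (0x10, 0x20, 0x30) and feed them back as inputs, triggering behaviors that random inputs would never reach; this is the intuition behind guiding concolic execution to dispatchers.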

MalPhase: Fine-Grained Malware Detection Using Network Flow Data

Michal Piskozub (University of Oxford, United Kingdom), Fabio De Gaspari (Sapienza University of Rome, Italy), Freddie Barr-Smith (University of Oxford, United Kingdom), Luigi Mancini (Sapienza University of Rome, Italy), Ivan Martinovic (University of Oxford, United Kingdom)

Economic incentives encourage malware authors to constantly develop new, increasingly complex malware to steal sensitive data or blackmail individuals and companies into paying large ransoms. In 2017, the worldwide economic impact of cyberattacks was estimated to be between 445 and 600 billion USD, or 0.8% of global GDP. Traditionally, one of the approaches used to defend against malware is network traffic analysis, which relies on network data to detect the presence of potentially malicious software. However, to keep up with increasing network speeds and amounts of traffic, network analysis is generally limited to working on aggregated network data, which is traditionally challenging and yields mixed results. In this paper we present MalPhase, a system designed to cope with the limitations of aggregated flows. MalPhase features a multi-phase pipeline for malware detection, type classification, and family classification. The use of an extended set of network flow features and a simultaneous multi-tier architecture facilitates a performance improvement for deep learning models, making them able to detect malicious flows (>98% F1) and categorize them into a respective malware type (>93% F1) and family (>91% F1). Furthermore, the use of robust features and denoising autoencoders allows MalPhase to perform well on samples with varying amounts of benign traffic mixed in. Finally, MalPhase detects unseen malware samples with performance comparable to that on known samples, even when interlaced with benign flows to reflect realistic network environments.

Session Chair

Sang Kil Cha

Session 8B

Blockchain and Distributed Systems

Conference
2:00 PM — 4:00 PM HKT
Local
Jun 10 Thu, 2:00 AM — 4:00 AM EDT

Targeting the Weakest Link: Social Engineering Attacks in Ethereum Smart Contracts

Nikolay Ivanov (Michigan State University, USA), Jianzhi Lou (Michigan State University, USA), Ting Chen (University of Electronic Science and Technology of China, China), Jin Li (Guangzhou University, China), Qiben Yan (Michigan State University, USA)

Ethereum holds multiple billions of U.S. dollars in the form of Ether cryptocurrency and ERC-20 tokens, with millions of deployed smart contracts algorithmically operating these funds. Unsurprisingly, the security of Ethereum smart contracts has been under rigorous scrutiny. In recent years, numerous defense tools have been developed to detect different types of smart contract code vulnerabilities. As opportunities for exploiting code vulnerabilities diminish, attackers resort to social engineering attacks, which aim to influence humans, often the weakest link in the system. The only known class of social engineering attacks in Ethereum is honeypots, which plant hidden traps for attackers attempting to exploit existing vulnerabilities, thereby targeting only a small population of potential victims.

In this work, we explore the possibility and existence of new social engineering attacks beyond smart contract honeypots. We present two novel classes of Ethereum social engineering attacks, Address Manipulation and Homograph, and develop six zero-day social engineering attacks. To show how the attacks can be used in popular programming patterns, we conduct a case study of five popular smart contracts with a combined market capitalization exceeding $29 billion, and integrate our attack patterns into their source code without altering their existing functionality. Moreover, we show that these attacks remain dormant during the test phase and activate their malicious logic only at final production deployment. We further analyze 85,656 open-source smart contracts and discover that 1,027 of them can be used for the proposed social engineering attacks. We conduct a professional opinion survey with experts from seven smart contract auditing firms, corroborating that the exposed social engineering attacks pose a major threat to smart contract systems.
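The Homograph class of attacks rests on visually identical identifiers that differ in their underlying code points. A language-agnostic illustration (not taken from the paper's actual Solidity patterns):

```python
latin_a = "a"          # U+0061 LATIN SMALL LETTER A
cyrillic_a = "\u0430"  # U+0430 CYRILLIC SMALL LETTER A

name1 = "tr" + latin_a + "nsfer"
name2 = "tr" + cyrillic_a + "nsfer"

print(name1 == name2)           # False: distinct strings to the machine
print(name1, name2, sep=" / ")  # yet they render identically to a reviewer
```

A human auditor reading the two names side by side cannot tell them apart, so code paths keyed on the lookalike identifier can hide in plain sight during review.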

PSec: Programming Secure Distributed Systems using Enclaves

Shivendra Kushwah (University of California, Berkeley, USA), Ankush Desai (Amazon Inc, USA), Pramod Subramanyan (Indian Institute of Technology - Kanpur, India), Sanjit A. Seshia (University of California, Berkeley, USA)

We introduce PSec, a domain-specific language for programming secure distributed systems. PSec is a state-machine-based programming language with information flow control capabilities that leverages Intel SGX enclaves to provide security guarantees at runtime. Combining state machines and information flow control with hardware enclaves enables programmers to build complex distributed systems without inadvertently leaking sensitive information to adversaries. We formally prove the security properties of PSec and evaluate our work by programming several real-world examples, including One-Time Passcode and Secure Electronic Voting systems. We present performance results for PSec systems and show that long-running systems incur an acceptable performance overhead of ∼3x, with a possible minimum of ∼1.2x, compared to baseline systems that do not provide any security guarantees.

Fact and Fiction: Challenging the Honest Majority Assumption of Permissionless Blockchains

Runchao Han (Monash University and CSIRO-Data61, Australia) , Zhimei Sui (Monash University, Australia), Jiangshan Yu (Monash University, Australia), Joseph Liu (Monash University, Australia), Shiping Chen (CSIRO-Data61, Australia)

Honest majority is the key security assumption of Proof-of-Work (PoW) based blockchains. However, recent 51% attacks render this assumption unrealistic in practice. In this paper, we challenge this assumption with respect to rational miners in real-world PoW-based blockchains. In particular, we show that the current incentive mechanism may encourage rational miners to launch 51% attacks in two cases. In the first case, a miner of a stronger blockchain launches a 51% attack on a weaker blockchain that shares the same mining algorithm. In the second case, a miner rents mining power from cloud mining services to launch a 51% attack. As 51% attacks enable double-spending, the miner can profit from both attacks. If such double-spending is more profitable than mining, miners are more inclined to launch 51% attacks than to mine honestly. We formally model such behaviours as a series of actions through a Markov Decision Process. Our results show that, for most mainstream PoW-based blockchains, 51% attacks are feasible and profitable, so profit-driven miners are incentivised to launch them for extra profit. In addition, we leverage our model to investigate the recent 51% attack on Ethereum Classic (on 07/01/2019), which is suspected to be such an incident. We provide insights into the attacker's strategy and expected revenue, and show that the strategy was near-optimal.
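The basic break-even reasoning behind the rental-attack case can be written down directly; the paper's Markov Decision Process is far richer (success probabilities, chain dynamics, strategy choices), and the numbers below are made up purely for illustration:

```python
def attack_profit(double_spend_value, rent_cost_per_block, blocks_needed,
                  honest_reward_per_block):
    """Back-of-envelope: attacking pays off when the double-spend gain exceeds
    the mining-power rental cost plus the honest mining revenue given up."""
    cost = blocks_needed * (rent_cost_per_block + honest_reward_per_block)
    return double_spend_value - cost

# e.g. reversing a 6-confirmation payment worth 100 coins
print(attack_profit(100, 2.0, 6, 3.0))  # 70.0 > 0: a rational miner attacks
print(attack_profit(20, 2.0, 6, 3.0))   # -10.0 < 0: honest mining wins
```

Weak chains with cheap, rentable hashpower make the cost term small relative to achievable double-spend values, which is exactly why the two cases in the abstract are dangerous.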

Non-Intrusive and High-Efficient Balance Tomography in the Lightning Network

Yan Qiao (University of Victoria, Canada), Kui Wu (University of Victoria, Canada), Majid Khabbazian (University of Victoria, Canada)

The Lightning Network (LN) is a second-layer technology for solving the scalability problem of blockchain-based cryptocurrencies such as Bitcoin. LN nodes (i.e., LN users), linked by payment channels, can make payments to each other directly or through multiple hops of payment channels, subject to the available balances of the serving channels. In the current LN implementation, the channel capacity (i.e., the sum of the bidirectional balances in the channel) is public, but the bidirectional balances are kept secret for privacy reasons. Nevertheless, the balances can be measured directly by conducting multiple fake payments that probe their precise values. Such a method, while effective, creates many fake invoices and incurs a high cost when used to discover the balances of multiple users. We present a novel non-intrusive balance tomography (NIBT) method, which infers channel balances by performing legal transactions between two pre-created LN nodes. NIBT iteratively reduces the balance ranges, using an efficient balance inference algorithm to find the optimal payment in each iteration, i.e., the one that narrows the remaining balance ranges the most. Experimental results show that NIBT can accurately infer about 92% of all covered balances at extremely low cost.
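The intrusive probing baseline the abstract mentions is essentially a binary search over payment amounts. In this sketch, `payment_succeeds` is a stand-in for one probe attempt, not a real LN API; NIBT's contribution is replacing such fake probes with legal transactions between the prober's own nodes:

```python
def probe_balance(channel_capacity, payment_succeeds):
    """Binary-search the hidden outbound balance of a channel.
    `payment_succeeds(amount)` abstracts one probe: a payment attempt that
    succeeds iff amount <= the hidden balance."""
    lo, hi = 0, channel_capacity
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if payment_succeeds(mid):
            lo = mid      # the balance is at least mid
        else:
            hi = mid - 1  # the balance is below mid
    return lo

hidden = 3_700  # satoshis, unknown to the prober
print(probe_balance(10_000, lambda amt: amt <= hidden))  # 3700
```

Each probe halves the candidate range, so a capacity-C channel needs only about log2(C) probes; the cost problem is that each probe in the intrusive method is a fake payment with its own invoice, which is what NIBT avoids.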

Redactable Blockchain Supporting Supervision and Self-Management

Yanxue Jia (Shanghai Jiao Tong University, China), Shifeng Sun (Monash University/Data 61, CSIRO, Australia), Yi Zhang (Shanghai Jiao Tong University, China), Zhiqiang Liu (Shanghai Jiao Tong University, China), Dawu Gu (Shanghai Jiao Tong University, China)

The immutability of blockchain is crucial to the security of many blockchain applications, yet for some scenarios it is desirable, or even legally required, to allow the contents of a blockchain to be redacted. In this work, we revisit the conflict between immutability and redaction, and put forward a new fine-grained redactable blockchain with a semi-trusted regulator, who follows our protocol but has a tendency to abuse his power. To the best of our knowledge, it is the first blockchain that not only supports the supervision of blockchain content, but also allows users themselves to manage their own data. To this end, we introduce a new variant of the chameleon-hash function, named stateful Chameleon Hash with Revocable Subkey, which is central to building our redactable blockchain and may be of independent interest. We also propose a black-box construction from standard chameleon-hash functions, and prove its security under our proposed security notions. Finally, we provide a proof-of-concept implementation. The evaluation results demonstrate that our redactable blockchain is practical and can be adopted with small additional overhead compared to an immutable blockchain.
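The chameleon-hash primitive at the core of such designs can be illustrated with the textbook discrete-log construction (Krawczyk-Rabin style), here with insecurely small parameters; the paper's stateful variant with revocable subkeys adds further machinery on top of this basic trapdoor-collision property:

```python
# H(m, r) = g^m * h^r mod p, with h = g^x. Whoever knows the trapdoor x can
# find, for any new message m2, a randomness r2 colliding with (m1, r1).
p = 467   # small prime; q = 233 is prime and divides p - 1
q = 233
g = 4     # generator of the order-q subgroup of Z_p*
x = 57    # trapdoor (the redaction key)
h = pow(g, x, p)

def chash(m, r):
    return (pow(g, m, p) * pow(h, r, p)) % p

m1, r1 = 100, 17            # the originally hashed block content
m2 = 42                     # the redacted replacement content
# Collision condition: m1 + x*r1 = m2 + x*r2 (mod q)
r2 = ((m1 - m2) * pow(x, -1, q) + r1) % q

print(chash(m1, r1) == chash(m2, r2))  # True: same hash, different message
```

Because the hash value is unchanged, replacing (m1, r1) with (m2, r2) inside a block leaves all subsequent hash links intact, which is what makes redaction possible without rewriting the chain; anyone without x cannot find such collisions.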

Non-Equivocation in Blockchain: Double-Authentication-Preventing Signatures Gone Contractual

Yannan Li (University of Wollongong, Australia), Willy Susilo (University of Wollongong, Australia), Guomin Yang (University of Wollongong, Australia), Yong Yu (Shaanxi Normal University, China), Tran Viet Xuan Phuong (University of Wollongong, Australia), Dongxi Liu (Data61, CSIRO, Australia)

Equivocation is one of the most fundamental problems that must be solved when designing distributed protocols. Traditional methods to defeat equivocation rely on trusted hardware or particular assumptions, which may hinder their adoption in practice. The advent of blockchain and decentralized cryptocurrencies provides an auspicious paradigm for resolving this problem. In this paper, we propose a blockchain-based solution to contractual equivocation, which supports user-defined, fine-grained, policy-based equivocation: users are disincentivized if the statements they make breach the predefined access rules. The core of our solution is a newly introduced primitive named Policy-Authentication-Preventing Signature (PoAPS), which, combined with a deposit mechanism, ensures that a signer who makes conflicting statements with respect to a policy is penalized. We present a generic construction of PoAPS based on Policy-Based Verifiable Secret Sharing (PBVSS) and demonstrate its practicality via a concrete implementation on the blockchain. Compared with existing solutions that handle only specific types of equivocation, our proposed approach is more generic and can be instantiated to deal with various kinds of equivocation.
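The penalty mechanism behind double-authentication-preventing signatures can be illustrated with secret sharing: each signature leaks one point on a secret line, so one statement per subject is safe, but two conflicting statements leak two points, from which anyone can interpolate the signer's secret and claim the deposit. The sketch below is an illustration of this secret-sharing idea under made-up parameters, not the paper's actual PoAPS or PBVSS construction.

```python
import hashlib

q = 2**61 - 1   # toy prime modulus (assumption; not the paper's parameters)

def h(statement):
    """Map a statement to a field element."""
    return int.from_bytes(hashlib.sha256(statement.encode()).digest(), "big") % q

def sign(secret, subject_slope, statement):
    """Each signature leaks one point (h(statement), y) on the per-subject
    line y = secret + slope * x. One statement per subject reveals nothing
    about `secret`; it is a single point on an unknown line."""
    x = h(statement)
    return x, (secret + subject_slope * x) % q

def extract(sig1, sig2):
    """Two conflicting statements on the same subject give two points on
    the same line; interpolating recovers the secret, forfeiting the deposit."""
    (x1, y1), (x2, y2) = sig1, sig2
    slope = (y2 - y1) * pow(x2 - x1, -1, q) % q
    return (y1 - slope * x1) % q

secret, slope = 424_242, 777                 # made-up signer key material
s1 = sign(secret, slope, "price is 10")
s2 = sign(secret, slope, "price is 20")      # conflicting statement
```

Binding the recovered secret to a blockchain deposit is what turns this cryptographic leak into an automatic financial penalty.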

Session Chair

Yajin Zhou

Session 9A

Hardware Security (II)

Conference
4:20 PM — 5:20 PM HKT
Local
Jun 10 Thu, 4:20 AM — 5:20 AM EDT

(Mis)managed: A Novel TLB-based Covert Channel on GPUs

Ajay Nayak (Indian Institute of Science, India), Pratheek B (Indian Institute of Science, India), Vinod Ganapathy (Indian Institute of Science, India), Arkaprava Basu (Indian Institute of Science, India)

GPUs are now commonly available in most modern computing platforms. They are increasingly being adopted in cloud platforms and data centers due to their immense computing capability. In response to this growth in usage, manufacturers continuously improve GPU hardware by adding new features. However, this increase in usage and the addition of utility-improving features can create new, unexpected attack channels. In this paper, we show that two such features, unified virtual memory (UVM) and multi-process service (MPS), primarily introduced to improve the programmability and efficiency of GPU kernels, have an unexpected consequence: they create a novel covert timing channel via the GPU's translation lookaside buffer (TLB) hierarchy. To enable this covert channel, we first perform experiments to understand the characteristics of the TLBs present on a GPU. UVM allows fine-grained management of translations and helps us discover several idiosyncrasies of the TLB hierarchy, such as a three-level TLB and coalesced entries. We use this newly acquired understanding to demonstrate a novel covert channel via the shared TLB. We then leverage MPS to increase the bandwidth of this channel by 40×. Finally, we demonstrate the channel's utility by leaking data from a GPU-accelerated database application.
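The channel follows a prime-probe pattern: the receiver loads a translation into a TLB set it shares with the sender, the sender either evicts it (bit 1) or leaves it alone (bit 0), and the receiver decodes the bit from whether its re-access is a hit or a miss. The toy model below simulates this on a tiny direct-mapped "TLB", replacing latency measurements with hit/miss flags; real GPU TLB parameters, timing thresholds, and the MPS co-residency setup are far more involved.

```python
SETS = 16  # made-up number of TLB sets for the toy model

class ToyTLB:
    """A shared, direct-mapped TLB: each page maps to one set."""
    def __init__(self):
        self.entries = [None] * SETS

    def access(self, page):
        """Touch a page; return True on a TLB hit, then install the entry."""
        s = page % SETS
        hit = self.entries[s] == page
        self.entries[s] = page
        return hit

def send(tlb, bit, probe_set=3):
    """Sender transmits 1 by touching a conflicting page that evicts the
    receiver's entry from probe_set; transmits 0 by staying idle."""
    if bit:
        tlb.access(SETS + probe_set)   # different page, same set

def receive(tlb, probe_set=3):
    """Receiver probes its own page: a miss means the sender evicted it (1)."""
    return 0 if tlb.access(probe_set) else 1

tlb = ToyTLB()
message = [1, 0, 1, 1, 0]
out = []
for b in message:
    tlb.access(3)        # prime: receiver installs its entry in the set
    send(tlb, b)         # sender's turn in the shared TLB
    out.append(receive(tlb))
```

In hardware, the hit/miss flag is replaced by timing a memory access against a calibrated threshold, and the paper's UVM-based management is what gives the attacker this fine-grained control over which translations are resident.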

Scanning the Cycle: Timing-based Authentication on PLCs

Chuadhry Mujeeb Ahmed (University of Strathclyde, Scotland), Martin Ochoa (AppGate, USA), Jianying Zhou (Singapore University of Technology and Design, Singapore), Aditya Mathur (Singapore University of Technology and Design, Singapore)

Programmable Logic Controllers (PLCs) are a core component of an Industrial Control System (ICS). If a PLC is compromised, or the commands sent across a network from the PLCs are spoofed, the consequences could be catastrophic. In this work, a novel technique to authenticate PLCs is proposed that aims at raising the bar against powerful attackers while remaining compatible with real-time systems. The proposed technique captures timing information for each controller in a non-invasive manner. It is argued that the scan cycle is a unique feature of a PLC that can be approximated passively by observing network traffic. An attacker that spoofs commands issued by the PLCs would deviate from such fingerprints. To detect replay attacks, a PLC Watermarking technique is proposed. PLC Watermarking models the relation between the scan cycle and the control logic by modeling the input/output as a function of the request/response messages of a PLC. The proposed technique is validated on an operational water treatment plant (SWaT) and a smart grid (EPIC) testbed. Results from experiments indicate that PLCs can be distinguished based on their scan-cycle timing characteristics.
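Because a PLC's scan cycle is periodic, the inter-arrival times of its network messages carry a stable timing fingerprint. A minimal sketch of that idea, with made-up timestamps, thresholds, and a simple mean/spread profile standing in for the paper's actual fingerprinting models:

```python
import statistics

def scan_cycle_fingerprint(timestamps):
    """Estimate a PLC's scan-cycle fingerprint from the inter-arrival
    times of its periodic network messages: (mean gap, gap spread)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.mean(gaps), statistics.stdev(gaps)

def authenticate(timestamps, profile, k=3.0):
    """Accept traffic only if its estimated scan cycle stays within
    k standard deviations of the enrolled profile."""
    mean, std = profile
    observed, _ = scan_cycle_fingerprint(timestamps)
    return abs(observed - mean) <= k * std

# Enroll a PLC with a ~10 ms scan cycle (simulated timestamps in seconds,
# with small jitter on each message).
jitter = [0.0, 1e-4, -5e-5, 2e-4, 0.0, 1e-4, -1e-4, 0.0]
enrolled = [i * 0.010 + j for i, j in enumerate(jitter)]
profile = scan_cycle_fingerprint(enrolled)

genuine = [i * 0.010 for i in range(8)]   # same device, same cycle
spoofed = [i * 0.017 for i in range(8)]   # attacker with a different cycle
```

The passive nature of the measurement is the key property: the defender only sniffs request/response traffic, so no instrumentation is added to the real-time control loop.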

Transduction Shield: A Low-Complexity Method to Detect and Correct the Effects of EMI Injection Attacks on Sensors

Yazhou Tu (University of Louisiana at Lafayette, USA), Vijay Srinivas Tida (University of Louisiana at Lafayette, USA), Zhongqi Pan (University of Louisiana at Lafayette, USA), Xiali Hei (University of Louisiana at Lafayette, USA)

The reliability of control systems often depends on the trustworthiness of sensors. As process automation and robotics keep evolving, sensing methods such as pressure sensing are extensively used in both conventional systems and rapidly emerging applications. The goal of this paper is to investigate the threats and design a low-complexity defense method against EMI injection attacks on sensors. To ensure the security and usability of sensors and automated processes, we propose to leverage a matched dummy sensor circuit that shares the sensor's vulnerabilities to EMI but is insensitive to the legitimate signals the sensor is intended to measure. Our method can detect and correct corrupted sensor measurements without introducing components or modules that are highly complex compared to the original low-end sensor circuit. We analyze and evaluate our method with EMI injection experiments on sensors using different attack parameters. We investigate several attack scenarios, including manipulating the DC voltage of the sensor output and injecting sinusoidal signals, white noise, and malicious voice signals. Our experimental results suggest that, with relatively low cost and computation overhead, the proposed method not only detects the attack but also corrects corrupted sensor data, helping to maintain the functioning of systems based on different kinds of sensors in the presence of attacks.
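The matched-dummy idea reduces to simple arithmetic at the readout stage: the dummy circuit picks up the injected EMI but not the physical quantity, so its output both signals an attack and estimates the error to subtract. A minimal sketch with made-up voltages and a made-up detection threshold (the paper's analog design and calibration are the substantive part):

```python
def shield(sensor_reading, dummy_reading, emi_threshold=0.05):
    """Transduction-shield sketch: the matched dummy circuit sees the
    injected EMI but is insensitive to the measured quantity, so its
    reading detects an attack and estimates the error to remove."""
    under_attack = abs(dummy_reading) > emi_threshold
    corrected = sensor_reading - dummy_reading
    return under_attack, corrected

# Benign case: dummy output near zero, reading passes through (made-up volts).
benign_attack, benign_value = shield(sensor_reading=1.20, dummy_reading=0.01)

# EMI injection couples ~0.5 V into both circuits; subtraction recovers ~1.2 V.
emi_attack, emi_value = shield(sensor_reading=1.70, dummy_reading=0.50)
```

The appeal of this design is that both detection and correction come from one extra low-end circuit and a subtraction, rather than from heavyweight filtering or cryptographic attestation.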

Session Chair

Guoxing Chen

Session 9B

Malware and Cybercrime (II)

Conference
4:20 PM — 5:20 PM HKT
Local
Jun 10 Thu, 4:20 AM — 5:20 AM EDT

Analysis and Takeover of the Bitcoin-Coordinated Pony Malware

Tsuyoshi Taniguchi (Fujitsu System Integration Laboratories LTD., Japan), Harm Griffioen (Hasso Plattner Institute, Germany), Christian Doerr (Hasso Plattner Institute, Germany)

Malware, like all products and services, evolves in bursts of innovation. These advances usually happen whenever security controls get "good enough" to significantly impact the revenue stream of malicious actors; in the past, we have seen the malware ecosystem adopt concepts such as code obfuscation, polymorphism, domain-generation algorithms (DGAs), and virtual machine and sandbox evasion whenever defenses were able to perform consistent and pervasive suppression of these threats. The latest innovation step addresses one of the main Achilles' heels of malware operations: resilient addressing of the command & control (C&C) server. As domain blacklisting and DGA reversing have become mature security practices, malware authors are now turning to the Bitcoin blockchain, using its resilient design to disseminate control information that cannot be removed by defenders. In this paper, we report on the adoption of Bitcoin-based C&C addressing in the Pony malware, one of the most widely occurring malware platforms on Windows. We forensically analyze the blockchain-based C&C mechanism of the Pony malware, track the malicious operations over a period of 12 months, and report how the adversaries experimented with and optimized their deployment over time. We identify a security flaw in the C&C addressing, which we use to perform a takeover of the malware's loading mechanism and quantify the volume and origin of incoming infections.
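The general pattern of blockchain-coordinated C&C is that the bot derives its server address from public, unremovable transaction data rather than from a domain. As a purely hypothetical illustration (not the actual Pony encoding, whose details the paper analyzes), suppose each of a wallet's last four transaction amounts carried one octet of the C&C IPv4 address in its low byte:

```python
def decode_ip(sat_amounts):
    """Hypothetical decoder: each of the last four transaction amounts
    (in satoshi) carries one octet of the C&C IPv4 address in its low
    byte. Illustrative only; the real Pony scheme differs in detail."""
    octets = [amount & 0xFF for amount in sat_amounts[-4:]]
    return ".".join(str(o) for o in octets)

# Four made-up satoshi amounts whose low bytes spell out 192.0.2.55
# (a documentation address): 50*256+192, 51*256+0, 52*256+2, 53*256+55.
cnc_address = decode_ip([12992, 13056, 13314, 13623])
```

Any scheme of this shape has a double edge: defenders cannot delete the control data, but anyone who reverse-engineers the decoding can read, and in this paper's case exploit, the same channel as the bots.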

See through Walls: Detecting Malware in SGX Enclaves with SGX-Bouncer

Zeyu Zhang (Tsinghua University, China/George Mason University, USA), Xiaoli Zhang (Tsinghua University, China), Qi Li (Tsinghua University, China), Kun Sun (George Mason University, USA), Yinqian Zhang (Ohio State University, USA), SongSong Liu (George Mason University, USA), Yukun Liu (Alibaba Inc, China), Xiaoning Li (Alibaba Inc, Seattle, USA)

Intel Software Guard Extensions (SGX) offers strong confidentiality and integrity protection to software programs running in untrusted operating systems. Unfortunately, SGX may be abused by attackers to shield suspicious payloads and conceal misbehaviors in SGX enclaves, which cannot be easily detected by existing defense solutions, and no comprehensive study has been conducted to characterize malicious enclaves. In this paper, we present the first systematic study that scrutinizes all possible interaction interfaces between enclaves and the outside (i.e., the cache-memory hierarchy, host virtual memory, and enclave-mode transitions) and identifies seven attack vectors. Moreover, we propose SGX-Bouncer, a detection framework that detects these attacks by leveraging multifarious side-channel observations and SGX-specific features. We conduct empirical evaluations with existing malicious SGX applications, which suggest that SGX-Bouncer can effectively detect various abnormal behaviors of malicious enclaves.

UltraPIN: Inferring PIN Entries via Ultrasound

Ximing Liu (School of Information Systems, Singapore Management University, Singapore), Yingjiu Li (University of Oregon, USA), Robert H. Deng (School of Information Systems, Singapore Management University, Singapore)

While PIN-based user authentication systems such as ATMs have long been considered secure enough, they face a new attack, named UltraPIN, which can be launched from commodity smartphones. As a target user enters a PIN on a PIN-based user authentication system, an attacker may use UltraPIN to infer the PIN from a short distance (50 cm to 100 cm). In this process, UltraPIN leverages smartphone speakers to issue human-inaudible ultrasound signals and uses smartphone microphones to keep recording acoustic signals. It applies a series of signal-processing techniques to extract high-quality feature vectors from low-energy, high-noise signals, and then applies a combination of machine-learning models to classify finger-movement patterns during PIN entry and generate a ranked list of likely PINs as the result. Rigorous experiments show that UltraPIN is highly effective and robust in PIN inference.

Session Chair

Junghwan "John" Rhee

Session Closing

Closing

Conference
5:20 PM — 5:30 PM HKT
Local
Jun 10 Thu, 5:20 AM — 5:30 AM EDT

Announcement of ASIACCS 2022 and Closing Remarks

Conference Chairs

This talk does not have an abstract.

Session Chair

Conference Chairs

