In October I attended the 19th ACM Conference on Computer and Communications Security (CCS) in Raleigh, North Carolina. It was my fourth time attending (and third city visited for) the conference.
Here are some of my interesting takeaways from the conference:
- The next thing after ASLR? Address space layout randomization (ASLR) is an exploit-mitigation technique: under ASLR, the system randomizes where your program keeps its important data structures (stack, heap, libraries), making it harder for malicious software to find and abuse those structures. Taking “mix things up” one step further, I enjoyed the talk on Binary Stirring: Self-randomizing Instruction Addresses of Legacy x86 Binary Code. The authors’ technique “transforms legacy application binary code into self-randomizing code that statically re-randomizes itself each time it is loaded.”
The point of Binary Stirring is to end up with a completely different (but functionally equivalent) executable code segment each time you load a program. The authors double “each code segment into two separate segments—one in which all bytes are treated as data, and another in which all bytes are treated as code. …In the data-only copy (.told), all bytes are preserved at their original addresses, but the section is set non-executable (NX). …In the code-only copy (.tnew), all bytes are disassembled into code blocks that can be randomly stirred into a new layout each time the program starts.” (The authors measured about a 2% performance penalty from mixing up the code segment.)
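The idea of load-time stirring can be sketched in a few lines. This toy model (all names and addresses are illustrative, not from the paper) treats a program as a list of basic blocks that jump to each other by name; every "load" shuffles the blocks and rebuilds the address table, so jump targets still resolve while absolute addresses change:

```python
import random

# Toy model of load-time "stirring": a program is a list of basic blocks,
# each jumping to another block by name. On every load we shuffle the
# block order and rebuild the name-to-address table, so control flow
# still works but absolute addresses differ between loads.

def stir(blocks, seed=None):
    """Return (layout, addr) with the blocks in a fresh random order."""
    rng = random.Random(seed)
    layout = blocks[:]           # copy the original block list
    rng.shuffle(layout)          # a new layout on every "load"
    addr = {}
    pc = 0x1000                  # pretend code-segment base address
    for name, size, _target in layout:
        addr[name] = pc
        pc += size
    return layout, addr

blocks = [("entry", 16, "loop"), ("loop", 32, "check"),
          ("check", 8, "exit"), ("exit", 4, None)]

_, addr1 = stir(blocks, seed=1)
_, addr2 = stir(blocks, seed=2)
print(addr1["loop"], addr2["loop"])   # same block, different addresses
```

An attacker who memorized where `loop` lived on a previous run learns nothing useful about the next run, which is the whole point.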
But why mix the executable bytes at all? Binary Stirring is intended to protect against clever “return-oriented programming” (ROP) attacks by eliminating all predictable executable code from the program. If you haven’t studied ROP (I hadn’t before I attended the talk) then it’s worth taking a look, just to appreciate the cleverness of the attack & the challenge of mitigating it. Start with last year’s paper Q: Exploit Hardening Made Easy, especially the related work survey in section 9.
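To get a feel for why ROP needs no injected code, here is a toy analogue in Python (the gadget names and semantics are invented for illustration): the "program" only ever runs pre-existing snippets, and the attacker's entire contribution is the list of snippet addresses chained together on the stack.

```python
# Toy illustration of return-oriented programming: the attacker strings
# together existing "gadgets" (tiny snippets ending in a return) by
# laying their addresses out on the stack. Here gadgets are plain
# functions and the chain is a list of (gadget-name, argument) pairs;
# no new code is ever introduced, only reused.

def gadget_load(state, value):      # like: pop eax; ret
    state["acc"] = value

def gadget_add(state, value):       # like: add eax, imm; ret
    state["acc"] += value

def gadget_store(state, _):         # like: mov [mem], eax; ret
    state["out"] = state["acc"]

GADGETS = {"load": gadget_load, "add": gadget_add, "store": gadget_store}

def run_chain(chain):
    """Execute a 'return chain': each gadget hands control to the next."""
    state = {"acc": 0, "out": None}
    for name, arg in chain:
        GADGETS[name](state, arg)
    return state["out"]

# The attacker computes 40 + 2 using only code that was already there.
print(run_chain([("load", 40), ("add", 2), ("store", None)]))
```

Binary Stirring attacks the `GADGETS` table, so to speak: if gadget addresses change on every load, a precomputed chain of addresses stops working.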
- Oblivious RAM: ORAM is a neat concept, dating back to at least 1990, wherein an adversary who can see your memory accesses cannot learn anything about what your program is doing. (See below.) I don’t know of any practical real-world problems that are solvable by ORAM — a current general challenge is that you only get obliviousness by utterly giving up on performance — but I still enjoy hearing about work on the topic, including this paper that combines ORAM with the neat concept of secure two-party computation (wherein two parties must cooperate to calculate a function’s output, but where neither party reveals anything about its half of the computation).
Regarding ORAM, imagine a stock analyst who stores gigabytes of encrypted market information “in the cloud.” In order to make a buy/sell decision about a particular stock (say, NASDAQ:TSYS), she would first download a few kilobytes of historical information about TSYS from her cloud storage. The problem is that an adversary at the cloud provider could detect that she was interested in TSYS stock, even though the data is encrypted in the cloud. (How? Well, imagine that the adversary watched her memory access patterns the last time she bought or sold TSYS stock. Those access patterns will be repeated this time when she examines TSYS stock.) The point of oblivious RAM is to make it impossible for the adversary to glean which records the analyst downloads.
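The leak, and the degenerate fix, fit in a few lines of Python. This sketch (names and data are made up) has the "cloud" log every block index touched: a targeted fetch reveals exactly which record the analyst wanted, while a full linear scan produces the same log no matter which record she wanted. Real ORAM schemes achieve that same indistinguishability with far less than O(n) overhead per access:

```python
# Why access patterns leak, and the trivially oblivious (slow) fix.
# The adversary at the cloud provider sees access_log, never plaintext.

access_log = []

def cloud_read(store, i):
    access_log.append(i)          # the adversary records this index
    return store[i]

def targeted_fetch(store, want):
    return cloud_read(store, want)

def oblivious_fetch(store, want):
    result = None
    for i in range(len(store)):   # touch every block, keep only one
        block = cloud_read(store, i)
        if i == want:
            result = block
    return result

store = ["AAPL-data", "TSYS-data", "GOOG-data"]
targeted_fetch(store, 1)
print(access_log)                 # leaks the target: [1]
access_log.clear()
oblivious_fetch(store, 1)
print(access_log)                 # identical for any target: [0, 1, 2]
```

The linear scan is the "utterly giving up on performance" end of the spectrum; the research question is how close to targeted-fetch speed you can get while keeping the logs indistinguishable.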
- Fully homomorphic encryption: The similar concept of fully homomorphic encryption (FHE) was discussed at some of the post-conference workshops. FHE is the concept that you can encrypt data (such as database entries), store them “in the cloud,” and then have the cloud do computation for you (such as database searches) on the encrypted data, without decrypting.
When I first heard about the concept of homomorphic encryption (circa 2005, from some of my excellent then-colleagues at IBM Research) I felt it was one of the coolest things I’d encountered in a company filled with cool things. Unfortunately FHE is still somewhat of a pipe dream — like ORAM, it’ll be a long while before it’s efficient enough to solve any practical real-world problems — but it remains an active area of interesting research.
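For a concrete taste of the "compute on ciphertexts" idea, textbook RSA is *partially* homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. (This sketch uses tiny textbook parameters and is wildly insecure; fully homomorphic schemes support both addition and multiplication, which is what makes arbitrary computation possible.)

```python
# Textbook RSA is multiplicatively homomorphic: Enc(a) * Enc(b) mod n
# decrypts to a*b (for a*b < n). Toy parameters only -- never use
# unpadded RSA, or keys this small, in practice.

n, e, d = 3233, 17, 2753          # n = 61 * 53, classic textbook key

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 6
c = (enc(a) * enc(b)) % n         # the "cloud" multiplies ciphertexts only
print(dec(c))                     # decrypts to a * b = 42
```

The cloud never saw 7, 6, or 42, yet it did useful work on the data; FHE generalizes this so the cloud can evaluate arbitrary circuits, at (for now) enormous cost.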
- Electrical network frequency (ENF): In the “holy cow, how cool is that?” category, the paper “How Secure are Power Network Signature Based Time Stamps?” introduced me to a new forensics concept: “One emerging direction of digital recording authentication is to exploit a potential time stamp originated from the power networks. This time stamp, referred to as the Electrical Network Frequency (ENF), is based on the fluctuation of the supply frequency of a power grid. … It has been found that digital devices such as audio recorders, CCTV recorders, and camcorders that are plugged into the power systems or are near power sources may pick up the ENF signal due to the interference from electromagnetic fields created by power sources.” Wow!
The paper is about anti-forensics (how to remove the ENF signature from your digital recording) and counter-anti-forensics (how to detect when someone has removed the ENF signature). The paper’s discussion of ENF analysis reminded me loosely of one of my all-time favorite papers, also from CCS, on remote measurement of CPU load by measuring clock skew as seen through TCP (transmission control protocol) timestamps.
- Resource-freeing attacks (RFA): I also enjoy papers about virtualization, especially regarding the fair or unfair allocation of resources across multiple competing VMs. In the paper “Resource-Freeing Attacks: Improve Your Cloud Performance (at Your Neighbor’s Expense)”, the authors show how to use antisocial virtual behavior for fun and profit: “A resource-freeing attack (RFA) [improves] a VM’s performance by forcing a competing VM to saturate some bottleneck. If done carefully, this can slow down or shift the competing application’s use of a desired resource. For example, we investigate in detail an RFA that improves cache performance when co-resident with a heavily used Apache web server. Greedy users will benefit from running the RFA, and the victim ends up paying for increased load and the costs of reduced legitimate traffic.”
A disappointing aspect of the paper is that the authors don’t spend much time discussing how one can prevent RFAs. Their suggestions are (1) use a dedicated instance, (2) build better hypervisors, or (3) do better scheduling. That last suggestion reminded me of another of my all-time favorite research results, from last year’s “Scheduler Vulnerabilities and Attacks in Cloud Computing”, wherein the authors describe a “theft-of-service” attack: a virtual machine calls Halt() just before the hypervisor timer fires to measure resource use by VMs, meaning that the VM consumes CPU resources but (a) isn’t charged for them and (b) is allocated even more resources at the expense of other VMs.
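The theft-of-service trick is easy to see in a toy simulation of sampling-based accounting (tick interval, timings, and VM names here are all invented): the hypervisor bills whichever VM happens to be running when the periodic tick fires, so a VM that yields just before each tick runs almost all the time while being billed for none of it.

```python
# Toy simulation of the sampling-accounting flaw: the hypervisor charges
# whichever VM is running at each periodic tick. The cheater runs
# between ticks but Halts just before each tick, so every sample (and
# every bill) lands on the victim.

TICK = 10                             # accounting tick every 10 time units

def simulate(total_time, cheat_yield=1):
    charges = {"cheater": 0, "victim": 0}
    actual  = {"cheater": 0, "victim": 0}
    for t in range(total_time):
        until_tick = TICK - (t % TICK)
        # cheater yields for the final 'cheat_yield' units before a tick
        running = "victim" if until_tick <= cheat_yield else "cheater"
        actual[running] += 1
        if t % TICK == TICK - 1:      # tick fires: sample the current VM
            charges[running] += TICK  # the whole interval is billed to it
    return actual, charges

actual, charges = simulate(100)
print(actual)    # cheater actually consumed 90% of the CPU...
print(charges)   # ...but every tick billed the victim
```

Event-based (rather than sampled) accounting closes this particular hole, which is one form the "better scheduling" suggestion can take.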
- My favorite work of the conference: The paper is a little hard to follow, but I loved the talk on “Scriptless Attacks – Stealing the Pie Without Touching the Sill”. The authors were interested in whether an attacker could still perform “information theft” attacks once all the XSS (cross-site scripting) vulnerabilities are gone. Their answer: “The surprising result is that an attacker can also abuse Cascading Style Sheets (CSS) in combination with other Web techniques like plain HTML, inactive SVG images or font files.”
One of their examples is that the attacker can very rapidly shrink then grow the size of a text entry field. When the text entry field shrinks to one pixel smaller than the width of the character the user typed, the browser automatically creates a scrollbar. The attacker can note the appearance of the scrollbar and infer the character based on the amount the field shrank. (The shrinking and expansion takes place too fast for the user to notice.) The data exfiltration happens even with JavaScript completely disabled. Pretty cool result.
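The side channel can be sketched abstractly in Python (the font metrics and function names below are invented for illustration; the real attack drives this with CSS animations and media queries, not script): each character has a distinct rendered width, the attacker shrinks the field a pixel at a time, and the field width at which the scrollbar first appears reveals the character's width, hence the character.

```python
# Sketch of the scrollbar side channel: shrink the field until the typed
# character overflows it; the overflow threshold equals the character's
# rendered width. Widths below are made-up "font metrics".

CHAR_WIDTH = {"i": 4, "l": 5, "a": 9, "w": 14, "m": 15}

def scrollbar_appears(field_width, typed_char):
    """Browser-side rule: a scrollbar shows once content overflows."""
    return CHAR_WIDTH[typed_char] > field_width

def infer_char(typed_char):
    # Attacker shrinks the field one pixel at a time (too fast for the
    # user to notice) and records the first width that overflows.
    for width in range(max(CHAR_WIDTH.values()), 0, -1):
        if scrollbar_appears(width, typed_char):
            overflow_at = width       # character width = width + 1
            break
    return [c for c, w in CHAR_WIDTH.items() if w == overflow_at + 1]

print(infer_char("m"))   # -> ['m']: unique width, exact recovery
```

Characters sharing a width would only be narrowed to a candidate set, but in a proportional font most widths are distinctive enough to make the inference sting.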
Finally, here are some honorable-mention papers in four categories — work I enjoyed reading, that you might too:
Those who cannot remember the past are condemned to repeat it:
Sidestepping institutional security:
Why be a white hat? The dark side is where all the money is made:
Badware and goodware:
Overall I enjoyed the conference, especially the “local flavor” that the organizers tried to inject by serving stereotypical southern food (shrimp and grits, fried catfish) and hiring a bluegrass band (the thrilling Steep Canyon Rangers) for a private concert at Raleigh’s performing arts center.