Last week I attended the Network and Distributed System Security (NDSS) Symposium in San Diego, California. NDSS is generally considered one of the top-tier academic security conferences. This was my second year in a row attending the symposium.
Things I learned or especially enjoyed are in boldface text below (you may safely skip the non-boldfaced text in the list):
- What you publicly “Like” on Facebook makes it easy for someone to profile probable values for attributes you didn’t make public, including gender, relationship status, and age. For example, as described by Chaabane et al., “single users share more interests than married ones. In particular, a single user has an average of 9 music interests whereas a married user has only 5.79.”
- Femtocells could represent a major vector for attacking or disrupting cellular infrastructure. I’ve used femtocells; they are great for having clear conversations in weak coverage areas, but to quote the conclusion of Golde et al.: “Deployed 3G femtocells already outnumber traditional 3G base stations globally, and their deployment is increasing rapidly. However, the security of these low-cost devices and the overall architecture seems poorly implemented in practice. They are inherently trusted, able to monitor and modify all communication passing through them, and with an ability to contact other femtocells through the VPN network…[However, it is well known that it is possible to get root level access to these devices. We] evaluated and demonstrated attacks originating from a rogue femtocell and their impact on endusers and mobile operators. It is not only possible to intercept and modify mobile communication but also completely impersonate subscribers. Additionally, using the provided access to the operator network, we could leverage these attacks to a global scale, affect the network availability, and take control of a part of the femtocell infrastructure…We believe that attacks specifically targeting end-users are a major problem and almost impossible to mitigate by operators due to the nature of the current femtocell architecture. The only solution towards attacks against end-users would be to not treat the femtocell as a trusted device and rely on end-to-end encryption between the phone and the operator network. However, due to the nature of the 3G architecture and protocols and the large amount of required changes, it is probably not a practical solution.”
- Appending padding bytes onto data, before encrypting the data, can be dangerous. “Padding” is non-useful data appended to a message simply to ensure a minimum length message or to ensure the message ends on a byte-multiple boundary. (Some encryption functions require such precisely sized input.) As described by AlFardan and Paterson: “the padding oracle attack…exploits the MAC-then-Pad-then-Encrypt construction used by TLS and makes use of subtle timing differences that may arise in the cryptographic processing carried out during decryption, in order to glean information about the correctness or otherwise of the plaintext format underlying a target ciphertext. Specifically, Canvel et al. used timing of encrypted TLS error messages in order to distinguish whether the padding occurring at the end of the plaintext was correctly formatted according to the TLS standard or not. Using Vaudenay’s ideas, this padding oracle information can be leveraged to build a full plaintext recovery attack.” AlFardan and Paterson’s paper described a padding oracle attack against two implementations of the DTLS (datagram transport layer security) protocol, resulting in full or partial plaintext recovery.
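To make the mechanics concrete, here is a toy Python simulation of the core CBC padding-oracle trick. This is a sketch only: it stands in for a real block cipher with XOR, uses PKCS#7-style padding rather than TLS’s exact padding format, and `padding_oracle` plays the role of the timing side channel; none of it is the authors’ code.

```python
import os

BLOCK = 16
# Toy "block cipher": XOR with a fixed random keystream. Real TLS/DTLS uses
# a real cipher in CBC mode; the attack logic depends only on the oracle.
KEY = os.urandom(BLOCK)

def decrypt_block(c):
    return bytes(a ^ b for a, b in zip(c, KEY))

encrypt_block = decrypt_block  # XOR is its own inverse

def pkcs7_pad(msg):
    n = BLOCK - len(msg) % BLOCK
    return msg + bytes([n]) * n

def padding_oracle(prev_block, ct_block):
    """True iff the block decrypts to valid padding. This single boolean is
    what the timing differences in the real attacks leaked to the attacker."""
    pt = bytes(a ^ b for a, b in zip(decrypt_block(ct_block), prev_block))
    n = pt[-1]
    return 1 <= n <= BLOCK and pt.endswith(bytes([n]) * n)

def recover_last_byte(prev_block, ct_block, oracle):
    """Recover the final plaintext byte of ct_block using only the oracle."""
    for guess in range(256):
        forged = bytearray(prev_block)
        forged[-1] ^= guess          # flips the last plaintext byte by guess
        if oracle(bytes(forged), ct_block):
            # Valid padding (almost always) means the forged last byte is
            # 0x01, so original_last_byte == guess ^ 0x01. The rare
            # ambiguity with longer pads is ignored in this sketch.
            return guess ^ 0x01
    raise ValueError("oracle never reported valid padding")

# Demo: one CBC "block" whose last byte is the pad byte 0x01.
iv = os.urandom(BLOCK)
secret = pkcs7_pad(b"attack at dawn!")           # 15 bytes + 1 pad byte
ct = encrypt_block(bytes(a ^ b for a, b in zip(secret, iv)))
assert recover_last_byte(iv, ct, padding_oracle) == secret[-1]
```

Repeating the same guess-and-check against each byte position, working backwards, yields the full block; that is the leverage Vaudenay’s ideas provide.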
- If you want people to click on your malicious link, keep it short and simple. Onarlioglu et al. presented the results of their user-behavior study that showed, among other results: “When the participants did not have the technical knowledge to make an informed decision for a test and had to rely on their intuition, a very common trend was to make a guess based on the ‘size’, the ‘length’, or the ‘complexity’ of the artifacts involved. For example, a benign Amazon link was labeled as malicious by non-technical participants on the basis that the URL contained a crowded parameter string. Some of the comments included: ‘Too long and complicated.’, ‘It consists of many numbers.’, ‘It has lots of funny letters.’ and ‘It has a very long name and also has some unknown code in it.’. Many of these participants later said they would instead follow a malicious PayPal phishing URL because ‘It is simple.’, ‘Easy to read.’, ‘Clear obvious link.’ and it has a ‘Short address’. One participant made a direct comparison between the two links: ‘This is not dangerous, address is clear. [Amazon link] was dangerous because it was not like this.’. Interestingly, in some cases, the non-technical participants managed to avert attacks thanks to this strategy. For example, a number of participants concluded that a Facebook post containing a code injection attack was dangerous solely on the grounds that the link was ‘long’ and ‘confusing’…the majority of the non-techie group was not aware of the fact that a shortened URL could link to any destination on the web. Rather, they thought that TinyURL was the website that actually hosted the content.”
- There needs to be more transparency into how lawful telephone interception systems are constructed and deployed. At CCS two years ago a paper by Sherr et al. was presented that described a control-plane-DoS attack on CALEA systems; here Bates et al. propose a cryptography-based forensics engine for audit and accounting of CALEA systems. As described by Bates et al.: “The inability to properly study deployed wiretap systems gives an advantage to those who wish to circumvent them; those who intend to illegally subvert a surveillance system are not usually constrained by the laws governing access to the wiretaps. Indeed, the limited amount of research that has looked at wiretap systems and standards has shown that existing wiretaps are vulnerable to unilateral countermeasures by the target of the wiretap, resulting in incorrect call records and/or omissions in audio recordings.” Given the amount of light that people are shining on other infrastructure-critical systems such as smart meters and SCADA control systems, perhaps the time is ripe for giving lawful-intercept and monitoring systems the same treatment.
- There are still cute ideas in hardware virtualization. Sun et al. presented work (that followed the nifty Lockdown work by Vasudevan et al. at Carnegie Mellon, of which I was previously unaware) on using the ACPI S3 sleep mode as a BIOS-assisted method for switching between “running” OSes. The idea is that when you want to switch from your “general web surfing” OS to your “bank access” OS, you simply suspend the first VM (to the S3 state) and then wake the second VM (from its sleep state). Lockdown did the switch in 20 seconds using the S4 sleep mode; Sun et al.’s work on SecureSwitch does the switch in 6 seconds using the S3 sleep mode but requires some hardware modifications. Given my interest in hardware virtualization, I particularly enjoyed learning about these two projects. I also liked the three other systems-security papers presented in the same session: Lin et al. presented forensics work on discovering data structures in unmapped memory; El Defrawy et al. presented work on modifying low-end microcontrollers to provide inexpensive roots of trust for embedded systems; and Tian et al. presented a scheme for one virtual machine to continuously monitor another VM’s heap for evidence of buffer overflow.
- Defensive systems that take active responses, such as the OSPF “fight-back” mechanism, can introduce new vulnerabilities as a result of these responses. In some of my favorite work from the conference, Nakibly et al. described a new “Disguised LSA” attack against the Open Shortest Path First (OSPF) interior gateway protocol. The authors first describe the OSPF “fight-back” mechanism: “Once a router receives an instance of its own LSA [link state advertisement] which is newer than the last instance it originated, it immediately advertises a newer instance of the LSA which cancels out the false one.” However, “[The] OSPF spec states that two instances of an LSA [link state advertisement] are considered identical if they have the same values in the following three fields: Sequence Number, Checksum, and Age…all three relevant fields [are] predictable.” In the Disguised LSA attack the authors first send a forged LSA (purportedly from a victim router) with sequence number N (call this “LSA-A”), then one second later send another forged LSA with sequence number N+1 (LSA-B). When the victim router receives LSA-A it will fight back by sending a new LSA with sequence number N+1 (LSA-C). But when the victim receives LSA-B it will ignore it as being a duplicate of LSA-C. Meanwhile, any router that receives LSA-B before LSA-C will install (the attacker’s) LSA-B and discard (the victim’s) LSA-C as a duplicate. Not all routers in an area will be poisoned by LSA-B, but the authors’ simulation suggests that 90% or more routers in an AS could be poisoned. In other disrupting-networks work, Schuchard et al. presented a short paper on how an adversary can send legitimate but oddly-formed BGP messages to cause routers in an arbitrary network location to fall into one of a “variety of failure modes, ranging from severe performance degradation to the unrecoverable failure of all active routing sessions”; and Jiang et al. 
demonstrated that “a vulnerability affecting the large majority of popular DNS implementations which allows a malicious domain name [such as those used in] malicious activities such as phishing, malware propagation, and botnet command and control [to] stay resolvable long after it has been removed from the upper level servers”, even after the TTL for the domain name expires in DNS caches.
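The duplicate-detection quirk at the heart of the Disguised LSA attack is easy to simulate. The Python sketch below is my own toy model, not the authors’ code; in particular it simply assumes the attacker can force the Checksum and Age fields of LSA-B to collide with the predicted fight-back LSA-C, which is the part the paper shows is feasible because all three fields are predictable.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LSA:
    origin: str      # advertising router
    seq: int
    checksum: int    # assumption: attacker can force this to collide
    age: int
    payload: str     # the actual link-state data

def same_instance(a, b):
    """Per the OSPF spec, two instances of an LSA are 'identical' when
    Sequence Number, Checksum, and Age all match -- payload is NOT compared."""
    return (a.seq, a.checksum, a.age) == (b.seq, b.checksum, b.age)

class Router:
    def __init__(self, name):
        self.name = name
        self.db = {}                         # origin -> installed LSA

    def receive(self, lsa):
        cur = self.db.get(lsa.origin)
        if cur is not None and (lsa.seq < cur.seq or same_instance(lsa, cur)):
            return                           # older or "duplicate": discard
        self.db[lsa.origin] = lsa            # install newer instance

# Attacker predicts the victim's fight-back fields for sequence N+1.
N = 100
lsa_a = LSA("victim", N,     0xAAAA, 1, "false route #1")   # trigger
lsa_b = LSA("victim", N + 1, 0xBBBB, 1, "false route #2")   # disguised LSA
lsa_c = LSA("victim", N + 1, 0xBBBB, 1, "true topology")    # fight-back

poisoned = Router("r1")
poisoned.receive(lsa_b)      # attacker's LSA-B arrives first here...
poisoned.receive(lsa_c)      # ...so the victim's LSA-C is dropped as a dup
assert poisoned.db["victim"].payload == "false route #2"

healthy = Router("r2")
healthy.receive(lsa_c)       # fight-back arrives first here
healthy.receive(lsa_b)       # attacker's copy discarded as the duplicate
assert healthy.db["victim"].payload == "true topology"
```

Which of the two outcomes a given router sees is purely a race between LSA-B and LSA-C, which is why the attack poisons most, but not all, routers in the area.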
- Hot topic 1: Three papers discussed negative aspects of location privacy in cellular networks. Kune et al. describe both an attack to determine the TMSI (temporary mobile subscriber identity) assigned to a telephone number in a GSM network, and a technique for monitoring PCCH (paging channel) traffic from a particular cell tower to determine if the subscriber is in the vicinity of (within a few kilometers of) that tower. Bindschaedler et al. show empirically that recent research on “mix zones” — geographic areas in which users can mix or change their device identifiers such as IP and MAC addresses to hide their movement and ongoing communications — is not yet effective as a privacy preservation mechanism for cellular users. Finally, in the words of Qian et al.: “An important class of attacks against cellular network infrastructures, i.e., signaling DoS attack, paging channel overload, and channel exhaustion attack, operates by sending low rate data traffic to a large number of mobile devices at a particular location to exhaust bottleneck resources…[We demonstrate] how to create a hit-list of reachable mobile IP addresses associated with the target location to facilitate such targeted DoS attacks.” Of particular interest: “We show that 80% of the devices keep their device IPs for more than 4 hours, leaving ample time for attack reconnaissance” and that often on UMTS networks in large U.S. cities an attacker could “locate enough IPs to impose 2.5 to 3.5 times the normal load on the network.”
- Hot topic 2: Three papers discussed privacy preservation in cloud-based searching. Chen et al. presented an interesting architecture where a private cloud and a public cloud are used together to perform a search over sensitive DNA information: “Inspired by the famous “seed-and-extend” method, our approach strategically splits a mapping task: the public cloud seeks exact matches between the keyed hash values of short read substrings (called seeds) and those of reference sequences to roughly position reads on the genome; the private cloud extends the seeds from these positions to find right alignments. Our novel seed-combination technique further moves most workload of this task to the public cloud. The new approach is found to work effectively against known inference attacks, and also easily scale to millions of reads.” Lu, in addition to having the best opening to a Related Work section that I’ve ever read — “This section overviews related work; it can be skipped with no lack of continuity” — demonstrates “how to build a system that supports logarithmic search over encrypted data.” This system “would allow a database owner to outsource its encrypted database to a cloud server. The owner would retain control over what records can be queried and by whom, by granting each authorized user a search token and a decryption key. A user would then present this token to cloud server who would use it to find encrypted matching records, while learning nothing else. A user could then use its owner-issued decryption key to learn the actual matching records.” Finally, Stefanov et al. presented sort-of-cloud-related work on optimizing “Oblivious RAM”: “The goal of O-RAM is to completely hide the data access pattern (which blocks were read/written) from the server. In other words, each data read or write request will generate a completely random sequence of data accesses from the server’s perspective.”
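The seed-and-extend split is simple to illustrate. Below is a toy Python sketch of the idea — my own simplification, not the paper’s protocol: real reads are far longer, seeds are combined for efficiency, hash collisions must be handled, and the HMAC key and toy sequences here are invented.

```python
import hmac, hashlib

SEED_LEN = 4            # toy seed length; real seeds are much longer
KEY = b"owner-secret"   # hashing key, never revealed to the public cloud

def keyed(s):
    """Keyed hash: the public cloud sees only these opaque values."""
    return hmac.new(KEY, s.encode(), hashlib.sha256).digest()

# --- Private cloud: hash reference substrings and read seeds before upload.
reference = "ACGTACGGTTAC"
ref_index = {keyed(reference[i:i + SEED_LEN]): i
             for i in range(len(reference) - SEED_LEN + 1)}

read = "CGGTTA"
seed = read[:SEED_LEN]

# --- Public cloud: matches opaque hashes to roughly position the read.
def public_cloud_match(index, hashed_seed):
    pos = index.get(hashed_seed)
    return [] if pos is None else [pos]

candidates = public_cloud_match(ref_index, keyed(seed))

# --- Private cloud: extends each candidate to verify the full alignment.
hits = [p for p in candidates if reference[p:p + len(read)] == read]
assert hits == [5]
```

The cheap, bulky exact-match step runs on untrusted hardware over keyed hashes only, while the small extension step that touches plaintext DNA stays on the private side.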
- Hot topic 3: Five papers discussed smartphone and/or app insecurity. In work that had my jaw hitting the floor regarding the security design of production apps, Schrittwieser et al. “analyze nine popular mobile messaging and VoIP applications and evaluate their security models with a focus on authentication mechanisms. We find that a majority of the examined applications use the user’s phone number as a unique token to identify accounts, which further encumbers the implementation of security barriers. Finally, experimental results show that major security flaws exist in most of the tested applications, allowing attackers to hijack accounts, spoof sender-IDs or enumerate subscribers.” Davi et al. described a control-flow integrity checker for smartphones with ARM processors: asserting “the basic safety property that the control-flow of a program follows only the legitimate paths determined in advance. If an adversary hijacks the control-flow, CFI enforcement can detect this divagation and prevent the attack.” Zhou et al. analyzed the prevalence of malware in five Android app markets, including the official market and four popular alternative markets. Two papers (Bugiel et al. and Grace et al.) address privilege-escalation problems in Android, where malicious applications are able to gain unapproved privileges either (Bugiel et al.) by colluding with other differently-privileged applications or (Grace et al.) by invoking APIs unexpectedly exported by the Android framework. The presenter for the latter paper showed a video of a malicious application sending an SMS message and rebooting(!) the phone, both without holding any user-granted permissions.
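The control-flow integrity policy Davi et al. enforce can be sketched in a few lines. The Python below only illustrates the policy, not the mechanism — real CFI enforcement instruments machine code (here, on ARM) rather than running checks in a high-level language, and the function names and call-site table are invented for the example.

```python
# Toy control-flow integrity check: before an indirect transfer, verify the
# target against a table of legitimate destinations determined in advance.

def deposit():  return "deposit"
def withdraw(): return "withdraw"
def install_backdoor(): return "owned"      # not a legitimate target

# Policy: the legitimate paths, computed ahead of time by program analysis.
ALLOWED_TARGETS = {
    "dispatch_site": {deposit, withdraw},
}

class CFIViolation(Exception):
    pass

def checked_call(site, target, *args):
    """Stand-in for an instrumented indirect call/jump."""
    if target not in ALLOWED_TARGETS.get(site, ()):
        raise CFIViolation(f"{site}: illegal transfer to {target.__name__}")
    return target(*args)

assert checked_call("dispatch_site", deposit) == "deposit"
try:
    checked_call("dispatch_site", install_backdoor)   # simulated hijack
    raise AssertionError("violation not detected")
except CFIViolation:
    pass                                              # divagation detected
```

A hijacked control flow (a return or indirect jump redirected to attacker-chosen code) fails the table lookup and is stopped before the transfer happens.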
There were three keynote speakers, whose messages were: (1) you’re a security professional so you need to be involved in publicly advocating one way or the other on security-related social issues; (2) the future [30 years ahead] will be a strange and interesting place and, since you’re a security researcher, you’ll help us get there; and (3) passwords are passé; if you’re not using two- (or more-)factor authentication then you’re not a good security practitioner.
I like NDSS because of the practical nature of the (mostly academic) work presented: Much of the work feels innovative enough to advance the science of security, yet relevant and practical enough to immediately be integrated as useful extensions to existing commercial products. Only 17% of submitted manuscripts were accepted for publication, so the quality of work presented was good. Unfortunately, attendance is low — someone told me there were 210 people there, but I never heard the official count — so it is not as good a see-old-friends-and-make-new-ones event as, say, CCS.