Jagged Thoughts | Dr. John Linwood Griffin

February 14, 2012

NDSS 2012

Filed under: Reviews — JLG @ 1:03 AM

Last week I attended the Network and Distributed System Security (NDSS) Symposium in San Diego, California.  NDSS is generally considered one of the top-tier academic security conferences.  This was my second year in a row attending the symposium.

Things I learned or especially enjoyed are in boldface text below (you may safely skip the non-boldfaced text in the list):

  1. What you publicly “Like” on Facebook makes it easy for someone to profile probable values for attributes you didn’t make public, including gender, relationship status, and age.  For example, as described by Chaabane et al., “single users share more interests than married ones.  In particular, a single user has an average of 9 music interests whereas a married user has only 5.79.”
  2. Femtocells could represent a major vector for attacking or disrupting cellular infrastructure.  I’ve used femtocells; they are great for holding clear conversations in weak coverage areas, but to quote the conclusion of Golde et al.: “Deployed 3G femtocells already outnumber traditional 3G base stations globally, and their deployment is increasing rapidly. However, the security of these low-cost devices and the overall architecture seems poorly implemented in practice. They are inherently trusted, able to monitor and modify all communication passing through them, and with an ability to contact other femtocells through the VPN network…[However, it is well known that it is possible to get root level access to these devices.  We] evaluated and demonstrated attacks originating from a rogue femtocell and their impact on endusers and mobile operators. It is not only possible to intercept and modify mobile communication but also completely impersonate subscribers. Additionally, using the provided access to the operator network, we could leverage these attacks to a global scale, affect the network availability, and take control of a part of the femtocell infrastructure…We believe that attacks specifically targeting end-users are a major problem and almost impossible to mitigate by operators due to the nature of the current femtocell architecture. The only solution towards attacks against end-users would be to not treat the femtocell as a trusted device and rely on end-to-end encryption between the phone and the operator network. However, due to the nature of the 3G architecture and protocols and the large amount of required changes, it is probably not a practical solution.”
  3. Appending padding bytes onto data, before encrypting the data, can be dangerous.  “Padding” is non-useful data appended to a message, either to ensure a minimum message length or to ensure that the message ends on a byte-multiple boundary.  (Some encryption functions require such precisely sized input.)  As described by AlFardan and Paterson: “the padding oracle attack…exploits the MAC-then-Pad-then-Encrypt construction used by TLS and makes use of subtle timing differences that may arise in the cryptographic processing carried out during decryption, in order to glean information about the correctness or otherwise of the plaintext format underlying a target ciphertext.  Specifically, Canvel et al. used timing of encrypted TLS error messages in order to distinguish whether the padding occurring at the end of the plaintext was correctly formatted according to the TLS standard or not.  Using Vaudenay’s ideas, this padding oracle information can be leveraged to build a full plaintext recovery attack.”  AlFardan and Paterson’s paper described a padding oracle attack against two implementations of the DTLS (datagram transport layer security) protocol, resulting in full or partial plaintext recovery.  (A toy sketch of why such padding checks leak information appears after this list.)
  4. If you want people to click on your malicious link, keep it short and simple.  Onarlioglu et al. presented the results of their user-behavior study that showed, among other results: “When the participants did not have the technical knowledge to make an informed decision for a test and had to rely on their intuition, a very common trend was to make a guess based on the ‘size’, the ‘length’, or the ‘complexity’ of the artifacts involved. For example, a benign Amazon link was labeled as malicious by non-technical participants on the basis that the URL contained a crowded parameter string. Some of the comments included: ‘Too long and complicated.’, ‘It consists of many numbers.’, ‘It has lots of funny letters.’ and ‘It has a very long name and also has some unknown code in it.’. Many of these participants later said they would instead follow a malicious PayPal phishing URL because ‘It is simple.’, ‘Easy to read.’, ‘Clear obvious link.’ and it has a ‘Short address’. One participant made a direct comparison between the two links: ‘This is not dangerous, address is clear. [Amazon link] was dangerous because it was not like this.’. Interestingly, in some cases, the non-technical participants managed to avert attacks thanks to this strategy. For example, a number of participants concluded that a Facebook post containing a code injection attack was dangerous solely on the grounds that the link was ‘long’ and ‘confusing’…the majority of the non-techie group was not aware of the fact that a shortened URL could link to any destination on the web. Rather, they thought that TinyURL was the website that actually hosted the content.”
  5. There needs to be more transparency into how lawful telephone interception systems are constructed and deployed.  At CCS two years ago a paper by Sherr et al. was presented that described a control-plane-DoS attack on CALEA systems; here Bates et al. propose a cryptography-based forensics engine for audit and accounting of CALEA systems.  As described by Bates et al.: “The inability to properly study deployed wiretap systems gives an advantage to those who wish to circumvent them; those who intend to illegally subvert a surveillance system are not usually constrained by the laws governing access to the wiretaps. Indeed, the limited amount of research that has looked at wiretap systems and standards has shown that existing wiretaps are vulnerable to unilateral countermeasures by the target of the wiretap, resulting in incorrect call records and/or omissions in audio recordings.”  Given the amount of light that people are shining on other infrastructure-critical systems such as smart meters and SCADA control systems, perhaps the time is ripe for giving lawful-intercept and monitoring systems the same treatment.
  6. There are still cute ideas in hardware virtualization.  Sun et al. presented work (that followed the nifty Lockdown work by Vasudevan et al. at Carnegie Mellon, of which I was previously unaware) on using the ACPI S3 sleep mode as a BIOS-assisted method for switching between “running” OSes.  The idea is that when you want to switch from your “general web surfing” OS to your “bank access” OS, you simply suspend the first VM (to the S3 state) and then wake the second VM (from its sleep state).  Lockdown did the switch in 20 seconds using the S4 sleep mode; Sun et al.’s work on SecureSwitch does the switch in 6 seconds using the S3 sleep mode but requires some hardware modifications.  Given my interest in hardware virtualization, I particularly enjoyed learning about these two projects.  I also liked the three other systems-security papers presented in the same session: Lin et al. presented forensics work on discovering data structures in unmapped memory; El Defrawy et al. presented work on modifying low-end microcontrollers to provide inexpensive roots of trust for embedded systems; and Tian et al. presented a scheme for one virtual machine to continuously monitor another VM’s heap for evidence of buffer overflow.
  7. Defensive systems that take active responses, such as the OSPF “fight-back” mechanism, can introduce new vulnerabilities as a result of these responses.  In some of my favorite work from the conference, Nakibly et al. described a new “Disguised LSA” attack against the Open Shortest Path First (OSPF) interior gateway protocol.  The authors first describe the OSPF “fight-back” mechanism: “Once a router receives an instance of its own LSA [link state advertisement] which is newer than the last instance it originated, it immediately advertises a newer instance of the LSA which cancels out the false one.”  However, “[The] OSPF spec states that two instances of an LSA [link state advertisement] are considered identical if they have the same values in the following three fields: Sequence Number, Checksum, and Age…all three relevant fields [are] predictable.”  In the Disguised LSA attack the authors first send a forged LSA (purportedly from a victim router) with sequence number N (call this “LSA-A”), then one second later send another forged LSA with sequence number N+1 (LSA-B).  When the victim router receives LSA-A it will fight back by sending a new LSA with sequence number N+1 (LSA-C).  But when the victim receives LSA-B it will ignore it as being a duplicate of LSA-C.  Meanwhile, any router that receives LSA-B before LSA-C will install (the attacker’s) LSA-B and discard (the victim’s) LSA-C as a duplicate.  Not all routers in an area will be poisoned by LSA-B, but the authors’ simulation suggests that 90% or more routers in an AS could be poisoned.  (A toy simulation of this sequence-number race appears after this list.)  In other disrupting-networks work, Schuchard et al. presented a short paper on how an adversary can send legitimate but oddly-formed BGP messages to cause routers in an arbitrary network location to fall into one of a “variety of failure modes, ranging from severe performance degradation to the unrecoverable failure of all active routing sessions”; and Jiang et al. demonstrated that “a vulnerability affecting the large majority of popular DNS implementations which allows a malicious domain name [such as those used in] malicious activities such as phishing, malware propagation, and botnet command and control [to] stay resolvable long after it has been removed from the upper level servers”, even after the TTL for the domain name expires in DNS caches.
  8. Hot topic 1:  Three papers discussed negative aspects of location privacy in cellular networks.  Kune et al. describe both an attack to determine the TMSI (temporary mobile subscriber identity) assigned to a telephone number in a GSM network, and a technique for monitoring PCCH (paging channel) traffic from a particular cell tower to determine if the subscriber is in the vicinity of (within a few kilometers of) that tower.  Bindschaedler et al. show empirically that recent research on “mix zones” — geographic areas in which users can mix or change their device identifiers such as IP and MAC addresses to hide their movement and ongoing communications — is not yet effective as a privacy preservation mechanism for cellular users.  Finally, in the words of Qian et al.: “An important class of attacks against cellular network infrastructures, i.e., signaling DoS attack, paging channel overload, and channel exhaustion attack, operates by sending low rate data traffic to a large number of mobile devices at a particular location to exhaust bottleneck resources…[We demonstrate] how to create a hit-list of reachable mobile IP addresses associated with the target location to facilitate such targeted DoS attacks.”  Of particular interest: “We show that 80% of the devices keep their device IPs for more than 4 hours, leaving ample time for attack reconnaissance” and that often on UMTS networks in large U.S. cities an attacker could “locate enough IPs to impose 2.5 to 3.5 times the normal load on the network.”
  9. Hot topic 2:  Three papers discussed privacy preservation in cloud-based searching.  Chen et al. presented an interesting architecture where a private cloud and a public cloud are used together to perform a search over sensitive DNA information: “Inspired by the famous “seed-and-extend” method, our approach strategically splits a mapping task: the public cloud seeks exact matches between the keyed hash values of short read substrings (called seeds) and those of reference sequences to roughly position reads on the genome; the private cloud extends the seeds from these positions to find right alignments. Our novel seed-combination technique further moves most workload of this task to the public cloud. The new approach is found to work effectively against known inference attacks, and also easily scale to millions of reads.”  Lu, in addition to having the best opening to a Related Work section that I’ve ever read — “This section overviews related work; it can be skipped with no lack of continuity” — demonstrates “how to build a system that supports logarithmic search over encrypted data.”  This system “would allow a database owner to outsource its encrypted database to a cloud server. The owner would retain control over what records can be queried and by whom, by granting each authorized user a search token and a decryption key. A user would then present this token to cloud server who would use it to find encrypted matching records, while learning nothing else. A user could then use its owner-issued decryption key to learn the actual matching records.”  Finally, Stefanov et al. presented sort-of-cloud-related work on optimizing “Oblivious RAM”: “The goal of O-RAM is to completely hide the data access pattern (which blocks were read/written) from the server. In other words, each data read or write request will generate a completely random sequence of data accesses from the server’s perspective.”
  10. Hot topic 3: Five papers discussed smartphone and/or app insecurity.  In work that had my jaw hitting the floor regarding the security design of production apps, Schrittwieser et al. “analyze nine popular mobile messaging and VoIP applications and evaluate their security models with a focus on authentication mechanisms. We find that a majority of the examined applications use the user’s phone number as a unique token to identify accounts, which further encumbers the implementation of security barriers. Finally, experimental results show that major security flaws exist in most of the tested applications, allowing attackers to hijack accounts, spoof sender-IDs or enumerate subscribers.”  Davi et al. described a control-flow integrity checker for smartphones with ARM processors: asserting “the basic safety property that the control-flow of a program follows only the legitimate paths determined in advance. If an adversary hijacks the control-flow, CFI enforcement can detect this divagation and prevent the attack.”  Zhou et al. analyzed the prevalence of malware in five Android app markets, including the official market and four popular alternative markets.  Two papers (Bugiel et al. and Grace et al.) address privilege-escalation problems in Android, where malicious applications are able to gain unapproved privileges either (Bugiel et al.) by colluding with other differently-privileged applications or (Grace et al.) by invoking APIs unexpectedly exported by the Android framework.  The presenter for the latter paper showed a video of a malicious application sending an SMS message and rebooting(!) the phone, both without holding any user-granted permissions.
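
To make item 3 concrete, here is a minimal Python sketch of a MAC-then-Pad-then-Encrypt receive path (my own toy reconstruction, not code from the paper; the key, padding layout, and error strings are invented).  The point is that the padding check runs before, and is far cheaper than, the MAC check, so an attacker who can distinguish the two failures (by error message or by timing) gains a one-bit padding oracle about chosen ciphertexts, which Vaudenay's technique turns into byte-by-byte plaintext recovery.

    import hmac, hashlib

    KEY = b"k" * 32        # invented demo key
    BLOCK = 16             # block size of the (elided) block cipher

    def pad(data):
        # TLS-style padding: a length byte preceded by that many copies of itself
        n = BLOCK - (len(data) + 1) % BLOCK
        return data + bytes([n] * (n + 1))

    def padding_ok(record):
        n = record[-1]
        return len(record) > n and record[-(n + 1):] == bytes([n] * (n + 1))

    def receive(record):               # record = the CBC-decrypted bytes
        if not padding_ok(record):
            return "padding_error"     # fast path: the MAC is never computed
        body = record[:-(record[-1] + 1)]
        msg, tag = body[:-32], body[-32:]
        expected = hmac.new(KEY, msg, hashlib.sha256).digest()   # slow path
        if not hmac.compare_digest(tag, expected):
            return "mac_error"
        return msg

    msg = b"attack at dawn"
    good = pad(msg + hmac.new(KEY, msg, hashlib.sha256).digest())
    print(receive(good))                        # b'attack at dawn'
    print(receive(good[:-1] + b"\xff"))         # padding_error (fast)
    print(receive(good[:1] + b"Z" + good[2:]))  # mac_error (slow; MAC was computed)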
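
Likewise for item 7, here is a toy event-ordering simulation of the Disguised LSA attack (my own reconstruction of the logic as described in the talk, not the authors' code).  Real OSPF judges two LSA instances identical when Sequence Number, Checksum, and Age all match; since the attacker can predict and forge all three, a matching sequence number stands in for the full triple below.

    class Router:
        """Toy link-state database holding one victim-originated LSA."""
        def __init__(self, seq):
            self.seq = seq
            self.payload = "legitimate routes"

        def receive(self, seq, payload):
            if seq < self.seq:
                return "older instance, dropped"
            if seq == self.seq:
                # equal seq here means "same Sequence Number, Checksum, Age"
                return "duplicate, dropped"
            self.seq, self.payload = seq, payload
            return "installed: " + payload

    N = 8                 # attacker-chosen; victim's LSA currently has seq N-1
    r = Router(N - 1)     # some other router in the OSPF area

    print(r.receive(N, "forged LSA-A"))          # installed; victim fights back
    print(r.receive(N + 1, "forged LSA-B"))      # sent 1s later, arrives first
    print(r.receive(N + 1, "fight-back LSA-C"))  # duplicate, dropped
    print(r.payload)                             # the forgery persists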

There were three keynote speakers, whose messages were: (1) you’re a security professional so you need to be involved in publicly advocating one way or the other on security-related social issues; (2) the future [30 years ahead] will be a strange and interesting place and, since you’re a security researcher, you’ll help us get there; and (3) passwords are passé; if you’re not using two- (or more-)factor authentication then you’re not a good security practitioner.

I like NDSS because of the practical nature of the (mostly academic) work presented:  Much of the work feels innovative enough to advance the science of security, yet relevant and practical enough to immediately be integrated as useful extensions to existing commercial products.  Only 17% of submitted manuscripts were accepted for publication, so the quality of work presented was good.  Unfortunately, attendance is low — someone told me there were 210 people there, but I never heard the official count — so it is not as good a see-old-friends-and-make-new-ones event as, say, CCS.

February 4, 2012

Jagged Technology debrief

Filed under: Reviews — JLG @ 6:41 PM

Jagged Technology, LLC — tagline “We may be rough, but we’re sharp.” — was the name of the (first) company that I founded.

The website read:  “Jagged Technology executes R&D and provides consulting services in the fields of computer science, information processing, and computer systems software and hardware engineering.  Our areas of expertise include storage systems, security architectures, network components, and virtualization and operating systems.  Please contact [us] to discuss teaming, collaboration, sales, and employment opportunities.”

The company existed from late 2007 through early 2009.  My business plan was to pursue Small Business Innovation Research (SBIR) contract awards, following in the footsteps of the successful group I had previously worked for.  (That group had been part of a small government contracting business that had won numerous SBIR awards, but by the time I worked for them they had been acquired by a larger company and were no longer eligible to pursue SBIRs.)

I was wildly unsuccessful — other than in fulfilling my dream to found a business and work for myself — but I enjoyed every minute of it.  There were a few things I did right:

  • I earned revenue!  Not very much, and I certainly came nowhere near turning a profit, but I was able to score some consulting work (evaluating storage system technologies and formulating research and product objectives) meaning that it wasn’t entirely a pipe dream.
  • I had great mentors.  Before starting Jagged I had the privilege to work with and learn from Dr. Tiffany Frazier and Dr. Chuck Morefield.  Both were successful government-contracting entrepreneurs; Tiffany inspired me to dream big; Chuck took the time to sit down with me to describe what had worked for him when he took the plunge.  While running Jagged I relied on Dr. Greg Shannon’s advice and assistance in creating winnable (if not winning) SBIR proposals and partnerships.
  • I had good ideas.  After I dissolved the company and started working at my next job, I talked with one of the program managers on the phone (about the next round of SBIR solicitations).  The PM recognized my name and mentioned that he’d really liked what I submitted during the previous round.  (I managed not to scream “so why didn’t you fund it?!?”)

Of course, there were many things I did wrong, including:

  • Not having a contract in hand before starting the company.  Nearly every successful government contractor I’ve talked with has said that they already had a contract lined up before filing the paperwork.  Doing so is especially important because it takes forever (often 9 months or longer) between when you first submit a proposal and when you start getting paid for the work.  I assumed that I’d be able to quickly win one or more contracts and that business would grow rapidly after that.
  • Not having a reputation or past performance with the customers I targeted.  The ratio of submitted proposals to winning proposals was around 30:2 on many of the programs to which I proposed.  Often the program manager was already working with a contractor to develop the SBIR topic and announcement (so there’s 1 of the 2 awards likely already earmarked) so in essence I was competing against 29 other companies — many of whom were actual established companies with existing products, intellectual property, and history of contract performance.  And if the PM has actually worked with any of the other 29 companies on a previous program, then my already unfavorable odds get significantly worse.
  • Overemphasizing the technical aspect of a proposal instead of all evaluated aspects.  SBIRs are evaluated equally on technical merit, feasibility of successful execution, and commercialization strategy.  My technical writeups were strong but the other two components (67% of the evaluation criteria) were usually weak; I typically listed only one or two people (me and a technical colleague) as lined up to execute the work and only one or two ideas on how I would partner with larger companies to commercialize the work.  My early proposals were especially unbalanced since I entered entrepreneurship with the idea that if I proposed a superior mousetrap then the money would beat a path to my door.  It would have made a significant difference in my chances for success if I had brought in experienced partners as co-founders.
  • Focusing my business plan on the SBIR program instead of building a diverse revenue stream.  It is hard for me today not to be severely disappointed in the SBIR program.  I poured significant time and money and opportunity into trying to come up with solutions to the government’s and military’s most urgent technological challenges; when I finally received a rejection notice (usually 9 to 12 months later) and I requested my legally required debrief, typically all I would get was a one-line memo saying that the evaluators “didn’t like my commercialization plan.”  Well, what didn’t they like and how can I do better next time?  Anyway, since I didn’t have a contract in hand at the beginning, in retrospect I would have been better served by focusing on commercially salable technology development — and sales — and using the SBIR program as a way to fund development of related but noncritical offshoot ideas.
  • Not having opened a new cellular telephone account for business purposes.  To this day I get people calling my personal cell phone to try to sell widgets or web hosting to Jagged Technology.  On the plus side, registering your home address for a small business gets you signed up for all sorts of fascinating catalogs for boxes and shipping containers, plastic parts, and scientific glassware — I do enjoy receiving these catalogs, and it’s good to know that if I need a pallet of hazardous materials placards I know exactly whom to call.

The best thing to come out of having started Jagged Technology is a better understanding of how business works — for example, the complexity of balancing your product development schedule and projected contract or sales awards with the available resources (time, people, money) on hand to pursue the work; or the need to work in advance to establish a business area (for example by feeding ideas to program managers, and by visibly contributing to related product or community efforts) instead of simply responding to opportunities; or why a CEO spends his or her days talking with the chief finance officer, chief marketing officer, and chief operating officer instead of spending those days talking with project leads and engineers and salespeople.  It’s not all about better mousetraps.

Just as I feel that I’m a better car driver after having taken the Motorcycle Safety Foundation RiderCourse, I feel as though I’m a better researcher, mentor, and employee today thanks to having immersed myself in founding Jagged Technology and having worked hard for eighteen months to make it successful.

And here’s hoping to more success (or at least more revenue) next time!

February 2, 2012

Discovering flight

Filed under: Aviation — JLG @ 8:00 PM

My first flight lesson was today.  I had a fabulous time.

JLG at Hanscom Field, February 2, 2012

Getting here required the encouragement of private pilot Anastasia (Pittsburgh, 2003), aerobatic pilot Doug (Hawthorne, 2006), sport pilot Tim (Annapolis, 2011), and co-pilot Evelyn (Boston, 2012).  I have long wanted to fly but have long been afraid to start — not because of the flying itself but because of the lifestyle commitment that flying represents.  Various pilots and aircraft owners have warned me that flight is a “use it or lose it skill” and that once you start flying you need to keep flying regularly to stay proficient.

But carpe diem, no?  I’ve been wanting to do this for a decade, and it’s not like it’s going to become more convenient nor less expensive as more time passes.  Evelyn is enthusiastic and exceptionally supportive of the idea of me getting out of the apartment and getting into a cockpit.  So, private pilot certificate, here I come.

There are three general aviation airports within reasonable distance of our place: Norwood Memorial (30 minutes southwest), Hanscom Field (35 NW), and Beverly Municipal (40 NE).  There are at least four different flight schools at these three airports, all of which seem fine at first glance.  How to choose?  Unfortunately I haven’t met any local pilots yet to get direct recommendations.  But one of the schools advertises aerobatic training and aerobatic aircraft as part of their fleet — and thanks to Doug I’m very interested in aerobatics — so earlier this week I scheduled a “discovery flight” with that company to meet the staff, try out the drive to Hanscom, discuss the curriculum and training plan, and get up in the air.

The weather this morning was cold, windy, and overcast, which meant hardly anyone else was using the sky; there were only two other general aviation planes flying near us in the practice area west of the airport.  I spent most of the flight grinning from ear to ear.  Some of my takeaways:

  • The aircraft (at least the Cessna 172SP that we flew today) has metallic wicks on the rear of some of the control surfaces to dissipate any static charge that builds up during the flight.  Without the wicks, the buildup of electrical potential could cause communications problems.  Neat.
  • Full throttle isn’t necessarily better.  The concept of “cruise speed” on aircraft has always perplexed me; if the plane can fly faster, why wouldn’t you fly faster in general?  Answer: it’s much louder and much less smooth of a ride, not to mention much less fuel efficient.
  • I should expect my instructors to pretend that the GPS is always broken.  I’m fairly interested in old-school navigation, be it dead reckoning or the use of VOR/DME equipment, so I was worried that everything would focus on GPS.  The instructor assured me that, other than a 30-minute lesson in GPS (“it’s nice to have when the light is fading and you’re lost”), I would get all the old-school navigation I wanted.
  • Speaking of dead reckoning:  After an hour of having me fly in basically Brownian motion (“ok, now make a steep 180 degree turn keeping your airspeed and altitude fixed but without looking at the instrument panel”) the instructor asked “where’s the airport?”  I’m proud to report that my guess was only off by 90 degrees (I said “east”, the correct answer was “north”).

Most flight schools appear to offer a discovery flight of 30 or 60 minutes (today’s was 60 minutes of flight time as part of an overall 150-minute lesson, including the pre-flight briefing and weather analysis, pre-flight inspection, and post-flight briefing, for $200).  In a discovery flight the student gets to perform the takeoff (yay!) and most of the flying, with hands-on practice in basic aircraft handling: roll/pitch/yaw, trim for level flight, coordinated turns, slow flight, and visual references.  I started lessons today thanks to the discovery flight that Anastasia gifted me a decade ago, plus the ongoing encouragement I’ve received from my friends and family:  Thank you.

January 30, 2012

ShmooCon 2012

Filed under: Reviews — JLG @ 5:36 PM

Last weekend I attended ShmooCon for the first time.

I enjoyed it, though it was more useful from a “street cred knowledge” standpoint than it was for developing enterprise-class security products.  My favorite items were:

  1. The best work presented:  “Credit card fraud: The contactless generation”:  This talk demonstrated, using actual equipment and an actual volunteer from the audience, that it is possible to create a working credit card replica without ever having physical access to one of the new “contactless” RFID credit cards.  Moreover, the foil sleeves that are supposed to prevent remote reading don’t work perfectly.  This area of continuing work truly scares me, since the technology is being used by banks to shift responsibility for fraud onto the consumers.  http://www.forbes.com/sites/andygreenberg/2012/01/30/hackers-demo-shows-how-easily-credit-cards-can-be-read-through-clothes-and-wallets/
  2. “Inside Apple’s MDM Black Box”:  The speaker has reverse-engineered the process by which MDM (mobile device management) traffic travels from an enterprise server, through Apple, to an iOS device; and demonstrated how third parties can build their own MDM servers instead of having to buy a big expensive product to do so.
  3. “A New Model for Enterprise Defense”:  One of the IT folks at Intel (Toby Kohlenberg) is pushing a solution to the multiple-fidelities-of-application-access problem.  Their main goal is to change access control from a binary yes/no decision to a more nuanced approach based on “multilevel trust”.  For example, consider a salesperson accessing corporate resources:  From a coffee shop, they are limited to viewing customer information and order status.  From a hotel room, they can also modify orders and view pricing information, but all accesses are fully logged and audited.  From within a corporate site, they can additionally modify customer information and change pricing information.  The talk was about how Intel has started a long multi-year effort to try to achieve this vision.  They’ve only just started, and unfortunately it seemed it would be a long time before their applications supported fine-grained access control.  (A toy sketch of this kind of context-dependent authorization appears after this list.)
  4. The announcement of www.routerpwn.com by a Mexican security researcher.  The purpose of Routerpwn is to demonstrate just how easy it is to break the security on many common routers; for example you click on a Javascript link and enter an IP address and boom, you’ve reset the administrative password.
  5. My favorite talk: Brendan O’Connor presented work on building low-cost sensor/wifi devices that can be stealthily placed or launched-by-drone into a target environment of interest.  (There’s nothing new about stealth placement, except he was able to make a workable device for $50, far cheaper than the usual $500 or $5000 devices.)  He also announced that he won one of the DARPA cyber fast track awards.  http://blog.ussjoin.com/2012/01/dropping-the-f-bomb.html
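
As a sketch of what item 3's "multilevel trust" might look like in policy form (the context names, action names, and audit rule below are my invention, loosely following the salesperson example from the talk), the decision becomes a granted capability set that depends on connection context rather than a binary allow/deny:

    POLICY = {
        "coffee_shop": {"view_customers", "view_order_status"},
        "hotel":       {"view_customers", "view_order_status",
                        "modify_orders", "view_pricing"},
        "corporate":   {"view_customers", "view_order_status",
                        "modify_orders", "view_pricing",
                        "modify_customers", "modify_pricing"},
    }
    AUDITED = {"hotel"}        # contexts where every access is logged

    def authorize(context, action):
        allowed = action in POLICY.get(context, set())
        if context in AUDITED:
            print("audit:", context, action, "allowed =", allowed)
        return allowed

    print(authorize("coffee_shop", "modify_orders"))   # False: read-only context
    print(authorize("hotel", "modify_orders"))         # True, and audited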

November 8, 2011

DARPA colloquium on future directions in cyber security

Filed under: Reviews — JLG @ 10:09 AM

I attended a DARPA cybersecurity workshop yesterday.  Three program managers spoke about programs that I found especially interesting:

1) Dan Roelker. “His research and development interests relate to cyberwarfare strategy, remote software deployment, network control theory, and binary analysis.” Two programs of note:

a) “Foundational cyberwarfare.” This topic includes exploit research, network analysis, planning and execution, cyberwarfare platform development, and visualization.

b) “Binary executable transforms.” This topic is narrowly focused on low-level code analysis and modification tools.

2) Peiter ‘Mudge’ Zatko. He’s introduced a new program designed to award small amounts of funding (~$50K) for small efforts (~months) in as little as four days after proposal submission, a timescale that I think is pretty exciting. The program:

c) “Cyber fast track.” “The program will accept proposals for all types of cyber security research and development. Of particular interest are efforts with the potential to reduce attack surface areas, reverse current asymmetries, or that are strategic, rather than tactical in nature. Proposed technologies may be hardware, software, or any combination thereof.”

3) Tim Fraser. “He is interested in cyber-security, specifically in using automation to give cyber defenders the same advantages in scope, speed, and scale that are presently too-often enjoyed only by the cyber attacker.” He has an ongoing program:

d) “Moving malware research forward.” I know one of his performers (SENTAR Inc.); they are working on malware classification technology that can extract distinguishing features from malware.

700 people attended the workshop. Other noteworthy themes from the event:

  • Across the board, DARPA seemed to be trying to be less quiet about its work on offensive cyber technologies — lending hope to our eventual ability to speak about such topics outside of a Top Secret darkroom. Several speakers (and I, previously) have mentioned that the CNE (computer network exploitation) and CND (computer network defense) sides of the house absolutely must inform each other to be effective. A speaker brought up the point that effective deterrence requires that your adversary understand what you are capable of.
  • Richard Clarke gave a fascinating talk where he questioned the U.S.’s ability to wage physical war on other countries when our own critical infrastructure is so devastatingly susceptible to cyber attack and disruption by those other countries.
  • He further stated that the only organization capable of defending the U.S. is the Department of Defense — that it is folly to rely on either the DHS or commercial entities to adequately protect themselves against nation-state adversaries. Several people recommended that I read his new book, “Cyber War: The Next Threat to National Security and What to Do About It”.
  • Several speakers suggested that there need to be true repercussions for anyone (a person or a state) who perpetrates a cyber attack against the United States. This is an interesting legal position that I had not heard advanced before.
  • Jim Gosler spoke to convey how we consistently underestimate the adversary, including his motives, resources, and capabilities. He gave an example of the Soviets successfully implanting keylogger-equivalents in typewriters in sensitive environments (project Gunman). If that is possible, anything is possible…and we need to adjust our thinking to what 100, or 1000, red teams could do in parallel with you as a target.

August 14, 2011

Black Hat USA 2011 and DEF CON 19

Filed under: Reviews — JLG @ 12:19 PM

This week I attended BH and DC for the first time.  My takeaways are:

1. Recent legislation and threats of lawsuits have had a chilling effect on the work presented at these conferences.

I felt there were very few “success stories” of systems hacked or security measures defeated.  Even the white-hat types who presented seemed subdued in the impact or scope of the projects they presented.  For example, one of my favorite talks was “Battery firmware hacking” by Dr. Charlie Miller; see for example http://hardware.slashdot.org/story/11/07/22/2021230/Apple-Laptops-Vulnerable-To-Battery-Firmware-Hack.  Miller described how he discovered the default unlock code for the battery firmware for Apple batteries, and showed some of the values that could be modified, and described the process of updating the firmware…but didn’t go so far as to show a video of a battery catching fire or exploding.  It seems implausible to me that he didn’t try it [he claimed he didn’t] — leading several attendees to opine that he was threatened with a lawsuit from Apple [as he allegedly was in previous years] if he did so.

2. Everyone under the sun identifies themselves as a “pen tester”.

Either there is far more work on penetration testing than I was aware of, or someone’s lyin’.  (One of my friends suggested that “pen tester” is also used as a G-rated term for someone who does computer network operations [CNO]-type work for hire, especially in the shady world of corporate espionage, so perhaps it’s just a catch-all euphemism.)  This made me wonder what competitive advantages are marketed in penetration testing — cost? speed? past performance? specialization by technology or by threat?

3. It’s probably a lot easier to be invited to speak at these conferences than you would think.

The quality of work presented was low, especially at DC.  If you are interested in presenting you need to have some sort of interesting hobby or side project.  Spend a couple of months hacking on an interesting enough idea and you could be standing on a stage in Vegas next summer.

4. It’s the year of the UAV!  (Unless that year was last year.)

There were at least two homebrew unmanned aerial vehicles on display, including a neat one that had vertical take-off and landing (VTOL) capability.  One of the BH talks was “Aerial Cyber Apocalypse: If we can do it… they can too” where the presenters (Richard Perkins and Mike Tassey) detailed the construction of their inexpensive autonomous UAV with 10-pound payload (in their case, with signals intelligence equipment onboard).  Yikes.  Based on my previous experience with the DoD SBIR program, I anticipate a surge in Government solicitations to detect and deflect UAVs over the next 1-2 years in response to the commoditization of cheap payload-capable UAV technology.

5. There is an exciting new DARPA program for hackers to get fast and short-term funding for their hacking.

The usual contracting process (get a DUNS number, then get another number, then put an accounting system in place, then competitively bid on a proposal, then wait 3-6 months, then get a contract in place, then deliver in 6-12 months) can take upwards of 6-7 years to transition useful technology to the Government.  A DARPA program manager (Peiter “Mudge” Zatko) has pushed through a program (DARPA-RA-11-52) wherein small groups of hackers can receive small amounts of money in only two weeks, without having to jump through the usual contracting hoops, and you retain commercial rights to the resulting work.  I’ve heard people talk before about trying to streamline the Government funding process but this is the first concrete example I’ve seen…I hope it works.

6. There were many talks on mobile device and mobile infrastructure [in]security, especially focused on Apple products.

These included:  A talk on behavior-based intrusion detection systems (a complementary approach to signature-based IDSes) in the context of mobile devices, drawing on similar work done on regular OSes (system calls made, resources utilized, Internet destinations contacted); a talk discussing kernel-level exploitation of iPhones using previously disclosed vulnerabilities (uninitialized kernel variables, kernel stack buffer overflows, out of bound writes and kernel heap buffer overflows); a talk identifying vulnerabilities in the Android OS and in applications on the Android Market; a talk discussing the length of time (surprisingly, months) it takes for Android security updates to be installed by at least 50% of vulnerable systems; and a talk about reverse-engineering the enterprise-targeted “mobile device management (MDM)” infrastructure available for IT departments to use to push security policies onto iOS devices (and how the MDM process could be co-opted by a sophisticated attacker to gain access to a device).

The work I found most interesting is as follows:

A. “Virtualization Under Attack: Breaking out of KVM”

When virtualization is used for security and isolation between VMs, it is important that the hypervisor itself isn’t a vulnerability vector.  The speaker used a known vulnerability in the Linux KVM (kernel virtual machine) implementation — KVM supports hotplugging but allows you to unplug a device that shouldn’t be unplugged (the emulated real time clock), resulting in an exploitable dangling pointer — to run shellcode on the host.  I didn’t see this talk in person but reportedly the talk concluded with a “demonstration against a live KVM instance”.  Note that this doesn’t represent a vulnerability inherent to all hypervisors, just one in an unpatched version of KVM.

B. “Beyond files undeleting: OWADE”

As described by the speakers:  “To reconstruct the users online activity from his hard-drive, we have developed OWADE (Offline Windows Analysis and Data Extraction), an open-source tool that is able to perform the advanced analysis required to extract the sensitive data stored by Windows, the browsers, and the instant messaging software.  OWADE decrypts and geolocates the historical WiFi data stored by Windows, providing a list of WiFi points the computer has accessed (including the locations of the access points to within 500 feet) and when each point was last accessed. It can also recover all the logins and passwords stored in popular browsers (Internet Explorer, Firefox, Safari, and Chrome) and instant messaging software (Skype, MSN live, Gtalk, etc.). Finally, it can reconstruct the users online activity by reconstructing their browsing history from various sources: browsers, the Windows registry, and the Windows certificate store.  In certain cases, OWADE is even able to partially recover the users data even when the user has utilized the browsers private mode.”

C. “Legal Aspects of Cybersecurity–(AKA) CYBERLAW: A Year in Review, Cases, issues, your questions my (alleged) answers”

The speaker provided a fascinating tour of cyber law precedent set over the previous year — for example, the decision in the Google wardriving case that just because a wireless network is unencrypted doesn’t mean that the general public is allowed to sniff traffic on the network; doing so may still violate the federal Wiretap Act if “the networks were configured to prevent the general public from gaining access to the data packets without the assistance of sophisticated technology.”  (I’m still trying to figure out what “configured to prevent” means in this case — does it mean the SSID wasn’t broadcast in the beacon frames?)

D. “SSL And The Future Of Authenticity”

The speaker spoke disparagingly about certificate-based authenticity in SSL.  Astonishingly, the author discovered (by looking up and cold-calling one of the original authors of SSL) that certificates were a last-minute design hack and were not thoroughly considered or evaluated.  As a result we have the certificate system we have today, where once you decide to trust a certificate authority you can never revoke that trust; see for example the Comodo hack earlier this year that resulted in false but valid certificates being issued in Iran for major sites (Google, Yahoo, Skype).  The speaker released a browser plugin that uses P2P-like multipath collaboration to determine the authenticity of the credentials presented by a remote site.  It will be interesting to see if the plugin catches on.

E. “Femtocells: A poisonous needle in the operator’s hay stack”

The speaker noted the rise of femtocells (home base stations to which your cellular phone can directly connect to make phone calls and transfer data) and described a fatal flaw in their design and deployment:  Whoever deploys such a device is able to overwrite the firmware on the femtocell and can interpose as a man-in-the-middle on voice and data communications; critically, the link between the phone and the femtocell is encrypted but the link between the femtocell and the cellular backend is *not*.  (The speaker demonstrated this using a real femtocell.)  Also, since the femtocell is a trusted element in the cellular network, it can both collect subscriber/location information from other femtocells on the network, and it can be used as a platform to DoS or otherwise attack the cellular network infrastructure.

F. “Lives On The Line: Defending Crisis Maps in Libya, Sudan, and Pakistan”

The speaker described “crisis mapping” — an interesting use of SMS messages for those in need to communicate with emergency responders during situations of disaster or civil unrest.  From the speaker’s paper: “Days after a 7.0 magnitude earthquake decimated the capital city of Haiti, a small team of technologists acquired the SMS shortcode 4636 and published the number throughout the disaster affected area. The project, which came to be known as Mission 4636, received over 50,000 SMS messages from citizens on the ground — messages containing calls for help from newly formed camps in open spaces such as sports fields and the locations of people trapped inside buildings.  The messages, most of which were received in Haitian Kreyol, were translated by an online team of over 1000 members of the Haitian diaspora collected through Facebook, then geolocated by additional online volunteers to pinpoint the location where the messages originated.  The processed messages were then forwarded to relief agencies on the ground[.]  Those reports enabled the response agencies to develop situational awareness on the ground and determine where aid was most needed.”  The speaker highlights unsolved vulnerabilities in crisis mapping (organization and authentication; platform choice and location; message collection, processing and presentation) and called for standardization work to address these vulnerabilities.

G. “Apple iOS Security Evaluation: Vulnerability Analysis and Data Encryption”

This work is of interest to anyone doing application development for the iOS platform; the paper surveys iOS security features including address space layout randomization (ASLR), code signing, sandboxing, and encryption.  Regarding encryption, the author concludes: “The Data Protection API in iOS is a well designed foundation that enables iOS applications to easily declare which files and Keychain items contain sensitive information and should be protected when not immediately needed. There are no obvious flaws in its design or use of cryptography. It is, however, too sparingly used by the built-in applications in iOS 4, let alone by third-party applications. This leaves the vast majority of data stored on a lost device subject to recovery if a remote wipe command is not sent in time.  The default iOS simple four-digit passcodes can be guessed in under 20 minutes using freely available tools, which will allow the attacker with physical access to the device to also decrypt any files or items in the Keychain that are protected using the Data Protection API. For this reason, it is crucial that sufficiently complex passcodes be used on all iOS devices.  Even with sufficiently complex passcodes, there are a number of sensitive passwords that may be recovered from a lost device, including passwords for Microsoft Exchange accounts, VPN shared secrets, and WiFi WPA passwords. This should be taken into account and these passwords should be changed if an iOS device storing them is lost.”

H. “Security When Nano-seconds Count”

The speaker described the computing architecture (bleeding edge) and security implications (no security whatsoever) of high-frequency trading computers on Wall Street.  This is an environment where microsecond delays in processing or communications can result in huge amounts of dollar losses.  In the speaker’s words: “For nearly all installations, the usual perimeter defensive mechanisms will be completely absent.  You won’t find a firewall, you won’t see routers with ACLs, you won’t see IDS and frankly, anything that you’d recognize as a security tool.  The essential reason that security devices are largely (if not wholly) absent from most implementations is that the best the IT Security industry can offer falls short.  Most commercial firewalls process data and add a few milliseconds of additional latency.  In the vast majority of interconnection scenarios, a few milliseconds isn’t that much of a problem.  In the case of low latency trading, it’s about 100,000 times too slow.  In addition to products which simply do not support this mode of operation, there’s a skills gap in the practitioner space….”

I. “Bit-squatting: DNS Hijacking without exploitation”

The speaker argues that bit flips in memory are more common than you’d think — given the number of devices and the amount of RAM deployed in the world, the speaker estimates a bit-flip rate of approximately 600k bit-flips per hour.  To test whether bit flips were occurring in practice, the speaker registered 31 “bit-flipped” domains such as “mic2osoft.com” (one bit away from “microsoft.com”).  Surprisingly, the author saw about 50 web requests per day to these domains (after manually filtering out web vulnerability scanners, search engine crawlers, and other web spiders) that could be attributed to memory bit-flips.  I’m not convinced of the rigor of the authors’ work, but the result is neat and certainly warrants further investigation.
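
For the curious, here is roughly how such candidate domains can be enumerated (a sketch of my own, not the speaker's tool; the speaker's selection of 31 registrable domains presumably involved further filtering):

    import string

    VALID = set(string.ascii_lowercase + string.digits + "-")

    def bitsquats(domain):
        name, tld = domain.rsplit(".", 1)
        out = set()
        for i, ch in enumerate(name):
            for bit in range(8):                     # flip each of the 8 bits
                flipped = chr(ord(ch) ^ (1 << bit))
                if flipped != ch and flipped in VALID:
                    cand = name[:i] + flipped + name[i + 1:]
                    if not cand.startswith("-") and not cand.endswith("-"):
                        out.add(cand + "." + tld)
        return sorted(out)

    squats = bitsquats("microsoft.com")
    print(len(squats))
    print("mic2osoft.com" in squats)   # True: 'r' and '2' differ by one bit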

J. “Chip & PIN is definitely broken: Credit Card skimming and PIN harvesting in an EMV world”

The speakers presented truly terrifying work.  The “chip and PIN” system uses a smartcard credit card in combination with a user-supplied PIN to authenticate a credit or debit transaction.  Previous work showed that chip-and-PIN cards can be used successfully without knowing the PIN; this work demonstrated that skimmers (to capture the card data plus the PIN) are easy to build and use.  The “terrifying” part is the authors’ observation that banks are shifting liability to the consumer now that the PIN is used for authentication: “the cardholder is assumed to be liable unless they can unquestionably prove they were not present for the transaction, did not authorize the transaction, and did not inadvertently assist the transaction through PIN disclosure.”  They noted a case in June 2011 where a Canadian bank refused to void a fraudulent $81,276 transaction because “our records show that this was a chip-and-PIN transaction. This means [the customer] personal card and personal PIN number were used in carrying out this transaction. As a result, [the customer] is liable for the transaction.”  Two of the four authors (all European) had fraudulent activity on their chip-and-PIN cards in the month preceding their talk.

K. “Corporate Espionage for Dummies: The Hidden Threat of Embedded Web Servers”

In the past I’ve seen work on how much information is stored in a network on auxiliary devices such as printers, photocopiers, and VoIP systems.  The authors revisited this topic, looking specifically for embedded web servers on a network, and published a tool (BrEWS) to identify and enumerate these devices on a network.  What I found interesting about their work was their use of Shodan, a queryable commercial database that can be used to find publicly visible embedded web servers (by searching for unique strings in the web server headers returned by these devices).  The authors note that Google and other companies try to block results for such vulnerable systems, but someone who knows where to look (Shodan in this case; I was previously unaware of that service) can easily and inexpensively buy the search results of interest.

I’m told that the full proceedings (including videos) are usually posted on the BH and DC web sites later in the year:
https://www.blackhat.com/html/bh-us-11/bh-us-11-home.html
https://www.defcon.org/html/defcon-19/dc-19-index.html

Final thoughts:

OVERALL IMPRESSION: I’ve previously attended only academic security conferences (CCS, NDSS, USENIX Security, DSN) and had been told to expect something different at BH and DC (i.e., haxx0rz).  I wasn’t disappointed (there were haxx0rz aplenty) although overall I was less impressed than I had expected to be.  Much of the work was simply incomplete — the CFP for both conferences closed in May and it turns out to be common practice for speakers to submit incomplete/work-in-progress ideas while planning to complete the work (or demonstrate the cool result) by the time August rolls around…but unfortunately many people weren’t able to complete the work or show the cool result.  Add to that the “chilling effect” described above and overall I felt the conference leadership really needs to address the quality-of-work problem.  Still, I’m glad I attended both conferences & I’ve walked away with stronger skills and knowledge.

BH VALUE: The most valuable thing about Black Hat was the vendor floor.  Unlike other vendor floors I’ve seen, this one had genuine engineers and techie types manning the booths — meaning that attendees could ask nitty-gritty questions about the products and services hawked by the vendors and get useful answers.  Many of the major CNO or security players had a booth so I was able to get a feel for the state of the industry and especially the state of commercial products to support network defense and computer forensics.  BH also ran a contest that required you to visit at least 15 of the vendor booths which turns out to be a great way to force you to talk with folks you wouldn’t have otherwise.  On the downside, a problem with BH is that they run multiple tracks (around 8 tracks simultaneously over the two days) meaning you miss many of the talks you would want to see; fortunately many of the slides and papers are available on the conference CD.

The Black Hat briefings are expensive (~$2000) but they are one of the leading venues for open CNO discussions.

DC VALUE: The most valuable thing about Def Con was what happened outside the talks.  The DC organizers work hard to give the event a spontaneous hacker vibe and to encourage spontaneous hacking (defined as “curiosity and exploration of cool things”).  So there was a room filled with soldering irons and electronics puzzles and challenges; there was a lockpicking room where you could buy lockpicking sets and locks to practice upon; there were contests everywhere and parties and movies every night and long lines and fifteen thousand reasonably hygienic people packed into a series of small rooms.  They even annually run an amateur radio exam session at DC (I took the exams and qualified for a General-class amateur radio license; I missed the Extra-class license by two questions out of 50).

Def Con is cheap ($150), is interesting, and is in a fun location (Vegas).  Of particular value is that many of the BH talks are repeated at DC.  Next year is their 20th conference and should be an especially good year to attend.

November 13, 2009

CCS 2009

Filed under: Reviews — JLG @ 9:31 PM

16th ACM Conference on Computer and Communications Security (CCS’09)
Chicago, Illinois
November 9-13, 2009

CCS is one of the top international security conferences (example topics: detecting kernel rootkits, RFID, privacy and anonymization networks, botnets, cryptography).  It is held annually in November.  This year there were 315 submitted papers from 31 countries, of which 18% were accepted after peer review.

I’ve attended CCS twice (2006 and 2009).  It is one of the best conferences I’ve ever attended — I find that the speakers describe practical, cutting edge, informative results; I keep up with old acquaintances and meet new ones; I keep sharp and up-to-date as a research scientist.

Here are some of the major themes from this year:

* ASCII-compliant shellcode:  My favorite paper of the conference is “English Shellcode”, where the authors developed a tool that takes malicious software as input and converts it into REAL ENGLISH PHRASES (drawn from Wikipedia and Project Gutenberg) that execute natively on 32-bit x86.  If you read no other paper this year, you simply must read this one; it is flat-out incredible.  There was another paper that uses only valid ASCII characters for shellcode on the ARM architecture.  These demonstrations are important because ASCII (and especially English ASCII) is likely to be passed through by network intrusion detection systems.  My favorite paper is here:

http://www.cs.jhu.edu/~sam/ccs243-mason.pdf

* Cloud computing:  Few authors of cloud-related papers seemed to address the cloudiness of their work, instead (and disappointingly) discussing generic distributed computing principles under a cloud umbrella.  The best cloud talk I saw was given by Ian Foster, an invited speaker at the cloud security workshop, who described the transition from grid computing to cloud computing thus: grid was about federation; cloud is about infrastructure and hosting.  He pointed out that the grid folks did a good job of developing (e.g., medical research) applications and executing analyses, but that it is the advent of data distribution and sharing in the cloud that is a game-changer in cloud computing.

* Anonymous communication:  There were several talks analyzing the efficacy of anonymization networks (mix networks, remailers, Tor, onion routing).  My takeaway is that these techniques work very well for latency-insensitive traffic (such as email), only moderately well for latency-sensitive traffic (such as web browsing), and not very well yet for high-bandwidth traffic (such as VoIP).  My favorite work was a poster on “Preventing SSL Traffic Analysis with Realistic Cover Traffic” (Nabil Schear and Nikita Borisov) where the authors change the statistical profile of your encrypted traffic such that existing analyses (such as measuring keystroke latencies) are impossible.

* Off-client emulation:  Several speakers described techniques for client-server applications (such as game clients running on customers’ home computers) that help ensure the correctness, robustness, or speed of the client application.  It’s impractical to run a complete copy of the client on the server (because one server handles many clients), so the authors generally create minimalist versions of the client (for example, a game client that contains no rendering code) that are server-efficient.  In the game example, the client would send the user’s commands (“turn left, walk forward”) to the server, where the minimalist client would verify that those commands didn’t result in an invalid state (such as walking through a wall) that would indicate cheating by the player.
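
The deployed systems are surely more sophisticated, but a toy of the game example might look like this (my sketch, not any paper’s code): the server replays the player’s movement commands through a stripped-down world model and rejects impossible states.

```python
# Toy server-side "minimalist client": replay movement commands through a
# bare-bones world model and flag states (like walking through a wall)
# that indicate cheating.
WALLS = {(2, 0), (2, 1), (2, 2)}             # grid cells you cannot enter
MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def verify_commands(start, commands):
    x, y = start
    for cmd in commands:
        dx, dy = MOVES[cmd]
        x, y = x + dx, y + dy
        if (x, y) in WALLS:
            return False                      # invalid state: likely cheating
    return True

print(verify_commands((0, 0), ["E", "E"]))    # False: walks into a wall
print(verify_commands((0, 0), ["N", "E"]))    # True
```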

* Function-call graphs:  These are well-known techniques for tracing how an application executes (you create a graph of the application’s control flow).  The technique kept popping up during the conference: using call graphs to identify when someone has violated your software license and included your source code in their application, and using them inside a hypervisor to identify when a kernel rootkit is present in a virtual machine, based on the different hypercalls it generates.  One attendee I had lunch with was very critical of the function-call-graph technique (using an argument I didn’t really follow), but otherwise the technique seems useful.
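
For flavor, here is a minimal static call-graph extractor for Python source using the standard ast module (illustrative only; the conference papers build their graphs from binaries and hypercall traces rather than from source):

```python
# Minimal static call-graph extractor for Python source.
import ast
from collections import defaultdict

SOURCE = """
def parse(data):    return clean(data)
def clean(data):    return data.strip()
def main():         parse("x"); report()
def report():       pass
"""

graph = defaultdict(set)
for func in ast.walk(ast.parse(SOURCE)):
    if isinstance(func, ast.FunctionDef):
        for node in ast.walk(func):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                graph[func.name].add(node.func.id)   # edge: caller -> callee

for caller, callees in sorted(graph.items()):
    print(caller, "->", sorted(callees))   # main -> [parse, report], etc.
```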

* Power grids:  The currently-hot topic in security research is power grids and smart meters.  There are projects at Penn State, Carnegie Mellon, and Johns Hopkins, and I’m certain at many other places.  There was a tutorial, a paper, and several posters all discussing security issues in the power grid.  The most interesting aspect to me was attacks against state estimators: the researchers described techniques to manipulate the system components involved in measuring and predicting the state of generators, transmission lines, etc.  However, the research community still suffers from a dearth of real-world information about how these networks operate and where the real vulnerabilities might be.
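
If memory serves, the flavor of the state-estimator result is the false-data-injection observation: if an attacker who knows the measurement matrix H can add a = Hc to the sensor readings, the least-squares residual that bad-data detection relies on is completely unchanged.  A numpy demonstration (my own sketch with random numbers, not the researchers’ code):

```python
# False-data injection against least-squares state estimation: adding
# a = H @ c to the measurements leaves the bad-data residual unchanged.
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 3))               # measurement matrix: 8 sensors, 3 states
x = rng.normal(size=3)                    # true system state
z = H @ x + 0.01 * rng.normal(size=8)     # noisy measurements

def residual(z):
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)   # least-squares estimate
    return np.linalg.norm(z - H @ x_hat)            # bad-data detector's test

c = np.array([5.0, -2.0, 1.0])            # attacker's chosen state perturbation
z_attacked = z + H @ c                    # injected false data

print(residual(z), residual(z_attacked))  # identical: attack goes undetected
```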

* RFID:  As we already know, it is possible to do RFID well, but none of the actually deployed RFID implementations do it well.  One classic observation by a speaker concerned the RFID-enabled drivers licenses issued in Washington State (in advance of the Winter Olympics), which include a KILL command that’s supposed to be protected with a unique PIN but in reality is left at a default PIN…meaning that anyone with a transmitter and sufficient power could permanently kill a license’s tag.

* Ethical standards for security researchers:  One paper raised an ethical issue in its appendix (how can we do security research inside Amazon’s cloud computing infrastructure in a manner that doesn’t violate their terms of service?) and some researchers from the Stevens Institute have published a report and are organizing a workshop to investigate ethical standards for security researchers.  I didn’t really agree with many of the points made (my ethical line is drawn much further to the left: security researchers should have few constraints) but it was a hotly discussed and debated issue during the session breaks.

Wolfram Schulte at Microsoft Research gave an invited workshop talk on their Singularity OS project (reinventing the OS from scratch; using software-enforced isolation instead of relying on hardware memory-management techniques).  It’s an interesting project but impractical, since it would require such a widescale retooling by developers that very little development would happen for a while.  The work was inspired by his team’s frustration with using best-practices formal verification (etc.) techniques for software development — or, taken another way, it was so frustrating when a blue-sky team tried to use existing techniques to develop and prove major software projects that they gave up.  That doesn’t bode well for using those techniques extensively in any real-world software development project (although they can still be very useful and insightful…just frustrating).
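
Singularity’s software-isolated processes actually rely on verified type-safe code rather than runtime checks, but the older software-fault-isolation trick conveys the flavor of isolation without hardware memory management: mask every address so a module physically cannot reference memory outside its own region.  A toy sketch, all constants invented:

```python
# Flavor of software-enforced isolation (classic SFI address masking, not
# Singularity's actual mechanism): force every memory access into the
# module's own region with a bit mask, no MMU involved.
MEMORY = bytearray(1 << 16)       # toy flat address space
SANDBOX_BASE = 0x4000             # this module's region: 0x4000-0x7fff
SANDBOX_MASK = 0x3FFF             # offsets wrap within a 16 KiB region

def sandboxed_store(addr: int, value: int) -> None:
    # The mask alone guarantees the effective address stays inside
    # [SANDBOX_BASE, SANDBOX_BASE + SANDBOX_MASK].
    MEMORY[SANDBOX_BASE | (addr & SANDBOX_MASK)] = value & 0xFF

sandboxed_store(0x0042, 7)        # lands at 0x4042, inside the sandbox
sandboxed_store(0xFFFF, 9)        # would escape; masked back inside
```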

Also a shout-out to my student Brendan O’Connor for delivering a well-received talk on stock markets for reputation at the digital identity workshop.

November 14, 2008

Information Assurance Conference

Filed under: Reviews — JLG @ 8:53 PM

In November 2008 I attended an “Information Assurance Conference” in Arlington, Virginia. This was a non-refereed two-day workshop of 30-minute talks on policy-level IA issues in the DoD and homeland security environments. The most interesting takeaways were:

  • If you are an organization that wants information assurance, give someone the high-level independent power to veto (or vet) which applications are allowed to use the network.

The U.S. Marine Corps has an outstanding example of this power used successfully: “the HQMC IA division will be the single point of contact within the marine corps for IA program, policy matters and oversight…[Mr. Ray Letteer] has authority to approve or disapprove an application or system for connection to [all Marine Corps core networks].” And, according to Ray (the speaker), the USMC really has given him the teeth to enforce his team’s IA policies.

Such a position of course requires diplomacy and tact: Ray mentioned that he carefully vets the classifications of potential vulnerabilities to make sure only applications with demonstrable and unmitigatable vulnerabilities are ultimately banned from the network; he described his role as translating geek-speak for the senior officers to convey the need for the restrictions his team enforces.

After a cursory look I feel that this USMC approach could serve as a best-practices reference model for many other large organizations. Another speaker noted that the traditional corporate and DoD approach is to have local administration (each division-sized entity has its own IA unit as part of its IT function), whereas the military is moving toward a single unifying enforcement point staffed by well-trained operators. (I asked “isn’t homogeneity terrifying?”; other speakers responded that homogeneity doesn’t have to mean single-point-of-failure — they are not talking about one point of deployment, they are talking about unified policy across all points of deployment.)

  • If you need an ROI (return on investment) story to sell an IA strategy to your management, you’re in luck.

Three speakers emphasized the availability of ROI metrics. Joe Jarzombek described the free software assurance tools that are available from the Department of Homeland Security. As part of that effort DHS published seven articles on making a business case for software assurance (sample title “A Common Sense Way to Make the Business Case for Software Assurance”; click on the “Business Case” link at the above site) and recently held a workshop on the topic.

Two other speakers suggested taking a nonstandard approach in selling security investments to your upper management: instead of justifying your existence, focus on demonstrating your continued competence. For example, present graphical weekly metrics of how many port scans you thwarted, or of how many of the new security vulnerabilities announced by antivirus companies you prevented from affecting your network.

Or, pick some of the low-hanging fruit to impress the bosses: Dr. Eric Cole of Lockheed-Martin mentioned a client engagement where his team was asked to suggest architectural changes to a network that was operating at 99% utilization. After looking at the network traffic, his team simply blocked 74% of the outgoing connections (i.e., those connections which could not be traced to a business purpose). Nobody complained, and the utilization was reduced to 55% at no cost to the customer.
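
I don’t know how Cole’s team actually did that analysis, but a crude version of the idea is a pass over outbound flow logs that flags destinations matching no known business service.  A toy sketch (all addresses, ports, and policy invented):

```python
# Crude outbound-flow triage: flag connections with no business purpose
# on record as candidates for blocking.
APPROVED_PORTS = {25, 53, 80, 443}                 # mail, DNS, web
APPROVED_NETS = ("10.", "192.168.")                # internal ranges

flows = [                                          # (dst_ip, dst_port, bytes)
    ("10.1.2.3", 443, 120_000),
    ("203.0.113.9", 6667, 48_000),                 # IRC to the internet?
    ("192.168.7.7", 25, 3_000),
    ("198.51.100.4", 31337, 9_000),
]

block_candidates = [
    f for f in flows
    if f[1] not in APPROVED_PORTS and not f[0].startswith(APPROVED_NETS)
]
for dst, port, nbytes in block_candidates:
    print(f"no business purpose on record: {dst}:{port} ({nbytes} bytes)")
```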

  • If you are not a member of senior management, you need to learn to speak the language of senior management.

This theme came up over and over during the workshop. “Speak the language of executives — translate your geek-speak into business objectives!” All I can say is: I agree.

Four other quick notes from the workshop:

Whitelisting: One speaker mentioned a trend toward whitelisting web sites as a means of IA in military computer networks. (Whitelisting is enumerating the list of acceptable sites and denying access to any other sites.) I hadn’t heard that before — can anyone confirm you’re seeing this?
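
Conceptually a whitelist is trivial; a toy version of the rule a filtering proxy would apply might look like this (my sketch; I have no details on the military deployments in question):

```python
# Toy whitelist check of the kind a filtering web proxy would apply.
from urllib.parse import urlparse

WHITELIST = {"example.mil", "weather.gov", "update.vendor.example"}

def allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Permit exact matches and subdomains of whitelisted sites; deny the rest.
    return any(host == d or host.endswith("." + d) for d in WHITELIST)

print(allowed("https://weather.gov/forecast"))   # True
print(allowed("https://evil.example.com/"))      # False
```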

COTS: Is COTS still on the rise? Some speakers and attendees noted a trend toward COTS software and hardware, chiefly for the lower purchase costs and especially for the comparatively low maintenance costs. Others noted that there remain many applications, especially in classified domains, where commercial vendors are unwilling to tweak their products to fit the needs of the space, and/or there is too much inertia or turf-war to switch away from specialized development systems.

Metadata: I was delighted to see a talk about metadata by Carol Farrant, whose team is interested in collecting, analyzing, and using metadata in data management for the intelligence and military communities. Of the technologies I heard discussed during the workshop, this is the one whose core technologies are arguably the least developed in the research and commercial environments. Unfortunately her team is underfunded and understaffed, so she is actively seeking volunteers to help move things along. (She notes that in the past year she’s seen more volunteer interest on the topic than on anything else in her career.) This might be an opportunity for an academic to have a big influence on metadata use and tool development.

FPGAs: I’ve been a fan of programmable logic since working with FPGAs in Dr. Richard Chapman’s research lab at Auburn. The final speaker of the workshop, Jonathan Ellis, claimed that the moment is at hand for reconfigurable logic to be used the way it was always intended — specifically, actually reprogramming the chips (frequently) during normal operations. Vendors are currently working to make this possible (if I heard correctly: although the chips can support multiple independent execution units, they currently have to be completely wiped to be reprogrammed. Not for long.) FPGAs have come a long way in 10 years: he asserts that software toolkits for ease of programming and implementation — arguably the biggest barrier to their widespread use — are right around the corner.  He also noted that the current thinking is that if you are building 100,000 or fewer units of something like a cell phone, it’s more cost-effective and time-efficient to pump out FPGAs (instantly available and upgradable) than to send off for ASIC fabrication (expensive, with a two-month lead time).
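
His cost claim is easy to sanity-check with invented numbers: an ASIC trades a large one-time non-recurring engineering (NRE) charge for a low per-unit cost, so there is a break-even volume below which FPGAs win.  All figures below are made up for illustration:

```python
# Back-of-the-envelope FPGA-vs-ASIC break-even; every number is invented.
ASIC_NRE = 1_500_000     # mask set + fabrication setup (one-time, assumed)
ASIC_UNIT = 5            # per-chip cost at volume (assumed)
FPGA_UNIT = 25           # per-chip cost, no NRE (assumed)

def total(units, nre, unit_cost):
    return nre + units * unit_cost

break_even = ASIC_NRE // (FPGA_UNIT - ASIC_UNIT)    # volume where costs cross
print(break_even)                                    # 75,000 with these numbers
for units in (10_000, 75_000, 500_000):
    print(units, total(units, ASIC_NRE, ASIC_UNIT), total(units, 0, FPGA_UNIT))
```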

I thank the hosts of this event, Technology Training Corporation, for sending me a complimentary pass to attend the workshop. (This workshop was similar to the “cyber security conference” I attended in June.) Overall I would likely not attend this workshop again, as I (as a practitioner of basic and advanced research) am not really in their target audience. People who I think would be interested are people involved in policy-level marketing and sales for large government contractors, Marc Krull, and government employees involved with large program development and management.

October 9, 2008

NSRC industry day

Filed under: Reviews — JLG @ 10:12 PM

This week I attended the 5th annual industry day at the Networking and Security Research Center (NSRC) at Penn State University. The event was similar in format to other industry days I’ve attended (CMU, Stony Brook) but with a more focused core of industry guests, primarily from telecom companies and large government contractors.

My main interest was in the work of professors Trent Jaeger and Patrick McDaniel of the Systems and Internet Infrastructure Security (SIIS) laboratory. Their students are working on several projects of interest to Jagged, including:

Another NSRC focus is on wireless networking research (cellular, sensor, 802.11, vehicular, you name it). An upside of their work is that it is strongly focused on real-world problems reported by companies — for example, CDMA2000-WiMAX internetworking. A related downside is that it wasn’t clear what academic (basic research) lessons could be drawn from some of the work; some of the results felt limited in scope and applicability to only a specific problem.

All the posters from the industry day are available here:

http://nsrc.cse.psu.edu/id08.html

The most interesting and controversial talk at the event was a keynote by Mr. Steven Chabinsky, the deputy director of the Joint Interagency Cyber Task Force. He advanced the idea that we as a nation have let ourselves be “seduced” by technology, by plowing ahead with deployments of untested and unreliable technology at critical infrastructure points without first fully understanding (or mitigating) the risks and consequences of failure. He called on us as researchers and companies to consider the full spectrum of threat, vulnerability, and consequence in our technological innovations. A lively discussion ensued after the talk regarding the economic incentives to deploy unreliable technology; among the topics discussed were:

  • Will better policy decisions be made when cyber risks are better understood? The speaker described a current lack of capabilities to quantify risk, either as an absolute or as a comparative measurement. This is especially true in low-probability but extremely-high-damage scenarios such as directed attacks against components of the power grid. I felt this observation makes an excellent point, and highlights a mental gap between the way that engineers think of technology and the way that decisionmakers compare among technologies (a toy illustration of the quantification problem appears after this list). Perhaps the government should fund some new studies along these lines?
  • Where should the government draw the line between regulation and deregulation? There are several non-regulatory actions the government could take to constructively assist companies in developing hardened products (say, that control water processing plants), such as making supplemental development grants available to companies whose technology will be used in critical infrastructure. On one hand, I feel that government should more actively oversee and regulate (and pay for) these kinds of technologies. But perhaps the problem is more complex than I realize — e.g., perhaps one gets a qualitatively better product through open-market competition than one would through contract specification and regulatory compliance. Anyone have an opinion on this?
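
Here is the toy illustration promised above of why a single expected-loss number hides exactly what decisionmakers care about: two risks with identical expected annual loss but wildly different tails (all numbers invented):

```python
# Two risks with the same expected annual loss; the single number hides
# the difference between a routine nuisance and a catastrophe.
scenarios = {
    # name: (annual probability, consequence in dollars) -- assumed values
    "laptop theft":        (0.50, 2_000_000),
    "grid component hack": (0.0001, 10_000_000_000),
}
for name, (p, loss) in scenarios.items():
    print(f"{name:22s} expected annual loss = ${p * loss:,.0f}")
# Both print $1,000,000, yet only one is an existential event.
```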

Mr. Chabinsky’s point was underscored later in the day in a talk on the Ohio EVEREST voting study. Patrick McDaniel discussed how the Help America Vote Act effectively caused an insufficiently-tested prototype technology (electronic voting machines), built for a low-profit-margin customer (the government), to be thrust into mandatory and widespread use in a critical environment (the legitimacy of our democracy) in only a few years. He concluded (as Avi Rubin and others have also concluded) that current systems are fundamentally flawed and unsecurable. In light of the above discussion, these fundamental flaws represent a failure of technologists (as well as many others) — both (a) in our inability to architect reliable systems and (b) in our inability to adequately inform public policy officials of the true readiness of proposed technologies.

This latter problem — coherently describing and conveying the capabilities and limitations of computer systems in a non-expert human-comprehensible manner — is one of the topics that has long interested me, especially in the context of information sharing in sensitive or classified environments. Anyone want to join us in working on this problem?

August 26, 2008

High end computing workshop

Filed under: Reviews — JLG @ 3:44 PM

In August 2008 I attended the HEC FSIO workshop on file system and I/O (FSIO) research in support of high-end computing (HEC).

This HEC focus was interesting for a systems guy like me — think “systems that run detailed atmospheric simulations for weather prediction” and like environments where such words as “parallel”, “(peta)scale”, and “throughput” are bandied about. (Sample presentation title: Improving scalability in parallel file systems for high end computing.)

The primary attendees and presenters were academic PIs funded under a joint NSF/DOE program called HECURA. This program chooses a new theme each year for its solicitations: last year’s was compilers; this fall’s will be FSIO (as it was three years ago). All presentations from this workshop are available here:

The work was all interesting but old; most of it had been presented and discussed at the great conferences of yore. What I ended up enjoying the most from this workshop was an “Industry Storage Device Research Panel” with two fabulous presentations:

The above two talks are a great introduction to, respectively, the future of magnetic storage & the future of alternatives to magnetic storage.

The most interesting thing I learned is DOE’s archival storage model. If you want to archive something, you FTP PUT it onto an enormous server containing everything else that’s been archived in the last 60 years. If you want to retrieve it, you FTP GET it. (I didn’t learn how you locate the item you want, but there must be a standard naming scheme or an index — if you know please send me a note.) I chatted briefly with Mark Gary, data storage group leader at LLNL, about the differences between that model and all the digital preservation issues we touched upon in the class I co-taught this Spring (metadata generation, textual normalization, ontology standardization, language translation, QoS, security, access methods, historical ingest, etc.) Mark made the point that their KISS approach, while limited in functionality at first glance, both works well and continues to do exactly what their users need.
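
As described, the interface really is just FTP.  In Python’s standard ftplib the whole archival workflow would look something like this (hostname, credentials, and path are invented placeholders; I don’t know DOE’s actual naming scheme):

```python
# The archive interface as described: plain FTP PUT and GET.
from ftplib import FTP

with FTP("archive.example.gov") as ftp:
    ftp.login(user="jlg", passwd="...")
    with open("simulation-2008.tar", "rb") as f:
        ftp.storbinary("STOR /archive/jlg/simulation-2008.tar", f)        # PUT
    with open("restored.tar", "wb") as f:
        ftp.retrbinary("RETR /archive/jlg/simulation-2008.tar", f.write)  # GET
```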
