Jagged Thoughts | Dr. John Linwood Griffin

November 20, 2012

Gabbing to the GAB

Filed under: Opinions,Work — JLG @ 12:00 AM

Earlier this month the (ISC)² U.S. Government Advisory Board (GAB) invited me to present my views and opinions to the board.  What a neat opportunity!

The GAB is a group of mostly federal agency Chief Information Security Officers (CISOs) or similar executives.  Officially it comprises “10-20 senior-level information security professionals in their respective region who advise (ISC)² on industry initiatives, policies, views, standards and concerns,” with goals that include offering deeper insights into the needs of the information security community and discussing matters of policy or initiatives that drive professional development.

In terms of content, in addition to discussing my previous work on storage systems with autonomous security functionality, I advanced three of my personal opinions:

  1. Before industry can develop the “cybersecurity workforce of the future” it needs to figure out how to calculate the return on investment (ROI) for IT/security administration.  I suggested a small initial effort to create an anonymized central database for security attacks and the real costs of those attacks.  If such a database were widely available at nominal cost (or free) then an IT department could report on the value of their actions over the past year: “we deployed such-and-such a protection tool, which blocks against this known attack that caused over $10M in losses to a similar organization.”  Notably, my suggested approach is constructive (“here’s what we prevented”) rather than negative (“fear, uncertainty, and doubt / FUD”).  My point is that coming at the ROI problem from a positive perspective might be what makes it work.
  2. No technical staff member should be “just an instructor” or “just a developer.”  Staff hired primarily as technical instructors should (for example) be part of an operational rotation program to keep their skills and classroom examples fresh.  Likewise, developers/programmers/etc. should spend part of their time interacting with students, or developing new courseware, or working with the sales or marketing team, etc.  I brought up the 3M (15%) / Hewlett-Packard Labs (10%) / Google (20%) time model and noted that there’s no reason that a practical part-time project can’t also be revenue-generating; it just should be different (in terms of scope, experience, takeaways) from what the staff member does the rest of their time.  My point is that treating someone as “only” an engineer (developer, instructor, etc.) does a disservice not just to that person, but also to their colleagues and to their organization as a whole.
  3. How will industry provide the advanced “tip-of-the-spear” training of the future?  One curiosity of mine is how to provide on-the-job advanced training.  Why should your staff be expected to learn only when they’re in the classroom?  Imagine if you could provide your financial team with regular security conundrums — “who should be on the access control list (ACL) for this document?” — that you are able to generate, monitor, and control.  Immediately after they take an action (setting the ACL), your security system provides them with positive reinforcement or constructive criticism as appropriate (see the sketch following this list).  My point is that if your non-security-expert employees regularly deal with security-relevant problems on the job, then security will no longer be exceptional to your employees.
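
Here is that sketch, in Python.  Everything in it — the employee names, the “expected” ACL, the grading function — is hypothetical, invented purely to illustrate the immediate-feedback loop I have in mind:

    # Toy feedback loop for a planted ACL "conundrum."  All names and
    # policies here are invented for illustration.
    EXPECTED_ACL = {"alice", "bob"}  # the people who actually need access

    def grade_acl_choice(employee, chosen_acl):
        """Give immediate feedback the moment the employee sets an ACL."""
        extra = chosen_acl - EXPECTED_ACL    # overshared: no need to know
        missing = EXPECTED_ACL - chosen_acl  # undershared: blocks real work
        if not extra and not missing:
            return employee + ": correct -- least privilege, nobody blocked."
        notes = []
        if extra:
            notes.append("remove " + ", ".join(sorted(extra)))
        if missing:
            notes.append("add " + ", ".join(sorted(missing)))
        return employee + ": not quite -- " + "; ".join(notes)

    print(grade_acl_choice("carol", {"alice", "bob", "everyone"}))
    print(grade_acl_choice("dave", {"alice", "bob"}))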

I had a blast speaking.  The GAB is a group of great folks and they kept me on my toes for most of an hour asking questions and debating points.  It’s not every day that you get to engage high-level decision makers with your own talking points, so my hope is that I gave them some interesting viewpoints to think about — and perhaps some new ideas on which to take action inside their own agencies and/or to advise the government.

November 1, 2012

2012 Conference on Computer and Communications Security

Filed under: Reviews,Work — JLG @ 12:00 AM

In October I attended the 19th ACM Conference on Computer and Communications Security (CCS) in Raleigh, North Carolina.  It was my fourth time attending (and third city visited for) the conference.

Here are some of my interesting takeaways from the conference:

  • Binary stirring:  The point of Binary Stirring is to end up with a completely different (but functionally equivalent) executable code segment each time you load a program.  The authors duplicate “each code segment into two separate segments—one in which all bytes are treated as data, and another in which all bytes are treated as code.  …In the data-only copy (.told), all bytes are preserved at their original addresses, but the section is set non-executable (NX).  …In the code-only copy (.tnew), all bytes are disassembled into code blocks that can be randomly stirred into a new layout each time the program starts.”  (The authors measured about a 2% performance penalty from mixing up the code segment.)
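
To make the stirring idea concrete, here’s a toy sketch in Python — my own illustration, not the authors’ algorithm.  It treats a program as a list of labeled basic blocks and produces a freshly shuffled layout on every “load,” resolving branches through the new label-to-address map so behavior is preserved while addresses change:

    import random

    # Each "basic block": (label, body).  Branches go by label, so blocks
    # can live at any address.  This representation is invented for
    # illustration; the real system stirs x86 machine code.
    BLOCKS = [
        ("entry",    "x = read(); goto check"),
        ("check",    "if x > 0 goto positive else goto negative"),
        ("negative", "print('neg'); halt"),
        ("positive", "print('pos'); halt"),
    ]

    def stir(blocks):
        """Shuffle blocks into a fresh layout; return it with its address map."""
        layout = blocks[:]
        random.shuffle(layout)
        addresses = {label: i for i, (label, _) in enumerate(layout)}
        return layout, addresses

    for load in range(2):  # two "program loads" => two different layouts
        layout, addr = stir(BLOCKS)
        print("load %d:" % load,
              ", ".join("%s@%d" % (label, addr[label]) for label, _ in layout))

Any code address an attacker learned from one load is useless on the next, which is exactly what defeats a precomputed chain of gadgets.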

But why mix the executable bytes at all?  Binary Stirring is intended to protect against clever “return-oriented programming” (ROP) attacks by eliminating all predictable executable code from the program.  If you haven’t studied ROP (I hadn’t before I attended the talk) then it’s worth taking a look, just to appreciate the cleverness of the attack & the challenge of mitigating it.  Start with last year’s paper Q: Exploit Hardening Made Easy, especially the related work survey in section 9.

  • Oblivious RAM (ORAM):  Imagine a stock analyst who stores gigabytes of encrypted market information “in the cloud.”  In order to make a buy/sell decision about a particular stock (say, NASDAQ:TSYS), she would first download a few kilobytes of historical information about TSYS from her cloud storage.  The problem is that an adversary at the cloud provider could detect that she was interested in TSYS stock, even though the data is encrypted in the cloud.  (How?  Well, imagine that the adversary watched her memory access patterns the last time she bought or sold TSYS stock.  Those access patterns will be repeated this time when she examines TSYS stock.)  The point of oblivious RAM is to make it impossible for the adversary to glean which records the analyst downloads.
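
A deliberately naive way to see the goal: if every logical read touches every stored block, the provider’s access log looks identical no matter which record was wanted.  The Python toy below (my own sketch, eliding the encryption and re-shuffling that real ORAM constructions add) shows that the log for a TSYS lookup matches the log for any other lookup; actual schemes get the cost down from a full scan to polylogarithmic overhead:

    class ObliviousStore:
        def __init__(self, blocks):
            self.blocks = list(blocks)
            self.access_log = []        # what the adversarial server sees

        def read(self, wanted_index):
            result = None
            for i, block in enumerate(self.blocks):  # touch EVERY block
                self.access_log.append(i)
                if i == wanted_index:                # keep the one we wanted
                    result = block
            return result

    store = ObliviousStore(["AAPL history", "TSYS history", "GOOG history"])
    store.read(1)                        # fetch TSYS
    log_tsys = store.access_log[:]
    store.access_log.clear()
    store.read(0)                        # fetch AAPL
    print(store.access_log == log_tsys)  # True: the reads are indistinguishable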

  • Fully homomorphic encryption:  The similar concept of fully homomorphic encryption (FHE) was discussed at some of the post-conference workshops.  FHE is the idea that you can encrypt data (such as database entries), store it “in the cloud,” and then have the cloud do computation for you (such as database searches) on the encrypted data, without ever decrypting it.

When I first heard about the concept of homomorphic encryption (circa 2005, from some of my excellent then-colleagues at IBM Research) I felt it was one of the coolest things I’d encountered in a company filled with cool things.  Unfortunately FHE is still somewhat of a pipe dream — like ORAM, it’ll be a long while before it’s efficient enough to solve any practical real-world problems — but it remains an active area of interesting research.
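
Full FHE is far too heavyweight for a blog snippet, but the flavor of “computing on ciphertexts” is easy to demo with textbook RSA, which happens to be multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts.  A toy only — tiny primes, no padding, wildly insecure — and it assumes Python 3.8+ for the modular-inverse form of pow():

    p, q = 61, 53
    n = p * q                          # public modulus
    e = 17                             # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

    def enc(m): return pow(m, e, n)
    def dec(c): return pow(c, d, n)

    a, b = 7, 6
    c = (enc(a) * enc(b)) % n          # the "cloud" multiplies ciphertexts
    print(dec(c) == (a * b) % n)       # True: we computed on encrypted data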

  • Electrical network frequency (ENF):  In the “holy cow, how cool is that?” category, the paper “How Secure are Power Network Signature Based Time Stamps?” introduced me to a new forensics concept: “One emerging direction of digital recording authentication is to exploit a potential time stamp originated from the power networks. This time stamp, referred to as the Electrical Network Frequency (ENF), is based on the fluctuation of the supply frequency of a power grid.  … It has been found that digital devices such as audio recorders, CCTV recorders, and camcorders that are plugged into the power systems or are near power sources may pick up the ENF signal due to the interference from electromagnetic fields created by power sources.”  Wow!

The paper is about anti-forensics (how to remove the ENF signature from your digital recording) and counter-anti-forensics (how to detect when someone has removed the ENF signature).  The paper’s discussion of ENF analysis reminded me loosely of one of my all-time favorite papers, also from CCS, on remote measurement of CPU load by measuring clock skew as seen through TCP (transmission control protocol) timestamps.
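
The basic ENF measurement those papers build on is simple enough to sketch: slice the recording into windows and track the dominant frequency near 60 Hz (50 Hz in Europe) over time.  Here’s a self-contained Python/NumPy toy that synthesizes its own “recording” — a hum whose frequency wobbles the way a real grid’s does, buried in noise — since I can’t ship real audio; a forensic pipeline would add filtering, finer frequency estimators, and comparison against grid reference logs:

    import numpy as np

    fs = 8000                                 # sample rate, Hz
    t = np.arange(0, 10, 1.0 / fs)            # ten seconds of "audio"
    grid = 60 + 0.2 * np.sin(2 * np.pi * 0.1 * t)   # drifting grid frequency
    hum = np.sin(2 * np.pi * np.cumsum(grid) / fs)  # mains hum with that drift
    recording = hum + 0.5 * np.random.randn(len(t)) # buried in noise

    win = fs                                  # one-second analysis windows
    pad = 8 * win                             # zero-pad for finer peak estimates
    for start in range(0, len(recording) - win + 1, win):
        frame = recording[start:start + win] * np.hanning(win)
        spectrum = np.abs(np.fft.rfft(frame, n=pad))
        freqs = np.fft.rfftfreq(pad, 1.0 / fs)
        band = (freqs > 59) & (freqs < 61)    # only search near 60 Hz
        peak = freqs[band][np.argmax(spectrum[band])]
        print("%2ds: ENF ~ %.3f Hz" % (start // fs, peak))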

  • Resource-freeing attacks (RFA):  I also enjoy papers about virtualization, especially regarding the fair or unfair allocation of resources across multiple competing VMs.  In the paper “Resource-Freeing Attacks: Improve Your Cloud Performance (at Your Neighbor’s Expense)”, the authors show how to use antisocial virtual behavior for fun and profit: “A resource-freeing attack (RFA) [improves] a VM’s performance by forcing a competing VM to saturate some bottleneck. If done carefully, this can slow down or shift the competing application’s use of a desired resource. For example, we investigate in detail an RFA that improves cache performance when co-resident with a heavily used Apache web server. Greedy users will benefit from running the RFA, and the victim ends up paying for increased load and the costs of reduced legitimate traffic.”

A disappointing aspect of the paper is that the authors don’t spend much time discussing how one can prevent RFAs.  Their suggestions are (1) use a dedicated instance, or (2) build better hypervisors, or (3) do better scheduling.  That last suggestion reminded me of another of my all-time favorite research results, from last year’s “Scheduler Vulnerabilities and Attacks in Cloud Computing”, wherein the authors describe a “theft-of-service” attack:  A virtual machine calls Halt() just before the hypervisor timer fires to measure resource use by VMs, meaning that the VM consumes CPU resources but (a) isn’t charged for them and (b) is allocated even more resources at the expense of other VMs.
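
The sampling flaw is easy to reproduce in a toy simulation.  In the Python sketch below (numbers invented; the real attack targets Xen’s credit scheduler), the hypervisor charges whichever VM happens to be running at each periodic accounting tick, and the attacker computes for 90% of every interval but always halts just before the tick fires:

    TICK_US = 10_000       # microseconds between accounting ticks
    TICKS = 1_000          # simulated duration: 10 seconds

    used = {"attacker": 0, "victim": 0}      # actual CPU consumed
    charged = {"attacker": 0, "victim": 0}   # what the scheduler bills

    for _ in range(TICKS):
        used["attacker"] += int(TICK_US * 0.9)  # attacker computes 90%...
        used["victim"] += int(TICK_US * 0.1)    # ...victim gets the rest,
        charged["victim"] += TICK_US            # including the sampling
                                                # instant, so the victim is
                                                # billed for the whole tick

    for vm in ("attacker", "victim"):
        print("%-8s used %4.1fs, charged for %4.1fs"
              % (vm, used[vm] / 1e6, charged[vm] / 1e6))

The attacker ends up using nine seconds of CPU but being charged for none of it, while the victim pays for everything.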

  • My favorite work of the conference:  The paper is a little hard to follow, but I loved the talk on “Scriptless Attacks – Stealing the Pie Without Touching the Sill”.  The authors were interested in whether an attacker could still perform “information theft” attacks once all the XSS (cross-site scripting) vulnerabilities are gone.  Their answer: “The surprising result is that an attacker can also abuse Cascading Style Sheets (CSS) in combination with other Web techniques like plain HTML, inactive SVG images or font files.”

One of their examples is that the attacker can very rapidly shrink then grow the size of a text entry field.  When the text entry field shrinks to one pixel smaller than the width of the character the user typed, the browser automatically creates a scrollbar.  The attacker can note the appearance of the scrollbar and infer the character based on the amount the field shrank.  (The shrinking and expansion takes place too fast for the user to notice.)  The data exfiltration happens even with JavaScript completely disabled.  Pretty cool result.
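
Here’s a toy Python model of that inference step — the pixel widths are invented, and a real attack would measure the victim font’s actual metrics.  The “browser” shows a scrollbar whenever the field is narrower than the rendered character, and the attacker shrinks the field one pixel at a time until the scrollbar appears:

    # Invented per-character pixel widths for illustration.
    CHAR_WIDTH = {"i": 4, "l": 5, "t": 6, "a": 8, "m": 12, "w": 13}

    def browser_shows_scrollbar(char, field_px):
        """The 'browser': content overflow => scrollbar appears."""
        return CHAR_WIDTH[char] > field_px

    def infer_typed_char(probe):
        """Shrink the field one pixel at a time; the width at which it first
        overflows identifies the character (given distinct widths)."""
        for width in range(max(CHAR_WIDTH.values()), 0, -1):
            if probe(width):  # scrollbar just appeared
                return next(c for c, w in CHAR_WIDTH.items() if w == width + 1)
        raise ValueError("no character detected")

    secret = "m"  # what the victim typed
    print(infer_typed_char(lambda px: browser_shows_scrollbar(secret, px)) == secret)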

Finally, here are some honorable-mention papers in four categories — work I enjoyed reading that you might enjoy too:

Those who cannot remember the past, are condemned to repeat it:

Sidestepping institutional security:

Why be a white hat?  The dark side is where all the money is made:

Badware and goodware:

Overall I enjoyed the conference, especially the “local flavor” that the organizers tried to inject by serving stereotypical southern food (shrimp on grits, fried catfish) and hiring a bluegrass band (the thrilling Steep Canyon Rangers) for a private concert at Raleigh’s performing arts center.