Jagged Thoughts | Dr. John Linwood Griffin

January 20, 2019

ShmooCon 2018

Filed under: Opinions,Reviews,Work — JLG @ 11:51 PM

Note: This report is one year out of date. I attended ShmooCon in Washington, DC, last year, wrote up my thoughts, decided to wait until the videos were available before posting, and then plumb forgot about it. I see that this year’s convention was this weekend, so now seems like a good opportunity to finally make good on posting…

In general I was underwhelmed. This trip was my second time there, and ShmooCon seems to be more thought experiment (“here’s some things I thought of, and some first steps in that direction”) than hey-look-at-this-cool-result-you-can-learn-from. But most attendees seemed to enjoy the content — and for me, talking one-on-one to folks was the highlight of the con (as usual) — and it was as hard as ever to score tickets — so ShmooCon is clearly not waning in popularity.

There were some cool takeaways for me, including:

  • A talk by someone who set up ~$400/mo worth of cloud instances (~80 nodes across ~8 hosting providers, one in each zone they provide) to collect metrics on things-that-scan-the-entire-IPv4-address-space (#noisynet). It was nice to see some quantitative numbers on the minimum cost of entry for rolling your own round-the-world distributed collection of servers; $400 sounded surprisingly affordable. He also pointed out that the upstream bandwidth cost of sending a SYN packet to every IPv4 address is around 280 GB, which also sounds surprisingly affordable (a quick back-of-the-envelope check appears after this list).
  • A talk by a lawyer (#blinkblink) on how in the US there is a different legal standard (so far) for requiring you to unlock your phone with a passcode (something you know) vs requiring you to unlock with your thumbprint or face (something you are). In particular, if you use these technologies, you should probably also learn how to shut them off in a hurry (such as by triggering iOS’s Emergency SOS mode by pressing the lock button five times rapidly, which disables biometric unlock until the passcode is entered).
  • A talk on robot attack and defense (#robotsattack) that was much more subtle than the kinds of attacks you’d expect. For example psychological attacks (the speaker retested the infamous Milgram experiment and found that people are pretty susceptible to being railroaded by a robot) and social engineering (she found that robots-that-pretend-to-deliver-boxes-of-cookies were much more likely than empty-handed robots to be allowed unescorted into locked spaces).
  • I didn’t see the talk, but I heard people discussing EFF’s talk about their investigation into a malware espionage campaign (#duckduckapt). Their whitepaper (info at https://www.eff.org/press/releases/eff-and-lookout-uncover-new-malware-espionage-campaign-infecting-thousands-around) has interesting details such as how they pinpointed a physical building that appears to be responsible for the malware’s command-and-control (C2) infrastructure by identifying the C2 test nodes, probing which SSIDs were visible from those nodes, and cross-referencing those SSIDs on a Wi-Fi geolocation service.
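That 280 GB figure is easy to sanity-check. Here is the back-of-the-envelope arithmetic (my own, not the speaker’s; it assumes one minimum-size 64-byte Ethernet frame per SYN and the full 2^32 address space rather than only routable prefixes):

```python
# Rough check of the #noisynet bandwidth claim: one TCP SYN to every IPv4 address.
# Assumptions (mine): each SYN goes out as a minimum-size 64-byte Ethernet frame,
# and we cover all 2^32 addresses rather than only the routable ones.
ADDRESSES = 2 ** 32
FRAME_BYTES = 64

total_bytes = ADDRESSES * FRAME_BYTES
print(f"~{total_bytes / 1e9:.0f} GB upstream")   # ~275 GB, in line with the ~280 GB quoted
```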

One regret is that I missed the plenary session in which the future of cryptocurrency was debated. In one cryptocurrency conversation I had over the weekend, my observation that serious companies are putting serious money into serious blockchain R&D was countered by someone else’s observation that there may not be enough power in the world to run all the blockchain infrastructure if it continues to grow uncontained. I can’t yet predict what blockchain has in store for us but I do wish that I received 0.01 BTC each time someone voices a strong opinion about blockchain.

Finally, if you only watch one talk from ShmooCon (when they’re up on the tubes, in ~3 weeks?), I recommend:

  • A somewhat extemporaneous talk by a seemingly-well-known exploiter/0day developer (#forging). Listening to the speaker talk casually about how everything-you-thought-was-pure-and-good-in-the-world-actually-isn’t was jaw-dropping, even for someone who was a computer security researcher in a previous life. Example: He discovered a way to turn a laptop camera on and off so quickly that the circuit driving the camera’s indicator LED never has a chance to light it, and he passed along an anecdote of how widely an organization opens its wallet once he uses that trick to provide a photo of their target captured from the target’s own computer.

December 26, 2018

First Man

Filed under: Opinions,Reviews — JLG @ 9:14 AM

Dear Dr. Hansen,

I was inspired to read your book after seeing First Man in the theater this year. I didn’t realize your Auburn connection until reaching the end of the afterword.

As I read the book I kept remarking to my wife how impressed I was with the phenomenal level of detail you wove into the story. It grabbed me from the moment I picked up the book in the bookstore: I opened it to a random page that happened to be the LM descent, and found myself breathlessly reading about the alarms and fuel and landmarks and preparation and professionalism and touchdown. In the end I had to read your book one chapter at a time, stopping at each new chapter heading from the exhaustion of detail I felt.

I’ve abstractly known what it meant for hundreds of thousands of people to have worked on the Apollo and earlier programs but it finally clicked into concrete realization as I read your descriptions of guidance alignments and overpressurized fuel lines and crushed foam and low sun angles and loping gaits — how teams of teams, collectively of incomprehensible size, engineered those landings. Your research and writing transported me to the 1960s, sitting in an offsite support room with a slide rule and earnestness and naïveté and a mission, in much the same way that reading Patrick O’Brian transports me onto the deck in a rolling sea at the height of the Napoleonic Wars.

And, as someone who wanted to be an astronaut his whole life, I walked away from your book understanding for the first time the terrible burden that “heroism” placed on the astronaut corps of that era.

I commend you on your work. Thank you.

November 30, 2014

How to Peer Review

Filed under: Opinions,Reviews — JLG @ 8:25 AM

I tried to title this blog post “7 things to keep in mind during peer review (#5 will shock you)” but just couldn’t bring myself to do it.

It’s paper review season; I’ve been working on reviewing papers for a conference for which a former IBM colleague invited me to be on the program committee.  Then this morning a long-time reader of this blog submitted a question:

First time on an academic CFP review team. Have the papers. Any tips for what to do?

Do I ever.  Here’s my advice:

  1. Don’t wait until the last minute to do your reviews.  I try to do at least 1/day so that I make progress and am able to give each paper enough time to do a good job.
  2. The words of the day are ‘constructive criticism’. Especially for papers you grade as ‘reject’, give the authors suggestions that, if adopted, would have moved the paper towards the ‘accept’ column.
  3. Don’t be biased by poor English skills. Judge the merit of the ideas. If needed, the program committee chair can assign a ‘shepherd’ to work with the authors to improve the phrasing and presentation. (There is usually a way for you to provide reviewer comments that are not given to the authors — you can flatly say “if we accept this paper then such-and-such must be fixed before publication.”)
  4. Be fair, in that it’s easy to say “well that’s obvious” and assign a paper a low score. Was it obvious *before* you read the paper? Not every manuscript has to be groundbreaking. Rather, every paper should advance the community’s understanding of important issues/concepts in a way that can be externally validated and built upon in future work.
  5. The Golden Rule applies: I try to give the kind of feedback (or: do the kind of thoughtful evaluation) that I wished other reviewers did on the papers I submitted.  That doesn’t mean I accept everything, of course; historically speaking I think I recommend ‘accept’ for only about 20% of papers.
  6. If you are working with junior academic types it can be a good idea to farm out a paper or two to them, both to give them experience/exposure and because they will sometimes give you a better evaluation of the paper than you might have done yourself. Be sure to convey the importance of confidentiality and ethics (not stealing unpublished work). Also be sure to read the paper yourself, and file your own review if there are important points not stated by your ‘external reviewer’.
  7. Peer review is part of the fuel that makes our scientific engines work. I’ve always thought of it as an honor to be asked to review a paper (and a *big* honor the few times I’ve been asked to be on a Program Committee). So I try to deliver reviews that ‘feed the scientific engines’, so to speak.

August 8, 2013

KB1YBA/AE, or How I Spent My Weekend In Vegas

Filed under: Opinions,Reviews — JLG @ 8:15 PM

Last week I attended Black Hat USA 2013, BSidesLV 2013, and DEF CON 21 in fabulous Las Vegas, Nevada, thanks to the generosity of my employer offering to underwrite the trip.  Here were the top six topics of discussion from the weekend:

1. Not surprisingly, PRISM was the main topic of conversation all weekend.  Depending on your perspective, PRISM is either

“an internal government computer system used to facilitate the government’s statutorily authorized collection of foreign intelligence information from electronic communication service providers under court supervision” (Director of National Intelligence statement, June 8, 2013)

or

“a surveillance program under which the National Security Agency [NSA] vacuums up information about every phone call placed within, from, or to the United States [and where] the program violates the First Amendment rights of free speech and association as well as the right of privacy protected by the Fourth Amendment” (American Civil Liberties Union statement, June 11, 2013)

The NSA director, Gen. Keith Alexander, used the opening keynote at Black Hat to explain his agency’s approach to executing the authorities granted by Section 215 of the USA PATRIOT Act and Section 702 of the Foreign Intelligence Surveillance Act.  His key points were:

  • The Foreign Intelligence Surveillance Court (FISC) does not rubber-stamp decisions, but rather is staffed with deeply-experienced federal judges who take their responsibilities seriously and who execute their oversight thoroughly.  Along similar lines, Gen. Alexander stated that he himself has read the Constitution and relevant federal law, that he has given testimony both at FISC hearings and at Congressional oversight hearings, and that he is completely satisfied that the NSA is acting within the spirit and the letter of the law.
  • Members of the U.S. Senate, as well as other executive branch agencies, have audited (and will continue to audit) the NSA’s use of data collected under Section 215 and Section 702.  These audits have not found any misuse of the collected data.  He offered that point as a rebuttal to the argument that the Government can abuse its collection capability—i.e., the audits show that the Government is not abusing the capability.
  • Records collected under Section 215 and Section 702 are clearly marked to indicate the statutory authority under which they were collected; this indication is shown on screen (a “source” field for the record) whenever the records are displayed.  Only specially trained and tested operators at the NSA are allowed to see the records, and only a small number of NSA employees are in this category.  The collected data are not shared wholesale with other Government agencies but rather are shared on a case-by-case basis.
  • The NSA has been charged with (a) preventing terrorism and (b) protecting U.S. civil liberties.  If anyone can think of a better way of pursuing these goals, they are encouraged to share their suggestions at ideas@nsa.gov.

In the end I was not convinced by Gen. Alexander’s arguments (nor, anecdotally speaking, was any attendee I met at either Black Hat or DEF CON).  I walked away from the keynote feeling that the NSA’s collection of data is an indiscriminate Government surveillance program, executed under a dangerous and unnecessary veil of secrecy, with dubious controls in place to prevent abuse of the collected data that, if abused, would lead to violations of civil rights of U.S. citizens.  In particular, if this program had existed on September 11, 2001, I harbor no doubt that the statutory limits (use or visibility of collected data) would have been exceeded in the legislative and executive overreaction to the attacks.  This forbidden fruit is just too ripe and juicy.  As such I believe the Section 215 and Section 702 statutory limits will inexorably be exceeded if these programs—i.e., the regular exercise of the federal Government’s technical capability to indiscriminately collect corporate business records about citizen activities—continue to exist.

I do appreciate how the NSA is soliciting input from the community on how the NSA could better accomplish its antiterrorism directive.  Unfortunately, Pandora’s Box is already open; I can’t help but feel disappointed that my Government chose secretly to “vacuum up information” as its first-stab approach to satisfying the antiterrorism directive.  As I wrote in a comment on the Transportation Security Administration (TSA)’s proposed rule to allow the use of millimeter wave scanning in passenger screening:

I fly monthly.  Every time I fly, I opt out of the [millimeter wave] scanning, and thus I have no choice but to be patted down.  I shouldn’t have to submit to either.  In my opinion and in my experience, the TSA’s intrusive searching of my person without probable cause is unconstitutional, period.

I appreciate that the TSA feels that they’re between a rock and a hard place in responding to their Congressional directive, but eroding civil liberties for U.S. citizens is not the answer to the TSA’s conundrum.  Do better.

Eroding civil liberties for U.S. citizens is not the answer to the NSA’s conundrum.  Do better.

2. Distributed Denial of Service (DDoS) attacks.  There were at least five Black Hat talks principally about DDoS, including one from Matthew Prince on how his company handled a three hundred gigabit per second attack against a customer.  The story at that link is well worth reading.  Prince’s talk was frightening in that he forecast how next year we will be discussing 3 terabit/sec, or perhaps 30 terabit/sec, attacks that the Internet will struggle to counter. The DDoS attacks his company encountered required both a misconfigured DNS server (an open DNS resolver that is able to contribute to a DNS amplification attack; there are over 28 million such servers on the Internet) and a misconfigured network (one that does not prevent source address spoofing; Prince reports there are many such networks on the Internet).
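To make the amplification mechanics concrete, here is a minimal sketch of the arithmetic (the packet sizes and uplink figure are illustrative assumptions, not numbers from Prince’s talk):

```python
# Why open resolvers plus spoofed source addresses are so dangerous: a small
# query reflects into a much larger response aimed at the victim.
# All numbers below are illustrative assumptions.
query_bytes = 64          # spoofed DNS query sent to an open resolver
response_bytes = 3000     # large response the resolver sends to the victim
amplification = response_bytes / query_bytes           # ~47x

botnet_uplink_gbps = 10                                 # assumed attacker-controlled upstream
victim_gbps = botnet_uplink_gbps * amplification
print(f"~{amplification:.0f}x amplification -> ~{victim_gbps:.0f} Gbit/s at the victim")
```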

Another interesting DDoS talk was Million Browser Botnet by Jeremiah Grossman and Matt Johansen.  The essence is that you write your botnet code in Javascript, then buy Javascript ads on websites…resulting in each reader of those websites becoming a node in your personal botnet (as long as they’re displaying the page).  Wow.

3. CreepyDOL.  To quote the Ars Technica article about this work: “You may not know it, but the smartphone in your pocket is spilling some of your deepest secrets to anyone who takes the time to listen. It knows what time you left the bar last night, the number of times per day you take a cappuccino break, and even the dating website you use. And because the information is leaked in dribs and drabs, no one seems to notice. Until now.”  The researcher, Brendan O’Connor, stalked himself electronically to determine how much information an attacker could glean by stalking him electronically.  In his presentations he advanced a twofold point:

(a) it’s easy to track people when they enable Wi-Fi or Bluetooth on their phones (since the phone broadcasts its MAC address while trying to connect over those protocols; see the sniffing sketch after point (b)), and

(b) many services (dating websites, weather apps, Apple iMessage registration, etc.) leak information in clear text that can be correlated with the MAC address to figure out who’s using a particular device.
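For a sense of how little equipment part (a) requires, here is a minimal passive-sniffing sketch (my own illustration, not O’Connor’s code; it assumes a Linux box with a monitor-mode interface named wlan0mon, the scapy library, and root privileges):

```python
# Passive Wi-Fi tracking in miniature: log the source MAC of every 802.11 probe
# request overheard on a monitor-mode interface. Interface name is an assumption.
from scapy.all import sniff, Dot11, Dot11Elt, Dot11ProbeReq

def log_probe(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt[Dot11].addr2                      # transmitter address: the phone
        ssid = pkt[Dot11Elt].info.decode(errors="replace") if pkt.haslayer(Dot11Elt) else ""
        print(mac, ssid or "(broadcast probe)")

sniff(iface="wlan0mon", prn=log_probe, store=0)
```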

O’Connor’s aha! moment was that he created a $50 device that can do this tracking.  You could put 100 of these around the city and do a pretty good job of figuring out where a person of interest is and/or where that person goes, for relatively cheap ($5,000) and with no need to submit official auditable requests through official channels.

The researcher also brought up an excellent point about how the chilling effect of Draconian laws like the Computer Fraud and Abuse Act (CFAA) makes it impossible for legitimate computer security researchers to perform their beneficial-to-society function.  If the CFAA had existed in the physiomechanical domain in the 1960s then Ralph Nader could have faced time in federal prison for the research and exposition he presented in Unsafe At Any Speed: The Designed-In Dangers of The American Automobile—and consumers might never have benefitted from the decades of safety improvements illustrated in this video.  Should consumer risks in computer and network security systems be treated any differently than consumer risks in automotive systems?  I’m especially curious to hear arguments from anybody who thinks “yes.”

4. Home automation (in)security.  In a forthcoming blog post I will describe the astonishingly expensive surprise we incurred this summer to replace the air conditioner at our house.  (“What’s this puddle under the furnace?” asked Evelyn.  “Oh, it’ll undoubtedly be a cheap and easy fix,” replied John.)

As part of the work we had a new “smart” thermostat installed to control the A/C and furnace.  The thing is a Linux-based touchscreen, and is amazing—I feel as though I will launch a space shuttle if I press the wrong button—and of course it is Wi-Fi enabled and comes with a service where I can control the temperature from a smartphone or from a web application.

And, of course, with great accessibility come great security concerns.  Once the thermostat was up and running on the home network I did the usual security scans to see what services seemed to be available (short answer: TCP ports 22 and 9999).  Gawking at this new shuttle control panel got me interested in where the flaws might be in all these automation devices, and sure enough at Black Hat there were a variety of presentations on the vulnerabilities that consumer environmental automation systems can introduce.
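By “the usual security scans” I mean nothing fancier than checking which TCP ports answer; a minimal sketch (the thermostat’s address below is a placeholder, and a real survey would use nmap):

```python
# Quick-and-dirty service check against the new thermostat.
# The address is a placeholder for whatever DHCP handed the device.
import socket

HOST = "192.168.1.50"
for port in (22, 80, 443, 9999):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        status = "open" if s.connect_ex((HOST, port)) == 0 else "closed/filtered"
        print(f"tcp/{port}: {status}")
```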

Clearly, home automation and/or camera-enabled insecurity was a hot topic this year.  I was glad to see that the installation manual for our new thermostat (not camera-enabled, I think) emphasizes that it should be installed behind a home router’s firewall; it may even have checked during installation that it received an RFC 1918 private address to ensure that it wasn’t directly routable from the greater Internet.
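If the installer really does perform that check, the logic amounts to something like this sketch (my guess at the behavior, using Python’s ipaddress module):

```python
# Refuse (or at least warn) if the address handed to the thermostat is globally
# routable rather than an RFC 1918 private address behind the home router.
import ipaddress

def behind_a_firewall(addr: str) -> bool:
    return ipaddress.ip_address(addr).is_private    # true for 10/8, 172.16/12, 192.168/16, etc.

print(behind_a_firewall("192.168.1.50"))   # True  -> not directly routable from the Internet
print(behind_a_firewall("8.8.8.8"))        # False -> directly reachable; installation should warn
```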

5. Femtocells and responsible disclosure.  Two years ago I wrote about research that demonstrated vulnerabilities in femtocells (a.k.a. microcells), the little cellular base stations you can plug into your Internet router to improve your cellular reception in dead zones.  This year, Doug DePerry and Tom Ritter continued the femtocell hacking tradition with a talk on how they got root access on the same device I use at home.  The researchers discovered an HDMI port on the bottom of the device, hidden under a sticker, and sussed out that it was actually an obfuscated USB port that provided console access.  Via this console they were able to modify the Linux kernel running on the device and capture unencrypted voice and SMS traffic.  The researchers demonstrated both capabilities live on stage, causing every attendee to nervously pull out and turn off their phones.  They closed by raising the interesting question of why any traffic exists unencrypted at the femtocell—why doesn’t the cellular device simply create an encrypted tunnel to a piece of trusted back-end infrastructure?  They also asked why deploy femtocells at all, instead of simply piggybacking an encrypted tunnel over ubiquitous Wi-Fi.

Regarding responsible disclosure, the researchers notified Verizon in December 2012 about the vulnerability.  Verizon immediately created a patch and pushed it out to all deployed femtocells, then gave the researchers a green light to give the talk as thanks for their responsible disclosure.  Several other presenters reported good experiences with having responsibly disclosed other vulnerabilities to other vendors, enough so that I felt it was a theme of this year’s conference.

6. Presentation of the Year: “Adventures in Automotive Networks and Control Units” by Charlie Miller and Chris Valasek.  It turns out it’s possible to inject traffic through the OBD-II diagnostic port that can disable a vehicle’s brakes, stick the throttle wide open, and cause the steering wheel to swerve without any driver input.  Miller and Valasek showed videos of all these happening on a Ford Escape and a Toyota Prius that they bought, took apart, and reverse engineered.  It’s only August but their work gets my vote for Security Result of 2013.  Read their 101-page technical report here.

Wow.

I mean, wow.  The Slashdot discussion of their work detailed crafty ways that this attack could literally be used to kill people.  The exposed risks are viscerally serious; Miller showed a picture from testing the brake-disabling command wherein he crashed uncontrolled through his garage, crushing a lawnmower and causing thousands of dollars of damage to the rear wall.  (In a Black Hat talk the day before, Out of Control: Demonstrating SCADA Device Exploitation, researchers Eric Forner and Brian Meixell provided an equally visceral demonstration of the risks of Internet-exposed and non-firewalled SCADA controllers by overflowing a real fluid tank using a real SCADA controller right on stage.  I for one look forward to this new age of security-of-physical-systems research where researchers viscerally demonstrate the insecurity of physical systems.)
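For a sense of how low the barrier to entry is once you have physical access to the OBD-II port, here is a minimal read-only sketch using the python-can library (the SocketCAN channel name and USB adapter are assumptions; this listens to the bus and injects nothing):

```python
# Passively log CAN frames from the OBD-II port. Which arbitration IDs control
# brakes, throttle, or steering is exactly what Miller and Valasek had to
# reverse engineer per vehicle; this sketch only observes traffic.
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")   # assumes Linux + SocketCAN adapter
for _ in range(50):
    msg = bus.recv(timeout=1.0)
    if msg is not None:
        print(f"id=0x{msg.arbitration_id:03x} data={msg.data.hex()}")
```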

Regardless, other than those gems I was unimpressed by this year’s Black Hat (overpriced and undergood) and felt “meh” about DEF CON (overcrowded and undernovel).  Earlier in the year I was on the fence about whether to attend the Vegas conferences, having been underwhelmed last year, which prompted my good friend and research partner Brendan to observe that if I had a specific complaint about these conferences then I should stop whining about it and instead do something to help fix the problem.  In that spirit I volunteered to be on the DEF CON “CFP review team,” in hopes that I could help shape the program and shepherd some of the talks.  Unfortunately I was not selected to participate (not at all surprising, since I work indirectly for The Man).

In my offer to volunteer I offered these specific suggestions toward improving DEF CON, many of which are equally relevant to improving Black Hat:

I’d like to see the DEFCON review committee take on more of a “shepherding” role, as is done with some academic security conferences — i.e., providing detailed constructive feedback to the authors, and potentially working with them one-on-one in suggesting edits to presentations or associated whitepapers.

I think there are things the hacker community can learn from the academic community, such as:

* You have to answer the core question of why the audience should care about your work and its implications.

* It’s one thing to present a cool demo; it’s another to organize and convey enough information that others can build upon your work.

* It only strengthens your work if you describe others’ related work and explain the similarities and differences in your approach and results.

Of course there are plenty of things the academic community can learn from the hacker community!  I’m not proposing to swoop in with a big broom and try to change the process that’s been working fine for DEFCON for decades.  In fact I’m curious to experience how the hacker community selects its talks, so I can more effectively share that information with the academic community.  (For example I spoke at last year’s USENIX Association board meeting on the differences between the USENIX annual technical conference and events like DEFCON, Shmoocon, and HOPE, and I commented on lessons USENIX could take away from hacker cons.)

But each year I’ve been disappointed at how little of a “lasting impression” I’ve taken away from almost all of the DEFCON talks.  A “good presentation” makes me think about how *I* should change the way I approach my projects, computer systems, or external advocacy. I wish more DEFCON talks (and, frankly, Black Hat talks) were “good presentations.”  I’m willing to contribute my effort to your committee to help the community get there.

One academic idea you might be able to leverage is that whenever you’re published, you’re expected to serve on program committees (or as a reviewer) in future years.  (The list of PC members is usually released at the same time as the CFP, and the full list of PC members and reviewers is included in the conference program, so it’s both a service activity and resume fodder for those who participate.)  So perhaps you could start promulgating the idea that published authors at BH and DC are expected to do service activities (in the form of CFP review team membership) for future conferences.

Finally, KB1YBA/AE in the title of this post refers to the culmination of a goal I set for myself eighteen years ago.  There are three levels of achievement (three license classes) that you can attain as a U.S. ham radio enthusiast:

  • Technician class.  Imagine, if you will, a much younger John.  In 1995, at the urging of my old friend K4LLA, I earned technician (technically “technician plus”) class radiotelephone privileges by passing both a written exam and a 5-words-per-minute Morse code transcription exam.  [Morse code exams are no longer required to participate in amateur radio at any level in the United States.]  At that time I set a goal for myself that someday I would pass the highest-level (extra class) exam.
  • General class.  In 2011, at the urging of my young friend K3QB, I passed the written exam to earn general class privileges.  With each class upgrade you are allowed to transmit on a broader range of frequencies.  With this upgrade I was assigned the new call sign KB1YBA by the Federal Communications Commission.
  • Amateur Extra class.  At DEF CON this year, with the encouragement of my former colleague NK1B, I passed the final written exam and earned extra class privileges.  It will take a few weeks before the FCC assigns me a new call sign, but in the meantime I am allowed to transmit on extra-class frequencies by appending an /AE suffix onto my general-class call sign when I identify myself on-air.  For example: “CQ, CQ, this is KB1YBA temporary AE calling CQ on [frequency].”

I don’t mean to toot my own horn, but it feels pretty dang good to fulfill a goal I’ve held for half my life.  Only 17% [about 120,000] of U.S. ham radio operators have earned Amateur Extra class privileges.

May 26, 2013

Security B-Sides Boston 2013

Filed under: Reviews — JLG @ 11:07 AM

Security B-Sides is an odd duck series of workshops.  Are you:

  1. Traveling to attend (or already living near) a major commercial security conference (RSA in San Francisco, Black Hat in Las Vegas, or SOURCE in Boston)?
  2. Not particularly interested in attending any of the talks in the commercial security conference you’ve already paid hundreds of dollars to attend?
  3. Unconcerned with any quality control issues that may arise in choosing a conference program via upvotes on Twitter?

Then you should attend B-Sides.

Okay, so it’s not as grim as I lay out above.  Earlier this month I attended Security B-Sides Boston (BSidesBOS 2013) on USSJoin’s imprimatur.  I felt the B-Sides program itself was weak, the hallway conversations were good, the keynotes were great, and the post-workshop reception was excellent.

But if I were on the B-Sides steering committee I would have B-Sides take place either immediately before or immediately after its symbiotic commercial conference.  In academic conferences you will often see a “core” conference with 1-day workshops before or after or both, meaning that attendees can optionally participate, without requiring separate travel, and without interfering with the conference they’ve already paid hundreds of dollars to attend.

My takeaways from the B-Sides workshop came from the two keynote talks.  The talk by Dr. Dan Geer (chief information security officer at In-Q-Tel) was one of the best keynotes I’ve ever seen.  Some of his thought-provoking points included:

  • It’s far cheaper to keep all your data than to do selective deletion.  He implied that there is an economic incentive at work whose implications we need to understand:  As long as it’s cheaper to just keep everything (disks are cheap, and now cloud storage is cheap), people are going to just keep everything.  I’d thought about the save-everything concept before, but not from an economic perspective.
  • When network intrusions are discovered, the important question is often “how long has this been going on?” instead of  “who is doing this?”  He implied that recovery was often more important than adversarial discovery (i.e., most people just want to revert affected systems to a known-good state, make sure that known holes are plugged, and move forward.)  And the times could be staggering; he noted a Symantec report that the average zero-day exploit is in use for 300 days before it is discovered.
  • Could the U.S. corner the vulnerability market?  Geer made the fascinating suggestion that the U.S. buy every vulnerability on the market (offering 10 times market rates if needed) and immediately release them publicly.  His goal is to collapse the information asymmetry that has built up because of the economics of selling zero-day attacks.  He pined for the halcyon days of yore when zero-day attacks were discovered by hobbyists and released for fun (leading to “market efficiency” where everyone was on the same playing field when it came to technology decisions) rather than the days of today when they are sold for profit (leading to asymmetry, where known vulnerabilities are no longer public).
  • “Security is the state of unmitigatable surprise.  Privacy is where you have the effective capacity to misrepresent yourself.  Freedom in the context of the Internet is the ability to reinvent yourself when you want.”  He suggested that each of us should have as many distinct, curated online identities as we can manage — definitely an interesting research area.  He made the fascinating suggestion of “try to erase things sometime,” for example by creating a Facebook profile…then later trying to delete it and all references to it.
  • Observability is getting out of control and is not coming back.  He commented that facial recognition is viable at 500 meters, and iris identification at 50 meters.
  • All security technology is dual use; technology itself is neutral and should be treated as such.  During my early days as a government contractor I similarly railed against the automatic (by executive order) top secret classifications applied to cyber weaponry and payloads — because doing so puts the knowledge out of reach of our network security defenders.  As it turns out, One Voice Railing usually isn’t the most effective way to change entrenched bureaucratic thinking.  (I haven’t really figured out what is the most effective way.)
  • “Your choice is one big brother or many little brothers.  Choose wisely.”  This closing line is open to deep debate and interpretation; I’ve already had several interesting conversations about what Geer meant and what he’s implying.  My position is that his earlier points (e.g., observability is out of control and is not coming back) demonstrate that we’ve already crossed the Rubicon of “no anonymity, no privacy” — without even realizing it — and that it’s far too late to go back to a time where no brother will watch you.  Can anything be done?  I’m very interested in continuing to debate this question.

Mr. Josh Corman (director of security intelligence at Akamai) gave the second keynote.  Some of his interesting points included:

  • Our dependence on software and IT is growing faster than our ability to secure it.  Although this assertion isn’t new, it always brings up an interesting debate: if you can’t secure the software, then what can you do instead?  (N-way voting?  Graceful degradation?  Multiple layers of encryption or authentication?  Auditing and forensic analyses?  Give up?)  A professor I knew gave everybody the root password on his systems, under the theory that since he knew it was insecure then he would only use the computer as a flawed tool rather than as a vital piece of infrastructure.  Clearly the professor’s Zen-like approach wouldn’t solve everyone’s security conundrums, but the simplicity and power of his approach make me think that there are alternative, unexplored, powerful ways to mitigate the imbalance of insecure and increasingly critical computer systems.
  • HDMoore’s Law: Casual attacker power grows at the rate of Metasploit.  This observation was especially interesting: not only do defenders have to worry about an increase in vulnerabilities but they need to worry about an increase in baseline attacker sophistication, as open-source security-analysis tools grow in capability and complexity.
  • “The bacon principle: Everything’s better with bacon.”  His observation here is that it is especially frustrating when designers introduce potential vulnerability vectors into a system for no useful reason.  As an example, he asks why an external medical device needs to be configurable using Bluetooth when the device (a) doesn’t need to be frequently reconfigured and (b) could be just as easily configured using a wired [less permissive] connection.  The only thing Bluetooth (“bacon”) adds to such a safety-critical device is insecurity.
  • Compliance regulations set the bar too low.  Corman asserts that the industry’s emphasis on PCI compliance (the payment card industry data security standard) means that we put the most resources towards protecting the least important information (credit card numbers).  It’s a double whammy:  Not only is there an incentive to only protect PCI information and systems, but there is no incentive to do better than the minimal set of legally-compliant protections.
  • Is it time for the security community to organize and professionalize?  Corman railed against “charlatans” who draw attention to themselves (for example, by appearing on television) without having meaningful or true things to say.  He implied that the security community should work together to define and promulgate criteria, beyond security certifications, that could provide a quality control function for people claiming to represent security expertise and best practices.  (A controversial proposal!)  A decade ago I explored related conversations about the need to create licensed professional software engineers, both to incent members of our community to adhere to well-grounded and ethical principles in their practice & to provide the community and the state with engineers who assume the responsibility and risk over critical systems designs.
  • “Do something!”  Corman closed by advocating for the security community to come together to shape the narrative of information security — especially in terms of lobbying to influence Governmental oversight and regulation — instead of letting other people do the lobbying and define the narrative.  He gave the example of unpopular security legislation like SOPA and PIPA: “you can either DDoS it [after the legislation is proposed] or you can supply draft language [to help make it good to begin with].”  I felt this was a great message for a keynote talk, especially in how it matches the influential message I heard from a professor at Carnegie Mellon (Dr. Philip Koopman) who fought successfully against adoption of the Uniform Computer Information Transactions Act and who exhorted me and my fellow students to be that person who stands up and fights on important issues when others remain silent.

All in all not a bad way to spend $20 and a Saturday’s worth of time.

November 1, 2012

2012 Conference on Computer and Communications Security

Filed under: Reviews,Work — JLG @ 12:00 AM

In October I attended the 19th ACM Conference on Computer and Communications Security (CCS) in Raleigh, North Carolina.  It was my fourth time attending (and third city visited for) the conference.

Here are some of my interesting takeaways from the conference:

The point of Binary Stirring is to end up with a completely different (but functionally equivalent) executable code segment, each time you load a program.  The authors double “each code segment into two separate segments—one in which all bytes are treated as data, and another in which all bytes are treated as code.  …In the data-only copy (.told), all bytes are preserved at their original addresses, but the section is set non-executable (NX).  …In the code-only copy (.tnew), all bytes are disassembled into code blocks that can be randomly stirred into a new layout each time the program starts.”  (The authors measured about a 2% performance penalty from mixing up the code segment.)

But why mix the executable bytes at all?  Binary Stirring is intended to protect against clever “return-oriented programming” (ROP) attacks by eliminating all predictable executable code from the program.  If you haven’t studied ROP (I hadn’t before I attended the talk) then it’s worth taking a look, just to appreciate the cleverness of the attack & the challenge of mitigating it.  Start with last year’s paper Q: Exploit Hardening Made Easy, especially the related work survey in section 9.
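To make the ROP threat model concrete, here is a toy gadget finder (a sketch of the general idea, not code from either paper): it disassembles the few bytes leading up to each `ret` in a binary. Binary Stirring’s bet is that if those addresses change on every load, precomputed gadget chains stop working.

```python
# Toy ROP gadget finder: disassemble the bytes leading up to each 0xC3 (`ret`)
# in a chunk of x86-64 code. Reading just the .text section properly would need
# pyelftools; scanning the whole file is crude but fine for illustration.
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

code = open("/bin/true", "rb").read()
md = Cs(CS_ARCH_X86, CS_MODE_64)

for i, byte in enumerate(code):
    if byte == 0xC3:                                   # candidate `ret`
        start = max(0, i - 8)
        insns = list(md.disasm(code[start:i + 1], start))
        if insns and insns[-1].mnemonic == "ret":
            print(f"offset 0x{start:x}: " + "; ".join(
                f"{x.mnemonic} {x.op_str}".strip() for x in insns))
```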

Regarding oblivious RAM (ORAM), imagine a stock analyst who stores gigabytes of encrypted market information “in the cloud.”  In order to make a buy/sell decision about a particular stock (say, NASDAQ:TSYS), she would first download a few kilobytes of historical information about TSYS from her cloud storage.  The problem is that an adversary at the cloud provider could detect that she was interested in TSYS stock, even though the data is encrypted in the cloud.  (How?  Well, imagine that the adversary watched her memory access patterns the last time she bought or sold TSYS stock.  Those access patterns will be repeated this time when she examines TSYS stock.)  The point of oblivious RAM is to make it impossible for the adversary to glean which records the analyst downloads.

  • Fully homomorphic encryption:  The similar concept of fully homomorphic encryption (FHE) was discussed at some of the post-conference workshops.  FHE is the concept that you can encrypt data (such as database entries), store them “in the cloud,” and then have the cloud do computation for you (such as database searches) on the encrypted data, without decrypting.

When I first heard about the concept of homomorphic encryption (circa 2005, from some of my excellent then-colleagues at IBM Research) I felt it was one of the coolest things I’d encountered in a company filled with cool things.  Unfortunately FHE is still somewhat of a pipe dream — like ORAM, it’ll be a long while before it’s efficient enough to solve any practical real-world problems — but it remains an active area of interesting research.
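FHE itself is nowhere near a toy example yet, but the flavor of “computing on data you cannot read” can be shown with textbook RSA, which happens to be multiplicatively homomorphic. This is a deliberately insecure illustration with tiny textbook parameters; FHE’s trick is supporting both addition and multiplication on ciphertexts, which is what enables arbitrary computation.

```python
# Textbook (unpadded, insecure) RSA is multiplicatively homomorphic:
# Enc(a) * Enc(b) mod n decrypts to a * b, without the server ever seeing a or b.
p, q = 61, 53
n = p * q                 # 3233
e, d = 17, 2753           # standard textbook exponents for these primes

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

a, b = 7, 6
product_ciphertext = (enc(a) * enc(b)) % n   # computed using ciphertexts only
print(dec(product_ciphertext))               # 42 == a * b  (valid while a*b < n)
```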

  • Electrical network frequency (ENF):  In the holy cow, how cool is that? category, the paper “How Secure are Power Network Signature Based Time Stamps?” introduced me to a new forensics concept: “One emerging direction of digital recording authentication is to exploit a potential time stamp originated from the power networks. This time stamp, referred to as the Electrical Network Frequency (ENF), is based on the fluctuation of the supply frequency of a power grid.  … It has been found that digital devices such as audio recorders, CCTV recorders, and camcorders that are plugged into the power systems or are near power sources may pick up the ENF signal due to the interference from electromagnetic fields created by power sources.”  Wow!

The paper is about anti-forensics (how to remove the ENF signature from your digital recording) and counter-anti-forensics (how to detect when someone has removed the ENF signature).  The paper’s discussion of ENF analysis reminded me loosely of one of my all-time favorite papers, also from CCS, on remote measurement of CPU load by measuring clock skew as seen through TCP (transmission control protocol) timestamps.
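The first step of any ENF analysis is simply tracking how the energy near the nominal mains frequency drifts over time in a recording. A rough sketch follows (the file name, 60 Hz nominal frequency, and window length are all assumptions; real ENF matching then compares this drift curve against logged grid frequency):

```python
# Track the dominant frequency near 60 Hz (50 Hz in Europe) across 4-second
# windows of a recording; the resulting drift curve is the ENF signature.
import numpy as np
from scipy.io import wavfile

rate, audio = wavfile.read("recording.wav")      # hypothetical mono recording
window = rate * 4
for start in range(0, len(audio) - window, window):
    chunk = audio[start:start + window].astype(float)
    spectrum = np.abs(np.fft.rfft(chunk * np.hanning(window)))
    freqs = np.fft.rfftfreq(window, d=1.0 / rate)
    band = (freqs > 59.5) & (freqs < 60.5)
    peak = freqs[band][np.argmax(spectrum[band])]
    print(f"t={start / rate:5.1f}s  ENF ~ {peak:.3f} Hz")
```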

  • Resource-freeing attacks (RFA):  I also enjoy papers about virtualization, especially regarding the fair or unfair allocation of resources across multiple competing VMs.  In the paper “Resource-Freeing Attacks: Improve Your Cloud Performance (at Your Neighbor’s Expense)”, the authors show how to use antisocial virtual behavior for fun and profit: “A resource-freeing attack (RFA) [improves] a VM’s performance by forcing a competing VM to saturate some bottleneck. If done carefully, this can slow down or shift the competing application’s use of a desired resource. For example, we investigate in detail an RFA that improves cache performance when co-resident with a heavily used Apache web server. Greedy users will benefit from running the RFA, and the victim ends up paying for increased load and the costs of reduced legitimate traffic.”

A disappointing aspect of the paper is that they don’t spend much time discussing how one can prevent RFAs.  Their suggestions are (1) use a dedicated instance, or (2) build better hypervisors, or (3) do better scheduling.  That last suggestion reminded me of another of my all-time favorite research results, from last year’s “Scheduler Vulnerabilities and Attacks in Cloud Computing”, wherein the authors describe a “theft-of-service” attack:  A virtual machine calls Halt() just before the hypervisor timer fires to measure resource use by VMs, meaning that the VM consumes CPU resources but (a) isn’t charged for them and (b) is allocated even more resources at the expense of other VMs.

  • My favorite work of the conference:  The paper is a little hard to follow, but I loved the talk on “Scriptless Attacks – Stealing the Pie Without Touching the Sill”.  The authors were interested in whether an attacker could still perform “information theft” attacks once all the XSS (cross-site scripting) vulnerabilities are gone.  Their answer: “The surprising result is that an attacker can also abuse Cascading Style Sheets (CSS) in combination with other Web techniques like plain HTML, inactive SVG images or font files.”

One of their examples is that the attacker can very rapidly shrink then grow the size of a text entry field.  When the text entry field shrinks to one pixel smaller than the width of the character the user typed, the browser automatically creates a scrollbar.  The attacker can note the appearance of the scrollbar and infer the character based on the amount the field shrank.  (The shrinking and expansion takes place too fast for the user to notice.)  The data exfiltration happens even with JavaScript completely disabled.  Pretty cool result.

Finally, here are some honorable-mention papers in four categories — work I enjoyed reading, that you might too:

Those who cannot remember the past, are condemned to repeat it:

Sidestepping institutional security:

Why be a white hat?  The dark side is where all the money is made:

Badware and goodware:

Overall I enjoyed the conference, especially the “local flavor” that the organizers tried to inject by serving stereotypical southern food (shrimp on grits, fried catfish) and hiring a bluegrass band (the thrilling Steep Canyon Rangers) for a private concert at Raleigh’s performing arts center.

August 3, 2012

Black Hat USA 2012 and DEF CON 20: The future of insecurity

Filed under: Reviews,Work — JLG @ 12:00 AM

I returned to the blistering dry heat of Las Vegas for a second year in a row to attend Black Hat and DEF CON.

The most interesting talk to me was a panel discussion at Black Hat that provided a future retrospective on the next 15 years of security.  Some of the topics discussed:

  • What is the role of the private sector in computer and network security?  One panelist noted that the U.S. Constitution specifies that the government is supposed “to provide for the common defense” — presumably including all domestic websites, commercial networks and intellectual property, and perhaps even personal computers — instead of only claiming to protect the .gov (DHS) and .mil (NSA) domains as they do today.  Another panelist suggested that, as in other sectors, the government should publish “standards” for network and communications security such that individual companies can control the implementation of those standards.
  • Social engineering and the advanced persistent threat.  At a BSidesLV party, someone I met asked whether I felt the APT was just a buzzword or whether it was real.  (My answer was “both”.)  Several speakers played with new views on the APT, such as “advanced persistent detection” (defenders shouldn’t be focused on vulnerabilities; rather they should look at an attacker’s motivation and objectives) and “advanced persistent fail” (real-world vulnerabilities survive long after mitigations are published).
  • How can you discover what evil lurks in the hearts of men and women?  One panelist speculated that we would see the rise of long-term [lifetime?] professional background checks for technological experts.  Current background checks for U.S. government national security positions use federal agents to search back 7-10 years.  I got the impression that the panelist foresees a rise in private-sector background checks (or checks against private databases of personal information) as a prerequisite for hiring decisions across the commercial sector.
  • How can you protect against a 120 gigabit distributed denial of service (DDoS) attack?  A panelist noted that a large recent DDoS hit 120 Gbit/sec, up around 4x from the largest DDoS from a year or two ago.  The panelist challenged the audience to think about how “old” attacks, which used to be easy to mitigate, become less so at global scale when the attacker leverages cloud infrastructure or botnet resources.
  • Shifting defense from a technical basis into a legal, policy, or contractual basis.  So far there hasn’t been an economically viable way to shift network security risks (or customer loss/damage liability) onto a third party — I believe many organizations would willingly exchange large sums of money to be released from these risks, but so far no third party seems willing to accept that bet.  The panel wondered whether (or when) the insurance industry will develop a workable model for computer security.
  • Incentives for computer security.  Following up on the point above, a panelist noted that it is difficult to incent users to follow good security practices.  The panelist asserted that E*TRADE gave away 10,000 security tokens but still had trouble convincing their users to use them as a second factor for authentication.  Another panelist pointed to incentives in the medical insurance industry — “take care of your body” and enjoy lower premiums — and wondered how to provide similar actionable incentives to take care of your network.
  • Maximizing your security return-on-investment (ROI).  A panelist asserted that the best ROI is money spent on your employees:  Developing internal experts in enterprise risk management, forensics and incident response skills, etc.
  • Assume you will be breached.  I’ve also been preaching that message: Don’t just protect, but also detect and remediate.  A panelist suggested you focus on understanding your network and your systems, especially with respect to configuration management and change management.

When asked to summarize the next 15 years of security in five words or fewer, the panelists responded:

  1. Loss of control.
  2. Incident response and cleaning up.
  3. Human factors.

Beyond the panel discussion, some of the work that caught my attention included:

  • Kinectasploit.  Jeff Bryner presented my favorite work of the weekend, on “linking the Kinect with Metasploit [and 19 other security tools] in a 3D, first person shooter environment.”  I have seen the future of human-computer interaction for security analysts — it is Tom Cruise in Minority Report — and the work on Kinectasploit is a big step in us getting there.
  • Near field communications insecurity.  Charlie Miller (“An analysis of the Near Field Communication [NFC] attack surface”) explained that “through NFC, using technologies like Android Beam or NDEF content sharing, one can make some phones parse images, videos, contacts, office documents, even open up web pages in the browser, all without user interaction. In some cases, it is even possible to completely take over control of the phone via NFC, including stealing photos, contacts, even sending text messages and making phone calls” and showed a live demo of using an NFC exploit to take remote control of a phone.
  • Operating systems insecurity.  Rebecca Shapiro and Sergey Bratus from Dartmouth made the fascinating observation that the ELF (Executable and Linkable Format) linker/loader is itself a Turing-complete computer: “[we demonstrate] how specially crafted ELF relocation and symbol table entries can act as instructions to coerce the linker/loader into performing arbitrary computation. We will present a proof-of-concept method of constructing ELF metadata to implement [Turing-complete] language primitives as well as demonstrate a method of crafting relocation entries to insert a backdoor into an executable.”  The authors’ earlier white paper provides a good introduction to what they call “programming weird machines”.
  • Wired communications insecurity.  Collin Mulliner (“Probing mobile operator networks”) probed public IPv4 address blocks known to be used by mobile carriers and found a variety of non-phone devices, such as smart meters, with a variety of enabled services with obtainable passwords.
  • Governmental infrastructure insecurity.  My next-to-favorite work was “How to hack all the transport networks of a country,” presented by Alberto García Illera, where he described a combination of physical and electronic penetration vectors used “to get free tickets, getting control of the ticket machines, getting clients [credit card] dumps, hooking internal processes to get the client info, pivoting between machines, encapsulating all the traffic to bypass the firewalls” of the rail network in his home country.
  • Aviation communications insecurity.  There were three talks on aviation insecurity, all focused on radio transmissions or telemetry (the new ADS-B standard for automated position reporting, to be deployed over the next twenty years) sent from or to an aircraft.

Last year I tried to attend as many talks as I could but left Vegas disappointed — I found that there is a low signal-to-noise ratio when it comes to well-executed, well-presented work at these venues.  The “takeaway value” of the work presented is nowhere near as rigorous or useful as that at research/academic conferences like CCS or NDSS.  But it turns out that’s okay; these venues are much more about the vibe, and the sharing, and the inspiration (you too can hack!), than about peer-reviewed or archival-quality research.  DEF CON in particular provides a pretty fair immersive simulation of living inside a Neal Stephenson or Charlie Stross novel.

This year I spent more time wandering the vendor floor (Black Hat) and acquiring skills in the lockpick village (DEF CON), while still attending the most-interesting-looking talks and shows.  By lowering my “takeaway value” expectations a bit I ended up enjoying my week in Vegas much more than expected.

June 21, 2012

USENIX ATC 2012

Filed under: Reviews — JLG @ 11:47 PM

Last week I attended the USENIX Annual Technical Conference and its affiliated workshops, all here in sunny Boston, Massachusetts.  Here were my takeaways from the conference:

  • Have increases in processing speed and resources on individual computer systems changed the way that parallelizable problems should be split in distributed systems?  In a paper on the “seven deadly sins of cloud computing research”, Schwarzkopf et al. note (for sin #1) that “[even] If we satisfy ourselves that parallel processing is indeed necessary or beneficial, it is also worth considering whether distribution over multiple machines is required.  As Rowstron et al. recently pointed out, the rapid increase in RAM available in a single machine combined with large numbers of CPU cores per machine can make it economical and worthwhile to exploit local, rather than distributed, parallelism.”  The authors acknowledge that there is an advantage to distributed computation, but they note that “Modern many-core machines…can easily apply 48 or more CPUs to processing a 100+GB dataset entirely in memory, which already covers many practical use cases.”  I enjoyed this observation and I hope their observation inspires (additional) research on adaptive systems that can automatically partition a large cloud-oriented workload into a “local” component, sized appropriately for the resources on an individual node, and a “distributed” component for parallelization beyond locally-available resources.  Would it be worth pursuing the development of such an adaptive system, especially in the face of unknown workloads and heterogeneous resource availability on different nodes—or is the problem intractable relative to the benefits of local multicore processing?
  • Just because you have a bunch of cores doesn’t mean you should try to utilize all of them all of the time.  Lozi et al. identify two locking-related problems faced by multithreaded applications:  First, when locks are heavily contended (many threads trying to acquire a single lock) the overall performance suffers; second, each thread incurs cache misses when executing over the critical section protected by the lock.  Their interesting solution involves pinning that critical section onto a dedicated core that does nothing but run that critical section.  (The authors cite related work that does similar pinning of critical sections onto dedicated hardware; I found it especially useful to read Section 1 of Lozi’s paper for its description of related work.)  The authors further created a tool that modifies the C code of legacy applications “to replace lock acquisitions by optimized remote procedure calls to a dedicated server core.”  One of my favorite technical/research questions over the past decade is, given ever-increasing numbers of cores, “what will we do with all these cores?”—and I’ve truly enjoyed that the answer is often that using cores very inefficiently is the best thing you can do for overall application/system performance.  Ananthanarayanan et al. presented a similar inefficiency argument for small clustered jobs executing in the cloud: “Building on the observation that clusters are underutilized, we take speculation to its logical extreme—run full clones of jobs to mitigate the effect of outliers.”  One dataset the authors studied had “outlier tasks that are 12 times slower than that job’s median task”; their simulation results showed a 47% improvement in completion time for small jobs at a cost of just 3% additional resources.
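The structure of Lozi et al.’s idea can be mimicked even in Python (a sketch of the shape of the approach only; their tool uses per-core server threads and shared-memory request slots in C, not a queue):

```python
# Instead of each worker acquiring a lock and dragging shared data through its
# own cache, workers ship their critical sections to one dedicated server
# thread, which runs them one at a time while the data stays "hot" with it.
import threading, queue

requests = queue.Queue()
counter = 0                                     # shared state, touched only by the server

def lock_server():
    global counter
    while True:
        critical_section, done = requests.get()
        critical_section()                      # serialized, no lock contention
        done.set()

threading.Thread(target=lock_server, daemon=True).start()

def remote_increment():                         # behaves like lock(); counter += 1; unlock()
    done = threading.Event()
    def cs():
        global counter
        counter += 1
    requests.put((cs, done))
    done.wait()

workers = [threading.Thread(target=lambda: [remote_increment() for _ in range(1000)])
           for _ in range(4)]
for w in workers: w.start()
for w in workers: w.join()
print(counter)                                  # 4000
```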

Figure 5 from “netmap: a novel framework for fast packet I/O” [Rizzo 2012]

  • There are still (always?) plenty of opportunities for low-level optimization.  Luigi Rizzo presented best-paper-award-winning work on eliminating OS and driver overheads in order to send and receive packets at true 10 Gbps wire speed.  I love figures like the one shown above this paragraph, where a system both (a) blows away current approaches and (b) clearly maxes out the available resources with a minimum of fuss.  I found Figure 2 from this paper to be especially interesting:  The author measured the path and execution times for a network transmit all the way down from the sendto() system call to the ixgbe_xmit() function inside the network card driver, then used these measurements to determine where to focus his optimization efforts—those being to “[remove] per-packet dynamic memory allocations, removed by preallocating resources; [batch] system call overheads, amortized over large batches; and [eliminate] memory copies, eliminated by sharing buffers and metadata between kernel and userspace, while still protecting access to device registers and other kernel memory areas.”  None of these techniques are independently novel, but the author claims novelty in that his approach “is tightly integrated with existing operating system primitives, not tied to specific hardware, and easy to use and maintain.”  Given his previous success creating dummynet (now part of FreeBSD), I am curious to see whether his modified architecture makes its way into the official FreeBSD and Linux codebases.
  • Capturing packets is apparently difficult, despite the years of practice we’ve had in doing so.  Two papers addressed aspects of efficiency and efficacy for packet capture:  Taylor et al. address the problem that “modern enterprise networks can easily produce terabytes of packet-level data each day, which makes efficient analysis of payload information difficult or impossible even in the best circumstances.”  Taylor’s solution involves aggregating and storing application-level information (DNS and HTTP session information) instead of storing the raw packets and later post-processing them.  Section 5 of the paper presents an interesting case study of how the authors used their tool to identify potentially compromised hosts on the University of North Carolina at Chapel Hill’s computer network.  Papadogiannakis et al. assert that “intrusion detection systems are susceptible to overloads, which can be induced by traffic spikes or algorithmic singularities triggered by carefully crafted malicious packets” and designed a packet pre-processing system that “gracefully responds to overload conditions by storing selected packets in secondary storage for later processing”.
  • /bin/true can fail!  Miller et al. described their system-call-wrapping software that introduces “gremlins” as part of automated and deterministic software testing.  A gremlin causes system calls to return legitimate but unexpected responses, such as having the read() system call return only one byte at a time with each repeated call to read().  (The authors note that “This may happen if an interrupt occurs or if a slow device does not have all requested data immediately available.”)  During the authors’ testing they discovered that “the Linux dynamic loader [glibc] failed if it could not read an executable or shared library’s ELF header in one read().”  As a result, any program requiring dynamic libraries to load—including, apparently, /bin/true—fails whenever system calls are wrapped in this manner.  This fascinating result calls to mind Postel’s Law; the defensive fix is the classic read-until-you-have-everything loop sketched below.
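
The short-read gremlin above has a well-known defensive idiom; here is a minimal sketch of it.  (It is my assumption that this loop is essentially what the loader was missing, since the quote only says the loader expected the whole ELF header in one read().)

    /* read_fully: keep calling read() until 'count' bytes arrive, EOF, or error.
     * Tolerates short reads and interruption by signals (EINTR). */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    ssize_t read_fully(int fd, void *buf, size_t count) {
        size_t done = 0;
        while (done < count) {
            ssize_t n = read(fd, (char *)buf + done, count - done);
            if (n < 0) {
                if (errno == EINTR)
                    continue;          /* interrupted: just retry */
                return -1;             /* real error */
            }
            if (n == 0)
                break;                 /* EOF before we got everything */
            done += n;
        }
        return (ssize_t)done;
    }

    int main(void) {
        unsigned char header[64];               /* size of an ELF64 header */
        int fd = open("/bin/true", O_RDONLY);
        if (fd < 0)
            return 1;
        ssize_t got = read_fully(fd, header, sizeof(header));
        printf("read %zd of %zu header bytes\n", got, sizeof(header));
        close(fd);
        return 0;
    }

Callers then check whether the return value equals the number of bytes they needed instead of assuming that a single read() suffices.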
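
As for the netmap item above, I can’t reproduce Rizzo’s framework in a few lines, but the batching half of the argument is easy to illustrate with the standard Linux sendmmsg() call, which amortizes one system call over many UDP packets.  The destination address, port, packet count, and payloads here are arbitrary placeholders, and error handling is omitted.

    /* Amortize syscall overhead: send a batch of UDP packets with one sendmmsg()
     * instead of one sendto() per packet.  Linux-specific (_GNU_SOURCE). */
    #define _GNU_SOURCE
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <unistd.h>

    #define BATCH 64

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst = { 0 };
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9999);                     /* placeholder port */
        inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr); /* placeholder address */

        static char payload[BATCH][64];
        struct iovec iov[BATCH];
        struct mmsghdr msgs[BATCH];
        memset(msgs, 0, sizeof(msgs));

        for (int i = 0; i < BATCH; i++) {
            snprintf(payload[i], sizeof(payload[i]), "packet %d", i);
            iov[i].iov_base = payload[i];
            iov[i].iov_len  = strlen(payload[i]);
            msgs[i].msg_hdr.msg_iov     = &iov[i];
            msgs[i].msg_hdr.msg_iovlen  = 1;
            msgs[i].msg_hdr.msg_name    = &dst;
            msgs[i].msg_hdr.msg_namelen = sizeof(dst);
        }

        int sent = sendmmsg(fd, msgs, BATCH, 0);   /* one trap, up to 64 packets */
        printf("sent %d packets with a single system call\n", sent);
        close(fd);
        return 0;
    }

The per-packet costs (allocation, copies) are untouched here, which is exactly why netmap goes further and preallocates buffers shared between the kernel and userspace.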

Table 1 from “Software Techniques for Avoiding Hardware Virtualization Exits” [Agesen 2012]

  • Switching context to and from the hypervisor is still (somewhat) expensive.  The table above shows the “exit latency” for hardware virtualization processor extensions, which I believe Agesen et al. measured as the round-trip time to trap from guest mode to the virtual machine monitor (VMM) and return immediately to the guest.  I was surprised that the numbers are still so high for recent-generation architectures.  Worse, the authors verbally speculated that, given the plateau with the Westmere and Sandy Bridge architectures, we may not see much further reduction in exit latency in future processor generations.  To address this high latency, Agesen described a scheme in which the VMM finds clusters of instructions-that-will-exit and handles each cluster collectively using only a single exit (instead of returning to guest mode after each handled instruction).  One example of such a cluster: “Most operating systems, including Windows and Linux, [update] 64 bit PTEs using two back-to-back 32 bit writes to memory….This results in two costly exits”; Agesen’s scheme collapses these into a single exit.  The authors are also able to handle complicated cases such as control-flow logic between clustered instructions that exit.  I like a lot of work on virtual machines, but I especially liked this one for its elegance and immediate practicality.
  • JLG’s favorite work: We finally have a usable interactive SSH terminal for cellular or other high-latency Internet connections.  Winstein et al. presented long-overdue work on making it easy to interact with remote computers using a cell phone.  The authors’ key innovation was to have the client (on the phone) do local echo, so that you see what you typed without having to wait 4 seconds (or 15 seconds or more) for the dang network to echo your characters back to you.  The authors further defined a communications protocol that preserves session state even over changes in the network (IP) address, meaning that your application will no longer freeze when your phone switches from a Wi-Fi network to a cellular network.  Speaking metaphorically, this work is akin to diving into a pool of cool refreshing aloe cream after years of wading through hot and humid mosquito-laden swamps.  (As you can tell, I find high-latency networks troublesome and annoying.)  The authors themselves were surprised at how much interest the community has shown in their work: “Mosh is free software, available from http://mosh.mit.edu. It was downloaded more than 15,000 times in the first week of its release.”
  • Several papers addressed the cost savings available to clients willing to micromanage their cloud computing allocations or reservations; dynamic reduction of resources is important too.  Among the more memorable:  Ou et al. describe how to use microbenchmarks and application benchmarks to identify hardware heterogeneity (and the resulting performance variation) within the same instance type in the Amazon Elastic Compute Cloud.  “By selecting better-performing instances to complete the same task, end-users of Amazon EC2 platform can achieve up to 30% cost saving.”  (A toy version of this benchmark-then-keep-or-discard strategy is sketched below.)  I also learned from Ou’s co-author Prof. Antti Ylä-Jääski that it is permissible to chain umlauted letters; however, I’ve been trying to work out exactly how to pronounce them.  Zhu et al. challenge the assumption that more cache is always better; they show that “given the skewed popularity distribution for data accesses, significant cost savings can be obtained by scaling the caching tier under dynamic load patterns”—for example, a “4x drop in load can result in 90% savings” in the amount of cache you need to provision to handle the reduced load.
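
A crude sketch of the Ou et al. strategy: benchmark the instance you were handed and keep it only if it beats a threshold (otherwise release it and ask for another).  The probe below is just a memory-streaming loop timed with clock_gettime(); the 5 GB/s acceptance bar and the buffer size are made-up numbers, since the real decision would be driven by your own application benchmark and by EC2 pricing.

    /* Hypothetical instance-quality probe: time a simple memory-bandwidth
     * loop and report whether this instance is "fast enough" to keep. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define MB (1024 * 1024)
    #define BUF_BYTES (256 * MB)
    #define PASSES 8

    static double now_sec(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void) {
        char *buf = malloc(BUF_BYTES);
        if (!buf)
            return 1;
        memset(buf, 1, BUF_BYTES);                       /* touch every page first */

        double start = now_sec();
        volatile unsigned long sum = 0;
        for (int p = 0; p < PASSES; p++)
            for (size_t i = 0; i < BUF_BYTES; i += 64)   /* one read per cache line */
                sum += buf[i];
        double elapsed = now_sec() - start;

        double gb_per_sec = (double)PASSES * BUF_BYTES / elapsed / 1e9;
        double threshold  = 5.0;                         /* made-up acceptance bar */
        printf("streamed %.1f GB/s: %s this instance\n",
               gb_per_sec, gb_per_sec >= threshold ? "keep" : "release");
        free(buf);
        return 0;
    }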

In addition to the USENIX ATC I also attended portions of several workshops held coincident with the main conference, including HotStorage (4th workshop on hot topics in storage and file systems), HotCloud (4th workshop on hot topics in cloud computing), Cyberlaw (workshop on hot topics in cyberlaw), and WebApps (3rd conference on web application development).

At HotStorage I had the pleasure of watching Da Zheng, a student I work with at Johns Hopkins, present his latest work on “A Parallel Page Cache: IOPS and Caching for Multicore Systems”.

Since 2010 USENIX has held a “federated conferences week” where workshops like these take place in parallel with the main conference and where your admission fee to the main conference covers your admission to the workshops.  This idea works well, especially since USENIX in its modern incarnations has only a single track of talks.  Unfortunately, however, I came to feel that the “hot” workshops are no longer very hot—I was surprised at the dearth of risky, cutting-edge, groundbreaking, not-fully-fleshed-out ideas presented and the resulting lack of heated, controversial (read: interesting) discussion.  (I did not attend the full workshops, however, so it could simply be that I missed the exciting papers and the stimulating discussions.)

I also sat in on the USENIX Association’s annual membership meeting.  Attendance at this year’s conference was well down from what I remember of the USENIX conference I attended a decade ago (USENIX conferences have been running since the 1970s), and I got the sense that USENIX is scrambling to figure out how to stay relevant, funded/sponsored, attractive to potential authors, and interesting to potential attendees.  The newly-elected USENIX board asked for community feedback so I sent the following thoughts:

Hi Usenix board,

I appreciate the opportunity to participate in your annual meeting earlier today. I wasn’t even aware that the meeting was taking place until one of the board members walked around the hallways trying to get people to attend. I otherwise would have thought that the “Usenix annual meeting” was some boring event like voting for FY13 officers or something. Maybe I missed a memo? Perhaps next year you could mention the annual meeting in the email blast you send out beforehand. (When I was at ShmooCon earlier this year the conference organizer actually ran a well-attended session, as part of the conference, about how they run a conference — logistics, costs, etc. — and in that session encouraged the same kind of feedback that you all solicited during the meeting.)

One of the points I brought up in the meeting is that Usenix should place itself at the forefront of cool technology. If there is something awesome that people are using, Usenix could be one of the first to adopt it — both in terms of using the technology during a session (live simulcast, electronic audience interaction, etc.) and in terms of the actual technology demonstrations at the conference. Perhaps you could have a “release event” where people agreed to escrow their latest cool online/network/storage/cloud/whatever tools for a few months and then release them all in bulk at the conference? You could make it into a media/publicity event and give awards for the most Usenix-spirited tools.

My wife suggests that you hold regional gatherings simultaneously as part of a large distributed conference. So perhaps it’s difficult/expensive for me to fly out to WA for Usenix Security, but maybe it’d be easy for me to attend the “northeast regional Usenix Security conclave” where all the sessions from WA are simulcast to MA (or NY or DC) — e.g. hosted at a university somewhere. Even better would be to have a few speakers present their work *here* and have that simulcast to the main conference in WA. [Some USENIX members] would complain about this approach, but think of all the news lately about the massive online university courses — perhaps Usenix could be a leader/trendsetter/definer in massive online conferences.

If I can figure out how, I’d like to be involved in bridging the gap between the “rigorous” academic conference community and the “practical” community conference community. I’m giving a talk next month at a community conference, so I will try to lay the groundwork there by describing to the audience what conferences like Usenix are like. You know, YOU could try to speak at these conferences — e.g. [the USENIX president] could arrange to give an invited talk at OScon, talking about how ordinary hacker/programmer types can submit interesting work to Usenix and eventually see their name at the top of a real conference paper.

Anyway, I appreciate the challenges you face in staying relevant in the face of change. I agree with the lady from the back of the room who expressed that it doesn’t matter what Usenix was 25 years ago, what’s important is what Usenix is now and in the next few years. Good luck!

Within hours the USENIX president replied:

Thanks for your thoughtful mail and especially your desire to be part of the solution — that’s what we love about our members!

I think we got a lot of good feedback last night and there are some great ideas here — the idea of simulcasting to specific remote locales where we have regional events is intriguing to me. You’ve given the board and the staff much to think about and I hope we don’t let you down!

February 14, 2012

NDSS 2012

Filed under: Reviews — JLG @ 1:03 AM

Last week I attended the Network and Distributed System Security (NDSS) Symposium in San Diego, California.  NDSS is generally considered one of the top-tier academic security conferences.  This was my second year in a row attending the symposium.

Things I learned or especially enjoyed are in boldface text below (you may safely skip the non-boldfaced text in the list):

  1. What you publicly “Like” on Facebook makes it easy for someone to profile probable values for attributes you didn’t make public, including gender, relationship status, and age.  For example, as described by Chaabane et al., “single users share more interests than married ones.  In particular, a single user has an average of 9 music interests whereas a married user has only 5.79.”
  2. Femtocells could represent a major vector for attacking or disrupting cellular infrastructure.  I’ve used femtocells; they are great for having strong conversations in weak coverage areas, but to quote the conclusion of Golde et al.: “Deployed 3G femtocells already outnumber traditional 3G base stations globally, and their deployment is increasing rapidly. However, the security of these low-cost devices and the overall architecture seems poorly implemented in practice. They are inherently trusted, able to monitor and modify all communication passing through them, and with an ability to contact other femtocells through the VPN network…[However, it is well known that it is possible to get root level access to these devices.  We] evaluated and demonstrated attacks originating from a rogue femtocell and their impact on endusers and mobile operators. It is not only possible to intercept and modify mobile communication but also completely impersonate subscribers. Additionally, using the provided access to the operator network, we could leverage these attacks to a global scale, affect the network availability, and take control of a part of the femtocell infrastructure…We believe that attacks specifically targeting end-users are a major problem and almost impossible to mitigate by operators due to the nature of the current femtocell architecture. The only solution towards attacks against end-users would be to not treat the femtocell as a trusted device and rely on end-to-end encryption between the phone and the operator network. However, due to the nature of the 3G architecture and protocols and the large amount of required changes, it is probably not a practical solution.”
  3. Appending padding bytes onto data, before encrypting the data, can be dangerous.  “Padding” is non-useful data appended to a message simply to ensure a minimum-length message or to ensure the message ends on a byte-multiple boundary.  (Some encryption functions require such precisely sized input.)  As described by AlFardan and Paterson: “the padding oracle attack…exploits the MAC-then-Pad-then-Encrypt construction used by TLS and makes use of subtle timing differences that may arise in the cryptographic processing carried out during decryption, in order to glean information about the correctness or otherwise of the plaintext format underlying a target ciphertext.  Specifically, Canvel et al. used timing of encrypted TLS error messages in order to distinguish whether the padding occurring at the end of the plaintext was correctly formatted according to the TLS standard or not.  Using Vaudenay’s ideas, this padding oracle information can be leveraged to build a full plaintext recovery attack.”  AlFardan and Paterson’s paper described a padding oracle attack against two implementations of the DTLS (datagram transport layer security) protocol, resulting in full or partial plaintext recovery.  (A toy illustration of the kind of data-dependent padding check that creates such a timing oracle appears after this list.)
  4. If you want people to click on your malicious link, keep it short and simple.  Onarlioglu et al. presented the results of their user-behavior study that showed, among other results: “When the participants did not have the technical knowledge to make an informed decision for a test and had to rely on their intuition, a very common trend was to make a guess based on the ‘size’, the ‘length’, or the ‘complexity’ of the artifacts involved. For example, a benign Amazon link was labeled as malicious by non-technical participants on the basis that the URL contained a crowded parameter string. Some of the comments included: ‘Too long and complicated.’, ‘It consists of many numbers.’, ‘It has lots of funny letters.’ and ‘It has a very long name and also has some unknown code in it.’. Many of these participants later said they would instead follow a malicious PayPal phishing URL because ‘It is simple.’, ‘Easy to read.’, ‘Clear obvious link.’ and it has a ‘Short address’. One participant made a direct comparison between the two links: ‘This is not dangerous, address is clear. [Amazon link] was dangerous because it was not like this.’. Interestingly, in some cases, the non-technical participants managed to avert attacks thanks to this strategy. For example, a number of participants concluded that a Facebook post containing a code injection attack was dangerous solely on the grounds that the link was ‘long’ and ‘confusing’…the majority of the non-techie group was not aware of the fact that a shortened URL could link to any destination on the web. Rather, they thought that TinyURL was the website that actually hosted the content.”
  5. There needs to be more transparency into how lawful telephone interception systems are constructed and deployed.  At CCS two years ago, Sherr et al. presented a paper describing a control-plane DoS attack on CALEA (Communications Assistance for Law Enforcement Act) systems; here Bates et al. propose a cryptography-based forensics engine for audit and accounting of CALEA systems.  As described by Bates et al.: “The inability to properly study deployed wiretap systems gives an advantage to those who wish to circumvent them; those who intend to illegally subvert a surveillance system are not usually constrained by the laws governing access to the wiretaps. Indeed, the limited amount of research that has looked at wiretap systems and standards has shown that existing wiretaps are vulnerable to unilateral countermeasures by the target of the wiretap, resulting in incorrect call records and/or omissions in audio recordings.”  Given the amount of light that people are shining on other infrastructure-critical systems such as smart meters and SCADA control systems, perhaps the time is ripe for giving lawful-intercept and monitoring systems the same treatment.
  6. There are still cute ideas in hardware virtualization.  Sun et al. presented work (that followed the nifty Lockdown work by Vasudevan et al. at Carnegie Mellon, of which I was previously unaware) on using the ACPI S3 sleep mode as a BIOS-assisted method for switching between “running” OSes.  The idea is that when you want to switch from your “general web surfing” OS to your “bank access” OS, you simply suspend the first OS (to the S3 state) and then wake the second OS (from its sleep state).  Lockdown did the switch in 20 seconds using the S4 sleep mode; Sun et al.’s work on SecureSwitch does the switch in 6 seconds using the S3 sleep mode but requires some hardware modifications.  Given my interest in hardware virtualization, I particularly enjoyed learning about these two projects.  I also liked the three other systems-security papers presented in the same session: Lin et al. presented forensics work on discovering data structures in unmapped memory; El Defrawy et al. presented work on modifying low-end microcontrollers to provide inexpensive roots of trust for embedded systems; and Tian et al. presented a scheme for one virtual machine to continuously monitor another VM’s heap for evidence of buffer overflow.
  7. Defensive systems that take active responses, such as the OSPF “fight-back” mechanism, can introduce new vulnerabilities as a result of these responses.  In some of my favorite work from the conference, Nakibly et al. described a new “Disguised LSA” attack against the Open Shortest Path First (OSPF) interior gateway protocol.  The authors first describe the OSPF “fight-back” mechanism: “Once a router receives an instance of its own LSA [link state advertisement] which is newer than the last instance it originated, it immediately advertises a newer instance of the LSA which cancels out the false one.”  However, “[The] OSPF spec states that two instances of an LSA [link state advertisement] are considered identical if they have the same values in the following three fields: Sequence Number, Checksum, and Age…all three relevant fields [are] predictable.”  In the Disguised LSA attack the authors first send a forged LSA (purportedly from a victim router) with sequence number N (call this “LSA-A”), then one second later send another forged LSA with sequence number N+1 (LSA-B).  When the victim router receives LSA-A it will fight back by sending a new LSA with sequence number N+1 (LSA-C).  But when the victim receives LSA-B it will ignore it as being a duplicate of LSA-C.  Meanwhile, any router that receives LSA-B before LSA-C will install (the attacker’s) LSA-B and discard (the victim’s) LSA-C as a duplicate.  Not all routers in an area will be poisoned by LSA-B, but the authors’ simulation suggests that 90% or more routers in an AS could be poisoned.  In other disrupting-networks work, Schuchard et al. presented a short paper on how an adversary can send legitimate but oddly-formed BGP messages to cause routers in an arbitrary network location to fall into one of a “variety of failure modes, ranging from severe performance degradation to the unrecoverable failure of all active routing sessions”; and Jiang et al. demonstrated that “a vulnerability affecting the large majority of popular DNS implementations which allows a malicious domain name [such as those used in] malicious activities such as phishing, malware propagation, and botnet command and control [to] stay resolvable long after it has been removed from the upper level servers”, even after the TTL for the domain name expires in DNS caches.
  8. Hot topic 1:  Three papers discussed negative aspects of location privacy in cellular networks.  Kune et al. describe both an attack to determine the TMSI (temporary mobile subscriber identity) assigned to a telephone number in a GSM network, and a technique for monitoring PCCH (paging channel) traffic from a particular cell tower to determine if the subscriber is in the vicinity of (within a few kilometers of) that tower.  Bindschaedler et al. show empirically that recent research on “mix zones” — geographic areas in which users can mix or change their device identifiers such as IP and MAC addresses to hide their movement and ongoing communications — is not yet effective as a privacy preservation mechanism for cellular users.  Finally, in the words of Qian et al.: “An important class of attacks against cellular network infrastructures, i.e., signaling DoS attack, paging channel overload, and channel exhaustion attack, operates by sending low rate data traffic to a large number of mobile devices at a particular location to exhaust bottleneck resources…[We demonstrate] how to create a hit-list of reachable mobile IP addresses associated with the target location to facilitate such targeted DoS attacks.”  Of particular interest: “We show that 80% of the devices keep their device IPs for more than 4 hours, leaving ample time for attack reconnaissance” and that often on UMTS networks in large U.S. cities an attacker could “locate enough IPs to impose 2.5 to 3.5 times the normal load on the network.”
  9. Hot topic 2:  Three papers discussed privacy preservation in cloud-based searching.  Chen et al. presented an interesting architecture where a private cloud and a public cloud are used together to perform a search over sensitive DNA information: “Inspired by the famous “seed-and-extend” method, our approach strategically splits a mapping task: the public cloud seeks exact matches between the keyed hash values of short read substrings (called seeds) and those of reference sequences to roughly position reads on the genome; the private cloud extends the seeds from these positions to find right alignments. Our novel seed-combination technique further moves most workload of this task to the public cloud. The new approach is found to work effectively against known inference attacks, and also easily scale to millions of reads.”  Lu, in addition to having the best opening to a Related Work section that I’ve ever read — “This section overviews related work; it can be skipped with no lack of continuity” — demonstrates “how to build a system that supports logarithmic search over encrypted data.”  This system “would allow a database owner to outsource its encrypted database to a cloud server. The owner would retain control over what records can be queried and by whom, by granting each authorized user a search token and a decryption key. A user would then present this token to cloud server who would use it to find encrypted matching records, while learning nothing else. A user could then use its owner-issued decryption key to learn the actual matching records.”  Finally, Stefanov et al. presented sort-of-cloud-related work on optimizing “Oblivious RAM”: “The goal of O-RAM is to completely hide the data access pattern (which blocks were read/written) from the server. In other words, each data read or write request will generate a completely random sequence of data accesses from the server’s perspective.”
  10. Hot topic 3: Five papers discussed smartphone and/or app insecurity.  In work that had my jaw hitting the floor regarding the security design of production apps, Schrittwieser et al. “analyze nine popular mobile messaging and VoIP applications and evaluate their security models with a focus on authentication mechanisms. We find that a majority of the examined applications use the user’s phone number as a unique token to identify accounts, which further encumbers the implementation of security barriers. Finally, experimental results show that major security flaws exist in most of the tested applications, allowing attackers to hijack accounts, spoof sender-IDs or enumerate subscribers.”  Davi et al. described a control-flow integrity (CFI) checker for smartphones with ARM processors, one that enforces “the basic safety property that the control-flow of a program follows only the legitimate paths determined in advance. If an adversary hijacks the control-flow, CFI enforcement can detect this divagation and prevent the attack.”  Zhou et al. analyzed the prevalence of malware in five Android app markets, including the official market and four popular alternative markets.  Two papers (Bugiel et al. and Grace et al.) addressed privilege-escalation problems in Android, where malicious applications are able to gain unapproved privileges either (Bugiel et al.) by colluding with other differently-privileged applications or (Grace et al.) by invoking APIs unexpectedly exported by the Android framework.  The presenter for the latter paper showed a video of a malicious application sending an SMS message and rebooting(!) the phone, both without holding any user-granted permissions.
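
Following up on the padding oracle item (#3) above, here is a toy illustration, not TLS itself, of why implementations leak: a CBC-style padding check that bails out at the first bad byte takes a data-dependent amount of time, and the second variant shows the usual repair of examining every byte and accumulating a flag (real implementations go further to keep the comparisons themselves branch-free, and must also make the subsequent MAC check uniform-time).  The padding convention here (last byte p means the final p bytes all equal p) is a simplification of what TLS actually does.

    #include <stddef.h>
    #include <stdio.h>

    /* Leaky check: returns as soon as it finds a bad padding byte, so the
     * execution time depends on how much of the padding is well-formed. */
    static int check_padding_leaky(const unsigned char *buf, size_t len) {
        if (len == 0)
            return 0;
        unsigned char pad = buf[len - 1];
        if (pad == 0 || pad > len)
            return 0;
        for (size_t i = 1; i < pad; i++)
            if (buf[len - 1 - i] != pad)
                return 0;                      /* early exit = timing signal */
        return 1;
    }

    /* More uniform variant: always examines every byte and accumulates a
     * "bad" flag, so valid and invalid padding take about the same time. */
    static int check_padding_ct(const unsigned char *buf, size_t len) {
        if (len == 0)
            return 0;
        unsigned char pad = buf[len - 1];
        unsigned char bad = (pad == 0) | (pad > len);
        for (size_t i = 1; i < len; i++) {
            unsigned char in_pad = (i < pad);          /* must this byte equal pad? */
            bad |= in_pad & (buf[len - 1 - i] != pad);
        }
        return !bad;
    }

    int main(void) {
        unsigned char good[8] = { 'd', 'a', 't', 'a', 4, 4, 4, 4 };
        unsigned char bad[8]  = { 'd', 'a', 't', 'a', 4, 9, 4, 4 };
        printf("good: %d %d\n", check_padding_leaky(good, 8), check_padding_ct(good, 8));
        printf("bad:  %d %d\n", check_padding_leaky(bad, 8),  check_padding_ct(bad, 8));
        return 0;
    }

This is exactly the kind of subtle, data-dependent processing difference that AlFardan and Paterson exploited against the DTLS implementations in their paper.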

There were three keynote speakers, whose messages were: (1) you’re a security professional so you need to be involved in publicly advocating one way or the other on security-related social issues; (2) the future [30 years ahead] will be a strange and interesting place and, since you’re a security researcher, you’ll help us get there; and (3) passwords are passé; if you’re not using two- (or more-)factor authentication then you’re not a good security practitioner.

I like NDSS because of the practical nature of the (mostly academic) work presented:  Much of the work feels innovative enough to advance the science of security, yet relevant and practical enough to immediately be integrated as useful extensions to existing commercial products.  Only 17% of submitted manuscripts were accepted for publication, so the quality of work presented was good.  Unfortunately, attendance is low — someone told me there were 210 people there, but I never heard the official count — so it is not as good a see-old-friends-and-make-new-ones event as, say, CCS.

February 4, 2012

Jagged Technology debrief

Filed under: Reviews — JLG @ 6:41 PM

Jagged Technology, LLC — tagline “We may be rough, but we’re sharp.” — was the name of the (first) company that I founded.

The website read:  “Jagged Technology executes R&D and provides consulting services in the fields of computer science, information processing, and computer systems software and hardware engineering.  Our areas of expertise include storage systems, security architectures, network components, and virtualization and operating systems.  Please contact [us] to discuss teaming, collaboration, sales, and employment opportunities.”

The company existed from late 2007 through early 2009.  My business plan was to pursue Small Business Innovation Research (SBIR) contract awards, following in the footsteps of the successful group I had previously worked for.  (That group had been part of a small government contracting business that had won numerous SBIR awards, but by the time I worked for them they had been acquired by a larger company and were no longer eligible to pursue SBIRs.)

I was wildly unsuccessful — other than in fulfilling my dream to found a business and work for myself — but I enjoyed every minute of it.  There were a few things I did right:

  • I earned revenue!  Not very much, and I certainly came nowhere near turning a profit, but I was able to score some consulting work (evaluating storage system technologies and formulating research and product objectives) meaning that it wasn’t entirely a pipe dream.
  • I had great mentors.  Before starting Jagged I had the privilege to work with and learn from Dr. Tiffany Frazier and Dr. Chuck Morefield.  Both were successful government-contracting entrepreneurs; Tiffany inspired me to dream big; Chuck took the time to sit down with me to describe what had worked for him when he took the plunge.  While running Jagged I relied on Dr. Greg Shannon’s advice and assistance in creating winnable (if not winning) SBIR proposals and partnerships.
  • I had good ideas.  After I dissolved the company and started working at my next job, I talked with one of the program managers on the phone (about the next round of SBIR solicitations).  The PM recognized my name and mentioned that he’d really liked what I submitted during the previous round.  (I managed not to scream “so why didn’t you fund it?!?”)

Of course, there were many things I did wrong, including:

  • Not having a contract in hand before starting the company.  Nearly every successful government contractor I’ve talked with has said that they already had a contract lined up before filing the paperwork.  Doing so is especially important because it takes forever (often 9 months or longer) between when you first submit a proposal and when you start getting paid for the work.  I assumed that I’d be able to quickly win one or more contracts and that business would grow rapidly after that.
  • Not having a reputation or past performance with the customers I targeted.  The ratio of submitted proposals to winning proposals was around 30:2 on many of the programs to which I proposed.  Often the program manager was already working with a contractor to develop the SBIR topic and announcement (so one of the two awards was likely already earmarked), which meant that in essence I was competing against 29 other companies — many of whom were actual established companies with existing products, intellectual property, and a history of contract performance.  And if the PM had actually worked with any of the other 29 companies on a previous program, then my already unfavorable odds got significantly worse.
  • Overemphasizing the technical aspect of a proposal instead of all evaluated aspects.  SBIRs are evaluated equally on technical merit, feasibility of successful execution, and commercialization strategy.  My technical writeups were strong but the other two components (67% of the evaluation criteria) were usually weak; I typically listed only one or two people (me and a technical colleague) as lined up to execute the work and only one or two ideas on how I would partner with larger companies to commercialize the work.  My early proposals were especially unbalanced since I entered entrepreneurship with the idea that if I proposed a superior mousetrap then the money would beat a path to my door.  It would have made a significant difference in my chances for success if I had had experienced partners as co-founders.
  • Focusing my business plan on the SBIR program instead of building a diverse revenue stream.  It is hard for me today not to be severely disappointed in the SBIR program.  I poured significant time and money and opportunity into trying to come up with solutions to the government’s and military’s most urgent technological challenges; when I finally received a rejection notice (usually 9 to 12 months later) and requested my legally required debrief, typically all I would get was a one-line memo saying that the evaluators “didn’t like my commercialization plan.”  Well, what didn’t they like and how can I do better next time?  Anyway, since I didn’t have a contract in hand at the beginning, in retrospect I would have been better served by focusing on commercially salable technology development — and sales — and using the SBIR program as a way to fund development of related but noncritical offshoot ideas.
  • Not having opened a new cellular telephone account for business purposes.  To this day I get people calling my personal cell phone to try to sell widgets or web hosting to Jagged Technology.  On the plus side, registering your home address for a small business gets you signed up for all sorts of fascinating catalogs for boxes and shipping containers, plastic parts, and scientific glassware — I do enjoy receiving these catalogs, and it’s good to know that if I need a pallet of hazardous materials placards I know exactly whom to call.

The best thing to come out of having started Jagged Technology is a better understanding of how business works — for example, the complexity of balancing your product development schedule and projected contract or sales awards with the available resources (time, people, money) on hand to pursue the work; or the need to work in advance to establish a business area (for example by feeding ideas to program managers, and by visibly contributing to related product or community efforts) instead of simply responding to opportunities; or why a CEO spends his or her days talking with the chief financial officer, chief marketing officer, and chief operating officer instead of spending those days talking with project leads and engineers and salespeople.  It’s not all about better mousetraps.

Just as I feel that I’m a better car driver after having taken the Motorcycle Safety Foundation RiderCourse, I feel as though I’m a better researcher, mentor, and employee today thanks to having immersed myself in founding Jagged Technology and having worked hard for eighteen months to make it successful.

And here’s hoping to more success (or at least more revenue) next time!
