Jagged Thoughts | Dr. John Linwood Griffin

July 9, 2008

Cyber security conference

Filed under: Reviews — JLG @ 11:48 PM

In June 2008 I attended a “Cyber Security Conference” in Arlington, Virginia.  The format was two days of invited 35-minute presentations by big names in the government and government-contractor space.  I only attended day two, so I missed half the discussion.  Here are some of the major themes from today’s twelve speakers:

  • Targeted phishing (a.k.a. “spear phishing” or “whaling”—can we as a community agree to stop coming up with terrible nouns like these?) was mentioned more often by more people than any other cyber security problem.  Targeted phishing is a social engineering attack where someone learns enough about you (or your work environment) to send you a custom-made email.  One example involved a newly-promoted CFO, where the evildoers read about the CFO’s promotion in a newspaper and sent an email from “HR” asking (successfully) for personal information, passwords, etc., in order to set up the new executive’s computer account.  Four of the speakers mentioned phishing as one of the top problems they are facing on corporate and government networks…
  • …which reminds me how two speakers complained that spending/effort on cyber security is not well-balanced among the actual risks.  Joshua Corman of IBM phrased it nicely by pointing out that cyber attacks merely for the sake of attacking (“prestige” attacks) ended in 2004; attacks since then appear to have been driven either by financial (“profit”) or, more recently, activist (“political”) motives.  The problem is that the bulk of cyber security efforts/dollars are going to thwart attackers that are easy to identify (worms, spam) leaving us exposed to more discreet attackers.  (Of course, nobody had a ready solution for how to identify and thwart these discreet attackers—a discrete problem.)
  • However, two speakers independently mentioned anomaly detection as an it-continues-to-be-promising approach to cyber security, while acknowledging that the false positive problem continues to plague real-world systems.  One of the core problems I’d like to see studied involves the characterization of real-world network traffic (especially in military environments).  Specifically, for how long after training does an anomaly detection model remain valid in an operational system: seconds? hours? weeks?
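For concreteness, here’s a toy sketch (mine, not any speaker’s) of the kind of model that goes stale: a z-score detector trained once on a baseline window of traffic rates.  Once operational traffic drifts away from the training distribution, perfectly normal values start tripping the threshold—the false-positive problem in miniature, and exactly why the “how long does training remain valid?” question matters.

```python
import statistics

class ZScoreDetector:
    """Toy anomaly detector: flags values far from the training mean."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.mean = None
        self.stdev = None

    def train(self, baseline):
        # Learn the "normal" distribution from a baseline window.
        self.mean = statistics.mean(baseline)
        self.stdev = statistics.pstdev(baseline) or 1.0

    def is_anomalous(self, value):
        # Flag anything more than `threshold` standard deviations out.
        return abs(value - self.mean) / self.stdev > self.threshold

# Train on "normal" traffic rates (say, requests/sec at training time).
detector = ZScoreDetector(threshold=3.0)
detector.train([100, 102, 98, 101, 99, 100, 103, 97])

print(detector.is_anomalous(101))   # typical value -> False
print(detector.is_anomalous(250))   # sudden spike  -> True
```

If the network’s baseline rate legitimately shifts to 250 a week after training, every observation becomes an “anomaly” until the model is retrained—hence my question about whether a trained model stays valid for seconds, hours, or weeks.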

Two talks I really enjoyed were from Boeing and Lockheed Martin, in which a speaker from each talked about the organization and internal defense strategy (applied cyber security?) of his corporate network.  I appreciate when companies are willing to share these kinds of operational details to make researchers’ jobs easier: storage companies take note!  Unfortunately the talks were light on details but provided some interesting insight on email defense (#1: Outlook helpfully hides the domain name, aiding a phisher’s task, so write filters to block addresses like “jaggedtechno1ogy.com” at the corporate mail server; #2: much spam and many phishing attacks come from newly-created domains, so write filters for this too—I’ve mentioned previously that we should perhaps tolerate some inconvenience for the sake of computer defense, and these are good examples of that).  Two questions I’d like someone to answer:

  1. How can we coax corporate network managers to be willing to evaluate active response systems (e.g., attack the attacker) on production networks?  It is probably much easier to do there (legally) than on government networks.
  2. When will corporate networks deploy the security support services (admission control, identity verification, key management) that allow application programmers to focus on their core competencies instead of being security experts?  C’mon, folks, it’s 2008.
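(For what it’s worth, here’s a toy Python sketch of the two mail-server filters described above.  The function names, the homoglyph table, and the 30-day cutoff are my own inventions; a real deployment would pull the registration date from WHOIS rather than take it as a parameter.)

```python
from datetime import date

# Homoglyph substitutions an attacker might use ("1" for "l", "0" for "o", ...).
HOMOGLYPHS = str.maketrans({"1": "l", "0": "o", "5": "s", "3": "e"})

def looks_like(domain, trusted_domains):
    """Filter #1: True if `domain` is a lookalike of a trusted domain
    but is not actually the trusted domain itself."""
    normalized = domain.lower().translate(HOMOGLYPHS)
    return any(normalized == t and domain.lower() != t for t in trusted_domains)

def too_new(registered_on, today, min_age_days=30):
    """Filter #2: True if the sending domain was registered too recently."""
    return (today - registered_on).days < min_age_days

print(looks_like("jaggedtechno1ogy.com", ["jaggedtechnology.com"]))  # True: block it
print(looks_like("jaggedtechnology.com", ["jaggedtechnology.com"]))  # False: the real domain
print(too_new(date(2008, 6, 20), date(2008, 7, 9)))                  # True: 19 days old
```

Both filters will cost some convenience—legitimate mail from a brand-new vendor domain gets held, for instance—but that’s the tolerable-inconvenience trade-off I mentioned above.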


Three people have mentioned that question #1 is unlikely to have an answer:

What are the corresponding real-world analogies?  When is it legal for me, personally, to respond to a physical threat?  Only when there is serious threat of harm to myself or someone else (or, in some states, my property). Otherwise, call the police (or the military). I doubt cyber-society will act much differently. But, this does beg the question of where the cyberpolice and cyber-DoD are!

And everyone agrees that question #2 needs to happen, like, yesterday:

I think that the best answer as to why it hasn’t happened is related to cost. And, in this case, cost is directly related to usability for the sysadmins. If they can do username / password and be done with it, then they will. And they will only move to other measures if/when they are required to (e.g., corporate policy, liability concerns, etc). However, if one could find a way to overlay this security goodness onto an existing network in a way that is no harder (and perhaps even easier) than username / passwords, then they might want to do it. Esp if this overlay then allowed for a tangible benefit in terms of increased security of everything else.

Thanks, Greg and Bryan.
