Jagged Thoughts | Dr. John Linwood Griffin

August 30, 2012

High-sodium passwords

Filed under: Opinions,Work — JLG @ 12:00 AM

Recently I’ve had some interesting conversations about passwords and password policies.

In general I despise password policies, or at least I despise the silly requirements made by most policies.  As I wrote in TCS’s recent Better Passwords, Usable Security white paper, “Why do you require your users’ passwords to look as though somebody sneezed on their keyboard? … Is your organization really better protected if you require your users to memorize a new 14-character password every two months? I argue no!”

In the BPUS white paper — which is behind a paywall, and I understand how that means it’s unlikely you’ll ever read it — I argue for three counterintuitive points:

  1. Password policies should serve your users’ needs, not vice versa.
  2. Passwords shouldn’t be your sole means of protection.
  3. Simpler passwords can be better than complex ones.

Beyond these points, it is also important to implement good mechanisms for storing and checking passwords.

Storing passwords: In June of this year there was a flurry of news articles about password leaks, including leaks at LinkedIn, eHarmony, and Last.fm.  The LinkedIn leak was especially bad because they didn’t “salt” their stored password hashes.  Salting works as follows:

  • An authentication system typically stores hashes of passwords, not cleartext passwords themselves.  Storing the hash originally made it hard for someone who stole the “password file” to actually obtain the passwords.
  • When you type in your password, the authentication system first takes a hash of what you typed in, then compares the hash with what’s stored in the password file.  If your hash matches the stored hash, you get access.
  • But attackers aren’t dumb.  An attacker can create (or obtain) a “rainbow table” containing reverse mappings of hash value to password.  For example, the SHA-1 hash of “Peter” is “64ca93f83bb29b51d8cbd6f3e6a8daff2e08d3ec”.  A rainbow table would map “64ca93f83bb29b51d8cbd6f3e6a8daff2e08d3ec” back to “Peter”.
  • Salt can foil this attack.  Salt is a string of random characters appended to a password before the hash is taken.  So, using the salt “89h29348U#^^928h35”, your password “Peter” would be automatically extended to “Peter89h29348U#^^928h35”, which hashes to “b2d58c2785ada702df68d32744811b1cfccc5f2f”.  For a large, truly random salt, it is unlikely that a rainbow table already exists for that salt — taking the reverse-mapping option off the table for the attacker.
  • Each user is assigned a different set of random characters for generating the salted hash, and these would be stored somewhere in your authentication system.  Nathan’s set of random characters would be different from Aaron’s.
  • A big win of salt is that it provides compromise independence.  Even if an attacker has both the password/hash file and the list of salts for each user, the attacker still has to run a brute-force attack against every cleartext password he wants to obtain.
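The store-and-check flow above can be sketched in a few lines of Python.  This is a minimal illustration of the idea, not production code (the helper names are my own, and SHA-1 is used only to match the article’s example — a dedicated password-hashing function is a better choice, as discussed below):

```python
import hashlib
import secrets

def make_salt(length=16):
    # A large, truly random per-user salt defeats precomputed rainbow tables.
    return secrets.token_hex(length)

def salted_hash(password, salt):
    # Append the salt to the password before hashing, as described above.
    return hashlib.sha1((password + salt).encode()).hexdigest()

# Each user gets a different salt; the system stores (salt, hash),
# never the cleartext password.
salt = make_salt()
stored = salted_hash("Peter", salt)

def check_password(attempt, salt, stored):
    # Hash the attempt with the same salt and compare against the stored hash.
    return secrets.compare_digest(salted_hash(attempt, salt), stored)
```

Because Nathan’s salt differs from Aaron’s, identical passwords produce different stored hashes, and a single precomputed table no longer cracks every account at once.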

If you don’t salt your passwords, then anyone who gets access to the leaked file can likely reverse many of the passwords very easily.  This password recovery is an especially serious problem since many users reuse passwords across sites (I admit that I used to do this on certain sites until fairly recently).

Checking passwords: But it turns out that salt may no longer be solving the world’s password woes.  A colleague sent me a link to a post-LeakedIn interview arguing that cryptographic hashes are passé.  At first I felt that the interviewee was blowing smoke, and wrote the following observations to my colleague:

He confuses the notion of “strong salt” with “strong hash”.

(A) strong salt: you add a lot of random characters to your password before hashing…as a result the attacker has to run a brute force attack against the hash for a looooong time (many * small effort) in order to crack the password.

(B) strong hash: you use a computationally-intensive function to compute the hash…as a result the attacker has to run a brute force attack against the hash for a looooong time (few * large effort) in order to crack the password.

In both cases you get the desirable “looooong time” property.  You can also combine (A) and (B) for an even looooonger time (and in general looooonger is better, though looooong is often long enough).

There can be some problems with approach (B).  The biggest is non-portability of the hash (SHA-1 is supported by pretty much everything; bcrypt isn’t necessarily).  Another is the potential for remote denial-of-service attacks against the authentication system: it carries a much higher workload because of the stronger hash algorithm, and if you’re LinkedIn you have to process a lot of authentications per second.
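Approach (B) can be illustrated with PBKDF2, a computationally-intensive hash available in the Python standard library.  The iteration count below is illustrative, not a recommendation — you would tune it to your hardware and authentication load:

```python
import hashlib
import os
import time

salt = os.urandom(16)
password = b"Peter"

# A single SHA-1 hash: the attacker's per-guess cost is tiny.
fast = hashlib.sha1(password + salt).hexdigest()

# PBKDF2 applies the underlying hash many times; each guess now costs
# `iterations` times as much work, for attacker and defender alike.
iterations = 100_000
start = time.perf_counter()
slow = hashlib.pbkdf2_hmac("sha1", password, salt, iterations)
elapsed = time.perf_counter() - start

# The authentication server pays this cost on every login attempt,
# which is the denial-of-service concern noted above.
print(f"one strong hash took {elapsed:.3f}s")
```

The defender pays the “large effort” once per login; the attacker pays it once per guess, which is the whole point of a strong hash.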

Conclusion: The problem with LinkedIn was the lack of salt on its stored password hashes.

But I kept thinking about that article, and related posts, and eventually had to eat some of my words (though not all of them).  Looooong is often not long enough.

The best discussion I found on the topic was in the RISKS digest, especially this post and its two particularly interesting references.

My point (A) above may be becoming increasingly less valid due to the massive increases in cracking speed made possible by running crackers directly on GPUs.  Basically, a salted password only evades brute-force attacks if the salt is large and strong enough, and that raises the concern that some people aren’t using salts that meet the bar.
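A back-of-envelope calculation shows why GPU speeds change the picture.  The hash rate below is an assumed ballpark figure for illustration, not a benchmark:

```python
# Rough illustration of why fast hashes fall to GPU brute force.
# The hash rate is an assumed ballpark figure, not a measurement.
gpu_hashes_per_second = 2_000_000_000  # assumed: ~2 billion SHA-1/sec on one GPU

# An 8-character password drawn from 62 alphanumeric characters:
keyspace = 62 ** 8

seconds = keyspace / gpu_hashes_per_second
days = seconds / 86_400
print(f"exhausting the keyspace takes about {days:.1f} days")

# With a strong hash costing ~100,000x more per guess, the same
# exhaustive search would take on the order of centuries instead.
```

Salt stops precomputation, but it does not slow down a live brute-force run like this one; only a stronger hash (or a longer password) does that.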

Beyond salting there are always more ways to increase the security of a password-based authentication system.  Instead of a stronger hash, you could require users to type 20-character passwords, or you could require two passwords, etc.

But back to my original point: longer or more complex passwords aren’t always the best choice.  That is especially the case when you have two-factor authentication (or other protection mechanisms) — as long as you use the two factors everywhere.  (For example, one company I recently talked with deployed a strong two-factor authentication system for VPN connections but mistakenly left single-factor password authentication enabled on their publicly accessible webmail server.)

August 6, 2012

Why you should consider graduate school

Filed under: Opinions — JLG @ 11:43 PM

Are you interested in graduate school?  Here’s an hour’s worth of reasons you should consider going:

I gave this talk, “Why You Shouldn’t Write Off Higher Education, Young Grasshopper,” at the H.O.P.E. (Hackers On Planet Earth) Number 9 conference in New York City on July 13, 2012.

My abstract was:

This talk is addressed to that kid in the back who’s wearing a Utilikilt and a black t-shirt that says “I Hack Charities,” who asks, “Why would I bother going to grad school? I’m self-taught, college was a waste of my time, and universities only exist to train wage slaves.” John will draw from personal experience to describe how in graduate school:

1. You get to do what you love.
2. You get to make large structured contributions to the community.
3. You experience personal growth while surrounded by amazing people.
4. You’re part of a meritocracy and a close-knit social circle.
5. The door is open for interesting opportunities afterward.

Included will be a discussion on how hackers can get in.

This talk is one of a series of talks I’ve given about the post-secondary experience, especially as it relates to computer engineering and related disciplines:

  • Life after high school.  Since 1994 I’ve annually visited my Dad’s high school mathematics classes in Alabama to talk with his students about what job opportunities, college opportunities, and travel opportunities are available in the years to come.  I’ve also spoken with middle school students in Maryland and elementary school students in Pennsylvania.
  • Why you shouldn’t write off higher education, young grasshopper.  A talk oriented towards hackers but applicable to anyone considering graduate school (masters or doctoral level), especially in a technical field.  First delivered in 2012.
  • Through the looking glass: What’s next after a systems Ph.D.  A talk for doctoral students who are curious about what general opportunities are available in the years to come.  I’ve given this talk at Carnegie Mellon, at Johns Hopkins, and at the University of North Carolina.  First delivered in 2004.  (See also my computer systems Ph.D. job search page.)
  • What’s next after a systems Ph.D.: A six-month retrospective on corporate research.  A surprisingly bitter talk for doctoral students who are curious about jobs in corporate research laboratories (they’re great jobs but are in many ways not what I expected…don’t let this talk convince you not to take a corporate job).  I’ve given this talk at Carnegie Mellon.  First delivered in 2005.

If you are interested in having me talk with your students (or friends, grandnieces, etc.) on any of these (or related) topics, you are very welcome to contact me.  See my contact information on my home page.

Mad props to Brendan for (a) convincing me to submit my name to the H.O.P.E. speaker committee in the first place, (b) rewriting my abstract so it would be interesting to the speaker committee, and (c) helping shape my talk so it would be interesting to the H.O.P.E. attendees.  The talk was well attended; I think I provided some valuable information to some interested folks, and a great group of them came up to talk one-on-one with me in the Q&A session after the talk.

Thanks also to the many people who helped me prepare for H.O.P.E. by talking with me about their own perspectives on graduate school, especially Steve Bellovin, Randal Burns, Angelos Keromytis, Fabian Monrose, Drew Morin, Margo Seltzer, Andreas Terzis, and the anonymous students and industry colleagues who shared their experiences and/or plans.  I also benefited greatly from reading Mor Harchol-Balter’s advice on applying to Ph.D. programs in computer science.

August 3, 2012

Black Hat USA 2012 and DEF CON 20: The future of insecurity

Filed under: Reviews,Work — JLG @ 12:00 AM

I returned to the blistering dry heat of Las Vegas for a second year in a row to attend Black Hat and DEF CON.

The most interesting talk to me was a panel discussion at Black Hat that provided a future retrospective on the next 15 years of security.  Some of the topics discussed:

  • What is the role of the private sector in computer and network security?  One panelist noted that the U.S. Constitution specifies that the government is supposed “to provide for the common defense” — presumably including all domestic websites, commercial networks and intellectual property, and perhaps even personal computers — instead of only claiming to protect the .gov (DHS) and .mil (NSA) domains as they do today.  Another panelist suggested that, as in other sectors, the government should publish “standards” for network and communications security such that individual companies can control the implementation of those standards.
  • Social engineering and the advanced persistent threat.  At a BSidesLV party, someone I met asked whether I felt the APT was just a buzzword or whether it was real.  (My answer was “both”.)  Several speakers played with new views on the APT, such as “advanced persistent detection” (defenders shouldn’t be focused on vulnerabilities; rather they should look at an attacker’s motivation and objectives) and “advanced persistent fail” (real-world vulnerabilities survive long after mitigations are published).
  • How can you discover what evil lurks in the hearts of men and women?  One panelist speculated that we would see the rise of long-term [lifetime?] professional background checks for technological experts.  Current background checks for U.S. government national security positions use federal agents to search back 7-10 years.  I got the impression that the panelist foresees a rise in private-sector background checks (or checks against private databases of personal information) as a prerequisite for hiring decisions across the commercial sector.
  • How can you protect against a 120 gigabit distributed denial of service (DDoS) attack?  A panelist noted that a large recent DDoS hit 120 Gbit/sec, up around 4x from the largest DDoS from a year or two ago.  The panelist challenged the audience to think about how “old” attacks, which used to be easy to mitigate, become less so at global scale when the attacker leverages cloud infrastructure or botnet resources.
  • Shifting defense from a technical basis into a legal, policy, or contractual basis.  So far there hasn’t been an economically viable way to shift network security risks (or customer loss/damage liability) onto a third party — I believe many organizations would willingly exchange large sums of money to be released from these risks, but so far no third party seems willing to accept that bet.  The panel wondered whether (or when) the insurance industry will develop a workable model for computer security.
  • Incentives for computer security.  Following up on the point above, a panelist noted that it is difficult to incent users to follow good security practices.  The panelist noted that E*TRADE gave away 10,000 security tokens but still had trouble convincing their users to use them as a second factor for authentication.  Another panelist pointed to incentives in the medical insurance industry — “take care of your body” and enjoy lower premiums — and wondered how to provide similar actionable incentives to take care of your network.
  • Maximizing your security return-on-investment (ROI).  A panelist asserted that the best ROI is money spent on your employees:  Developing internal experts in enterprise risk management, forensics and incident response skills, etc.
  • Assume you will be breached.  I’ve also been preaching that message: Don’t just protect, but also detect and remediate.  A panelist suggested you focus on understanding your network and your systems, especially with respect to configuration management and change management.

When asked to summarize the next 15 years of security in five words or fewer, the panelists responded:

  1. Loss of control.
  2. Incident response and cleaning up.
  3. Human factors.

Beyond the panel discussion, some of the work that caught my attention included:

  • Kinectasploit.  Jeff Bryner presented my favorite work of the weekend, on “linking the Kinect with Metasploit [and 19 other security tools] in a 3D, first person shooter environment.”  I have seen the future of human-computer interaction for security analysts — it is Tom Cruise in Minority Report — and the work on Kinectasploit is a big step in us getting there.
  • Near field communications insecurity.  Charlie Miller (“An analysis of the Near Field Communication [NFC] attack surface”) explained that “through NFC, using technologies like Android Beam or NDEF content sharing, one can make some phones parse images, videos, contacts, office documents, even open up web pages in the browser, all without user interaction. In some cases, it is even possible to completely take over control of the phone via NFC, including stealing photos, contacts, even sending text messages and making phone calls” and showed a live demo of using an NFC exploit to take remote control of a phone.
  • Operating systems insecurity.  Rebecca Shapiro and Sergey Bratus from Dartmouth made the fascinating observation that the ELF (executable and linker format) linker/loader is itself a Turing-complete computer: “[we demonstrate] how specially crafted ELF relocation and symbol table entries can act as instructions to coerce the linker/loader into performing arbitrary computation. We will present a proof-of-concept method of constructing ELF metadata to implement [Turing-complete] language primitives as well as demonstrate a method of crafting relocation entries to insert a backdoor into an executable.”  The authors’ earlier white paper provides a good introduction to what they call “programming weird machines”.
  • Wired communications insecurity.  Collin Mulliner (“Probing mobile operator networks”) probed public IPv4 address blocks known to be used by mobile carriers and found a variety of non-phone devices, such as smart meters, with a variety of enabled services with obtainable passwords.
  • Governmental infrastructure insecurity.  My next-to-favorite work was “How to hack all the transport networks of a country,” presented by Alberto García Illera, where he described a combination of physical and electronic penetration vectors used “to get free tickets, getting control of the ticket machines, getting clients [credit card] dumps, hooking internal processes to get the client info, pivoting between machines, encapsulating all the traffic to bypass the firewalls” of the rail network in his home country.
  • Aviation communications insecurity.  There were three talks on aviation insecurity, all focused on radio transmissions or telemetry (the new ADS-B standard for automated position reporting, to be deployed over the next twenty years) sent from or to an aircraft.

Last year I tried to attend as many talks as I could but left Vegas disappointed — I found that there is a low signal-to-noise ratio when it comes to well-executed, well-presented work at these venues.  The “takeaway value” of the work presented is nowhere near as rigorous or useful as that at research/academic conferences like CCS or NDSS.  But it turns out that’s okay; these venues are much more about the vibe, and the sharing, and the inspiration (you too can hack!), than about peer-reviewed or archival-quality research.  DEF CON in particular provides a pretty fair immersive simulation of living inside a Neal Stephenson or Charlie Stross novel.

This year I spent more time wandering the vendor floor (Black Hat) and acquiring skills in the lockpick village (DEF CON), while still attending the most-interesting-looking talks and shows.  By lowering my “takeaway value” expectations a bit, I ended up enjoying my week in Vegas much more than expected.