Jagged Thoughts | Dr. John Linwood Griffin

December 19, 2012

Eighteen hours later

Filed under: Homeownership — JLG @ 10:12 PM

Not even 18 hours after I wrote the previous post, our water heater’s relief valve decided to relieve itself.

Water heater pressure and temperature relief valve. The valve releases water into the pipe (shown), which in turn releases water onto your basement floor (not shown).

Fortunately for us it didn’t flood very much.  The water was about an inch deep in the middle of the basement (a concrete floor with no sump or drain) and was dry along much of the basement walls.  I was easily able to reach the water supply cut-off valve and stanch the flow.  Most worrisome is that the bottom of our expensive new furnace was submerged—though thankfully none of the internal components got wet—so I’ll need to keep an eye out for rusting for a while.

The plumber’s working theory is:

  1. We had the temperature set higher than necessary (about 135 degrees);
  2. the old temperature control module somehow caused the temperature to rise much hotter than the setting;
  3. the relief valve opened due to overheating; and
  4. the relief valve stuck open due to sediment in the tank.

We now have a shiny new relief valve, several shiny new water sensor alarms, and a shiny new bill for emergency service from our plumber.

Other things that were helpful to have on hand:

  • Wet/dry vacuum.  We have a model that is able to empty itself automatically via a garden hose attached to the pump exhaust.  Without this feature you’ll spend 10 minutes emptying the bucket for every 1 minute you spend vacuuming up the flood.
  • Dehumidifier.  We have a model that automatically pumps its contents into the basement sink via a supplied 18-foot hose.
  • Box fan.
  • Shop towels.
  • Large garbage bags into which to put the sodden contents of the cardboard boxes that just yesterday you moved from shelves onto the floor “just for a couple days.”

While waiting for the plumber we spent a day with the water heater shut off.  And holy smokes, Boston tap water is cold this time of year.

It could have been much, much worse.  I went to the basement yesterday morning just to see if there was any seepage through the basement walls from the heavy rain outside.  (And good news: I didn’t see any seepage from the walls.)  If it hadn’t been raining we probably wouldn’t have noticed anything amiss until the flooding shorted out the main electrical panel, which at that point would have represented tens of thousands of dollars of damage and the loss of our “claims free discount” from the insurance company.

$10 water sensor alarms are your friend.

December 16, 2012

So you’ve bought a house!

Filed under: Homeownership — JLG @ 9:14 PM

Once you’ve bought a house there is a temptation to be very Munroevian about the whole matter:

The Munroevian approach to homeownership. (Source: xkcd.com/905/)

It’s been pretty fun geeking out about home maintenance options, making plans for repairs and additions, and even picking up a hammer myself now and then.  There are several surprisingly informative websites with details about how houses work, including:

  • Inspectapedia:  for example, this article about the insulation we just had installed.
  • Check This House:  for example, this article about the importance of second-floor air return ducting (a potential long-term maintenance item for our house).

Six months after closing I understand a little better the #1 question in buying a house—“how much house can I afford?”—or, more to the point, “how big a monthly housing expense can I afford?”

Monthly housing expense = mortgage payment + homeowners insurance + property tax – interest tax deduction + maintenance [or association fees]

Mortgage payment:

Conventional wisdom says to make your mortgage payment as large as possible.  It will be painful now but less painful over time, especially as your earnings rise over the course of your career.  That easing is because your payment will stay the same over the lifetime of your mortgage:  If you have a 30-year mortgage and you make $1000/month payments today (on principal and interest), you’ll still be making exactly $1000/month payments in 29 years.

The effects of inflation will mean that, in 30 years, your $1000/month payment will only feel like a $400/month payment.  (Note however that it is statistically unlikely that you will hold the same mortgage for 30 years—I’ve read several times that mortgages average about 7 years before the house is sold or refinanced.)

Low interest rates are good except for one thing:  I worry about resale value if interest rates rise significantly.  At 3.5% interest rates, a buyer who can afford $1000/month payments can buy a $225,000 house.  But at 7.0% rates, that same buyer can only buy a $150,000 house.
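
Here’s a quick way to check that arithmetic: a Python sketch of the standard amortization formula, assuming a 30-year fixed-rate mortgage.

  # Maximum affordable loan for a fixed monthly payment:
  # loan = payment * ((1+r)^n - 1) / (r * (1+r)^n),
  # where r is the monthly rate and n the number of monthly payments.
  def max_loan(payment, annual_rate, years=30):
      r = annual_rate / 12
      n = years * 12
      return payment * ((1 + r) ** n - 1) / (r * (1 + r) ** n)

  print(round(max_loan(1000, 0.035)))  # ~223,000 at 3.5%
  print(round(max_loan(1000, 0.070)))  # ~150,000 at 7.0%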

When interest rates go up, many prospective buyers won’t be able to afford to pay as much for your house as you did.  Will we have problems selling our house without taking a bath?

Homeowners insurance:

We pay $90/month.  Annoyingly, our loan documents require a $1,000 deductible for the policy—i.e., we’re not allowed to crank up the deductible to lower our rate.

One thing that surprised me is that none of the “big boys” (Allstate, State Farm, GEICO, etc.) write insurance policies in Massachusetts.  I had a similar problem trying to get renter’s insurance in Florida.  Perhaps we should move to some milquetoast state with uniform laws and no propensity for natural disasters?

Property taxes:

You can find out what we (or your neighbors, or pretty much anybody) pay for property taxes by looking them up on the county tax assessor’s website; these are matters of public record.

As with many jurisdictions, Boston has a residential tax exemption—your taxes are reduced by $130/month if the property serves as your principal residence.  So budget for additional expense if you plan to rent out your house.

Tax deduction:

I can’t imagine the federal mortgage interest tax deduction surviving much longer.  My guess is that it will be phased out over the next few years.  Without the deduction our monthly housing expense will increase by $300/month.

Also, as with rising interest rates, I suspect a tax deduction phase-out will have a depressing effect on home resale prices.

Maintenance:

The expensivity of home maintenance has surprised me.   So far we’ve spent money in three categories:

A. Required maintenance:  $10,000 for new roof shingles.

The home inspectors and roofing experts who evaluated the house initially gave us a one to two year window for replacing the roof.  However, when we had roofers up to do minor repairs (repointing the chimney, recementing the vents, cleaning the gutters) they found cracking asphalt and other problems that prompted us to schedule the replacement immediately.  The new roof will hopefully be good for about 20 years.

B. Opportunity-cost improvements:  $3,600 for whole-house insulation and $3,900 for an oil-to-gas conversion of the furnace.

Massachusetts has an astounding program called Mass Save where you can receive an interest-free loan to defray the up-front costs of energy-efficiency improvements to your house.  The improvements will pay for themselves within three years (in the form of reduced utility bills), plus the house is more comfortable afterwards.  It’s a total win-win-win program for homeowners.

There are also incentive rebate programs for efficiency improvements.  The insulation work actually cost $5,600 (minus a $2,000 rebate from Mass Save); the furnace conversion cost $4,700 (minus an $800 rebate from our gas utility company).

We could have waited a year to make these improvements—the oil heater and indoor fuel oil tank were only about ten years old—but with the rebates and interest-free loan there was no reason not to jump on these, especially with the possibility that the program might not be renewed in future years.

The old 275-gallon oil tank, taking up space in our basement.

With the oil tank removed, there is space aplenty to eventually install a demand water heater.

C. Functional improvements:  $6,000 for electrical work and exhaust ventilation.

Our house is over 100 years old and (not surprisingly) didn’t have outlets or lights or exhaust fans everywhere we wanted them.  Worse, we were occasionally tripping breakers by running too many appliances on a single circuit.

We could have waited a year or two before performing this work, but I wanted to have the new wires pulled before having the insulation work done on the exterior walls and in the attic.  (The electricians said they could certainly do it even after the insulation was put in but that it would be “messier”.)

Also, it is a perpetual source of happiness for me to walk into the kitchen and see:

New externally-vented range hood. We use it daily. The white square of paint is where the over-the-range microwave used to be hung.

or into the bathroom and see:

New bathroom electrical outlet (one of two). Previously the bidet power cord ran along the bathroom floor, via an ugly grey extension cord, into the outlet by the sink.

Every time I take a shower I look over at the safer, neater, more convenient bathroom outlet and feel the joy of homeownership.  (We also solved four other extension-cord problems elsewhere in the house, each of which brings joy in turn.)

D. Deferred maintenance and improvements:  Water heater replacement, carpentry, repointing the basement walls.

Our water heater is nine years old and has a nine-year warranty.  I don’t believe it’s been cleaned or flushed regularly, or had its sacrificial anode replaced.  Given that lack of maintenance I worry that it could start leaking—a big problem, since there is no floor drain in the basement—so I plan to replace it in 2013 with a demand water heater.

Demand water heaters need a fat gas pipe.  They can consume 200,000 BTUs/hour or more; in comparison, our new high-efficiency furnace consumes up to only 60,000 BTUs/hour, and a typical gas stove and oven consume up to 65,000 BTUs/hour.  Our current gas pipe is thin, old, and lined (basically not up to snuff), so I’ve submitted an application to the gas company to lay a larger pipe in the spring.  I’ve requested a future-proofed pipe large enough to accommodate those three appliances plus a potential upgrade to a gas clothes dryer and a natural gas grill.
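
The sizing arithmetic is just a worst-case sum of every appliance on the line.  Here’s a Python sketch; the dryer and grill figures are my illustrative guesses, not numbers from the gas company:

  # Back-of-the-envelope gas pipe sizing: sum the worst-case draw
  # of every appliance that might burn simultaneously.
  loads_btu_per_hr = {
      "demand water heater": 200_000,
      "furnace": 60_000,
      "stove and oven": 65_000,
      "gas clothes dryer (assumed)": 22_000,
      "natural gas grill (assumed)": 50_000,
  }
  total = sum(loads_btu_per_hr.values())
  print(f"Design load: {total:,} BTU/hour")  # 397,000 BTU/hour for these numbers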

The lesson learned for me is if you’re buying a house, keep at least an extra $10,000 in reserve to cover any urgent maintenance items.  In other words, don’t completely exhaust your financial reserves by making a larger-than-needed down payment or purchasing new furniture too quickly.

In aviation there is a concept of prepaying into a maintenance fund every time you fly your own aircraft.  You know that you’re required to pay for major maintenance every 2,000 flight hours—at a cost of tens of thousands of dollars—so you divide that cost by 2,000 and prepay $10 into your maintenance fund for every hour you fly.

I’ve seen similar recommendations about prepaying for home maintenance.  You know that you’ll periodically have to pay for roofing work, new water heaters, and whatnot, so forecast out when you’ll make those repairs and start prepaying into a maintenance fund.  (If you buy into a condo association, part of your condo association fees are earmarked for exactly this purpose.)
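
As a sketch of the idea (the repair costs and timelines below are made-up forecasts, purely for illustration):

  # Prepaying into a home maintenance fund: divide each forecast
  # repair cost by the number of months until it comes due.
  forecast = [
      ("roof replacement", 10_000, 20 * 12),  # (item, cost, months until due)
      ("water heater",      2_500,  5 * 12),
      ("exterior paint",    4_000,  8 * 12),
  ]
  monthly = sum(cost / months for _, cost, months in forecast)
  print(f"Set aside ${monthly:,.0f}/month")  # ~$125/month for these numbers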

There are a couple other housing-related websites I’ve been reading regularly, including The Mortgage Professor and (perhaps of local interest only) the Massachusetts Real Estate Law Blog.  The professor relates a story of an ill-prepared homeowner, who asked:

“I hadn’t been in my house 3 weeks when the hot water heater stopped working. Only then did I realize that I hadn’t been given the name of the superintendent…who do I see to get it fixed?”

One of the challenges we’ve faced is finding good contractors.  Here’s what I’ve learned about finding them:

  • Get three quotes.  Not because you’re trying to find the absolute lowest cost, but rather that you’ll hear three different perspectives on what they think you should do.  For example, I had three heating contractors in to discuss the oil-to-gas conversion.  One suggested that I simply replace the burner on my existing furnace; one suggested that I install a new 100,000-BTU gas furnace; one suggested that I install a new 60,000-BTU gas furnace because of the square footage of the house.  Those three conflicting opinions gave me a lot of information to mull over; in the end I chose option #3 and it’s worked out perfectly.
  • Ask your neighbors for recommendations.  Several folks in my community recommended a particular roofing company; I ended up hiring them and was thoroughly satisfied with their work and professionalism.
  • Join Angie’s List for recommendations.  I hesitated to join at first—whining about how it costs money!—but in the end I figured I was only hurting myself by not joining.  I ended up hiring an electrical contractor that I found on Angie’s List and was thoroughly satisfied with their work and professionalism.

And here’s what I’ve learned about hiring contractors:

  • Read the installation manuals yourself.  I wasn’t happy about how the heating contractor didn’t bother to configure the DIP switches on my new furnace.  (Specifically, he didn’t set the furnace’s fan speed to match the tonnage of the air conditioner’s compressor; he claimed it wasn’t important because he’d never done it before.)  So, I read up on furnace fan speeds and compressors myself, made the correct setting myself, and now find myself self-satisfied with better air conditioning performance.
  • Do your own homework before the contractors arrive.  I asked potential electricians about adding an exhaust fan in our half-bathroom.  One of them suggested that I buy the fan and he’d install it.  I asked why; he explained that if it were up to him he’d just buy the cheapest fan available, but he felt I’d likely be interested in a higher-end fan.  And he was correct!  After I scoured the Internet for information on exhaust fans I identified one of the low-sone (quiet) fans as the one I wanted, and we’re much happier with this choice than we would have been with a louder fan.  (Note: I also installed a wall-switch timer on the exhaust fan—a great idea that I learned about while doing my homework on options for fans.)
  • Keep track of your paid invoices.  Some work you perform might increase your basis in the property (see IRS Publication 523), which could reduce the amount of tax you (might) pay when you sell the house.
  • Be ready to be flexible.  The heating contractor said it’d be done in one weekend, but it ended up taking a month and a half before the last of the work (sealing the old hole in our chimney) was complete.  The roofing contractor gave a two-week window in which they’d do the work, then ended up doing the work three days before the start of the window.  The insulation contractors said it’d be a two day job, but it ended up being a three-day job spread out over two weeks.  Fortunately, all of our contractors have taken pride in their work—so we’ve been left largely happy with the work that’s been done.

December 14, 2012

I can’t keep on renaming my dog

Filed under: Opinions,Work — JLG @ 12:00 AM

A clever meme hit the Internet this week:

“Stop asking me to change my password. I can’t keep on renaming my dog.”

If you (or the employees you support) aren’t using a password manager, clear off your calendar for the rest of the day and use the time to set one up.  It’s easy security.  Password managers make it simple to create good passwords, to change your passwords when required, to use different passwords on every site, and to avoid reusing old passwords.

The upside to using a password manager:

  • You only need to remember two strong passwords.  (One to log into the computer running the password manager, and one for the password manager itself.)

The downside to using a password manager:

  • All your eggs are in one basket.  (Therefore you need to pay close attention to choosing a good master password, protecting that password, and backing up your stored passwords.)

Generally speaking a password manager works as follows:

  1. You provide the password manager with a master passphrase.
  2. The password manager uses your master passphrase to create (or read) an encrypted file that contains your passwords and other secrets.

(For deeper details, see KeePass’s FAQ for a brief technical explanation or Dashlane’s whitepaper for a detailed technical explanation.  For example, in the KeePass FAQ the authors describe how the KeePass product derives 256-bit Advanced Encryption Standard [AES] keys from a user’s master passphrase, how salt is used to protect against dictionary attacks, and how initialization vectors are used to protect multiple encrypted files against known-plaintext attacks.  Other products likely use a similar approach to deriving and protecting keys.)
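
As a simplified sketch of that derivation step, using Python’s standard library (the salt size and iteration count are illustrative choices, not any particular product’s actual parameters):

  # Derive a 256-bit AES key from a master passphrase using
  # PBKDF2-HMAC-SHA256.  The random salt (stored alongside the
  # encrypted file) defends against precomputed-dictionary attacks.
  import hashlib, os

  def derive_key(master_passphrase, salt, iterations=600_000):
      return hashlib.pbkdf2_hmac("sha256", master_passphrase.encode("utf-8"),
                                 salt, iterations, dklen=32)

  salt = os.urandom(16)
  key = derive_key("correct horse battery staple", salt)
  assert len(key) == 32  # 256 bits, ready for AES-256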

Password managers often also perform useful convenience functions for you—inserting stored passwords into your web browser automatically; generating strong passwords of any desired length; checking your usernames against hacker-released lists of pwned websites; evaluating the strength of your existing passwords; leaping tall buildings in a single bound; etc.

The root of security with password managers is in protecting your master password.  There are three main considerations to this protection:

(A) Choose a good passphrase. 

I’m intentionally using the word “passphrase” instead of “password” to highlight the need to use strong, complex, high-entropy text as your passphrase.  (Read my guidance about strong passphrases in TCS’s Better Passwords, Usable Security whitepaper.  Or if you don’t read that whitepaper, at least read this webcomic.)

Your master passphrase should be stronger than any password you’re currently using—stronger than what your bank requires, stronger than what your employer requires.  (However, it shouldn’t be onerously long—you need to memorize it, you will need to type it every day, and you will likely need to type it on mobile devices with cramped keyboards.)  I recommend a minimum of 16 characters for your master passphrase.
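
For a rough sense of scale: if each of those 16 characters were drawn uniformly at random from the ~95 printable ASCII characters (an idealization; real passphrases have less entropy per character), the passphrase would carry about 105 bits of entropy:

  import math
  # Entropy of a random 16-character passphrase over 95 printable
  # ASCII characters: 16 * log2(95) bits.
  print(16 * math.log2(95))  # ~105.1 bits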

(Side note:  For similar reasons, another place where you should use stronger-than-elsewhere passphrases is with full-disk encryption products, such as TrueCrypt or FileVault, where you enter in a password at boot time that unlocks the disk’s encryption key.  As Microsoft’s #7 immutable law of security states, encrypted data is only as secure as its decryption key.)

(B) Don’t use your passphrase in unhygienic environments.

An interesting concept in computer security is the $5 wrench.  Attackers, like electricity, follow the path of least resistance.  If they’ve chosen you as their target, and if they aren’t able to use cryptographic hacking tools to obtain your passwords, then they’ll try other approaches—perhaps masquerading as an IT administrator and simply asking you for your password, or sending you a malicious email attachment to install a keylogger onto your computer, or hiding a pinhole spy camera in the light fixture above your desk.  So even with strong encryption you are still at risk to social engineering attacks targeting your passwords and password manager.

One way to reduce the risk of revealing your passphrase is to avoid typing it into computer systems over which you have neither control nor trust, such as systems in Internet cafes, or at airport kiosks, or at your Grandma Edna’s house.  To paraphrase public-service messages from the 1980s, when you give your passphrase to an untrusted computer you could be giving that passphrase to anyone who used that computer before you.

For situations where you simply must use a computer of dubious provenance—say, you’re on vacation, you take a wrong turn at Albuquerque, your wallet and laptop get stolen, and you have to use your password manager at an Internet cafe to get your credit card numbers and bank contact information—some password managers provide features like one-time passwords, screen keyboards, multifactor authentication, and web-based access to help make lemonade out of life’s little lemons.

(C) Make regular backups of your encrypted file.

If you have a strong passphrase [(A)] and you keep your passphrase secret [(B)] then it doesn’t matter where copies of your encrypted file are stored.  The strong encryption means that your file won’t be susceptible to a brute-force or password-guessing attack even if an attacker obtains a copy of your file.  (Password management company LastPass had a possible security breach of their networks in 2011.  Even so, users with strong passphrases had “no reason to worry.”)  As such you are safely able to make backup copies of your encrypted file and to store those backups in a convenient manner.

Some password managers are designed to store your encrypted file on your local computer.  Other managers (notably LastPass) store your encrypted file on cloud servers managed by the same company, making it easier to synchronize the password file across all devices you use.  Still other managers integrate easily with third-party cloud storage providers (notably Dropbox) for synchronization across multiple devices, or support direct synchronization between two devices over a Wi-Fi network.  (In all remote-storage cases I’ve found, the file is always encrypted locally before any portion of the file is uploaded into the cloud.)

Whichever type of manager you use, be aware that that one file holds your only copy of all of your passwords—it is critical that you not lose access to the contents of the file.  Computers have crashed.  Password management companies have disappeared (Logaway launched on May 4, 2010, and ceased operations on February 2, 2012).  Cloud services have lost data and have experienced multi-day disruptions.  Protect yourself by regularly backing up your encrypted file, for example by copying it onto a USB dongle (whenever you change or add a password) or by printing a hard copy every month to stash in a safe deposit box.

If you maintain a strict separation between your home accounts and your work accounts—for example to keep your employer from snooping and obtaining your Facebook password—simply set up two password managers (one on your home laptop, the other on your work PC) using two unique passphrases as master keys.

Password manager software is easy to set up and use.  The biggest problem you’ll face is choosing from among the cornucopia of password managers.  A partial list I just compiled, in alphabetical order, includes: 1Password, AnyPassword, Aurora Password Manager, Clipperz, DataVault, Dashlane, Handy Password, Kaspersky Password Manager, KeePass, Keeper, LastPass, Norton Identity Safe, Paranotic Password Manager, Password Agent, Password Safe, Password Wallet, PINs, RoboForm, Secret Server, SplashID, Sticky Password, TK8 Safe, and Universal Password Manager.  There is even a hardware-based password manager available.

Your top-level considerations in choosing a password manager are:

  1. Does it run on your particular OS or mobile device?  (Note that some password managers sometimes charge, or charge extra, to support synchronization with mobile devices.)
  2. Do you already use Dropbox on all your devices?  If not, consider a manager that provides its own cloud storage (LastPass, RoboForm, etc.)  If so, and only if you would prefer to manage your own encrypted file, choose a service that supports Dropbox (1Password, KeePass, etc.)

I don’t recommend or endorse any particular password manager.  I’ve started using one of the “premium” (paid) password managers and am astonished at how much better any of the managers are over what I’d been using before (an unencrypted manual text-file-based system that I’d hacked together last millennium).

November 20, 2012

Gabbing to the GAB

Filed under: Opinions,Work — JLG @ 12:00 AM

Earlier this month the (ISC)² U.S. Government Advisory Board (GAB) invited me to present my views and opinions to the board.  What a neat opportunity!

The GAB is a group of mostly federal agency Chief Information Security Officers (CISOs) or similar executives.  Officially it comprises “10-20 senior-level information security professionals in their respective region who advise (ISC)² on industry initiatives, policies, views, standards and concerns,” with goals that include offering deeper insights into the needs of the information security community and discussing matters of policy or initiatives that drive professional development.

In terms of content, in addition to discussing my previous work on storage systems with autonomous security functionality, I advanced three of my personal opinions:

  1. Before industry can develop the “cybersecurity workforce of the future” it needs to figure out how to calculate the return on investment (ROI) for IT/security administration.  I suggested a small initial effort to create an anonymized central database for security attacks and the real costs of those attacks.  If such a database was widely available at nominal cost (or free) then an IT department could report on the value of their actions over the past year: “we deployed such-and-such a protection tool, which blocks against this known attack that caused over $10M in losses to a similar organization.”  Notably, my suggested approach is constructive (“here’s what we prevented”) rather than negative (“fear, uncertainty, and doubt / FUD”).  My point is that coming at the ROI problem from a positive perspective might be what makes it work.
  2. No technical staff member should be “just an instructor” or “just a developer.”  Staff hired primarily as technical instructors should (for example) be part of an operational rotation program to keep their skills and classroom examples fresh.  Likewise, developers/programmers/etc. should spend part of their time interacting with students, or developing new courseware, or working with the sales or marketing team, etc.  I brought up the 3M (15%) / Hewlett-Packard Labs (10%) / Google (20%) time model and noted that there’s no reason that a practical part-time project can’t also be revenue-generating; it just should be different (in terms of scope, experience, takeaways) from what the staff member does the rest of their time.  My point is that treating someone as “only” an engineer (developer, instructor, etc.) does a disservice not just to that person, but also to their colleagues and to their organization as a whole.
  3. How will industry provide the advanced “tip-of-the-spear” training of the future?  One curiosity of mine is how to provide on-the-job advanced training.  Why should your staff be expected to learn only when they’re in the classroom?  Imagine if you could provide your financial team with regular security conundrums — “who should be on the access control list (ACL) for this document?” — that you are able to generate, monitor, and control.  Immediately after they take an action (setting the ACL) then your security system provides them with positive reinforcement or constructive criticism as appropriate.  My point is that if your non-security-expert employees regularly deal with security-relevant problems on the job, then security will no longer be exceptional to your employees.

I had a blast speaking.  The GAB is a group of great folks and they kept me on my toes for most of an hour asking questions and debating points.  It’s not every day that you get to engage high-level decision makers with your own talking points, so my hope is that I gave them some interesting viewpoints to think about — and perhaps some new ideas on which to take action inside their own agencies and/or to advise the government.

November 1, 2012

2012 Conference on Computer and Communications Security

Filed under: Reviews,Work — JLG @ 12:00 AM

In October I attended the 19th ACM Conference on Computer and Communications Security (CCS) in Raleigh, North Carolina.  It was my fourth time attending (and third city visited for) the conference.

Here are some of my interesting takeaways from the conference:

  • Binary stirring:  The point of Binary Stirring is to end up with a completely different (but functionally equivalent) executable code segment each time you load a program.  The authors double “each code segment into two separate segments—one in which all bytes are treated as data, and another in which all bytes are treated as code.  …In the data-only copy (.told), all bytes are preserved at their original addresses, but the section is set non-executable (NX).  …In the code-only copy (.tnew), all bytes are disassembled into code blocks that can be randomly stirred into a new layout each time the program starts.”  (The authors measured about a 2% performance penalty from mixing up the code segment.)

But why mix the executable bytes at all?  Binary Stirring is intended to protect against clever “return-oriented programming” (ROP) attacks by eliminating all predictable executable code from the program.  If you haven’t studied ROP (I hadn’t before I attended the talk) then it’s worth taking a look, just to appreciate the cleverness of the attack & the challenge of mitigating it.  Start with last year’s paper Q: Exploit Hardening Made Easy, especially the related work survey in section 9.

  • Oblivious RAM (ORAM):  Imagine a stock analyst who stores gigabytes of encrypted market information “in the cloud.”  In order to make a buy/sell decision about a particular stock (say, NASDAQ:TSYS), she would first download a few kilobytes of historical information about TSYS from her cloud storage.  The problem is that an adversary at the cloud provider could detect that she was interested in TSYS stock, even though the data is encrypted in the cloud.  (How?  Well, imagine that the adversary watched her memory access patterns the last time she bought or sold TSYS stock.  Those access patterns will be repeated this time when she examines TSYS stock.)  The point of oblivious RAM is to make it impossible for the adversary to glean which records the analyst downloads.

  • Fully homomorphic encryption:  The similar concept of fully homomorphic encryption (FHE) was discussed at some of the post-conference workshops.  FHE is the concept that you can encrypt data (such as database entries), store them “in the cloud,” and then have the cloud do computation for you (such as database searches) on the encrypted data, without decrypting.

When I first heard about the concept of homomorphic encryption (circa 2005, from some of my excellent then-colleagues at IBM Research) I felt it was one of the coolest things I’d encountered in a company filled with cool things.  Unfortunately FHE is still somewhat of a pipe dream — like ORAM, it’ll be a long while before it’s efficient enough to solve any practical real-world problems — but it remains an active area of interesting research.

  • Electrical network frequency (ENF):  In the holy cow, how cool is that? category, the paper “How Secure are Power Network Signature Based Time Stamps?” introduced me to a new forensics concept: “One emerging direction of digital recording authentication is to exploit an potential time stamp originated from the power networks. This time stamp, referred to as the Electrical Network Frequency (ENF), is based on the fluctuation of the supply frequency of a power grid.  … It has been found that digital devices such as audio recorders, CCTV recorders, and camcorders that are plugged into the power systems or are near power sources may pick up the ENF signal due to the interference from electromagnetic fields created by power sources.”  Wow!

The paper is about anti-forensics (how to remove the ENF signature from your digital recording) and counter-anti-forensics (how to detect when someone has removed the ENF signature).  The paper’s discussion of ENF analysis reminded me loosely of one of my all-time favorite papers, also from CCS, on remote measurement of CPU load by measuring clock skew as seen through TCP (transmission control protocol) timestamps.

  • Resource-freeing attacks (RFA):  I also enjoy papers about virtualization, especially regarding the fair or unfair allocation of resources across multiple competing VMs.  In the paper “Resource-Freeing Attacks: Improve Your Cloud Performance (at Your Neighbor’s Expense)”, the authors show how to use antisocial virtual behavior for fun and profit: “A resource-freeing attack (RFA) [improves] a VM’s performance by forcing a competing VM to saturate some bottleneck. If done carefully, this can slow down or shift the competing application’s use of a desired resource. For example, we investigate in detail an RFA that improves cache performance when co-resident with a heavily used Apache web server. Greedy users will benefit from running the RFA, and the victim ends up paying for increased load and the costs of reduced legitimate traffic.”

A disappointing aspect of the paper is that they don’t spend much time discussing how one can prevent RFAs.  Their suggestions are (1) use a dedicated instance, or (2) build better hypervisors, or (3) do better scheduling.  That last suggestion reminded me of another of my all-time favorite research results, from last year’s “Scheduler Vulnerabilities and Attacks in Cloud Computing”, wherein the authors describe a “theft-of-service” attack:  A virtual machine calls Halt() just before the hypervisor timer fires to measure resource use by VMs, meaning that the VM consumes CPU resources but (a) isn’t charged for them and (b) is allocated even more resources at the expense of other VMs.

  • My favorite work of the conference:  The paper is a little hard to follow, but I loved the talk on “Scriptless Attacks – Stealing the Pie Without Touching the Sill”.  The authors were interested in whether an attacker could still perform “information theft” attacks once all the XSS (cross-site scripting) vulnerabilities are gone.  Their answer: “The surprising result is that an attacker can also abuse Cascading Style Sheets (CSS) in combination with other Web techniques like plain HTML, inactive SVG images or font files.”

One of their examples is that the attacker can very rapidly shrink then grow the size of a text entry field.  When the text entry field shrinks to one pixel smaller than the width of the character the user typed, the browser automatically creates a scrollbar.  The attacker can note the appearance of the scrollbar and infer the character based on the amount the field shrank.  (The shrinking and expansion takes place too fast for the user to notice.)  The data exfiltration happens even with JavaScript completely disabled.  Pretty cool result.

Finally, here are some honorable-mention papers in four categories — work I enjoyed reading, that you might too:

Those who cannot remember the past, are condemned to repeat it:

Sidestepping institutional security:

Why be a white hat?  The dark side is where all the money is made:

Badware and goodware:

Overall I enjoyed the conference, especially the “local flavor” that the organizers tried to inject by serving stereotypical southern food (shrimp on grits, fried catfish) and hiring a bluegrass band (the thrilling Steep Canyon Rangers) for a private concert at Raleigh’s performing arts center.

October 15, 2012

Public-key cryptography & certificate chaining

Filed under: Opinions,Work — JLG @ 12:00 AM

Of the many marvelous Calvin and Hobbes cartoons by Bill Watterson, one of the most marvelous (and memorable) is The Horrendous Space Kablooie.  Quoth Calvin, “That’s the whole problem with science.  You’ve got a bunch of empiricists trying to describe things of unimaginable wonder.”

I feel the same way about X.509, the name of the international standard defining public key certificates.  X.509?  It’s sort of hard to take that seriously — “X.509” feels better suited as the name of an errant asteroid or perhaps a chemical formula for hair restoration.

But I digress.  X.509 digital certificates are exchanged when you create a “secure” connection on the Internet, for example when you read your webmail using HTTPS.  The exchange happens something like this:

  • Your computer:  Hi, I’m a client.
  • Webmail server:  Howdy, I’m a server.  Here’s my X.509 certificate, including the public key you’ll use in the next step.
  • Your computer:  Fabulous.  I’ve calculated new cryptographic information that we’ll use for this session, and I’ve encrypted it using your public key; here it is.
  • (Further traffic is encrypted using the session cryptographic information.)

Several things happen behind the scenes to provide you with security:

  1. Your computer authenticates the X.509 certificate(s) provided by the server.  It checks that the server uses the expected web address.  It also verifies that a trusted third party vouches for the certificate (by checking the digital signature included in the certificate).
  2. Your computer verifies that there is no “man in the middle” attack in progress.  It does this by ensuring that the server has the private key associated with its certificate: your computer encrypts the session cryptographic information with the server’s public key, so a server without the matching private key couldn’t recover it, and therefore couldn’t encrypt or decrypt any further traffic.

Unfortunately the system isn’t perfect.  The folks who programmed your web browser included a set of trusted root certificates with the browser.  Those root certificates were issued by well-known certificate authorities [CAs] such as Verisign and RSA.  If an attacker breaches security at either a root CA or an intermediate CA, as happened with the 2011 Comodo and DigiNotar attacks, then an attacker could silently insert himself into your “secure” connection.  Yikes!  Efforts like HTTPS Everywhere and Convergence are trying to address this problem.

Public-key cryptography is pretty neat.  When you use public-key cryptography you generate two keys, a public key (okay to give out to everyone) and a private key (not okay).  You can use the keys in two separate ways:

  • When someone wants to send you a private message, they can encrypt it using your public key.  The encrypted message can only be decrypted using your private key.
  • When you want to publish a message, you can encrypt (sign) it using your private key.  Anyone who has your public key can decrypt (validate) your message.

In a public key infrastructure, a root CA (say, Verisign) uses its private key to sign the public-key certificates of intermediate certificate authorities (say, Thawte).  The intermediate CAs then use their private key to sign the public-key certificates of their customers (say, www.google.com).  When you visit Google’s site using HTTPS, Google provides you both their certificate and Thawte’s certificate.  (The chained relationship Verisign-Thawte-Google is sometimes called the “chain of trust”.)  Your browser uses the certificates provided by Google, plus the Verisign root certificate (bundled with the browser), to verify that the chain of trust is unbroken.

[I use Google as the example here, since you can visit https://www.google.com and configure your browser to show the certificates that Google provides.  However, I have no knowledge of Google’s contractual relationship with Thawte.  My assertions below about Google are speculative, but the overall example is valid.]
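
As a quick illustration, here’s a Python sketch (standard library only) that performs this validation and then prints the subject and issuer of the server’s leaf certificate; the issuer field names the intermediate CA that signed it:

  import socket, ssl

  ctx = ssl.create_default_context()  # loads the root CAs bundled with Python/OS
  with socket.create_connection(("www.google.com", 443)) as sock:
      with ctx.wrap_socket(sock, server_hostname="www.google.com") as tls:
          cert = tls.getpeercert()  # already validated against the chain of trust
          print("subject:", cert["subject"])
          print("issuer: ", cert["issuer"])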

Recently I was asked “We have been trying to understand Certificate Chaining and Self Signing.  Would a company [like Google] be allowed to purchase one certificate from a Certificate issuer like Verisign and then issue its own signed additional certificates for additional use?”

Great question!  (Where “great question” is defined as “um, I don’t know, let me check into that.”)  It turns out the answer is no, a company’s certificate(s) cannot be used to sign other certificates.

Using Google as an example, the principal reason is that neither Verisign nor Thawte let Google act as an “intermediate certificate authority.”  It’s (1) likely against the license agreement under which Thawte signed Google’s certificate, and (2) prohibited by metadata fields inside both Thawte’s certificate and Google’s certificate:

  • Google’s certificate is prohibited from signing other ones because of a flag inside the certificate metadata.  (Specifically, their Version 3 certificate has an Extension called Certificate Basic Constraints that has a flag Is not a Certificate Authority.)  And Google can’t modify their certificate to change this flag, because then signature validation would fail (your browser would detect that Google’s modified certificate doesn’t match the original certificate that Thawte signed).
  • Certificates signed by Thawte’s certificate cannot be used as Certificate Authorities (CAs) because of a flag inside Thawte’s certificate.  (Specifically, their Version 3 certificate has an Extension called Certificate Basic Constraints that has a field Maximum number of intermediate CAs that’s set to zero, meaning that no verification program should accept any further intermediate CAs below Thawte in the chain; Thawte’s customers cannot themselves act as CAs.)

If your company needs to issue its own signed certificates, for example to protect your internal servers, it’s relatively easy to do.  All you have to do is run a program that generates a root certificate.  You would then be like Verisign in that you could issue and sign as many other certificates as you wanted.  (The down side of your “private PKI” is that none of your users’ browsers would initially recognize your root certificate as a valid certificate.  For example, anyone surfing to a web page protected by certificates you signed would get a big warning page every time, at least until they imported your root certificate’s signature to their trusted-certificates list.)
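
Here’s a sketch of generating such a root certificate programmatically, using the third-party Python cryptography package (the names and validity period are illustrative):

  import datetime
  from cryptography import x509
  from cryptography.x509.oid import NameOID
  from cryptography.hazmat.primitives import hashes, serialization
  from cryptography.hazmat.primitives.asymmetric import rsa

  key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Internal Root CA")])
  cert = (
      x509.CertificateBuilder()
      .subject_name(name)
      .issuer_name(name)  # self-signed: issuer == subject
      .public_key(key.public_key())
      .serial_number(x509.random_serial_number())
      .not_valid_before(datetime.datetime.utcnow())
      .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
      # ca=True is the "is a Certificate Authority" flag discussed above
      .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
      .sign(key, hashes.SHA256())
  )
  print(cert.public_bytes(serialization.Encoding.PEM).decode())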

The article I found most helpful in digging up this answer is here:
http://unitstep.net/blog/2009/03/16/using-the-basic-constraints-extension-in-x509-v3-certificates-for-intermediate-cas/

(The full name of the X.509 standard is the far worse ITU-T Recommendation X.509: Information technology – Open systems interconnection – The Directory: Public-key and attribute certificate frameworks.  One name with four hyphens, two colons, and the hyphenated equivalent of comma splicing?  Clearly rigorous scientific work.)

September 25, 2012

Better living through IPv6-istry

Filed under: Opinions,Work — JLG @ 12:00 AM

There have been many, many words written about the IPv4-to-IPv6 transition — probably around 340 undecillion words at this point — but perhaps my favorite words came in a recent Slashdot comment by AliasMarlowe:

I believe in the incremental approach to updates; it’s so much safer and usually easier.
So it’s going to be IPv5 for me, while you suckers make a mess of IPv6!

I’ve long been a fan of IPv6.  Deploying IPv6 has the obvious benefit of solving the IPv4 address exhaustion problem, as well as making local subnetting, site network architecture, and (to some degree) internet-scale routing easier.

But perhaps the greatest benefit of deploying IPv6 is the restoration of end-to-end transparency.  IPv6 obviates the need for network address translation (NAT).  With IPv6, when your Skype application wants to initiate a call to my Skype application, the apps can address each other directly without relying on hole punching, third-party relaying, or other “clever” NAT-circumvention techniques.

(End-to-end addressing may sound unimportant, but if we could restore this critical Internet design goal to practice then we could party like it’s 1979!)

I recently spoke with some of TCS’s computer network operations students about security considerations for IPv6 deployments.  They were surprised when I claimed that NAT is not needed in an IPv6 security plan; several students commented that the NAT on their home network router was the only thing protecting their computers from the evils of the Internet.

A common misperception!  There are generally two functions performed by your home network router (or your corporate upstream router, if so configured):

  1. Firewalling / stateful packet inspection.  This is a security function.
  2. IP masquerading / network address [and port] translation.  This is not a security function; it simply allows all the devices on your internal network to share a single external network (IP) address.

With IPv6 you can (and should) still deploy inline firewall appliances to perform function #1.  But with the plethora of available addresses in IPv6 — 18,446,744,073,709,551,616 globally routable addresses per standard local subnet — there is no overt need for masquerading.
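
That per-subnet figure is just 2^64; here’s a quick check with Python’s standard ipaddress module:

  import ipaddress
  # A standard IPv6 local subnet is a /64: a 64-bit host portion,
  # i.e. 2**64 possible addresses per subnet.
  net = ipaddress.ip_network("2001:db8::/64")
  print(net.num_addresses)  # 18446744073709551616
  print(net.num_addresses == 2**64)  # True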

Of course, masquerading provides ancillary benefits:  It somewhat hinders external traffic analysis, such as network mapping, by obfuscating the internal source and destination of traffic.  Combining masquerading with private IPv4 addressing also prevents internal addresses from being externally routable.

But similar benefits can be realized in IPv6 without masquerading and therefore without losing the benefits of end-to-end transparency.  For example IPv6 privacy extensions can obfuscate your internal network architecture and IPv6 unique local addresses can be used to isolate systems that shouldn’t be visible on external networks.

September 23, 2012

I LOVE FLYING

Filed under: Aviation — JLG @ 1:41 AM

The weather cooperated (somewhat) and I indeed got to fly last weekend for my second solo cross-country flight:

JLG cross-country solo over Newport, RI, September 16, 2012

By the end of the flight I was positively giddy; as I walked back to my car I texted Evelyn the above picture with the caption “I LOVE FLYING”.  Almost all of my flight training has left me grinning from ear-to-ear, but this flight was by far the most fun I’ve had yet.

(Almost all of my flight training has left me grinning:  The required night landings weren’t nearly as much fun as I thought they would be, especially since my instructor chose to test my performance under pressure — asking me to fly an unfamiliar approach to the runway, while simulating a landing light failure, all during a rushed and chaotic situation — and I didn’t handle it particularly gracefully.  But “trial by fire” was the whole point, and I feel that I learned from the experience and am better prepared to execute emergency landings at night.  I also did manage to land the airplane despite the chaos, though I’d drifted off the runway centerline and was still drifting as the wheels touched down.)

The cross-country flight was spectacular.  All the more so because I didn’t think I’d get to fly due to the weather:  There was a cloud layer (“ceiling”) around 4,000 feet along much of the route, well below the 5,000 foot minimum required by my flight school for cross-country flights.  Also, the surface winds were gusting to 16 knots at KBED and 18 knots at KGON, both above the 15-knot limit that my instructor chose for my original solo endorsement.  But my instructor waived both limits for the flight, citing his comfort level with how well I’ve been flying lately, and off I went at 3,000 feet.

It felt as though everything went right:

  • My navigation was great.  I chose to navigate primarily using VOR navigation, with dead reckoning as backup (following along on my aviation chart and looking for outside ground references to verify my position and course) and GPS as backup to the backup.  In the past my VOR navigation has been shaky, but this time it was rock solid — thanks to my instructor’s advice to set up the navigation radios before I even taxied the airplane, instead of hurriedly trying to dial them in when I need them.  My route was KBED to the GDM (Gardner, MA) VOR, to the PUT (Putnam, CT) VOR, to a landing at KGON (Groton, CT), thence direct to a landing at KEWB (New Bedford, MA), and back to KBED.
  • My landings were great.  Approaching KGON I twice asked the tower for a “wind check” to verify that the winds were still below the maximums to land; I was concerned both with the wind gusts and the “crosswind component” of the wind.  (Pilots prefer the wind to blow steadily and directly down the runway.  The winds at KGON were both gusty and at an angle to the runway; if the crosswind component of the gusts was greater than 8 knots then I was not authorized to land.)  I was prepared throughout the landing to abort if the winds started gusting, but ended up with a landing so smooth it felt as though there were no wind whatsoever.
  • The views were great.  Here are some pictures:

3,000 feet over Newport, RI

Wow.  Also:

2,500 feet over Waltham, MA (view towards Boston)

and

Boston, MA. Our house is off-frame to the right.

Wow.

This weekend I passed my private pilot knowledge test, scoring 54 correct (90%) out of 60 questions.  (A passing score is 70% or above.)  The questions I missed were on the following topics:

  • Hand-propping an airplane.  Engines without electric starters require someone to go out and manually spin the propeller “old-school” to get the engine going.  Since I don’t do any hand-propping I hadn’t even read the section of the Airplane Flying Handbook that explains the recommended procedure (“Contact!” etc.)
  • Tri-color visual approach slope indicator (VASI).  Does a tri-color VASI use a green, amber, or white light to indicate that you are on the correct glideslope?  I answered white (I suppose I was thinking about a pulsating VASI) instead of the correct green.  A tri-color VASI doesn’t even have a white light!  I’m not sure there are any airports in the Northeast that still have a tri-color VASI in use, but if so I’d love to see one.  EDIT: There are two nearby!  Falmouth Airpark (5B6, Falmouth, MA) and Richmond Airport (08R, West Kingston, RI) both report tri-color VASIs in use.
  • Characteristics of stable air masses.  One of the neat things about learning to fly is that you learn a lot of arcane or obscure facts about weather systems, fog, etc., that generally are only going to be useful to you if you plan to fly through ugly weather.  Apparently I didn’t learn enough arcane or obscure facts; I missed two weather-related questions.
  • Dropping items from the airplane.  It turns out it’s totally legit to drop things from an airplane!  (I incorrectly answered that you are only allowed to do so in an emergency.)  FAR 91.15: “No pilot in command of a civil aircraft may allow any object to be dropped from that aircraft in flight that creates a hazard to persons or property. However, this section does not prohibit the dropping of any object if reasonable precautions are taken to avoid injury or damage to persons or property.”  During my post-exam review my instructor mentioned that he has returned car keys to his wife using this method.

So I’m inexorably closer to being done!  I have a couple of in-school checkrides coming up with another instructor — the fifth instructor I will have flown with during my flight training — and if the weather cooperates I could take the final FAA oral test and checkride as early as October 10.

September 10, 2012

Left-brained flying

Filed under: Aviation — JLG @ 11:30 PM

I’m almost finished with flight training for my private pilot certificate!  That’s actually a little disappointing, because I’ve been enjoying the training so much; I probably won’t fly nearly this much after the lessons are over.

Two weeks ago I performed one of my two required solo cross-country flights.  When I first heard about this requirement I hoped that cross-country meant a flight from Boston to Seattle (cross country) or even to Winnipeg (cross countries) but it turns out it just means, in the exciting parlance of Title 14 of the Code of Federal Regulations, Part 61.109(a)(5)(ii), simply:

one solo cross country flight of 150 nautical miles total distance, with full-stop landings at three points, and one segment of the flight consisting of a straight-line distance of more than 50 nautical miles between the takeoff and landing locations.

I’d also hoped that I would be able to pick the airports for my cross-country training flights, but that choice too is regulated, this time by the flight school.  For my first flight my instructor assigned one of two approved routes, in this case KBED-KSFM-KCON-KBED — Bedford, MA, to Sanford, ME, to Concord, NH, and back to Bedford.

(Astute readers will calculate that route as only covering 143 nautical miles.  Although that distance doesn’t meet the requirement above, it’s okay since the school requires students perform a minimum of two solo cross-country routes; the next one will be a 179-mile trip to Connecticut and back.  I suspect the school chose those airports for the first “solo XC” because SFM and CON are both non-towered airports that are easy to find from the air — meaning good practice and confidence-building for the student — as well as because the student has flown round-trip to SFM with an instructor at least once.)

The most memorable aspect of my first solo cross-country flight was that everything happened so quickly!  Without an instructor around to act as a safety net, I had an ever-present feeling that I was forgetting something or missing something:

  • Oh shoot I forgot to reset the clock when I passed my last checkpoint; where exactly am I right now?
  • Oh shoot I haven’t heard any calls for me on this frequency lately, did I miss a message from air traffic control to switch to a different controller’s frequency?
  • Oh shoot I’ve been so busy with the checklist that it’s been over a minute since I looked outside the cockpit…is someone about to hit me?  Am I about to collide with a TV tower?

There were two genuine wide-eyed moments on the flight:

  1. Traffic on a collision course.  While flying northeast at 3,500 feet, air traffic control informed me that there was another plane, in front of me, headed towards me, at my altitude.  Yikes.  I hesitated while looking for the plane, until ATC notified me again, a little more urgently, that there was a plane in front of me at my altitude — except that it was a lot closer than it had been a moment before.  I (a) asked ATC for a recommendation, (b) heard them recommend that I climb 500 feet, (c) did so, forthwith.  Moments later I saw the plane, passing below me and just to my left — we would have missed each other, but not by much.
    Lesson learned:  What I should have done was immediately change altitude and heading as soon as I got the first notification from ATC.  I delayed because I didn’t comprehend the severity of the situation; it’s pretty rare for someone to be coming right at you — this was the first time it’s happened to me.  Given the other pilot’s magnetic heading, that plane was flying at an altitude contrary to federal regulations, which would have been small consolation if we’d collided. (Sub-lesson learned, as my Dad taught me when learning to drive:  Think of the absolutely dumbest, stupidest, most idiotic thing that the other pilot could possibly do, and prepare for him to do exactly that.)
  2. Flight into a cloud.  During the SFM-CON leg I flew (briefly and unintentionally) into a cloud.  Yikes.  The worst part is I didn’t even see the cloud coming; visibility was slowly deteriorating all around me, so I was focusing mostly on the weather below and to my left, trying to determine when I should turn left to get away from the deteriorating weather.  All of a sudden, wham, white-out.  At the time I was flying at 4,500 feet with a ceiling supposedly at 6,500 feet in that area — at least according to my pre-flight weather briefing — so I’d expected to be well clear of the clouds.  (The clouds probably had been at 6,500 feet two hours before when I got the weather briefing.)
    Flying into clouds without special “instrument meteorological conditions” training is (a) prohibited by the FAA and (b) a bad idea; without outside visual references to stay straight-and-level you can pretty quickly lose your spatial orientation and crash.  During flight training you’re taught what to do if you unintentionally find yourself in a cloud: Turn around! is usually your best option:  Check your current heading, make a gentle 180-degree turn, keep watching your instruments to make sure you’re not gaining or losing altitude or banking too steeply, exit the cloud, unclench.  Fortunately, an opening quickly appeared in the cloud below me, so I immediately heaved the yoke forward and flew down out of the cloud, then continued descending to a safe altitude (safe above the ground and safe below the clouds).
    Lesson learned: I should have changed my flight plan to adapt as soon as I noticed the weather start to deteriorate.  First, I should have stopped climbing once I noticed that visibility was getting worse the more I climbed.  Second, given that my planned route was not the most direct route to the destination airport, I should have diverted directly toward the destination (where the weather looked okay) as soon as the weather started getting worse instead of continuing to fly my planned route.

Despite these eye-opening moments, the flight went really well.  My landings were superb — I am pleased to report that, now that I have over 150 landings in my logbook, I can usually land the plane pretty nicely — and once I settled into the swing of things I had time to look out the window, enjoy the view, and think about how much fun I’m having learning to fly.

Unfortunately I haven’t yet had the opportunity to repeat the experience.  I was scheduled to fly the longer cross-country flight the next weekend, but the weather didn’t cooperate.  I was then scheduled to fly the longer cross-country flight the next next weekend, but the weather didn’t cooperate.  So I’m hoping that next weekend (the next next next weekend, so to speak) will have cooperative weather.  Once I finish the second cross-country flight I will then pass the written exam, then take “3 [or more] hours of flight training with an authorized instructor in a single-engine airplane in preparation for the practical test,” then pass the practical test.  Then I’ll be a pilot!  Meanwhile, whenever I fly I am working on improving the coordination of my turns (using rudder and ailerons in the right proportions), making sure to clear to the left or right before starting a turn, and remembering to execute the before-landing checklist as soon as I start descending to the traffic pattern altitude.

Overall, flying is easier than I expected it to be.  The most important rule, always, is: fly the airplane.  No matter what is happening — if the propeller shatters, just as lightning strikes the wing, causing the electrical panel to catch fire, while simultaneously your passenger begins seizing and stops breathing — maintain control of the airplane!

  • First, the propeller shattering means that you’ve lost your engine; follow the engine failure checklist.  In the Cessna Skyhawk, establish a 68-knot glide speed; this best-glide speed gives you the most distance to find a place to land.  Then, look for the best place to land [you should already have a place in mind, before the emergency happens] and turn towards that place.
  • Now, attend to the electrical fire.  First, fly the airplane — maintain 68 knots, maintain a level heading, continue heading toward your best place to land.  Meanwhile, follow the electrical fire in flight checklist by turning off the master switch to turn off power to the panel.  Are you still flying the airplane?  Good, now turn off all the other switches (still flying?), close the vents (fly), activate the fire extinguisher (fly), then get back to flying.  (It may sound like I’m trying to be obtuse here, but that’s really the thought process you’re supposed to follow — if you don’t fly the airplane, it won’t matter in the end what you do to extinguish the fire.)
  • Now, ignore your passenger.  Your job is to get the airplane on the ground so that you can help or call for help.  Unfortunately you don’t have a radio anymore — you lost it when you flipped the master switch — so once you’re within range of your best place to land, execute an emergency descent and get down as quickly as possible.

My new instructor often tells me that I’m flying too tensely, especially on the approach to landing — he remarks that I tighten my shoulders, look forward with intense concentration, make abrupt control movements, and maintain a death grip on the yoke.  This tenseness is what I think of as “left-brained flying”:  I am too cerebral, utilitarian, and immediate in my approach to maneuvering and in handling problems in the air; it gets the job done (I fly, I turn, I land, etc.) but doesn’t result in a very artistic (or comfortable) flight.  I am working to be more of a “right-brained pilot”:  reacting to the flow of events instead of to single events, making small corrections to the control surfaces and waiting to see their effect on my flight path, and in general relaxing and enjoying the flight instead of obsessing over the flight parameters.

August 30, 2012

High-sodium passwords

Filed under: Opinions,Work — JLG @ 12:00 AM

Recently I’ve had some interesting conversations about passwords and password policies.

In general I despise password policies, or at least I despise the silly requirements made by most policies.  As I wrote in TCS’s recent Better Passwords, Usable Security white paper, “Why do you require your users’ passwords to look as though somebody sneezed on their keyboard? … Is your organization really better protected if you require your users to memorize a new 14-character password every two months? I argue no!”

In the BPUS white paper — which is behind a paywall, so I understand it’s unlikely you’ll ever read it — I argue for three counterintuitive points:

  1. Password policies should serve your users’ needs, not vice versa.
  2. Passwords shouldn’t be your sole means of protection.
  3. Simpler passwords can be better than complex ones.

Beyond these points, it is also important to implement good mechanisms for storing and checking passwords.

Storing passwords: In June of this year there was a flurry of news articles about password leaks, including leaks at LinkedIn, eHarmony, and Last.fm.  The LinkedIn leak was especially bad because they didn’t “salt” their stored password hashes.  Salting works as follows (a short code sketch follows the list):

  • An authentication system typically stores hashes of passwords, not cleartext passwords themselves.  Storing the hash originally made it hard for someone who stole the “password file” to actually obtain the passwords.
  • When you type in your password, the authentication system first takes a hash of what you typed in, then compares the hash with what’s stored in the password file.  If your hash matches the stored hash, you get access.
  • But attackers aren’t dumb.  An attacker can create (or obtain) a “rainbow table” containing reverse mappings of hash value to password.  For example, the SHA-1 hash of “Peter” is “64ca93f83bb29b51d8cbd6f3e6a8daff2e08d3ec”.  A rainbow table would map “64ca93f83bb29b51d8cbd6f3e6a8daff2e08d3ec” back to “Peter”.
  • Salt can foil this attack.  Salt is a string of random characters appended to a password before the hash is taken.  So, using the salt “89h29348U#^^928h35”, your password “Peter” would be automatically extended to “Peter89h29348U#^^928h35”, which hashes to “b2d58c2785ada702df68d32744811b1cfccc5f2f”.  For large truly-random salts, it is unlikely that a rainbow table already exists for that salt — taking the reverse-mapping option off the table for the attacker.
  • Each user is assigned a different set of random characters for generating the salted hash, and these would be stored somewhere in your authentication system.  Nathan’s set of random characters would be different from Aaron’s.
  • A big win of salt is that it provides compromise independence.  Even if an attacker has both the password/hash file and the list of salts for each user, the attacker still has to run a brute-force attack against every cleartext password he wants to obtain.
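
Here’s a minimal Python sketch of the scheme described in the list above.  The function names and the dictionary “database” are just my illustration, and I use SHA-1 only because it matches the examples above, not because it’s the hash you should pick:

```python
import hashlib
import os

def salted_hash(password: str, salt: bytes) -> str:
    # Extend the password with the salt, then hash the result.
    return hashlib.sha1(password.encode() + salt).hexdigest()

def store_password(db: dict, user: str, password: str) -> None:
    # Each user gets a fresh, large, truly-random salt.
    salt = os.urandom(16)
    db[user] = (salt, salted_hash(password, salt))

def check_password(db: dict, user: str, attempt: str) -> bool:
    # Re-hash the attempt with the stored salt and compare hashes.
    salt, stored = db[user]
    return salted_hash(attempt, salt) == stored

db = {}
store_password(db, "Nathan", "Peter")
store_password(db, "Aaron", "Peter")          # same password, different salt...
print(db["Nathan"][1] != db["Aaron"][1])      # ...so different stored hashes: True
print(check_password(db, "Nathan", "Peter"))  # True
print(check_password(db, "Nathan", "wrong"))  # False
```

Because Nathan’s and Aaron’s salts differ, their identical passwords produce different stored hashes, which is exactly why an attacker who steals the whole file still has to brute-force each user separately.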

If you don’t salt your passwords, then anyone who gets access to the leaked file can likely reverse many of the passwords very easily.  This password recovery is especially problematic since many users reuse passwords across sites (I admit that I used to do this on certain sites until fairly recently).
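
To make that concrete, here’s a toy illustration (a plain precomputed lookup table, rather than a true chained rainbow table) of how quickly unsalted hashes fall:

```python
import hashlib

# A toy stand-in for a rainbow table: precomputed hash -> password
# mappings for a short dictionary of candidate passwords.
candidates = ["password", "123456", "letmein", "Peter"]
lookup = {hashlib.sha1(p.encode()).hexdigest(): p for p in candidates}

# A "leaked file" of unsalted SHA-1 hashes: every match reverses instantly.
leaked = [hashlib.sha1(b"Peter").hexdigest()]
for h in leaked:
    if h in lookup:
        print("cracked:", lookup[h])  # cracked: Peter
```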

Checking passwords: But it turns out that salt may no longer be solving the world’s password woes.  A colleague sent me a link to a post-LeakedIn interview arguing that cryptographic hashes are passé.  At first I felt that the interviewee was blowing smoke, and wrote the following observations to my colleague:

He confuses the notion of “strong salt” with “strong hash”.

(A) strong salt: you add a lot of random characters to your password before hashing…as a result the attacker has to run a brute force attack against the hash for a looooong time (many * small effort) in order to crack the password.

(B) strong hash: you use a computationally-intensive function to compute the hash…as a result the attacker has to run a brute force attack against the hash for a looooong time (few * large effort) in order to crack the password.

In both cases you get the desirable “looooong time” property.  You can also combine (A) and (B) for an even looooonger time (and in general looooonger is better, though looooong is often long enough).

There can be some problems with approach (B) — the biggest is non-portability of the hash (SHA-1 is supported by pretty much everything; bcrypt isn’t necessarily), and another could be remote denial-of-service attacks against the authentication system (it will have a much higher workload because of the stronger hash algorithm, and if you’re LinkedIn you have to process a lot of authentications per second).

Conclusion: The problem with LinkedIn was the lack of salted passwords.
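
To make my (A)-versus-(B) distinction concrete, here’s a sketch of approach (B) using PBKDF2 from Python’s standard library (hashlib.pbkdf2_hmac); the iteration count is illustrative, not a recommendation:

```python
import hashlib
import os

def strong_hash(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    # PBKDF2 applies the underlying hash `iterations` times, so each
    # brute-force guess costs the attacker that much more computation.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)
stored = strong_hash("Peter", salt)

# A legitimate login pays the expensive computation only once per attempt:
print(strong_hash("Peter", salt) == stored)  # True
```

Raising the iteration count trades login latency against attacker cost, which is exactly the denial-of-service tension mentioned above.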

But I kept thinking about that article, and related posts, and eventually had to eat some of my words (though not all of them).  Looooong is often not long enough.

The best discussion I found on the topic was in the RISKS digest, especially this post and its two particularly interesting references.

My point (A) above may be becoming less and less valid due to the massive increases in cracking speed made possible by running password crackers directly on GPUs.  Basically, if salt + password is your defense, the salt needs to be large and strong enough that brute force stays impractical even at GPU speeds.  So that raises the concern that some people aren’t using a large/strong enough salt.

Beyond salting there are other ways to increase the security of a password-based authentication system.  Instead of a stronger hash, you could require users to type 20-character passwords, or you could require two passwords, etc.

But back to my original point: longer or more complex passwords aren’t always the best choice.  That is especially the case when you have two-factor authentication (or other protection mechanisms) — as long as you use the two factors everywhere.  (For example, one company I recently talked with deployed a strong two-factor authentication system for VPN connections but mistakenly left single-factor password authentication enabled on their publicly-accessible webmail server.)
