Jagged Thoughts | Dr. John Linwood Griffin

February 24, 2013

Nondeductible IRA contribution or mortgage prepayment?

Filed under: Homeownership — JLG @ 10:28 PM

Having just filed our 2012 taxes, I’m pondering an interesting question.  Over the past year I set aside $5,000 that I’d planned to put into a traditional IRA as a nondeductible (after-tax) contribution.  It occurs to me that I could instead use the money as a principal prepayment on our mortgage.  The question is:  Should I?

[The IRA contribution is nondeductible because I participated in my company’s 401(k) plan in 2012.  As a result I am not eligible to deduct my IRA contributions from our federal taxes.  As the IRS explains: “If you were covered by a retirement plan (qualified pension, profit-sharing (including 401(k)), annuity, SEP, SIMPLE, etc.) at work or through self-employment, your IRA deduction may be reduced or eliminated. But you can still make contributions to an IRA even if you cannot deduct them.”]

If every year I chose $5,000 mortgage prepayments over nondeductible IRA contributions, then:

Advantages:

  • Faster payoff.  We would pay off the mortgage in 20 years (or fewer) instead of 30.
  • Less interest.  We would save at least $75,000 in interest payments over the life of the loan.  (A rough sketch of this arithmetic appears after these lists.)
  • Zero risk.  Each prepayment would yield a guaranteed return of 3.625%/year (our mortgage rate) through 2042.
  • More equity.  We would have more equity in the house if we decide to sell (if we move or “trade up”) or if we need to do a cash-out refinance.

Disadvantages:

  • Underfunding retirement?  We would be reducing our retirement savings.  (However, during years 21-30 we could pay ourselves “mortgage payments” directly into our retirement savings.  If we are disciplined about it, those payments would make up much of the difference.)
  • Tax deferral.  The prepayment wouldn’t experience tax-deferred growth as it would in an IRA.
  • Lower returns?  There’s the chance that an IRA would grow in value significantly more than 3.625%/year.  Additionally, the IRA would continue to grow (or shrink) in value until withdrawn (or until we die, I suppose), whereas a prepayment’s “zero-risk return” ends when the 30-year mortgage term ends.
  • Inflation hedge.  If the dollar experiences high inflation in the next few years, we’d be better off if we were carrying lots of debt (i.e., a high mortgage balance).
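
For the curious, here is a rough sketch of the arithmetic behind the “faster payoff” and “less interest” bullets.  The 3.625% rate and the $5,000/year prepayment come from this post; the $300,000 starting balance is an assumption of mine (I’m not publishing our actual balance), so treat the output as illustrative rather than exact.

    # A rough amortization sketch (Python).  The rate and the prepayment amount
    # come from the post; the starting balance is assumed.

    def amortize(balance, annual_rate, payment, extra_per_month=0.0):
        """Return (months_to_payoff, total_interest_paid)."""
        r = annual_rate / 12.0
        months, total_interest = 0, 0.0
        while balance > 0:
            interest = balance * r
            total_interest += interest
            balance = balance + interest - (payment + extra_per_month)
            months += 1
        return months, total_interest

    principal, rate, term = 300_000, 0.03625, 360
    r = rate / 12.0
    scheduled = principal * r / (1 - (1 + r) ** -term)   # standard P&I payment

    base = amortize(principal, rate, scheduled)
    prepay = amortize(principal, rate, scheduled, extra_per_month=5000 / 12.0)

    print("No prepayment:  %d months, $%.0f interest" % base)
    print("With $5k/year:  %d months, $%.0f interest" % prepay)
    print("Interest saved: $%.0f" % (base[1] - prepay[1]))

With these assumptions the loan pays off in roughly 20 years and the interest savings land in the low $70,000s; a somewhat larger starting balance pushes the savings past the $75,000 figure above.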

Either way I wouldn’t be putting all of our retirement eggs into one basket, in that I’m already making contributions to a 401(k) retirement plan.  (By the way, the best answer I’ve found to the question “How much should I save for retirement?” is in Rande Spiegelman’s article “Play the Percentages”.)

Two things made me start thinking about this trade-off between mortgages and nondeductible IRAs:

  1. the notion that both are illiquid ways to invest for retirement, and
  2. this Mortgage Professor article about mortgage repayment as a long-term investment.

The Professor addresses a similar question in his article Roth IRA contributions vs. mortgage prepayment.  (My question is about traditional IRAs.)  The only other relevant advice I’ve found so far is in this paper comparing mortgage prepayment with pre-tax retirement contributions.  (My question is about nondeductible traditional IRAs.)

Until this year my strategy for retirement investing has been (sketched in code after the list):

  1. If you have a 401(k) (or similar) plan with company matching contributions, first make contributions up to the company match.  (For example, if your company matches up to $3,000 of contributions then put your first $3,000 into the 401(k).)
  2. Next, make contributions to a Roth IRA up to the maximum allowable amount.
  3. Next, max out your pre-tax contributions to the 401(k).
  4. Next, make deductible contributions to a traditional IRA up to the maximum allowable amount.
  5. Next, make nondeductible contributions to a traditional IRA up to the maximum allowable amount.
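
Expressed as code, the strategy is just a waterfall: fill each bucket in priority order until the money runs out.  This is a minimal sketch; the dollar limits are approximate 2012-era figures, the $3,000 match is the example from step 1, and (as the last bucket notes) Roth and traditional IRAs share a single combined annual limit.

    def allocate(savings, buckets):
        """buckets: ordered (name, capacity) pairs; returns (name, amount) allocations."""
        allocations = []
        for name, capacity in buckets:
            amount = min(savings, capacity)
            allocations.append((name, amount))
            savings -= amount
        return allocations

    buckets = [
        ("401(k) up to the company match",       3_000),   # example match from step 1
        ("Roth IRA",                             5_000),   # approximate 2012 IRA limit
        ("401(k) remainder",                    14_000),   # ~$17,000 limit minus the match above
        ("Traditional IRA (shares IRA limit)",       0),   # the Roth already used the combined limit
    ]

    for name, amount in allocate(20_000, buckets):
        print(f"{name}: ${amount:,}")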

So the conundrum is whether I should replace step 5 (or even step 4) with “Next, make prepayments against your mortgage principal.”  Arguably I have until April 15 to decide, although if I choose prepayment then every month’s delay costs me $500 more in interest paid over the life of the loan.

January 25, 2013

The 5 P’s of cybersecurity

Filed under: Opinions,Work — JLG @ 12:00 AM

Earlier this month I had the privilege of speaking at George Mason University’s cybersecurity innovation forum.  The format was a “series of ten-minute presentations by cybersecurity experts and technology innovators from throughout the region. Presentations will be followed by a panel discussion with plenty of opportunity for discussion and discovery. The focus of the evening will be on cybersecurity innovations that address current and evolving challenges and have had a real, measurable impact.”

(How does one prepare for a 10-minute talk?  The Woodrow Wilson quote came to mind: “If I am to speak ten minutes, I need a week for preparation; if fifteen minutes, three days; if half an hour, two days; if an hour, I am ready now.”)

Given my experience with network security job training here at TCS, I decided to talk about the approach we take to prepare students for military cybersecurity missions.  It turned out to be a good choice:  The topic was well received by the audience and provided a nice complement to the other speakers’ subjects (botnet research, security governance, and security economics).

My talk had the tongue-in-cheek title The 5 P’s of cybersecurity: Preparing students for careers as cybersecurity practitioners.  I first learned of the 5 P’s from my college roommate who captained the Auburn University rowing team.  He used the 5 P’s (a reduction of the 7 P’s of the military) to motivate his team:

Poor Preparation = Piss Poor Performance

In the talk I asserted that this equation holds as true for network security jobs as it does for rowing clubs.  A cybersecurity practitioner who is not well prepared—in particular, one who does not understand the “why” of things happening on their network—will perform neither effectively nor efficiently at their job.  And as with rowing, network security is often a team sport:  One ill-prepared team member will often drag down the rest of the team.

I mentioned how my colleagues at TCS (and many of our competitors and partners in the broad field of “advanced network security job training”) also believe in the equation, perhaps even more so given that many of them are former or current practitioners themselves.  I have enjoyed working alongside instructors who are passionate about the importance of doing the best job they can.  Many subscribe to an axiom that my father originally used to describe his work as a high-school teacher:

“If my student has failed to learn, then I have failed to teach.”

After presenting this axiom I discussed several principles TCS has adopted to guide our advanced technical instruction, including:

  1. Create mission-derived course material with up-to-date exercises and tools.  We hire former military computer network operators to develop our course content, in part to ensure that what we teach in the classroom matches what’s currently being used in the field.  When new tools are published, or new attacks make the news, our content-creators immediately start modifying our course content—not simply to replace the old content with the new, but rather to highlight trends in the attack space & to involve students in speculating on what they will encounter in the future.
  2. Engage students with hands-on cyber exercises. Death by PowerPoint is useless for teaching technical skills.  Even worse for technical skills (in my opinion, not necessarily shared by TCS) is computer-based training (CBT).  Our Art of Exploitation training is effective because we mix brief instructor-led discussions with guided but open-ended hands-on exercises using real attacks and real defensive methodologies on real systems.  The only way to become a master programmer is to write a large and diverse body of software; the only way to become a master cybersecurity practitioner is to encounter scenarios, work through them, and be debriefed on your performance and what you overlooked.
  3. Training makes a practitioner better, and practitioners make training better.  A critical aspect of our training program is that our instructors aren’t simply instructors who teach fixed topics.  Our staff regularly rotate between jobs where they perform the cybersecurity mission—for example, by participating in our penetration testing and malicious software analysis teams—and jobs where they teach that mission using the skills they maintain in the first role.  Between our mission-relevant instructors and a training environment set up to emulate on-the-job activities, our students’ classroom experience builds toward what they will experience months later on the job.

The audience turned out to be mostly non-technical but I still threw in an example of the “why”-oriented questions that I’ve encouraged our instructors to ask:

The first half of an IPv6 address is like a ZIP code.  The address simply tells other Internet computers where to deliver IPv6 messages.  So the IPv6 address/ZIP code for George Mason might be 12345.

Your IPv6 address is typically based on your Internet service provider (ISP)’s address.  In this example, George Mason’s ISP’s IPv6 address is 1234.  (Continuing the example, another business in Fairfax, Virginia, served by the same ISP might have address 12341; another might have 12342; et cetera.)

However, there is a special kind of address—a provider-independent address—that is not based on the ISP.  George Mason could request the provider-independent address 99999.  Under this scheme GMU would still use the same ISP (1234), they would just use an odd-duck address (99999 instead of 12345).

Question A:  Why is provider-independent addressing good for George Mason?

Question B:  Why is provider-independent addressing hard for the Internet to support?
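
To make the analogy concrete with real notation, here is a small sketch using Python’s ipaddress module (3.7 or later).  The 2001:db8::/32 documentation prefix stands in for the ISP, and both /48 prefixes below are purely illustrative.

    import ipaddress

    # Hypothetical prefixes, for illustration only.
    isp    = ipaddress.ip_network("2001:db8::/32")        # the ISP's allocation ("1234")
    gmu_pa = ipaddress.ip_network("2001:db8:1234::/48")   # provider-assigned ("12345")
    gmu_pi = ipaddress.ip_network("2001:4db8:9999::/48")  # provider-independent ("99999")

    # The provider-assigned prefix falls inside the ISP's block, so the rest of
    # the Internet only needs one route (the ISP's /32) to reach GMU.
    print(gmu_pa.subnet_of(isp))   # True

    # The provider-independent prefix does not, so backbone routers need a
    # separate route just for it.  One common answer to Question B: PI prefixes
    # can't be aggregated, so they grow the global routing table.  One common
    # answer to Question A: GMU keeps its addresses even if it changes ISPs.
    print(gmu_pi.subnet_of(isp))   # False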

Overall I had a great evening in Virginia and I am thankful to the staff at George Mason for having extended an invitation to speak.

December 19, 2012

Eighteen hours later

Filed under: Homeownership — JLG @ 10:12 PM

Not even 18 hours after I wrote the previous post, our water heater’s relief valve decided to relieve itself.

Water heater pressure and temperature relief valve. The valve releases water into the pipe (shown), which in turn releases water onto your basement floor (not shown).

Fortunately for us it didn’t flood very much.  The water was about an inch deep in the middle of the basement (a concrete floor with no sump or drain), and the floor was dry along much of the basement walls.  I was easily able to reach the water supply cut-off valve and stanch the flow.  Most worrisome is that the bottom of our expensive new furnace was submerged—though thankfully none of the internal components got wet—so I’ll need to keep an eye out for rusting for a while.

The plumber’s working theory is:

  1. We had the temperature set higher than necessary (about 135 degrees);
  2. the old temperature control module somehow caused the temperature to rise much hotter than the setting;
  3. the relief valve opened due to overheating; and
  4. the relief valve stuck open due to sediment in the tank.

We now have a shiny new relief valve, several shiny new water sensor alarms, and a shiny new bill for emergency service from our plumber.

Other things that were helpful to have on hand:

  • Wet/dry vacuum.  We have a model that is able to empty itself automatically via a garden hose attached to the pump exhaust.  Without this feature you’ll spend 10 minutes emptying the bucket for every 1 minute you spend vacuuming up the flood.
  • Dehumidifier.  We have a model that automatically pumps its contents into the basement sink via a supplied 18-foot hose.
  • Box fan.
  • Shop towels.
  • Large garbage bags into which to put the sodden contents of the cardboard boxes that just yesterday you moved from shelves onto the floor “just for a couple days.”

While waiting for the plumber we spent a day with the water heater shut off.  And holy smokes, Boston tap water is cold this time of year.

It could have been much, much worse.  I went to the basement yesterday morning just to see if there was any seepage through the basement walls from the heavy rain outside.  (And good news: I didn’t see any seepage from the walls.)  If it hadn’t been raining we probably wouldn’t have noticed anything amiss until the flooding shorted out the main electrical panel, which at that point would have represented tens of thousands of dollars of damage and the loss of our “claims free discount” from the insurance company.

$10 water sensor alarms are your friend.

December 16, 2012

So you’ve bought a house!

Filed under: Homeownership — JLG @ 9:14 PM

Once you’ve bought a house there is a temptation to be very Munroevian about the whole matter:

The Munroevian approach to homeownership. (Source: xkcd.com/905/)

It’s been pretty fun geeking out about home maintenance options, making plans for repairs and additions, and even picking up a hammer myself now and then.  There are several surprisingly informative websites with details about how houses work, including:

  • Inspectapedia:  for example, this article about the insulation we just had installed.
  • Check This House:  for example, this article about the importance of second-floor air return ducting (a potential long-term maintenance item for our house).

Six months after closing I understand a little better the #1 question in buying a house—“how much house can I afford?”—or, more to the point, “how big a monthly housing expense can I afford?”

Monthly housing expense = mortgage payment + homeowners insurance + property tax – interest tax deduction + maintenance [or association fees]

Mortgage payment:

Conventional wisdom says to make your mortgage payment as large as possible.  It will be painful now but less painful over time, especially as your earnings rise over the course of your career.  That easing is because your payment will stay the same over the lifetime of your mortgage:  If you have a 30-year mortgage and you make $1000/month payments today (on principal and interest), you’ll still be making exactly $1000/month payments in 29 years.

The effects of inflation will mean that, in 30 years, your $1000/month payment will only feel like a $400/month payment.  (Note however that it is statistically unlikely that you will hold the same mortgage for 30 years—I’ve read several times that mortgages average about 7 years before the house is sold or refinanced.)

Low interest rates are good except for one thing:  I worry about resale value if interest rates rise significantly.  At 3.5% interest rates, a buyer who can afford $1000/month payments can buy a $225,000 house.  But at 7.0% rates, that same buyer can only buy a $150,000 house.

When interest rates go up, many prospective buyers won’t be able to afford to pay as much for your house as you did.  Will we have problems selling our house without taking a bath?
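
Here is the arithmetic behind those two numbers, as a quick sketch using the standard loan-payment formula (30-year term, principal and interest only):

    # How much loan a fixed monthly payment buys at a given rate (sketch).
    def affordable_principal(monthly_payment, annual_rate, years=30):
        r = annual_rate / 12.0
        n = years * 12
        return monthly_payment * (1 - (1 + r) ** -n) / r

    for rate in (0.035, 0.070):
        print("%.1f%%: $%.0f" % (rate * 100, affordable_principal(1000, rate)))
    # Prints roughly $222,700 at 3.5% and $150,300 at 7.0% -- the $225,000 and
    # $150,000 figures above, before taxes, insurance, and the down payment.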

Homeowners insurance:

We pay $90/month.  Annoyingly, our loan documents require a $1,000 deductible for the policy—i.e., we’re not allowed to crank up the deductible to lower our rate.

One thing that surprised me is that none of the “big boys” (Allstate, State Farm, GEICO, etc.) write insurance policies in Massachusetts.  I had a similar problem trying to get renter’s insurance in Florida.  Perhaps we should move to some milquetoast state with uniform laws and no propensity for natural disasters?

Property taxes:

You can find out what we (or your neighbors, or pretty much anybody) pay for property taxes by looking them up on the county tax assessor’s website; these are matters of public record.

As with many jurisdictions, Boston has a residential tax exemption—your taxes are reduced by $130/month if the property serves as your principal residence.  So budget for additional expense if you plan to rent out your house.

Tax deduction:

I can’t imagine the federal mortgage interest tax deduction surviving much longer.  My guess is that it will be phased out over the next few years.  Without the deduction our monthly housing expense will increase by $300/month.

Also, as with rising interest rates, I suspect a tax deduction phase-out will have a depressing effect on home resale prices.

Maintenance:

The expense of home maintenance has surprised me.   So far we’ve spent money in three categories (with a fourth category of work deferred):

A. Required maintenance:  $10,000 for new roof shingles.

The home inspectors and roofing experts who evaluated the house initially gave us a one to two year window for replacing the roof.  However, when we had roofers up to do minor repairs (repointing the chimney, recementing the vents, cleaning the gutters) they found cracking asphalt and other problems that prompted us to schedule the replacement immediately.  The new roof will hopefully be good for about 20 years.

B. Opportunity-cost improvements:  $3,600 for whole-house insulation and $3,900 for an oil-to-gas conversion of the furnace.

Massachusetts has an astounding program called Mass Save where you can receive an interest-free loan to defray the up-front costs of energy-efficiency improvements to your house.  The improvements will pay for themselves within three years (in the form of reduced utility bills), plus the house is more comfortable afterwards.  It’s a total win-win-win program for homeowners.

There are also incentive rebate programs for efficiency improvements.  The insulation work actually cost $5,600 (minus a $2,000 rebate from Mass Save); the furnace conversion cost $4,700 (minus an $800 rebate from our gas utility company).

We could have waited a year to make these improvements—the oil heater and indoor fuel oil tank were only about ten years old—but with the rebates and interest-free loan there was no reason not to jump on these, especially with the possibility that the program might not be renewed in future years.

The old 275-gallon oil tank, taking up space in our basement.

With the oil tank removed, there is space aplenty to eventually install a demand water heater.

C. Functional improvements:  $6,000 for electrical work and exhaust ventilation.

Our house is over 100 years old and (not surprisingly) didn’t have outlets or lights or exhaust fans everywhere we wanted them.  Worse, we were occasionally tripping breakers by running too many appliances on a single circuit.

We could have waited a year or two before performing this work, but I wanted to have the new wires pulled before having the insulation work done on the exterior walls and in the attic.  (The electricians said they could certainly do it even after the insulation was put in but that it would be “messier”.)

Also, it is a perpetual source of happiness for me to walk into the kitchen and see:

New externally-vented range hood. We use it daily. The white square of paint is where the over-the-range microwave used to be hung.

or into the bathroom and see:

New bathroom electrical outlet (one of two). Previously the bidet power cord ran along the bathroom floor, via an ugly grey extension cord, into the outlet by the sink.

Every time I take a shower I look over at the safer, neater, more convenient bathroom outlet and feel the joy of homeownership.  (We also solved four other extension-cord problems elsewhere in the house, each of which brings joy in turn.)

D. Deferred maintenance and improvements:  Water heater replacement, carpentry, repointing the basement walls.

Our water heater is nine years old and has a nine-year warranty.  I don’t believe it’s been cleaned or flushed regularly, or had its sacrificial anode replaced.  Given that lack of maintenance I worry that it could start leaking—a big problem since there is no floor drain in the basement—so I plan to replace it in 2013 with a demand water heater.

Demand water heaters need a fat gas pipe.  They can consume 200,000 BTUs/hour or more; in comparison, our new high-efficiency furnace consumes only up to 60,000 BTUs/hour, and a typical gas stove and oven consume up to 65,000 BTUs/hour.  Our current gas pipe is thin, old, and lined (basically not up to snuff) so I’ve submitted an application to the gas company to lay a larger pipe in the spring.  I’ve requested a future-proofed pipe large enough to accommodate those three appliances plus a potential upgrade to a gas clothes dryer and a natural gas grill.

The lesson I’ve learned: if you’re buying a house, keep at least an extra $10,000 in reserve to cover any urgent maintenance items.  In other words, don’t completely exhaust your financial reserves by making a larger-than-needed down payment or purchasing new furniture too quickly.

In aviation there is a concept of prepaying into a maintenance fund every time you fly your own aircraft.  You know that you’re required to pay for major maintenance every 2,000 flight hours—at a cost of tens of thousands of dollars—so you divide that cost by 2,000 and prepay $10 into your maintenance fund for every hour you fly.

I’ve seen similar recommendations about prepaying for home maintenance.  You know that you’ll periodically have to pay for roofing work, new water heaters, and whatnot, so forecast out when you’ll make those repairs and start prepaying into a maintenance fund.  (If you buy into a condo association, part of your condo association fees are earmarked for exactly this purpose.)
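
Here is a sketch of that forecasting.  The roof figure comes from this post; the other costs and lifetimes are guesses of mine, so substitute your own numbers.

    # Home-maintenance sinking fund (sketch).  The roof figure comes from this
    # post; the other costs and lifetimes are guesses.
    expected_repairs = [
        ("Roof",          10_000, 20),   # cost in dollars, expected life in years
        ("Water heater",   1_500, 12),
        ("Furnace",        5_000, 20),
        ("Exterior paint", 4_000, 10),
    ]

    monthly = sum(cost / (years * 12) for _, cost, years in expected_repairs)
    for name, cost, years in expected_repairs:
        print(f"{name:15s} ${cost/(years*12):6.2f}/month")
    print(f"{'Total':15s} ${monthly:6.2f}/month")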

There are a couple other housing-related websites I’ve been reading regularly, including The Mortgage Professor and (perhaps of local interest only) the Massachusetts Real Estate Law Blog.  The professor relates a story of an ill-prepared homeowner, who asked:

“I hadn’t been in my house 3 weeks when the hot water heater stopped working. Only then did I realize that I hadn’t been given the name of the superintendent…who do I see to get it fixed?”

One of the challenges we’ve faced is finding good contractors.  Here’s what I’ve learned about finding them:

  • Get three quotes.  Not because you’re trying to find the absolute lowest cost, but because you’ll hear three different perspectives on what they think you should do.  For example, I had three heating contractors in to discuss the oil-to-gas conversion.  One suggested that I simply replace the burner on my existing furnace; one suggested that I install a new 100,000-BTU gas furnace; one suggested that I install a new 60,000-BTU gas furnace because of the square footage of the house.  Those three conflicting opinions gave me a lot of information to mull over; in the end I chose option #3 and it’s worked out perfectly.
  • Ask your neighbors for recommendations.  Several folks in my community recommended a particular roofing company; I ended up hiring them and was thoroughly satisfied with their work and professionalism.
  • Join Angie’s List for recommendations.  I hesitated to join at first—whining about how it costs money!—but in the end I figured I was only hurting myself by not joining.  I ended up hiring an electrical contractor that I found on Angie’s List and was thoroughly satisfied with their work and professionalism.

And here’s what I’ve learned about hiring contractors:

  • Read the installation manuals yourself.  I wasn’t happy about how the heating contractor didn’t bother to configure the DIP switches on my new furnace.  (Specifically, he didn’t set the furnace’s fan speed to match the tonnage of the air conditioner’s compressor; he claimed it wasn’t important because he’d never done it before.)  So I read up on furnace fan speeds and compressors myself, made the correct setting myself, and now find myself self-satisfied with better air conditioning performance.
  • Do your own homework before the contractors arrive.  I asked potential electricians about adding an exhaust fan in our half-bathroom.  One of them suggested that I buy the fan and he’d install it.  I asked why; he explained that if it were up to him he’d just buy the cheapest fan available, but he felt I’d likely be interested in a higher-end fan.  And he was correct!  After I scoured the Internet for information on exhaust fans I identified one of the low-sone (quiet) fans as the one I wanted, and we’re much happier with this choice than we would have been with a louder fan.  (Note: I also installed a wall-switch timer on the exhaust fan—a great idea that I learned about while doing my homework on options for fans.)
  • Keep track of your paid invoices.  Some work you perform might increase your basis in the property (see IRS Publication 523), which could reduce the amount of tax you (might) pay when you sell the house.
  • Be ready to be flexible.  The heating contractor said it’d be done in one weekend, but it ended up taking a month and a half before the last of the work (sealing the old hole in our chimney) was complete.  The roofing contractor gave a two-week window in which they’d do the work, then ended up doing the work three days before the start of the window.  The insulation contractors said it’d be a two-day job, but it ended up being a three-day job spread out over two weeks.  Fortunately, all of our contractors have taken pride in their work—so we’ve been left largely happy with the work that’s been done.

December 14, 2012

I can’t keep on renaming my dog

Filed under: Opinions,Work — JLG @ 12:00 AM

A clever meme hit the Internet this week:

“Stop asking me to change my password. I can’t keep on renaming my dog.”

If you (or the employees you support) aren’t using a password manager, clear off your calendar for the rest of the day and use the time to set one up.  It’s easy security.  Password managers make it simple to create good passwords, to change your passwords when required, to use different passwords on every site, and to avoid reusing old passwords.

The upside to using a password manager:

  • You only need to remember two strong passwords.  (One to log into the computer running the password manager, and one for the password manager itself.)

The downside to using a password manager:

  • All your eggs are in one basket.  (Therefore you need to pay close attention to choosing a good master password, protecting that password, and backing up your stored passwords.)

Generally speaking a password manager works as follows:

  1. You provide the password manager with a master passphrase.
  2. The password manager uses your master passphrase to create (or read) an encrypted file that contains your passwords and other secrets.

(For deeper details, see KeePass’s FAQ for a brief technical explanation or Dashlane’s whitepaper for a detailed technical explanation.  For example, in the KeePass FAQ the authors describe how the KeePass product derives 256-bit Advanced Encryption Standard [AES] keys from a user’s master passphrase, how salt is used to protect against dictionary attacks, and how initialization vectors are used to protect multiple encrypted files against known-plaintext attacks.  Other products likely use a similar approach to deriving and protecting keys.)
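
To make the “derive a key from a passphrase, with salt” idea concrete, here is a minimal sketch using only Python’s standard library.  PBKDF2 stands in here for whatever key-derivation function a given product actually uses (KeePass’s own scheme is described in its FAQ); the point is simply the role of the salt.

    import hashlib, secrets

    def derive_key(passphrase: str, salt: bytes, iterations: int = 200_000) -> bytes:
        """Stretch a master passphrase into a 256-bit encryption key."""
        return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations, dklen=32)

    salt = secrets.token_bytes(16)          # stored alongside the encrypted file
    key1 = derive_key("correct horse battery staple", salt)
    key2 = derive_key("correct horse battery staple", salt)
    key3 = derive_key("correct horse battery staple", secrets.token_bytes(16))

    print(key1 == key2)   # True:  same passphrase + same salt -> same key
    print(key1 == key3)   # False: a different salt defeats precomputed
                          # (dictionary / rainbow-table) attacks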

Password managers often also perform useful convenience functions for you—inserting stored passwords into your web browser automatically; generating strong passwords of any desired length; checking your usernames against hacker-released lists of pwned websites; evaluating the strength of your existing passwords; leaping tall buildings in a single bound; etc.

The root of security with password managers is in protecting your master password.  There are three main considerations to this protection:

(A) Choose a good passphrase. 

I’m intentionally using the word “passphrase” instead of “password” to highlight the need to use strong, complex, high-entropy text as your passphrase.  (Read my guidance about strong passphrases in TCS’s Better Passwords, Usable Security whitepaper.  Or if you don’t read that whitepaper, at least read this webcomic.)

Your master passphrase should be stronger than any password you’re currently using—stronger than what your bank requires, stronger than what your employer requires.  (However, it shouldn’t be onerously long—you need to memorize it, you will need to type it every day, and you will likely need to type it on mobile devices with cramped keyboards.)  I recommend a minimum of 16 characters for your master passphrase.
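
If you want a starting point, here is a sketch of generating a random multi-word passphrase and estimating its entropy.  The eight-word list is only a placeholder; a real generator should draw from a large wordlist (such as the 7,776-word diceware list) using a cryptographically secure random source.

    import math, secrets

    # Placeholder wordlist; use a real diceware-style list (thousands of words).
    WORDS = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot", "golf", "hotel"]

    def make_passphrase(num_words=5, wordlist=WORDS):
        return " ".join(secrets.choice(wordlist) for _ in range(num_words))

    def entropy_bits(num_words, wordlist_size):
        return num_words * math.log2(wordlist_size)

    print(make_passphrase())
    print(entropy_bits(5, len(WORDS)))   # 15 bits -- far too few with only 8 words
    print(entropy_bits(5, 7776))         # ~65 bits with a 7,776-word diceware list
    print(16 * math.log2(94))            # ~105 bits for 16 random printable characters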

(Side note:  For similar reasons, another place where you should use stronger-than-elsewhere passphrases is with full-disk encryption products, such as TrueCrypt or FileVault, where you enter in a password at boot time that unlocks the disk’s encryption key.  As Microsoft’s #7 immutable law of security states, encrypted data is only as secure as its decryption key.)

(B) Don’t use your passphrase in unhygienic environments.

An interesting concept in computer security is the $5 wrench.  Attackers, like electricity, follow the path of least resistance.  If they’ve chosen you as their target, and if they aren’t able to use cryptographic hacking tools to obtain your passwords, then they’ll try other approaches—perhaps masquerading as an IT administrator and simply asking you for your password, or sending you a malicious email attachment to install a keylogger onto your computer, or hiding a pinhole spy camera in the light fixture above your desk.  So even with strong encryption you are still at risk to social engineering attacks targeting your passwords and password manager.

One way to reduce the risk of revealing your passphrase is to avoid typing it into computer systems over which you have neither control nor trust, such as systems in Internet cafes, or at airport kiosks, or at your Grandma Edna’s house.  To paraphrase public-service messages from the 1980s, when you give your passphrase to an untrusted computer you could be giving that passphrase to anyone who used that computer before you.

For situations where you simply must use a computer of dubious provenance—say, you’re on vacation, you take a wrong turn at Albuquerque, your wallet and laptop get stolen, and you have to use your password manager at an Internet cafe to get your credit card numbers and bank contact information—some password managers provide features like one-time passwords, screen keyboards, multifactor authentication, and web-based access to help make lemonade out of life’s little lemons.

(C) Make regular backups of your encrypted file.

If you have a strong passphrase [(A)] and you keep your passphrase secret [(B)] then it doesn’t matter where copies of your encrypted file are stored.  The strong encryption means that your file won’t be susceptible to a brute-force or password-guessing attack even if an attacker obtains a copy of your file.  (Password management company LastPass had a possible security breach of their networks in 2011.  Even so, users with strong passphrases had “no reason to worry.”)  As such you are safely able to make backup copies of your encrypted file and to store those backups in a convenient manner.

Some password managers are designed to store your encrypted file on your local computer.  Other managers (notably LastPass) store your encrypted file on cloud servers managed by the same company, making it easier to synchronize the password file across all devices you use.  Still other managers integrate easily with third-party cloud storage providers (notably Dropbox) for synchronization across multiple devices, or support direct synchronization between two devices over a Wi-Fi network.  (In all remote-storage cases I’ve found, the file is always encrypted locally before any portion of the file is uploaded into the cloud.)

Whichever type of manager you use, be aware that that one file holds your only copy of all of your passwords—it is critical that you not lose access to the contents of the file.  Computers have crashed.  Password management companies have disappeared (Logaway launched on May 4, 2010, and ceased operations on February 2, 2012).  Cloud services have lost data and have experienced multi-day disruptions.  Protect yourself by regularly backing up your encrypted file, for example by copying it onto a USB dongle (whenever you change or add a password) or by printing a hard copy every month to stash in a safe deposit box.

If you maintain a strict separation between your home accounts and your work accounts—for example to keep your employer from snooping and obtaining your Facebook password—simply set up two password managers (one on your home laptop, the other on your work PC) using two unique passphrases as master keys.

Password manager software is easy to set up and use.  The biggest problem you’ll face is choosing from among the cornucopia of password managers.  A partial list I just compiled, in alphabetical order, includes: 1Password, AnyPassword, Aurora Password Manager, Clipperz, DataVault, Dashlane, Handy Password, Kaspersky Password Manager, KeePass, Keeper, LastPass, Norton Identity Safe, Paranotic Password Manager, Password Agent, Password Safe, Password Wallet, PINs, RoboForm, Secret Server, SplashID, Sticky Password, TK8 Safe, and Universal Password Manager.  There is even a hardware-based password manager available.

Your top-level considerations in choosing a password manager are:

  1. Does it run on your particular OS or mobile device?  (Note that some password managers sometimes charge, or charge extra, to support synchronization with mobile devices.)
  2. Do you already use Dropbox on all your devices?  If not, consider a manager that provides its own cloud storage (LastPass, RoboForm, etc.)  If so, and only if you would prefer to manage your own encrypted file, choose a service that supports Dropbox (1Password, KeePass, etc.)

I don’t recommend or endorse any particular password manager.  I’ve started using one of the “premium” (paid) password managers and am astonished at how much better any of the managers are over what I’d been using before (an unencrypted manual text-file-based system that I’d hacked together last millennium).

November 20, 2012

Gabbing to the GAB

Filed under: Opinions,Work — JLG @ 12:00 AM

Earlier this month the (ISC)² U.S. Government Advisory Board (GAB) invited me to present my views and opinions to the board.  What a neat opportunity!

The GAB is a group of mostly federal agency Chief Information Security Officers (CISOs) or similar executives.  Officially it comprises “10-20 senior-level information security professionals in their respective region who advise (ISC)² on industry initiatives, policies, views, standards and concerns,” and its goals include offering deeper insights into the needs of the information security community and discussing matters of policy or initiatives that drive professional development.

In terms of content, in addition to discussing my previous work on storage systems with autonomous security functionality, I advanced three of my personal opinions:

  1. Before industry can develop the “cybersecurity workforce of the future” it needs to figure out how to calculate the return on investment (ROI) for IT/security administration.  I suggested a small initial effort to create an anonymized central database for security attacks and the real costs of those attacks.  If such a database were widely available at nominal cost (or free) then an IT department could report on the value of their actions over the past year: “we deployed such-and-such a protection tool, which blocks against this known attack that caused over $10M in losses to a similar organization.”  Notably, my suggested approach is constructive (“here’s what we prevented”) rather than negative (“fear, uncertainty, and doubt / FUD”).  My point is that coming at the ROI problem from a positive perspective might be what makes it work.
  2. No technical staff member should be “just an instructor” or “just a developer.”  Staff hired primarily as technical instructors should (for example) be part of an operational rotation program to keep their skills and classroom examples fresh.  Likewise, developers/programmers/etc. should spend part of their time interacting with students, or developing new courseware, or working with the sales or marketing team, etc.  I brought up the 3M (15%) / Hewlett-Packard Labs (10%) / Google (20%) time model and noted that there’s no reason that a practical part-time project can’t also be revenue-generating; it just should be different (in terms of scope, experience, takeaways) from what the staff member does the rest of their time.  My point is that treating someone as “only” an engineer (developer, instructor, etc.) does a disservice not just to that person, but also to their colleagues and to their organization as a whole.
  3. How will industry provide the advanced “tip-of-the-spear” training of the future?  One curiosity of mine is how to provide on-the-job advanced training.  Why should your staff be expected to learn only when they’re in the classroom?  Imagine if you could provide your financial team with regular security conundrums — “who should be on the access control list (ACL) for this document?” — that you are able to generate, monitor, and control.  Immediately after they take an action (setting the ACL) then your security system provides them with positive reinforcement or constructive criticism as appropriate.  My point is that if your non-security-expert employees regularly deal with security-relevant problems on the job, then security will no longer be exceptional to your employees.

I had a blast speaking.  The GAB is a group of great folks and they kept me on my toes for most of an hour asking questions and debating points.  It’s not every day that you get to engage high-level decision makers with your own talking points, so my hope is that I gave them some interesting viewpoints to think about — and perhaps some new ideas on which to take action inside their own agencies and/or to advise the government.

November 1, 2012

2012 Conference on Computer and Communications Security

Filed under: Reviews,Work — JLG @ 12:00 AM

In October I attended the 19th ACM Conference on Computer and Communications Security (CCS) in Raleigh, North Carolina.  It was my fourth time attending (and third city visited for) the conference.

Here are some of my interesting takeaways from the conference:

  • Binary stirring:  The point of Binary Stirring is to end up with a completely different (but functionally equivalent) executable code segment, each time you load a program.  The authors double “each code segment into two separate segments—one in which all bytes are treated as data, and another in which all bytes are treated as code.  …In the data-only copy (.told), all bytes are preserved at their original addresses, but the section is set non-executable (NX).  …In the code-only copy (.tnew), all bytes are disassembled into code blocks that can be randomly stirred into a new layout each time the program starts.”  (The authors measured about a 2% performance penalty from mixing up the code segment.)

But why mix the executable bytes at all?  Binary Stirring is intended to protect against clever “return-oriented programming” (ROP) attacks by eliminating all predictable executable code from the program.  If you haven’t studied ROP (I hadn’t before I attended the talk) then it’s worth taking a look, just to appreciate the cleverness of the attack & the challenge of mitigating it.  Start with last year’s paper Q: Exploit Hardening Made Easy, especially the related work survey in section 9.

  • Oblivious RAM (ORAM):  Regarding ORAM, imagine a stock analyst who stores gigabytes of encrypted market information “in the cloud.”  In order to make a buy/sell decision about a particular stock (say, NASDAQ:TSYS), she would first download a few kilobytes of historical information about TSYS from her cloud storage.  The problem is that an adversary at the cloud provider could detect that she was interested in TSYS stock, even though the data is encrypted in the cloud.  (How?  Well, imagine that the adversary watched her memory access patterns the last time she bought or sold TSYS stock.  Those access patterns will be repeated this time when she examines TSYS stock.)  The point of oblivious RAM is to make it impossible for the adversary to glean which records the analyst downloads.

  • Fully homomorphic encryption:  The similar concept of fully homomorphic encryption (FHE) was discussed at some of the post-conference workshops.  FHE is the concept that you can encrypt data (such as database entries), store them “in the cloud,” and then have the cloud do computation for you (such as database searches) on the encrypted data, without decrypting.

When I first heard about the concept of homomorphic encryption (circa 2005, from some of my excellent then-colleagues at IBM Research) I felt it was one of the coolest things I’d encountered in a company filled with cool things.  Unfortunately FHE is still somewhat of a pipe dream — like ORAM, it’ll be a long while before it’s efficient enough to solve any practical real-world problems — but it remains an active area of interesting research.
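
FHE itself is well beyond a blog snippet, but you can see the flavor of “computing on ciphertexts” in a property that textbook RSA has for free: multiplying two ciphertexts yields a ciphertext of the product.  This toy sketch uses tiny, utterly insecure parameters; real FHE schemes support arbitrary computation, not just multiplication.

    # Toy illustration of a homomorphic property (NOT FHE, NOT secure).
    # Textbook RSA with tiny parameters: n = 61*53, e*d = 1 mod phi(n).
    n, e, d = 3233, 17, 2753

    def encrypt(m):  return pow(m, e, n)
    def decrypt(c):  return pow(c, d, n)

    c1, c2 = encrypt(7), encrypt(3)

    # The server multiplies ciphertexts without ever seeing 7 or 3 ...
    c_product = (c1 * c2) % n

    # ... yet the client decrypts the correct product.
    print(decrypt(c_product))   # 21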

  • Electrical network frequency (ENF):  In the “holy cow, how cool is that?” category, the paper “How Secure are Power Network Signature Based Time Stamps?” introduced me to a new forensics concept: “One emerging direction of digital recording authentication is to exploit a potential time stamp originated from the power networks. This time stamp, referred to as the Electrical Network Frequency (ENF), is based on the fluctuation of the supply frequency of a power grid.  … It has been found that digital devices such as audio recorders, CCTV recorders, and camcorders that are plugged into the power systems or are near power sources may pick up the ENF signal due to the interference from electromagnetic fields created by power sources.”  Wow!

The paper is about anti-forensics (how to remove the ENF signature from your digital recording) and counter-anti-forensics (how to detect when someone has removed the ENF signature).  The paper’s discussion of ENF analysis reminded me loosely of one of my all-time favorite papers, also from CCS, on remote measurement of CPU load by measuring clock skew as seen through TCP (transmission control protocol) timestamps.

  • Resource-freeing attacks (RFA):  I also enjoy papers about virtualization, especially regarding the fair or unfair allocation of resources across multiple competing VMs.  In the paper “Resource-Freeing Attacks: Improve Your Cloud Performance (at Your Neighbor’s Expense)”, the authors show how to use antisocial virtual behavior for fun and profit: “A resource-freeing attack (RFA) [improves] a VM’s performance by forcing a competing VM to saturate some bottleneck. If done carefully, this can slow down or shift the competing application’s use of a desired resource. For example, we investigate in detail an RFA that improves cache performance when co-resident with a heavily used Apache web server. Greedy users will benefit from running the RFA, and the victim ends up paying for increased load and the costs of reduced legitimate traffic.”

A disappointing aspect of the paper is that they don’t spend much time discussing how one can prevent RFAs.  Their suggestions are (1) use a dedicated instance, or (2) build better hypervisors, or (3) do better scheduling.  That last suggestion reminded me of another of my all-time favorite research results, from last year’s “Scheduler Vulnerabilities and Attacks in Cloud Computing”, wherein the authors describe a “theft-of-service” attack:  A virtual machine calls Halt() just before the hypervisor timer fires to measure resource use by VMs, meaning that the VM consumes CPU resources but (a) isn’t charged for them and (b) is allocated even more resources at the expense of other VMs.

  • My favorite work of the conference:  The paper is a little hard to follow, but I loved the talk on “Scriptless Attacks – Stealing the Pie Without Touching the Sill”.  The authors were interested in whether an attacker could still perform “information theft” attacks once all the XSS (cross-site scripting) vulnerabilities are gone.  Their answer: “The surprising result is that an attacker can also abuse Cascading Style Sheets (CSS) in combination with other Web techniques like plain HTML, inactive SVG images or font files.”

One of their examples is that the attacker can very rapidly shrink then grow the size of a text entry field.  When the text entry field shrinks to one pixel smaller than the width of the character the user typed, the browser automatically creates a scrollbar.  The attacker can note the appearance of the scrollbar and infer the character based on the amount the field shrank.  (The shrinking and expansion takes place too fast for the user to notice.)  The data exfiltration happens even with JavaScript completely disabled.  Pretty cool result.

Finally, here are some honorable-mention papers in four categories — work I enjoyed reading that you might enjoy too:

Those who cannot remember the past, are condemned to repeat it:

Sidestepping institutional security:

Why be a white hat?  The dark side is where all the money is made:

Badware and goodware:

Overall I enjoyed the conference, especially the “local flavor” that the organizers tried to inject by serving stereotypical southern food (shrimp on grits, fried catfish) and hiring a bluegrass band (the thrilling Steep Canyon Rangers) for a private concert at Raleigh’s performing arts center.

October 15, 2012

Public-key cryptography & certificate chaining

Filed under: Opinions,Work — JLG @ 12:00 AM

Of the many marvelous Calvin and Hobbes cartoons by Bill Watterson, one of the most marvelous (and memorable) is The Horrendous Space Kablooie.  Quoth Calvin, “That’s the whole problem with science.  You’ve got a bunch of empiricists trying to describe things of unimaginable wonder.”

I feel the same way about X.509, the name of the international standard defining public key certificates.  X.509?  It’s sort of hard to take that seriously — “X.509” feels better suited as the name of an errant asteroid or perhaps a chemical formula for hair restoration.

But I digress.  X.509 digital certificates are exchanged when you create a “secure” connection on the Internet, for example when you read your webmail using HTTPS.  The exchange happens something like this:

  • Your computer:  Hi, I’m a client.
  • Webmail server:  Howdy, I’m a server.  Here’s my X.509 certificate, including the public key you’ll use in the next step.
  • Your computer:  Fabulous.  I’ve calculated new cryptographic information that we’ll use for this session, and I’ve encrypted it using your public key; here it is.
  • (Further traffic is encrypted using the session cryptographic information.)

Several things happen behind the scenes to provide you with security:

  1. Your computer authenticates the X.509 certificate(s) provided by the server.  It checks that the certificate matches the web address you intended to visit.  It also verifies that a trusted third party vouches for the certificate (by checking the digital signature included in the certificate).
  2. Your computer verifies that there is no “man in the middle” attack in progress.  It does this by ensuring that the server holds the private key associated with its certificate: the session cryptographic information is encrypted with the server’s public key, so a server without the matching private key couldn’t decrypt it and thus couldn’t encrypt or decrypt any further traffic.  (A short sketch below shows this certificate checking from Python.)
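
You can watch part of this verification happen from Python’s standard library: the default SSL context carries a bundle of trusted roots and refuses the connection if the server’s certificate chain or hostname doesn’t check out.  A minimal sketch (it needs network access, and the hostname is just an example):

    import socket, ssl

    hostname = "www.google.com"                 # any HTTPS site will do
    context = ssl.create_default_context()      # loads trusted root certificates,
                                                # enables hostname + chain checking

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()            # the server's (verified) certificate
            print("subject:", cert["subject"])
            print("issuer: ", cert["issuer"])   # the CA that signed it
            print("expires:", cert["notAfter"])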

Unfortunately the system isn’t perfect.  The folks who programmed your web browser included a set of trusted root certificates with the browser.  Those root certificates were issued by well-known certificate authorities [CAs] such as Verisign and RSA.  If an attacker breaches security at either a root CA or an intermediate CA, as happened with the 2011 Comodo and DigiNotar attacks, then an attacker could silently insert himself into your “secure” connection.  Yikes!  Efforts like HTTPS Everywhere and Convergence are trying to address this problem.

Public-key cryptography is pretty neat.  When you use public-key cryptography you generate two keys, a public key (okay to give out to everyone) and a private key (not okay).  You can use the keys in two separate ways:

  • When someone wants to send you a private message, they can encrypt it using your public key.  The encrypted message can only be decrypted using your private key.
  • When you want to publish a message, you can encrypt (sign) it using your private key.  Anyone who has your public key can decrypt (validate) your message.  (Both uses are sketched in code below.)
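
Both uses are easy to demonstrate with the third-party cryptography package (pip install cryptography).  This is a sketch of the two operations, not a recommendation of particular key sizes or padding choices:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Use 1: anyone can encrypt to you with the public key; only the private
    # key can decrypt.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = public_key.encrypt(b"private message", oaep)
    print(private_key.decrypt(ciphertext, oaep))

    # Use 2: you sign with the private key; anyone can verify with the public key.
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(b"published message", pss, hashes.SHA256())
    public_key.verify(signature, b"published message", pss, hashes.SHA256())  # raises if invalid
    print("signature verified")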

In a public key infrastructure, a root CA (say, Verisign) uses its private key to sign the public-key certificates of intermediate certificate authorities (say, Thawte).  The intermediate CAs then use their private key to sign the public-key certificates of their customers (say, www.google.com).  When you visit Google’s site using HTTPS, Google provides you both their certificate and Thawte’s certificate.  (The chained relationship Verisign-Thawte-Google is sometimes called the “chain of trust”.)  Your browser uses the certificates provided by Google, plus the Verisign root certificate (bundled with the browser), to verify that the chain of trust is unbroken.

[I use Google as the example here, since you can visit https://www.google.com and configure your browser to show the certificates that Google provides.  However, I have no knowledge of Google’s contractual relationship with Thawte.  My assertions below about Google are speculative, but the overall example is valid.]

Recently I was asked “We have been trying to understand Certificate Chaining and Self Signing.  Would a company [like Google] be allowed to purchase one certificate from a Certificate issuer like Verisign and then issue its own signed additional certificates for additional use?”

Great question!  (Where “great question” is defined as “um, I don’t know, let me check into that.”)  It turns out the answer is no, a company’s certificate(s) cannot be used to sign other certificates.

Using Google as an example, the principal reason is that neither Verisign nor Thawte let Google act as an “intermediate certificate authority.”  It’s (1) likely against the license agreement under which Thawte signed Google’s certificate, and (2) prohibited by metadata fields inside both Thawte’s certificate and Google’s certificate:

  • Google’s certificate is prohibited from signing other ones because of a flag inside the certificate metadata.  (Specifically, their Version 3 certificate has an Extension called Certificate Basic Constraints that has a flag Is not a Certificate Authority.)  And Google can’t modify their certificate to change this flag, because then signature validation would fail (your browser would detect that Google’s modified certificate doesn’t match the original certificate that Thawte signed).
  • Certificates signed by Thawte’s certificate cannot themselves be used as Certificate Authorities (CAs) because of a flag inside Thawte’s certificate.  (Specifically, their Version 3 certificate has an Extension called Certificate Basic Constraints with a field Maximum number of intermediate CAs that’s set to zero, meaning that no verification program should accept a chain with any further CAs below Thawte; any certificate Thawte signs must be an end-entity certificate, not another CA.)

If your company needs to issue its own signed certificates, for example to protect your internal servers, it’s relatively easy to do.  All you have to do is run a program that generates a root certificate.  You would then be like Verisign in that you could issue and sign as many other certificates as you wanted.  (The down side of your “private PKI” is that none of your users’ browsers would initially recognize your root certificate as a valid certificate.  For example, anyone surfing to a web page protected by certificates you signed would get a big warning page every time, at least until they imported your root certificate’s signature to their trusted-certificates list.)

The article I found most helpful in digging up this answer is here:
http://unitstep.net/blog/2009/03/16/using-the-basic-constraints-extension-in-x509-v3-certificates-for-intermediate-cas/

(The full name of the X.509 standard is the far worse ITU-T Recommendation X.509: Information technology – Open systems interconnection – The Directory: Public-key and attribute certificate frameworks.  One name with four hyphens, two colons, and the hyphenated equivalent of comma splicing?  Clearly rigorous scientific work.)

September 25, 2012

Better living through IPv6-istry

Filed under: Opinions,Work — JLG @ 12:00 AM

There have been many, many words written about the IPv4-to-IPv6 transition — probably around 340 undecillion words at this point — but perhaps my favorite words came in a recent Slashdot comment by AliasMarlowe:

I believe in the incremental approach to updates; it’s so much safer and usually easier.
So it’s going to be IPv5 for me, while you suckers make a mess of IPv6!

I’ve long been a fan of IPv6.  Deploying IPv6 has the obvious benefit of solving the IPv4 address exhaustion problem, and it also makes local subnetting, site network architecture, and (to some degree) Internet-scale routing easier.

But perhaps the greatest benefit of deploying IPv6 is the restoration of end-to-end transparency.  IPv6 obviates the need for network address translation (NAT).  With IPv6, when your Skype application wants to initiate a call to my Skype application, the apps can address each other directly without relying on hole punching, third-party relaying, or other “clever” NAT-circumvention techniques.

(End-to-end addressing may sound unimportant, but if we could restore this critical Internet design goal to practice then we could party like it’s 1979!)

I recently spoke with some of TCS’s computer network operations students about security considerations for IPv6 deployments.  They were surprised when I claimed that NAT is not needed in an IPv6 security plan; several students commented that the NAT on their home network router was the only thing protecting their computers from the evils of the Internet.

A common misperception!  There are generally two functions performed by your home network router (or your corporate upstream router, if so configured):

  1. Firewalling / stateful packet inspection.  This is a security function.
  2. IP masquerading / network address [and port] translation.  This is not a security function; it simply allows all the devices on your internal network to share a single external network (IP) address.

With IPv6 you can (and should) still deploy inline firewall appliances to perform function #1.  But with the plethora of available addresses in IPv6 — 18,446,744,073,709,551,616 globally routable addresses per standard local subnet — there is no overt need for masquerading.

Of course, masquerading provides ancillary benefits:  It somewhat hinders external traffic analysis, such as network mapping, by obfuscating the internal source and destination of traffic.  Combining masquerading with private IPv4 addressing also prevents internal addresses from being externally routable.

But similar benefits can be realized in IPv6 without masquerading and therefore without losing the benefits of end-to-end transparency.  For example IPv6 privacy extensions can obfuscate your internal network architecture and IPv6 unique local addresses can be used to isolate systems that shouldn’t be visible on external networks.
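
A small sketch of both points using Python’s ipaddress module: the sheer size of a single /64, and generating an RFC 4193 unique local prefix (the random 40-bit “global ID” is what keeps independently generated ULA prefixes from colliding):

    import ipaddress, secrets

    # A single standard IPv6 subnet (/64) holds 2**64 addresses.
    lan = ipaddress.ip_network("2001:db8:0:1::/64")   # documentation prefix
    print(f"{lan.num_addresses:,}")                   # 18,446,744,073,709,551,616

    # RFC 4193 unique local addressing: fd00::/8 plus a random 40-bit global ID
    # gives you a /48 for internal-only systems, with no NAT required.
    gid = secrets.token_hex(5)                        # 40 random bits as 10 hex digits
    ula = ipaddress.ip_network(f"fd{gid[:2]}:{gid[2:6]}:{gid[6:]}::/48")
    print(ula)                                        # e.g. fd3a:91c4:7b2e::/48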

September 23, 2012

I LOVE FLYING

Filed under: Aviation — JLG @ 1:41 AM

The weather cooperated (somewhat) and I indeed got to fly last weekend for my second solo cross-country flight:

JLG cross-country solo over Newport, RI, September 16, 2012

By the end of the flight I was positively giddy; as I walked back to my car I texted Evelyn the above picture with the caption “I LOVE FLYING”.  Almost all of my flight training has left me grinning from ear-to-ear, but this flight was by far the most fun I’ve had yet.

(Almost all of my flight training has left me grinning:  The required night landings weren’t nearly as much fun as I thought they would be, especially since my instructor chose to test my performance under pressure — asking me to fly an unfamiliar approach to the runway, while simulating a landing light failure, all during a rushed and chaotic situation — and I didn’t handle it particularly gracefully.  But “trial by fire” was the whole point, and I feel that I learned from the experience and am better prepared to execute emergency landings at night.  I also did manage to land the airplane despite the chaos, though I’d drifted off the runway centerline and was still drifting as the wheels touched down.)

The cross-country flight was spectacular.  All the more so because I didn’t think I’d get to fly due to the weather:  There was a cloud layer (“ceiling”) around 4,000 feet along much of the route, well below the 5,000 foot minimum required by my flight school for cross-country flights.  Also, the surface winds were gusting to 16 knots at KBED and 18 knots at KGON, both above the 15-knot limit that my instructor chose for my original solo endorsement.  But my instructor waived both limits for the flight, citing his comfort level with how well I’ve been flying lately, and off I went at 3,000 feet.

It felt as though everything went right:

  • My navigation was great.  I chose to navigate primarily using VOR navigation, with dead reckoning as backup (following along on my aviation chart and looking for outside ground references to verify my position and course) and GPS as backup to the backup.  In the past my VOR navigation has been shaky, but this time it was rock solid — thanks to my instructor’s advice to set up the navigation radios before I even taxied the airplane, instead of hurriedly trying to dial them in when I need them.  My route was KBED to the GDM (Gardner, MA) VOR, to the PUT (Putnam, CT) VOR, to a landing at KGON (Groton, CT), thence direct to a landing at KEWB (New Bedford, MA), and back to KBED.
  • My landings were great.  Approaching KGON I twice asked the tower for a “wind check” to verify that the winds were still below the maximums to land; I was concerned both with the wind gusts and the “crosswind component” of the wind.  (Pilots prefer the wind to blow steadily and directly down the runway.  The winds at KGON were both gusty and at an angle to the runway; if the crosswind component of the gusts was greater than 8 knots then I was not authorized to land.)  I was prepared throughout the landing to abort if the winds started gusting, but ended up with a landing so smooth it felt as though there were no wind whatsoever.
  • The views were great.  Here are some pictures:

3,000 feet over Newport, RI

Wow.  Also:

2,500 feet over Waltham, MA (view towards Boston)

and

Boston, MA. Our house is off-frame to the right.

Wow.

This weekend I passed my private pilot knowledge test, scoring 54 correct (90%) out of 60 questions.  (A passing score is 70% or above.)  The questions I missed were on the following topics:

  • Hand-propping an airplane.  Engines without electric starters require someone to go out and manually spin the propeller “old-school” to get the engine going.  Since I don’t do any hand-propping I hadn’t even read the section of the Airplane Flying Handbook that explains the recommended procedure (“Contact!” etc.)
  • Tri-color visual approach slope indicator (VASI).  Does a tri-color VASI use a green, amber, or white light to indicate that you are on the correct glideslope?  I answered white (I suppose I was thinking about a pulsating VASI) instead of the correct green.  A tri-color VASI doesn’t even have a white light!  I’m not sure there are any airports in the Northeast that still have a tri-color VASI in use, but if so I’d love to see one.  EDIT: There are two nearby!  Falmouth Airpark (5B6, Falmouth, MA) and Richmond Airport (08R, West Kingston, RI) both report tri-color VASIs in use.
  • Characteristics of stable air masses.  One of the neat things about learning to fly is that you learn a lot of arcane or obscure facts about weather systems, fog, etc., that generally are only going to be useful to you if you plan to fly through ugly weather.  Apparently I didn’t learn enough arcane or obscure facts; I missed two weather-related questions.
  • Dropping items from the airplane.  It turns out it’s totally legit to drop things from an airplane!  (I incorrectly answered that you are only allowed to do so in an emergency.)  FAR 91.15: “No pilot in command of a civil aircraft may allow any object to be dropped from that aircraft in flight that creates a hazard to persons or property. However, this section does not prohibit the dropping of any object if reasonable precautions are taken to avoid injury or damage to persons or property.”  During my post-exam review my instructor mentioned that he has returned car keys to his wife using this method.

So I’m inexorably closer to being done!  I have a couple of in-school checkrides coming up with another instructor — the fifth instructor I will have flown with during my flight training — and if the weather cooperates I could take the final FAA oral test and checkride as early as October 10.
