Jagged Thoughts | Dr. John Linwood Griffin

November 7, 2015

Every Eighteen Months, part 2: Career coaching

Filed under: Opinions,Work — JLG @ 5:42 PM

It can be an uphill battle to get personalized advice about your job performance and your career progress.  Earlier this week a friend and I texted about the challenge of extracting good feedback from management in our modern litigation-averse corporate America.

He noted that in his job he hasn’t been getting the feedback he’s been asking for:

“I like the idea of knowing how I’m actually doing.  I credit honest feedback at every stage of my life where I claim to have grown.”

I replied:

“Hard to get that from an employer.  Good to try to cultivate a mentor wherever you go—someone senior but not in your org chart—but even then I’ve never been able to get the kind of constructive criticism that I’d like from anyone other than same-level colleagues.”

So the trick is to find those mentors.  I appreciate constructive criticism from anybody willing to lob it my way, but I especially appreciate getting it from successful entrepreneurial types (S.E.T.) who are living their dream.

Along those lines I recently had a great discussion with another S.E.T. acquaintance about my career goals, plans, and aspirations.  I described how my long-term goal has been to found and lead my own company (preferably successfully, that second time around).  I explained that as a step in that direction, my current dream job would have me being an entrepreneurial engineer.  I noted that I’ve been seeking out jobs where I am able to come up with ideas, pursue the good ones, and (if they fail) come up with more ideas and pursue them—jobs where I can:

  • have a product vision unique in the industry,
  • execute independently on that vision, applying my (and my colleagues’) strongest skills, and my interesting background, toward realizing it, and
  • increase my employer’s profits by the boatload (not all ideas are big wins, but all big wins come from ideas).

Unfortunately, such jobs are few and far between.  While there are some great aspects of my current job (it’s challenging, fast-paced, exposing me to new-to-me parts of the business, and immersing me in new technologies), I’m not in a role in which I am expected to—or in which I or my peers have largely been able to—have a broad degree of product, business, and technical influence.  So I asked my S.E.T. friend:  How do I sell myself and my capability, to advance into more senior roles—what do I need to convey about myself, and to whom?

The answer was to talk with a career coach.

Great idea!  One I’d never considered before.  And via a friend I came across a good coach willing to squeeze me in for one session.  (Protip: If your prospective coach suggests that you meet over a slice of pizza and a glass of beer, you know you’ve found a good one.)

Here were my takeaways from my chat with a career coach:

Do I need more experience in startups to do another startup?

He suggested not.  If I have an idea and can grab people and start pursuing it, the sink-or-swim-myself model is apparently just as good as watching someone else sink or swim.

He was definitely big on the idea of me going to startup(s), though.  He noted that if I feel my career has stalled, that’s because it has stalled, and at mid-sized companies filled with relatively young people there just aren’t going to be opportunities for vertical movement opening up for me.

He also cautioned me to do my due diligence before going to a startup.  Know the founders, and know whether they’re in good shape technically—are they hiring me to grow or to survive?

How do I market myself to land the opportunities I crave?

Change my resume from a laundry list of things I’ve done to an expression of who I want to be.  De-technicalize it, replacing the jargon with evidence of leadership—in general, present myself in terms of my capacity to lead.  Clarify through the top-of-page-1 material that I’m looking to be considered for executive leadership, and that (separately) I have the credentials of a solid technical background.

As an example, if I’m being evaluated for a CTO role then a CEO is going to look at whether I led a team through something instead of just the individual things I’ve done.  Don’t make the reader search for it; put it clearly and up front.  If you want to be an executive you need to tell the reader that you’re an executive.

Simultaneously, update my LinkedIn profile to be a resume supplement, focusing on brief, powerful statements in the “summary” section at the top as an attention-grabber.  Also create an AngelList profile.

What might I get out of business school?  (The coach works frequently with current and former business school students, so it seemed apropos to ask.)

If I want to inject speed into an executive career path—moving through a career at a faster pace than I could expect by simply working through promotions—business school can provide that.  But don’t look at school as all you need or as the last piece of the puzzle; a lot of what you’re taught in the classroom is what you’d otherwise have learned in the workplace.

There are too many B-school graduates already and they’re mostly in their mid-20s.  It’s less of a unique credential/ticket than it used to be.

He espouses a go-big-or-go-home philosophy: If you’re gonna go to business school, and if you can afford it, go full time so that you get an intense, immersive, fast-paced experience.

What are my next steps?

Make an effort to land that next job that gets you on the right track to find the position you want.  Look for director or VP roles.  Don’t move horizontally.

Always be job hunting; never settle down.  Submit your resume early and often.  Write your CV and cover letter in such a way that you can recycle them in 5 minutes for any new opportunity you see.

(When asked whether I should try to find a longer-term coach to work with.)  Don’t worry about finding a regular coach for now.  He doesn’t know anyone who does coaching on the side (except coaching for C-level executives), but in any event he feels that being in the game (circulating, getting interviews under my belt) would be more useful for me than being on the sideline (talking to a coach).

Overall a great conversation.  The main takeaway matches something I’ve heard before (though it’s hard to do):  Market yourself by presenting your past in the context of the job you want, not necessarily by conveying the minutiae of the job you had.  Describe your previous work by highlighting the things you did that you did well, that you got something out of, that you enjoyed, and that are most relevant to the position you seek.

April 19, 2015

An open letter to the Women’s Flat Track Derby Association

Filed under: Opinions — JLG @ 9:08 PM

In one of my all-time favorite newspaper articles, the author asserts:

 “Everyone should listen to what I have to say and heed my advice because I am correct … One of the things I like best about myself is that not only do I know what is best for everyone, I always make sure to come forward with this information.” (Edwin Wiersbicki, “I Know What Is Best For Everyone”, The Onion 34(12): 21 Oct 1998)

Following in Mr. Wiersbicki’s bold example, I myself have a few suggestions to offer to the Women’s Flat Track Derby Association (WFTDA)—the international governing body of women’s flat track roller derby—even though

  1. I am not a woman,
  2. I have never participated in a roller derby game,
  3. I haven’t even been on quad roller skates in probably 20 years, and
  4. I wasn’t really clear on the concepts and principles behind flat track roller derby until I watched a game last night.

(For example, it didn’t register to me that flat track roller derby is played on a flat track, unlike the delightful banked-track sport a young John found himself glued to on television on Saturdays in 1989.)

I’d been wanting to attend a Boston Derby Dames match for the past two years, ever since I heard that Boston had an active roller derby team. And last night I did! It was a doubleheader, featuring a B-league matchup (Boston B-Party vs. Charm City Female Trouble) and the main event (Boston Massacre vs. Charm City All-Stars).

Charm City Female Trouble at Boston B-Party, April 18, 2015

The skater names these athletes choose for themselves are hilarious. A few that I texted to Evelyn: “Boston Creamer”, “M.C. Slammer”, “Allie B. Back”, “I. M. Pain”. Even the officials were in on the fun: “Buxom Melons”, “TestosteRon Jeremy”.

As a novice spectator it was pretty hard to follow along with the action until two former skaters sitting near me (one named “Jodie Faster”) patiently explained what was going on. The key principles:

  • A 60-minute game is made up of a series of 2-minute “jams”.
  • A jam is 5-on-5 skating in a counterclockwise direction.
  • In a jam, each team fields one “jammer” (similar to a football quarterback) and four blockers.
  • Jammers score points by lapping members of the other team.
  • Blockers try to (a) stop the other jammer from lapping them, and (b) disrupt the other blockers so that their jammer can make progress. Skaters are not allowed to grasp with their hands, so hip checks are common.
  • It’s not (much) violence! Rather, strategy and agility are important. Brute force only gets you so far.
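As a spectator-level mental model, those scoring rules reduce to something like this toy calculation (my simplification, not the WFTDA’s actual scoring procedure, which includes wrinkles such as an initial non-scoring pass):

```python
# Toy model of jam scoring (my simplification): the jammer earns one
# point per opposing skater passed on each scoring pass around the track.
def jam_points(scoring_passes, opponents_passed_per_pass=4):
    return scoring_passes * opponents_passed_per_pass

print(jam_points(1))  # one quick pass of the four opposing blockers → 4
```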

That last bullet was my favorite part of watching the games: As the game wore on, and the blockers became more familiar with the jammers’ jukes, and the skaters collectively grew more tired, and more skaters got sent to the penalty box (yielding 5-on-4 or 5-on-3 skating), it was very interesting to watch the teams shift their strategies—switching up personnel to get more favorable matchups; lining up differently at the start of a jam; inducing the pack of blockers to move more quickly or more slowly around the rink; trapping a skater on one side of the track and forcing her out of bounds; making use of a quirky rule to switch which skater is declared the jammer in the middle of a jam.

Protip:  In order to notice these things it was very helpful to be sitting next to Jodie and Brittany; I highly recommend that anyone new to roller derby make it a point to sit next to people who look like former or current skaters.

An interesting aspect of the roller derby leagues [according to my guides] is that it takes up about 20 hours a week for each of the team members (on top of the full-time jobs needed to pay for skating equipment)—not just practice time, but also business activities such as reserving venues, merchandising, or soliciting sponsors. Since this was an away game, the teams from Baltimore likely drove themselves up I-95 and stayed at skaters’ houses to save money. Somehow knowing that each skater had an integral role in the functioning of the team and the league (instead of, say, a small powerful board taking care of such details) made the skating more fun to watch.

The games were held at the nifty Shriners Auditorium in Wilmington. $16 for a general admission ticket plus two $4 cheeseburgers and a $5 Shock Top beer (all served by genuine Shriners!) worked out to about $7.25 per hour of entertainment. Pretty good value, especially since the A-teams turned out to be well matched.  Both games featured several lead changes, and the major game stayed nail-bitingly close into the final minute—generating thunderous excitement from the audience of about 700. (The audience seemed to consist mostly of current skaters, former skaters, friends of skaters, or partners of skaters.)
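The value-for-money arithmetic, assuming the doubleheader ran about four hours:

```python
# Back-of-the-envelope check of the entertainment value.
# The four-hour estimate for the doubleheader is my assumption.
ticket = 16
cheeseburgers = 2 * 4
beer = 5
total = ticket + cheeseburgers + beer  # $29 all-in
print(total / 4)                       # → 7.25 dollars per hour
```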

I had a good time, though I doubt I’d go back to see another game. There are two perplexing rules whose enforcement took away a big part of my enjoyment of the games. One rule imposes a grossly unfair penalty for infractions by a certain class of skater (a violation of the Equal Protection Clause). The other rule flushes down the toilet roughly 75% of the opportunities for skaters to demonstrate strategy and agility (“my favorite part of watching the games”, as described above). These two rules vexed me so much that they inspired me to write this blog post to suggest that the rules be changed.

“Think Logically” by Randall Munroe. http://xkcd.com/1112/

So, to the WFTDA, here are my three unsolicited recommendations:

  1. Eliminate penalties that send jammers to the penalty box (6.1.1). Each time you send off a jammer it’s a blank check for the other team to score 15 to 20 unanswered points. Baltimore lost by 14 points last night, in a game where three seemingly bogus penalties on Baltimore jammers netted about 45 total points for Boston.

The disparity in impact between a blocker getting a penalty (perhaps 3 points) and a jammer getting a penalty (15+ points)—for the same infraction—is absurd. This rule should be changed. Perhaps a jammer’s penalty should be immediately served by a blocker, as is done with penalties against goalies in hockey? (I note that other writers have proposed other rule changes for jammer penalty enforcement.)

Certainly other sports have asymmetric positional rule impact. For example, in this article Gregg Easterbrook complains about asymmetric football rules such as holding, where offensive holding is a 10-yard penalty with a replay of the down, whereas defensive holding is a 5-yard penalty with an automatic first down:

Under current rules, the defense is penalized more than the offense for the same foul. Let’s make defensive and offensive holding equivalent.

But back to roller derby, the rule to send off jammers for penalties is one of the worst rules I’ve ever seen in sports.

  2. Limit (or eliminate) the concept of Lead Jammer calling off the jam (2.4.7). It is truly dull from an audience perspective to watch the jammer make one loop of the track, pass the blockers, and immediately call off the jam for a ho-hum 4-0 score. You’re wasting 90 seconds of potentially audience-stimulating action!

Yes, the current rule makes it valuable to be the Lead Jammer, but there are many ways that the rules could be improved to preserve the value of being the first jammer out of the gate:

  • Teams could be limited in the number of times per half that a jammer could call off the jam, making it a strategic (instead of tactical) decision for when to call off the jam.
  • The Lead Jammer (or perhaps just the jammer currently in the lead) could receive a bonus point every time she laps the pack, or bonus points just for being the Lead Jammer (cf. NASCAR points for leading the most laps in a race).
  • The Lead Jammer could accumulate more than 1.0 point for each blocker passed, or could receive another scoring modifier.
  • Jammers could only be eligible to call off the jam if in the lead under certain conditions (for example in every other jam [cf. vulnerability in Bridge], or only when trailing in score, or only in the first half of the period).

I felt this rule was just plain dumb. For the first 20 minutes of action I couldn’t figure out why the jams were ending with time still left on the jam clock. Once it was explained to me, I couldn’t fathom why the rules authority would go out of its way to make the game boring to its audience.

  3. Eliminate some of the 30-second delays between jams (1.5.3). Instead of having a new set of personnel line up to start every discrete jam, consider having a continuous “extended jam” concept where winded skaters are allowed to be replaced with fresh skaters while the action continues around them (as in relay speed skating, or hockey, or tag-team wrestling).

The personnel changes could happen continuously (for example by a skater tagging out with another skater in the clearance area outside the track); or they could happen on a referee’s whistle at 2-minute intervals. Scoring could continue with whichever skater is currently wearing the star helmet. Perhaps the last 10 minutes of each period could be skated under the “extended jam” concept?

(This rule isn’t something that vexed me, but I feel such a change would be interesting to try and could make the games more interesting for players and spectators alike.)

Anyone from the WFTDA Rules Committee is welcome to contact me (see instructions on this page); I would be happy to elaborate on any of the above three points.

April 5, 2015

Every Eighteen Months, part 1: First steps

Filed under: Opinions — JLG @ 7:05 PM

Imagine, if you will, a much younger John.

Ten years ago, almost exactly to this day, I stood on the rain-slick precipice of darkness, scrying into the future to decide what’s next? for my life.

I had spent the past six years as a student in the best storage systems research laboratory in the world, itself part of the #1-ranked computer engineering program in the world. I’d learned how to think in graduate school—how to explore the question why?—how to look at trees and see the forest. I’d gained confidence: confidence to formulate and pursue a technical hypothesis, confidence to take the risk that a line of investigation might be wrong, confidence to lead and advise others in the pursuit of their personal goals.

I had spent most of those six years assuming that I would follow in the footsteps of my thesis advisor—become a tenure-track faculty member, set a visionary research agenda and rally a large team to sponsor and drive the work, change the world through my research results, and nurture my own set of wide-eyed/wet-behind-the-ears students into the change-the-world visionaries of the future.

I had a wonderful girlfriend ready to help me succeed in my career, supportive colleagues and contacts in academia and industry, a solid record of publication and public speaking, a reasonably compelling vision for my future research agenda, and an eagerness to move somewhere new and experience/travel/enjoy being there.

I had my choice of off-the-hook amazing job offers in amazing places. In academia, both the University of Edinburgh (Scotland) and the University of Waterloo (Canada) offered me the academic positions I’d dreamed about. I’d also interviewed at world-class industrial and business firms—including Google, Hitachi, Intel, McKinsey & Co., and Microsoft Research—and had received offers to work directly with two of the most impressive people I’d ever met (John Colgrove at Veritas Software near San Francisco, or Dr. Leendert van Doorn at IBM Research near New York City).

I was the king of the world!

But then I asked my advisor a question he couldn’t answer.

I asked for my advisor’s advice on how to choose which job to take—what to consider in terms of internal and external impact, work-life balance, and personal satisfaction; what I could expect over the first few years of my career; what were my personal strengths and weaknesses that would be unique (or be problematic) in each position; how each choice would tie in with my long-term career and life plans.

He answered that he couldn’t advise me. He couldn’t because he had only ever worked in academia; he didn’t have the experience to help me understand what I could expect; to suggest which choices would open or close which doors; to guide me toward the right decision for me.

That answer stunned me. It opened my eyes that I would have the same limitation if I went straight into an academic position.

It was the mentorship aspect of a job in academia—the “confidence to lead and advise others in the pursuit of their personal goals” and the “nurture my own set of wide-eyed/wet-behind-the-ears students into the change-the-world visionaries of the future”—that was the key draw to my pursuing such a position. From this conversation with my advisor I realized that I needed more personal work experience in order to pay forward the gifts of encouragement, nurturing, guidance, and freedom that I’d received from my teachers and mentors over the years.

Moreover, if I was going to change the world I realized that I needed a better understanding of how the real world works. At Carnegie Mellon I saw firsthand the different ways successful faculty members pursued technology transition: One, my advisor, built relationships with key technology companies in his field and freely shared results (and freely shared graduate students for internships) to ensure a steady flow of ideas and technology both into and out of his group. Another, Dr. Phil Koopman, applied his years of industry experience toward choosing projects that were both academically rigorous and immediately practical for solving the complex problems that had vexed him but that industry couldn’t (or wouldn’t) solve for itself. And a third, Dr. Garth Gibson, took the flat-out entrepreneurial approach and founded a company to productize the game-changing technology concept he’d conceived, defined, explored, and proselytized over the past decade.

So I decided not to pursue a professorship, at least not for a while, while I instead went out to discover how the real world works.

(Turning down the position at Edinburgh is still the hardest phone call I’ve ever made.)

I accepted the position at IBM, and the next decade turned out to be far more interesting (and far more interestingly chaotic) than I imagined it would be.

[Author’s note: I wrote this text on May 10, 2014.]

November 30, 2014

How to Peer Review

Filed under: Opinions,Reviews — JLG @ 8:25 AM

I tried to title this blog post “7 things to keep in mind during peer review (#5 will shock you)” but just couldn’t bring myself to do it.

It’s paper review season; I’ve been working on reviewing papers for a conference for which a former IBM colleague invited me to be on the program committee.  Then this morning a long-time reader of this blog submitted a question:

First time on an academic CFP review team. Have the papers. Any tips for what to do?

Do I ever.  Here’s my advice:

  1. Don’t wait until the last minute to do your reviews.  I try to do at least 1/day so that I make progress and am able to give each paper enough time to do a good job.
  2. The words of the day are ‘constructive criticism’. Especially for papers you grade as ‘reject’, give the authors suggestions that, if adopted, would have moved the paper towards the ‘accept’ column.
  3. Don’t be biased by poor English skills. Judge the merit of the ideas. If needed, the program committee chair can assign a ‘shepherd’ to work with the authors to improve the phrasing and presentation. (There is usually a way for you to provide reviewer comments that are not given to the authors — you can flatly say “if we accept this paper then such-and-such must be fixed before publication.”)
  4. Be fair, in that it’s easy to say “well that’s obvious” and assign a paper a low score. Was it obvious *before* you read the paper? Not every manuscript has to be groundbreaking. Rather, every paper should advance the community’s understanding of important issues/concepts in a way that can be externally validated and built upon in future work.
  5. The Golden Rule applies: I try to give the kind of feedback (or: do the kind of thoughtful evaluation) that I wished other reviewers did on the papers I submitted.  That doesn’t mean I accept everything, of course; historically speaking I think I recommend ‘accept’ for only about 20% of papers.
  6. If you are working with junior academic types it can be a good idea to farm out a paper or two to them, both to give them experience/exposure and to give you sometimes a better evaluation of the paper than you might have done yourself. Be sure to convey the importance of confidentiality and ethics (not stealing unpublished work). Also be sure to read the paper yourself, and file your own review if there are important points not stated by your ‘external reviewer’.
  7. Peer review is part of the fuel that makes our scientific engines work. I’ve always thought of it as an honor to be asked to review a paper (and a *big* honor the few times I’ve been asked to be on a Program Committee). So I try to deliver reviews that ‘feed the scientific engines’, so to speak.

August 8, 2013

KB1YBA/AE, or How I Spent My Weekend In Vegas

Filed under: Opinions,Reviews — JLG @ 8:15 PM

Last week I attended Black Hat USA 2013, BSidesLV 2013, and DEF CON 21 in fabulous Las Vegas, Nevada, thanks to the generosity of my employer offering to underwrite the trip.  Here were the top six topics of discussion from the weekend:

1. Not surprisingly, PRISM was the main topic of conversation all weekend.  Depending on your perspective, PRISM is either

“an internal government computer system used to facilitate the government’s statutorily authorized collection of foreign intelligence information from electronic communication service providers under court supervision” (Director of National Intelligence statement, June 8, 2013)

or

“a surveillance program under which the National Security Agency [NSA] vacuums up information about every phone call placed within, from, or to the United States [and where] the program violates the First Amendment rights of free speech and association as well as the right of privacy protected by the Fourth Amendment” (American Civil Liberties Union statement, June 11, 2013)

The NSA director, Gen. Keith Alexander, used the opening keynote at Black Hat to explain his agency’s approach to executing the authorities granted by Section 215 of the USA PATRIOT Act and Section 702 of the Foreign Intelligence Surveillance Act.  His key points were:

  • The Foreign Intelligence Surveillance Court (FISC) does not rubber-stamp decisions, but rather is staffed with deeply-experienced federal judges who take their responsibilities seriously and who execute their oversight thoroughly.  Along similar lines, Gen. Alexander stated that he himself has read the Constitution and relevant federal law, that he has given testimony both at FISC hearings and at Congressional oversight hearings, and that he is completely satisfied that the NSA is acting within the spirit and the letter of the law.
  • Members of the U.S. Senate, as well as other executive branch agencies, have audited (and will continue to audit) the NSA’s use of data collected under Section 215 and Section 702.  These audits have not found any misuse of the collected data.  He offered that point as a rebuttal to the argument that the Government can abuse its collection capability—i.e., the audits show that the Government is not abusing the capability.
  • Records collected under Section 215 and Section 702 are clearly marked to indicate the statutory authority under which they were collected; this indication is shown on screen (a “source” field for the record) whenever the records are displayed.  Only specially trained and tested operators at the NSA are allowed to see the records, and only a small number of NSA employees are in this category.  The collected data are not shared wholesale with other Government agencies but rather are shared on a case-by-case basis.
  • The NSA has been charged with (a) preventing terrorism and (b) protecting U.S. civil liberties.  If anyone can think of a better way of pursuing these goals, they are encouraged to share their suggestions at ideas@nsa.gov.

In the end I was not convinced by Gen. Alexander’s arguments (nor, anecdotally speaking, was any attendee I met at either Black Hat or DEF CON).  I walked away from the keynote feeling that the NSA’s collection of data is an indiscriminate Government surveillance program, executed under a dangerous and unnecessary veil of secrecy, with dubious controls in place to prevent abuse of the collected data that, if abused, would lead to violations of civil rights of U.S. citizens.  In particular, if this program had existed on September 11, 2001, I harbor no doubt that the statutory limits (use or visibility of collected data) would have been exceeded in the legislative and executive overreaction to the attacks.  This forbidden fruit is just too ripe and juicy.  As such I believe the Section 215 and Section 702 statutory limits will inexorably be exceeded if these programs—i.e., the regular exercise of the federal Government’s technical capability to indiscriminately collect corporate business records about citizen activities—continue to exist.

I do appreciate how the NSA is soliciting input from the community on how the NSA could better accomplish its antiterrorism directive.  Unfortunately, Pandora’s Box is already open; I can’t help but feel disappointed that my Government chose secretly to “vacuum up information” as its first-stab approach to satisfying the antiterrorism directive.  As I wrote in a comment on the Transportation Security Administration (TSA)’s proposed rule to allow the use of millimeter wave scanning in passenger screening:

I fly monthly.  Every time I fly, I opt out of the [millimeter wave] scanning, and thus I have no choice but to be patted down.  I shouldn’t have to submit to either.  In my opinion and in my experience, the TSA’s intrusive searching of my person without probable cause is unconstitutional, period.

I appreciate that the TSA feels that they’re between a rock and a hard place in responding to their Congressional directive, but eroding civil liberties for U.S. citizens is not the answer to the TSA’s conundrum.  Do better.

Eroding civil liberties for U.S. citizens is not the answer to the NSA’s conundrum.  Do better.

2. Distributed Denial of Service (DDoS) attacks.  There were at least five Black Hat talks principally about DDoS, including one from Matthew Prince on how his company handled a three hundred gigabit per second attack against a customer.  The story at that link is well worth reading.  Prince’s talk was frightening in that he forecast how next year we will be discussing 3 terabit/sec, or perhaps 30 terabit/sec, attacks that the Internet will struggle to counter. The DDoS attacks his company encountered required both a misconfigured DNS server (an open DNS resolver that is able to contribute to a DNS amplification attack; there are over 28 million such servers on the Internet) and a misconfigured network (one that does not prevent source address spoofing; Prince reports there are many such networks on the Internet).
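To make the amplification mechanics concrete, here is a back-of-the-envelope sketch; the request and response sizes are illustrative assumptions on my part, not figures from Prince’s talk:

```python
# Rough sketch of DNS amplification arithmetic.  A small spoofed query
# elicits a much larger response aimed at the victim, so the attacker's
# bandwidth is multiplied by the response/request size ratio.
request_bytes = 64       # small spoofed DNS query (assumed size)
response_bytes = 3000    # large response, e.g. an ANY query with EDNS0 (assumed size)
amplification = response_bytes / request_bytes

attacker_bps = 1e9       # suppose the attacker can emit 1 Gbit/s of spoofed queries
victim_bps = attacker_bps * amplification
print(f"~{amplification:.0f}x amplification; victim absorbs ~{victim_bps / 1e9:.0f} Gbit/s")
```

Even a modest amplification factor, multiplied across millions of open resolvers, is how an attacker with ordinary bandwidth produces the hundreds of gigabits per second described above.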

Another interesting DDoS talk was Million Browser Botnet by Jeremiah Grossman and Matt Johansen.  The essence is that you write your botnet code in Javascript, then buy Javascript ads on websites…resulting in each reader of those websites becoming a node in your personal botnet (as long as they’re displaying the page).  Wow.

3. CreepyDOL.  To quote the Ars Technica article about this work: “You may not know it, but the smartphone in your pocket is spilling some of your deepest secrets to anyone who takes the time to listen. It knows what time you left the bar last night, the number of times per day you take a cappuccino break, and even the dating website you use. And because the information is leaked in dribs and drabs, no one seems to notice. Until now.”  The researcher, Brendan O’Connor, stalked himself electronically to determine how much information an attacker could glean by stalking him electronically.  In his presentations he advanced a twofold point:

(a) it’s easy to track people when they enable wifi or Bluetooth on their phones (since the phone sprays out its MAC address in trying to connect over those protocols), and

(b) many services (dating websites, weather apps, Apple iMessage registration, etc.) leak information in clear text that can be correlated with the MAC address to figure out who’s using a particular device.

O’Connor’s aha! moment was that he created a $50 device that can do this tracking.  You could put 100 of these around the city and do a pretty good job of figuring out where a person of interest is and/or where that person goes, for relatively cheap ($5,000) and with no need to submit official auditable requests through official channels.
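A minimal sketch of the correlation idea, with entirely hypothetical data (O’Connor’s actual CreepyDOL pipeline is far more sophisticated):

```python
from collections import defaultdict

# Hypothetical sketch: each cheap sensor reports (mac, place, time)
# sightings; grouping the reports by MAC address yields a per-device
# movement trail without any official channels at all.
sightings = [
    ("aa:bb:cc:dd:ee:ff", "coffee shop", "08:05"),
    ("11:22:33:44:55:66", "coffee shop", "08:07"),
    ("aa:bb:cc:dd:ee:ff", "office lobby", "08:40"),
    ("aa:bb:cc:dd:ee:ff", "bar", "18:30"),
]

trails = defaultdict(list)
for mac, place, time in sightings:
    trails[mac].append((time, place))

for mac, trail in trails.items():
    print(mac, sorted(trail))
```

Grouping by MAC address is all it takes to turn scattered sensor reports into a trail; the clear-text service leaks then attach a name to that trail.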

The researcher also brought up an excellent point about how the chilling effect of draconian laws like the Computer Fraud and Abuse Act (CFAA) makes it impossible for legitimate computer security researchers to perform their beneficial-to-society function.  If the CFAA had existed in the physiomechanical domain in the 1960s then Ralph Nader could have faced time in federal prison for the research and exposition he presented in Unsafe At Any Speed: The Designed-In Dangers of The American Automobile—and consumers might never have benefitted from the decades of safety improvements illustrated in this video.  Should consumer risks in computer and network security systems be treated any differently than consumer risks in automotive systems?  I’m especially curious to hear arguments from anybody who thinks “yes.”

4. Home automation (in)security.  In a forthcoming blog post I will describe the astonishing surprise expense we incurred this summer to replace the air conditioner at our house.  (“What’s this puddle under the furnace?” asked Evelyn.  “Oh, it’ll undoubtedly be a cheap and easy fix,” replied John.)

As part of the work we had a new “smart” thermostat installed to control the A/C and furnace.  The thing is a Linux-based touchscreen, and is amazing—I feel as though I will launch a space shuttle if I press the wrong button—and of course it is Wi-Fi enabled and comes with a service where I can control the temperature from a smartphone or from a web application.

And, of course, with great accessibility come great security concerns.  Once the thermostat was up and running on the home network I did the usual security scans to see what services seemed to be available (short answer: TCP ports 22 and 9999).  Gawking at this new shuttle control panel got me interested in where the flaws might be in all these automation devices, and sure enough at Black Hat there were a variety of presentations on the vulnerabilities that can be introduced by consumer environmental automation systems.

Clearly, home automation and/or camera-enabled insecurity was a hot topic this year.  I was glad to see that the installation manual for our new thermostat (not camera-enabled, I think) emphasizes that it should be installed behind a home router’s firewall; it may even have checked during installation that it received an RFC 1918 private address to ensure that it wasn’t directly routable from the greater Internet.
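For the curious, the kind of quick check described above takes only a few lines of Python (the thermostat address here is hypothetical, and the connect results will of course depend on your own network):

```python
import ipaddress
import socket

def tcp_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a plain TCP connect; True if something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

thermostat = "192.168.1.50"  # hypothetical address on the home LAN

# RFC 1918 check: a private address isn't directly routable from
# the greater Internet.
print(ipaddress.ip_address(thermostat).is_private)  # True

# Probe the two ports mentioned above:
for port in (22, 9999):
    print(port, tcp_port_open(thermostat, port))
```

This obviously isn’t a substitute for a real scanner like nmap, but it’s enough to confirm that a new gadget landed behind the firewall with a private address.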

5. Femtocells and responsible disclosure.  Two years ago I wrote about research that demonstrated vulnerabilities in femtocells (a.k.a. microcells), the little cellular base stations you can plug into your Internet router to improve your cellular reception in dead zones.  This year, Doug DePerry and Tom Ritter continued the femtocell hacking tradition with a talk on how they got root access on the same device I use at home.  The researchers discovered an HDMI port on the bottom of the device, hidden under a sticker, and sussed out that it was actually an obfuscated USB port that provided console access.  Via this console they were able to modify the Linux kernel running on the device and capture unencrypted voice and SMS traffic.  The researchers demonstrated both capabilities live on stage, causing every attendee to nervously pull out and turn off their phones.  They closed by raising the interesting question of why any traffic exists unencrypted at the femtocell—why doesn’t the cellular device simply create an encrypted tunnel to a piece of trusted back-end infrastructure?  They also asked why femtocells are deployed at all, instead of simply piggybacking an encrypted tunnel over ubiquitous Wi-Fi.

Regarding responsible disclosure, the researchers notified Verizon in December 2012 about the vulnerability.  Verizon immediately created a patch and pushed it out to all deployed femtocells, then gave the researchers a green light to give the talk as thanks for their responsible disclosure.  Several other presenters reported good experiences with having responsibly disclosed other vulnerabilities to other vendors, enough so that I felt it was a theme of this year’s conference.

6. Presentation of the Year: “Adventures in Automotive Networks and Control Units” by Charlie Miller and Chris Valasek.  It turns out it’s possible to inject traffic through the OBD-II diagnostic port that can disable a vehicle’s brakes, stick the throttle wide open, and cause the steering wheel to swerve without any driver input.  Miller and Valasek showed videos of all these happening on a Ford Escape and a Toyota Prius that they bought, took apart, and reverse engineered.  It’s only August but their work gets my vote for Security Result of 2013.  Read their 101-page technical report here.


I mean, wow.  The Slashdot discussion of their work detailed crafty ways that this attack could literally be used to kill people.  The exposed risks are viscerally serious; Miller showed a picture from testing the brake-disabling command wherein he crashed uncontrolled through his garage, crushing a lawnmower and causing thousands of dollars of damage to the rear wall.  (In a Black Hat talk the day before, Out of Control: Demonstrating SCADA Device Exploitation, researchers Eric Forner and Brian Meixell provided an equally visceral demonstration of the risks of Internet-exposed and non-firewalled SCADA controllers by overflowing a real fluid tank using a real SCADA controller right on stage.  I for one look forward to this new age of security-of-physical-systems research where researchers viscerally demonstrate the insecurity of physical systems.)

Regardless, other than those gems I was unimpressed by this year’s Black Hat (overpriced and undergood) and felt “meh” about DEF CON (overcrowded and undernovel).  Earlier in the year I was on the fence about whether to attend the Vegas conferences, having been underwhelmed last year, which prompted my good friend and research partner Brendan to observe that if I had a specific complaint about these conferences then I should stop whining about it and instead do something to help fix the problem.  In that spirit I volunteered to be on the DEF CON “CFP review team,” in hopes that I could help shape the program and shepherd some of the talks.  Unfortunately I was not selected to participate (not at all surprising, since I work indirectly for The Man).

In my offer to volunteer I made these specific suggestions toward improving DEF CON, many of which are equally relevant to improving Black Hat:

I’d like to see the DEFCON review committee take on more of a “shepherding” role, as is done with some academic security conferences — i.e., providing detailed constructive feedback to the authors, and potentially working with them one-on-one in suggesting edits to presentations or associated whitepapers.

I think there are things the hacker community can learn from the academic community, such as:

* You have to answer the core question of why the audience should care about your work and its implications.

* It’s one thing to present a cool demo; it’s another to organize and convey enough information that others can build upon your work.

* It only strengthens your work if you describe others’ related work and explain the similarities and differences in your approach and results.

Of course there are plenty of things the academic community can learn from the hacker community!  I’m not proposing to swoop in with a big broom and try to change the process that’s been working fine for DEFCON for decades.  In fact I’m curious to experience how the hacker community selects its talks, so I can more effectively share that information with the academic community.  (For example I spoke at last year’s USENIX Association board meeting on the differences between the USENIX annual technical conference and events like DEFCON, Shmoocon, and HOPE, and I commented on lessons USENIX could take away from hacker cons.)

But each year I’ve been disappointed at how little of a “lasting impression” I’ve taken away from almost all of the DEFCON talks.  A “good presentation” makes me think about how *I* should change the way I approach my projects, computer systems, or external advocacy. I wish more DEFCON talks (and, frankly, Black Hat talks) were “good presentations.”  I’m willing to contribute my effort to your committee to help the community get there.

One academic idea you might be able to leverage is that whenever you’re published, you’re expected to serve on program committees (or as a reviewer) in future years.  (The list of PC members is usually released at the same time as the CFP, and the full list of PC members and reviewers is included in the conference program, so it’s both a service activity and resume fodder for those who participate.)  So perhaps you could start promulgating the idea that published authors at BH and DC are expected to do service activities (in the form of CFP review team membership) for future conferences.

Finally, KB1YBA/AE in the title of this post refers to the culmination of a goal I set for myself eighteen years ago.  There are three levels of achievement (three license classes) that you can attain as a U.S. ham radio enthusiast:

  • Technician class.  Imagine, if you will, a much younger John.  In 1995, at the urging of my old friend K4LLA, I earned technician (technically “technician plus”) class radiotelephone privileges by passing both a written exam and a 5-words-per-minute Morse code transcription exam.  [Morse code exams are no longer required to participate in amateur radio at any level in the United States.]  At that time I set a goal for myself that someday I would pass the highest-level (extra class) exam.
  • General class.  In 2011, at the urging of my young friend K3QB, I passed the written exam to earn general class privileges.  With each class upgrade you are allowed to transmit on a broader range of frequencies.  With this upgrade I was assigned the new call sign KB1YBA by the Federal Communications Commission.
  • Amateur Extra class.  At DEF CON this year, with the encouragement of my former colleague NK1B, I passed the final written exam and earned extra class privileges.  It will take a few weeks before the FCC assigns me a new call sign, but in the meantime I am allowed to transmit on extra-class frequencies by appending an /AE suffix onto my general-class call sign when I identify myself on-air.  For example: “CQ, CQ, this is KB1YBA temporary AE calling CQ on [frequency].”

I don’t mean to toot my own horn, but it feels pretty dang good to fulfill a goal I’ve held for half my life.  Only 17% [about 120,000] of U.S. ham radio operators have earned Amateur Extra class privileges.

May 26, 2013

Security B-Sides Boston 2013

Filed under: Reviews — JLG @ 11:07 AM

Security B-Sides is an odd duck series of workshops.  Are you:

  1. Traveling to attend (or already living near) a major commercial security conference (RSA in San Francisco, Black Hat in Las Vegas, or SOURCE in Boston)?
  2. Not particularly interested in attending any of the talks in the commercial security conference you’ve already paid hundreds of dollars to attend?
  3. Unconcerned with any quality control issues that may arise in choosing a conference program via upvotes on Twitter?

Then you should attend B-Sides.

Okay, so it’s not as grim as I lay out above.  Earlier this month I attended Security B-Sides Boston (BSidesBOS 2013) on USSJoin’s imprimatur.  I felt the B-Sides program itself was weak, the hallway conversations were good, the keynotes were great, and the post-workshop reception was excellent.

But if I were on the B-Sides steering committee I would have B-Sides take place either immediately before or immediately after its symbiotic commercial conference.  In academic conferences you will often see a “core” conference with 1-day workshops before or after or both, meaning that attendees can optionally participate, without requiring separate travel, and without interfering with the conference they’ve already paid hundreds of dollars to attend.

My takeaways from the B-Sides workshop came from the two keynote talks.  The first keynote, by Dr. Dan Geer (chief information security officer at In-Q-Tel), was one of the best I’ve ever seen.  Some of his thought-provoking points included:

  • It’s far cheaper to keep all your data than to do selective deletion.  He implied that there is an economic incentive at work whose implications we need to understand:  As long as it’s cheaper to just keep everything (disks are cheap, and now cloud storage is cheap), people are going to just keep everything.  I’d thought about the save-everything concept before, but not from an economic perspective.
  • When network intrusions are discovered, the important question is often “how long has this been going on?” instead of  “who is doing this?”  He implied that recovery was often more important than adversarial discovery (i.e., most people just want to revert affected systems to a known-good state, make sure that known holes are plugged, and move forward.)  And the times could be staggering; he noted a Symantec report that the average zero-day exploit is in use for 300 days before it is discovered.
  • Could the U.S. corner the vulnerability market?  Geer made the fascinating suggestion that the U.S. buy every vulnerability on the market (offering 10 times market rates if needed) and immediately release them publicly.  His goal is to collapse the information asymmetry that has built up because of the economics of selling zero-day attacks.  He pined for the halcyon days of yore when zero-day attacks were discovered by hobbyists and released for fun (leading to “market efficiency” where everyone was on the same playing field when it came to technology decisions) rather than the days of today when they are sold for profit (leading to asymmetry, where known vulnerabilities are no longer public).
  • “Security is the state of unmitigatable surprise.  Privacy is where you have the effective capacity to misrepresent yourself.  Freedom in the context of the Internet is the ability to reinvent yourself when you want.”  He suggested that each of us should have as many distinct, curated online identities as we can manage — definitely an interesting research area.  He made the fascinating suggestion of “try to erase things sometime,” for example by creating a Facebook profile…then later trying to delete it and all references to it.
  • Observability is getting out of control and is not coming back.  He commented that facial recognition is viable at 500 meters, and iris identification at 50 meters.
  • All security technology is dual use; technology itself is neutral and should be treated as such.  During my early days as a government contractor I similarly railed against the automatic (by executive order) top secret classifications applied to cyber weaponry and payloads — because doing so puts the knowledge out of reach of our network security defenders.  As it turns out, One Voice Railing usually isn’t the most effective way to change entrenched bureaucratic thinking.  (I haven’t really figured out what is the most effective way.)
  • “Your choice is one big brother or many little brothers.  Choose wisely.”  This closing line is open to deep debate and interpretation; I’ve already had several interesting conversations about what Geer meant and what he’s implying.  My position is that his earlier points (e.g., observability is out of control and is not coming back) demonstrate that we’ve already crossed the Rubicon of “no anonymity, no privacy” — without even realizing it — and that it’s far too late to go back to a time where no brother will watch you.  Can anything be done?  I’m very interested in continuing to debate this question.

Mr. Josh Corman (director of security intelligence at Akamai) gave the second keynote.  Some of his interesting points included:

  • Our dependence on software and IT is growing faster than our ability to secure it.  Although this assertion isn’t new, it always brings up an interesting debate: if you can’t secure the software, then what can you do instead?  (N-way voting?  Graceful degradation?  Multiple layers of encryption or authentication?  Auditing and forensic analyses?  Give up?)  A professor I knew gave everybody the root password on his systems, under the theory that since he knew it was insecure he would only use the computer as a flawed tool rather than as a vital piece of infrastructure.  Clearly the professor’s Zen-like approach wouldn’t solve everyone’s security conundrums, but the simplicity and power of his approach makes me think that there are alternative, unexplored, powerful ways to mitigate the imbalance of insecure and increasingly critical computer systems.
  • HDMoore’s Law: Casual attacker power grows at the rate of Metasploit.  This observation was especially interesting: not only do defenders have to worry about an increase in vulnerabilities but they need to worry about an increase in baseline attacker sophistication, as open-source security-analysis tools grow in capability and complexity.
  • “The bacon principle: Everything’s better with bacon.”  His observation here is that it is especially frustrating when designers introduce potential vulnerability vectors into a system for no useful reason.  As an example, he asks why an external medical device needs to be configurable using Bluetooth when the device (a) doesn’t need to be frequently reconfigured and (b) could be just as easily configured using a wired [less permissive] connection.  The only thing Bluetooth (“bacon”) adds to such a safety-critical device is insecurity.
  • Compliance regulations set the bar too low.  Corman asserts that the industry’s emphasis on PCI compliance (the payment card industry data security standard) means that we put the most resources towards protecting the least important information (credit card numbers).  It’s a double whammy:  Not only is there an incentive to only protect PCI information and systems, but there is no incentive to do better than the minimal set of legally-compliant protections.
  • Is it time for the security community to organize and professionalize?  Corman railed against “charlatans” who draw attention to themselves (for example, by appearing on television) without having meaningful or true things to say.  He implied that the security community should work together to define and promulgate criteria, beyond security certifications, that could provide a quality control function for people claiming to represent security expertise and best practices.  (A controversial proposal!)  A decade ago I explored related conversations about the need to create licensed professional software engineers, both to incent members of our community to adhere to well-grounded and ethical principles in their practice and to provide the community and the state with engineers who assume the responsibility and risk over critical systems designs.
  • “Do something!”  Corman closed by advocating for the security community to come together to shape the narrative of information security — especially in terms of lobbying to influence Governmental oversight and regulation — instead of letting other people do the lobbying and define the narrative.  He gave the example of unpopular security legislation like SOPA and PIPA: “you can either DDoS it [after the legislation is proposed] or you can supply draft language [to help make it good to begin with].”  I felt this was a great message for a keynote talk, especially in how it matches the influential message I heard from a professor at Carnegie Mellon (Dr. Philip Koopman) who fought successfully against adoption of the Uniform Computer Information Transactions Act and who exhorted me and my fellow students to be that person who stands up and fights on important issues when others remain silent.

All in all not a bad way to spend $20 and a Saturday’s worth of time.

April 14, 2013

The washing machine sounds like a jackhammer!

Filed under: Homeownership — JLG @ 1:05 AM

Every few weeks we find ourselves driving in the nearby neighborhood where we lived last year, usually to visit Boloco or Zenna Noodle Bar or one of those ubiquitous self-serve yogurt places that are proliferating like 1990s-era Starbucks.  The drive invariably takes us by the apartment we used to rent on the ground floor (of four, plus basement) of a gorgeous building in a great location.  But what a strange feeling it is to see our old apartment from outside:  It’s a place that evokes memories of home…yet, after less than a year, it’s a place that feels inaccessibly distant.  Did we really used to live there?

(In terms of inaccessibly distant memories, even worse are the places we lived before moving up here to the craggy hills of Massachusetts.  I’ve recently visited friends who live near our former Florida house and friends near my former Maryland apartment.  And I was stunned at how those places don’t even look the way I remember them.  Yikes.)

Still, we have no regrets at having switched from making monthly lease payments to a faceless landlord in California to making monthly mortgage payments to a faceless bank in Ohio.

Along those lines, here are some of my recent lessons in homeownership:

  • Read the Deed (or: just because you hired a lawyer doesn’t mean you don’t need to check his work).  

We just discovered that the closing attorney made four errors in closing on our house.  (I’d been aware of three errors but dug up a fourth this week.)  Here are the errors from least severe to most:

1. The attorney miscalculated our share of the property tax.  When you buy a house you reimburse the seller for some of the property tax that’s already been paid.  In our case the seller had already paid taxes for the period April 1–June 30, so we needed to reimburse him for the period June 15–30 (we closed on June 15).  The attorney calculated the wrong amount.  But it was only a $15 error (“chump change”, as my former racquetball partner would have said).

2. The attorney miscalculated the size of our required monthly escrow payments by $180/month.  So every month we’re paying an extra $180 unnecessarily into our escrow account (beyond the amount that is needed to fully pay our property tax and insurance premium) — a $1,620 overpayment so far, and growing.  Once a year our mortgage servicer is supposed to recalculate the required monthly escrow payment, so I do expect to get the money refunded sometime late this year.  But in the meantime, we have $1,620 less in our bank account.  What’s most annoying to me is that I pointed this error out the day before closing (we only received the draft settlement statement at 4:26pm the day before our 10:00am closing) but the attorney didn’t bother to fix it.

3. The attorney made two errors on our Massachusetts Declaration of Homestead: a misspelled property address and an invalid signing date (May 15 instead of June 15).  I worried that these scrivener’s errors could invalidate the document — in a worst-case scenario, an invalidated declaration of homestead could put up to $375,000 of our home equity at risk — so I had the attorney file a replacement declaration with the correct address and new date.  To the attorney’s credit, his office paid the $35 filing fee for the second filing.

4. The doozy: The attorney misprepared the deed to the property!  In Massachusetts there are three main ways to co-own real property.  If you are married you generally want to take title as tenants by the entirety; we certainly did, and the attorney confirmed our choice over a month in advance.  But the deed he prepared and filed gives us title as tenants in common.  Argh!

The deed currently reads:

I, [the seller], in consideration of [the sales price], grant to JOHN GRIFFIN and EVELYN GRIFFIN of [address] with quitclaim covenants…

It should instead state:

…grant to JOHN GRIFFIN and EVELYN GRIFFIN, husband and wife, as tenants by the entirety, of [address]…

Argh indeed.  The solution is for us to deed ourselves the property, inserting that phrase into the new deed.  Again to the attorney’s credit, his office is eating the $125 cost of the second filing.  (I asked whether filing a new deed would have any impact on our title insurance policy, homestead declaration, mortgage note, or our filings with the IRS.  His answer is “no”, since the new deed won’t change the parties involved in ownership — just the form of tenancy.)

I discovered the problem when I compared our deed to the deeds of other married couples we know and looked up the discrepancy in the relevant statutes.
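Incidentally, the per-diem proration in error #1 is simple enough to check yourself.  A sketch, with a hypothetical tax bill (I’m not reproducing our actual numbers here):

```python
from datetime import date

def buyer_tax_share(total_tax: float, period_start: date,
                    period_end: date, closing: date) -> float:
    """Portion of the seller's prepaid tax the buyer owes, counting
    the closing date through the end of the prepaid period."""
    period_days = (period_end - period_start).days + 1
    buyer_days = (period_end - closing).days + 1
    return total_tax * buyer_days / period_days

# Hypothetical $1,000 bill prepaid for April 1 - June 30, with a
# June 15 closing: the buyer reimburses 16 of the 91 days.
share = buyer_tax_share(1000.00, date(2012, 4, 1),
                        date(2012, 6, 30), date(2012, 6, 15))
print(round(share, 2))  # 175.82
```

Five minutes with the settlement statement and a calculation like this would have caught the attorney’s mistake before closing.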

  • The washing machine sounds like a jackhammer!

Whenever we ran a load (in the LG high-efficiency front-load washer) with warm or hot water, the house piping rattled like a jackhammer and the washer’s intake hoses vibrated so strongly that I worried they would dent the wall behind the washer.  A quick search of the Internet led me to diagnose a rapid-fire water hammer effect, likely due to the washer rapidly opening and closing the hot water valve in a misguided attempt to regulate intake temperature.  And there’s a cheap and easy fix:

$10 and no more worries that your washing machine is going to rip your pipes out of the wall.

What you need is a water hammer arrestor.  You actually need two ($20 total), one for each intake hose — and no more jackhammering.  But, while you’re flailing away behind the washing machine:

  • Is the dryer supposed to be venting so much exhaust into the laundry room? 

While installing the water hammer arrestors I discovered that the dryer’s exhaust vent wasn’t flush with the exhaust pipe leading to the outside.  (“Discovered” by noticing that there was lint everywhere behind the dryer.)  Although much of the hot, humid, lint-filled dryer exhaust correctly went outside, some of the hot, humid, lint-filled air was staying inside and slowly coating the back of the dryer, the wall, the floor, and the hoses with hot and humid lint.

The ruffled pipe on the left is correctly inserted into the exhaust pipe on the right. When incorrectly non-inserted (not shown), some of that warm moist air unsurprisingly flows back into the laundry room.

It was pretty easy to fix by moving the dryer so that the pipes lined up correctly.  My guess is they were knocked out of alignment a few months ago when the insulation contractors were busy drilling holes in the walls and pumping them full of cellulose.

But the larger lesson for me is that nobody is going to come knocking on our front door asking to check whether our dryer is venting correctly.  Gone are the days when some faceless property management company dropped by periodically to take care of these kinds of things.  So I’m making an effort to look behind things (like the kitchen stove), or underneath things (like the refrigerator coils), or around things (like the toilet flush mechanism), or inside things (like the bathtub drain), to ensure I understand what’s going on and especially to ensure I notice when something isn’t quite right.  (It’s surprising how much more you care about fixing a problem when it’s your problem instead of being your landlord’s problem.)

And speaking of venting:

  • Is the bathroom fan supposed to be venting so much exhaust into the attic?

The short story here is that the electrical contractors used duct tape to connect the fan exhaust ducts to the roof vents.  This image from Lowes.com shows the components that usually route air from your exhaust fan to the outdoors:

Roof vent kit. The middle component is the coupling that would normally connect the hose (upper left) to the vent cap (upper right). Instead of buying this $3 part, our electrical contractor decided to use duct tape, with predictable results. Image source: Lowes.com

It took me months to notice that the contractor had skimped on the coupling; the rest of the installation was very professional.  (I did notice that he had wrapped the connection area with duct tape, but I assumed he put it there to support the weight of the hose, not to connect the hose to the cap.)  I only discovered it when I noticed a “used bathroom smell” in the stairwell leading to the attic space:  The fan was drawing air out of the bathroom and pushing it into the attic, which then pushed it towards the stairwell, then back towards the bathroom.  When I checked the attic I discovered that the duct tape on both hoses was failing, causing the hoses to vent partially into the attic space.  $6 of parts (and about an hour of cursing and bumping my head) and all is well.

(Side note 1:  If I could go back in time, I’d have specifically instructed the contractor to use rigid ducting to exhaust the new fans; I think that would reduce the fan noise even further.  I may eventually go in and replace the ducting myself.)

(Side note 2:  Putting the bathroom fans on a timer switch is easily one of the most satisfactory modifications that we’ve done to the house.  Before we take a shower we simply press the “60 minute” button on the switch and forget about the fan — and we have no more moisture problems in the bathroom.  Try it!)

(Side note 3:  There is a conventional formula that you can use to determine what capacity [cubic feet per minute, CFM] bathroom exhaust fan you need.  I was tempted to get an oversized fan but decided to get exactly the size called for by the formula.  My decision was correct:  if you install a timer for your bathroom fan, you don’t need an oversized fan.)
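For anyone curious what the sizing calculation looks like, here is a sketch using one common rule of thumb (eight air changes per hour; the room dimensions are hypothetical, and I’m not claiming this is the exact formula I used):

```python
def bathroom_fan_cfm(length_ft: float, width_ft: float,
                     height_ft: float = 8.0,
                     air_changes_per_hour: float = 8.0) -> float:
    """Fan capacity in cubic feet per minute (CFM) needed to replace
    the room's air air_changes_per_hour times per hour."""
    room_volume_cuft = length_ft * width_ft * height_ft
    return room_volume_cuft * air_changes_per_hour / 60.0

# A hypothetical 8 ft x 6 ft bathroom with an 8 ft ceiling calls for
# roughly a 50 CFM fan.
print(round(bathroom_fan_cfm(8, 6)))
```

Run the numbers for your own bathroom before paying extra for a bigger fan; with the timer switch, the formula-sized fan was plenty.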

  • One snow shovel isn’t enough for two people.

Lesson learned:  Two snow shovels would have gotten the job done in half the time with twice the fun.

Before the cold season I bought a good-quality plastic shovel for bulk snow removal.  Our neighbor later mentioned (correctly) that we also need a square metal shovel to break up and remove ice from the sidewalk, for those days when freezing rain sticks to the walkway and stairs.

Our roofing contractor recommended we get a roof rake because of the shallow slope of the roof over our porch.  You use it to rake heavy snow off your roof to avoid potential collapse from the weight of the snow.  I ended up being glad we bought one (the rakes are about $50 each, but prevention is cheaper than repair); I used it twice this winter when we had deep snows followed by rain.

  • Which bulb to buy?

One nice thing about homeownership is you can finally start thinking in terms of years for things like “what’s the return on investment of buying one of those newfangled expensive light bulbs?”

When we bought the house all the fixtures had incandescent bulbs.  When the Mass Save folks came to give us a free energy audit, they replaced all the incandescents with CFLs (for free!).  CFLs use about 25% of the energy of incandescent bulbs (lower wattage, less heat) and have a longer service life than incandescents.  Unfortunately, CFLs take a few minutes to warm up to full brightness and can be pretty dim for the first few moments after turning them on — fine for some rooms but not really good for places like stairwell lighting.  As a test we’ve replaced a couple of screw-in CFLs with LED bulbs and so far are very happy with the light distribution, color, and instant-on performance of the LED bulbs.

I also put three low-lumen LED bulbs in an outdoor lamppost to try them out.  I ended up writing a review of the bulbs in which I calculated the long-term cost savings you get by buying an $11 LED bulb instead of a $1 incandescent bulb:

These things are advertised as “2700–3000K” but they look much whiter to me — maybe 4000K? — certainly nothing like the warm orange-y color shown in the item description. My only complaint is that they don’t look as pretty as the dimmed 15W incandescent bulbs they replaced (the ones with the nice orange filament glow). But these bulbs don’t look bad, just a little futuristic.

I really like their light intensity of 50 lumens per bulb; I bought them in part because these were the dimmest bulbs I could find. I installed them in a lamppost next to our front steps. The old 110 lumen incandescents were harshly bright on dark evenings, so much so that three of them together hurt my eyes when I glanced at them. These LED bulbs are pleasantly dim (though still too bright to stare at directly) — the bulbs aren’t distracting when viewed from across the street, but they still provide enough illumination to make out the steps.

These bulbs are currently $11/bulb[.] I’m curious to see whether I get the claimed 3-year (30,000-hour) service life out of these LEDs, especially with them being in the outdoor fixture. At current electricity rates, each 1.5W LED bulb costs me $0.92 per year if I leave them on 24/7. So if the bulbs do last three years, then it’ll cost me $4.59 total (bulb cost plus electricity cost) per bulb per year.

It turns out that that’s a bargain in comparison with my old 15W incandescents ($1.09/bulb, 1500-hour service life). A 15-watt bulb costs $9.24 in electricity per year when run continuously, and a 1500-hour service life means you’ll burn through six bulbs each year…so the total cost for incandescents running 24/7 is $14.69 per bulb per year.

(In actual use I only ran the incandescents at night and I had to replace them after about six months. Under that 12-hour “only at night” scenario, the incandescent cost is only $6.80 per bulb per year — but that’s still more expensive than the $4.59/year if I run the LED bulbs full-time. And in an apples-to-apples comparison, if I were to run the LEDs only at night [and if they last six years that way] then the total cost for the LEDs is only $2.75 per bulb per year.)
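The review’s arithmetic can be reproduced with a short sketch.  The electricity rate isn’t stated in the post; roughly $0.07/kWh is what the $0.92/year figure implies, and because the function below models fractional bulb replacement, its incandescent total comes out a bit above the review’s rounded $14.69:

```python
def annual_cost(bulb_price, watts, life_years, hours_per_day=24, rate_per_kwh=0.07):
    """Yearly cost of keeping one socket lit: replacement bulbs plus electricity."""
    kwh_per_year = watts / 1000 * hours_per_day * 365
    return bulb_price / life_years + kwh_per_year * rate_per_kwh

# $11 LED, 1.5 W, claimed 3-year life, run 24/7:
led = annual_cost(11.00, 1.5, 3)                   # ≈ $4.59/year

# $1.09 incandescent, 15 W, 1500-hour life (about 0.17 years at 24/7):
incandescent = annual_cost(1.09, 15, 1500 / 8760)  # ≈ $15.56/year with fractional bulbs
```

The LED figure matches the review exactly; the incandescent figure differs from the review’s $14.69 only in how partial-year bulb replacements are counted.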

In order to get these bulbs to work I had to screw them in more tightly than I’d had to with the incandescent bulbs. At first they didn’t light up and I worried that I’d received three dead bulbs, but it turns out that a little extra torque fixed the problem.

Most of the LED bulbs we’ve bought have the smaller “candelabra” base.  One downside to both CFL and LED screw-in bulbs in the candelabra form factor is their size relative to incandescent bulbs.  The CFL and LED candelabra bulbs we have are noticeably larger than the incandescent bulbs they replace — which makes sense given the extra circuitry they must squeeze in, but which made it challenging to fit the newer bulbs into candelabra-sized wall fixtures.

April 8, 2013

A survey of published attacks against M2M devices

Filed under: Opinions,Work — JLG @ 12:00 AM

Last year I became interested in working with M2M (machine to machine) systems.  M2M is the simple idea of two computers communicating directly with each other without a human in the loop.

As an example of M2M, consider a so-called smart utility meter that is able both to transmit load information in real time to a server at the power company, and to receive command and control instructions in return from the server.  (The actual communications could take place over a cellular network, a powerline network, or perhaps even over the Internet using a telephony or broadband connection.)  An excerpt from Wikipedia’s smart meter article demonstrates the types of new functionality that are enabled through real-time bidirectional communications with utility meters:

The system [an Italian smart meter deployment] provides a wide range of advanced features, including the ability to remotely turn power on or off to a customer, read usage information from a meter, detect a service outage, change the maximum amount of electricity that a customer may demand at any time, detect unauthorized use of electricity and remotely shut it off, and remotely change the meter’s billing plan from credit to prepay, as well as from flat-rate to multi-tariff.

Of course, with great power come great opportunities for circumventing the security measures engineered into M2M components.  In an environment where devices are deployed for years, where device firmware can be difficult to update, and where devices are often unattended and not physically well secured—meaning potential attackers may have complete physical access to your hardware—it can be very challenging to implement low-impact, cost-effective protections.

Responding to this challenge, several researchers have given presentations or released papers that describe fascinating attacks against the security components of M2M systems.  In one well-known example, Barnaby Jack explained the technical details behind several attacks he created that reprogram Automated Teller Machines (and demonstrated live attacks against two real ATMs on stage) in a presentation at the Black Hat USA 2010 security conference.  In another, Jerome Radcliffe described at Black Hat USA 2011 how he reverse engineered the communication protocols that are used to configure an insulin pump and to report glucose measurements to the pump.

In reviewing these published attacks, I’ve developed a threefold taxonomy to help M2M engineers consider and mitigate risks to the security architectures they develop.  In each category I list three examples of published attack techniques:

1. Attacks against M2M devices

A. Use a programming or debugging interface to read or reprogram a device.
B. Extract information from the device by examining buses or individual components.
C. Replace or bypass hardware or software pieces on the device in order to circumvent policy. 

2. Attacks against M2M services

A. Inject false traffic into the M2M network in order to induce a desired action.
B. Analyze traffic from the M2M network to violate confidentiality or user protection.
C. Modify component operation to fraudulently receive legitimate M2M services. 

3. Attacks against M2M infrastructure

A. Extract subscriber information from M2M infrastructure control systems.
B. Identify and map M2M network components and services.
C. Execute denial-of-service (DoS) attacks against infrastructure or routing components.

I’ve written a whitepaper that explores the technical details of three published attacks in the first category:  A survey of published attacks against machine-to-machine devices, services, and infrastructure—Part 1: Devices.  (TCS intends to publish parts 2 and 3 later this year, covering attacks against M2M services and infrastructure.)

My goal with the whitepapers is to illustrate the hacker methodology—the clever, creative, and patient techniques an adversary may use to attack, bypass, or circumvent your M2M security infrastructure.  (As a side note, I am grateful to the M2M security researchers and hackers who have been willing to share their methodology and results publicly.)

The key takeaway is to think like an attacker, preparing in advance for when and how security systems fail.  A Maginot Line strategy for M2M may not be effective in the long term.  I often recommend that such planning include (a) a good security posture before you’re attacked, (b) good logging, auditing, and detection for when you’re attacked, and (c) a good forensics and remediation capability for after you’re attacked.

February 24, 2013

Nondeductible IRA contribution or mortgage prepayment?

Filed under: Homeownership — JLG @ 10:28 PM

Having just filed our 2012 taxes, I’m pondering an interesting question.  Over the past year I set aside $5,000 that I’d planned to put into a traditional IRA as a nondeductible (after-tax) contribution.  It occurs to me that I could instead use the money as a principal prepayment on our mortgage.  The question is:  Should I?

[The IRA contribution is nondeductible because I participated in my company’s 401(k) plan in 2012.  As a result I am not eligible to deduct my IRA contributions from our federal taxes.  As the IRS explains: “If you were covered by a retirement plan (qualified pension, profit-sharing (including 401(k)), annuity, SEP, SIMPLE, etc.) at work or through self-employment, your IRA deduction may be reduced or eliminated. But you can still make contributions to an IRA even if you cannot deduct them.”]

If every year I chose $5,000 mortgage prepayments over nondeductible IRA contributions, then:

Pros:

  • Faster payoff.  We would pay off the mortgage in 20 years (or fewer) instead of 30.
  • Less interest.  We would save at least $75,000 in interest payments over the life of the loan.
  • Zero risk.  Each prepayment would yield a guaranteed return of 3.625%/year (our mortgage rate) through 2042.
  • More equity.  We would have more equity in the house if we decide to sell (if we move or “trade up”) or if we need to do a cash-out refinance.

Cons:

  • Underfunding retirement?  We would be reducing our retirement savings.  (However, during years 21-30 we could pay ourselves “mortgage payments” directly into our retirement savings.  If we are disciplined about it, those payments would make up much of the difference.)
  • Lost tax deferral.  The prepayment wouldn’t experience tax-deferred growth as it would in an IRA.
  • Lower returns?  There’s the chance that an IRA would grow in value significantly more than 3.625%/year.  Additionally, the IRA would continue to grow (or shrink) in value until withdrawn (or until we die, I suppose), whereas a prepayment’s “zero-risk return” ends when the 30-year mortgage term ends.
  • Inflation hedge.  If the dollar experiences high inflation in the next few years, we’d be better off if we were carrying lots of debt (i.e., a high mortgage balance).
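These trade-offs can be sized with a minimal amortization sketch.  The post doesn’t state our loan balance, so the $300,000 below is a hypothetical figure; only the 3.625% rate and 30-year term come from the post:

```python
def amortize(principal, annual_rate, years, annual_prepay=0.0):
    """Return (months_to_payoff, total_interest) for a fixed-rate mortgage,
    applying an optional extra principal payment every 12th month."""
    r = annual_rate / 12
    n = years * 12
    payment = principal * r / (1 - (1 + r) ** -n)  # standard annuity formula
    balance, interest, month = principal, 0.0, 0
    while balance > 0.01:
        month += 1
        i = balance * r
        interest += i
        balance -= payment - i
        if month % 12 == 0:
            balance -= annual_prepay
    return month, interest

# Hypothetical $300,000 balance at the post's 3.625% rate, 30-year term:
base_months, base_interest = amortize(300_000, 0.03625, 30)
fast_months, fast_interest = amortize(300_000, 0.03625, 30, annual_prepay=5_000)
```

Under those assumed numbers the $5,000 annual prepayment cuts roughly a decade off the payoff and saves tens of thousands of dollars in interest, consistent with the figures above; the exact savings scale with the actual balance.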

Either way I wouldn’t be putting all of our retirement eggs into one basket, in that I’m already making contributions to a 401(k) retirement plan.  (By the way, the best answer I’ve found to “how much should I save for retirement?” is in Rande Spiegelman’s article “Play the Percentages”.)

Two things made me start thinking about this trade-off between mortgages and nondeductible IRAs:

  1. the notion that both are illiquid ways to invest for retirement, and
  2. this Mortgage Professor article about mortgage repayment as a long-term investment.

The Professor addresses a similar question in his article Roth IRA contributions vs. mortgage prepayment.  (My question is about traditional IRAs.)  The only other relevant advice I’ve found so far is in this paper comparing mortgage prepayment with pre-tax retirement contributions.  (My question is about nondeductible traditional IRAs.)

Until this year my strategy for retirement investing has been:

  1. If you have a 401(k) (or similar) plan with company matching contributions, first make contributions up to the company match.  (For example, if your company matches up to $3,000 of contributions then put your first $3,000 into the 401(k).)
  2. Next, make contributions to a Roth IRA up to the maximum allowable amount.
  3. Next, max out your pre-tax contributions to the 401(k).
  4. Next, make deductible contributions to a traditional IRA up to the maximum allowable amount.
  5. Next, make nondeductible contributions to a traditional IRA up to the maximum allowable amount.
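The five steps form a simple priority waterfall, sketched below with hypothetical dollar caps (real 401(k) and IRA contribution limits change yearly and depend on eligibility, and Roth and traditional IRA contributions share a combined limit):

```python
def allocate(total, buckets):
    """Fill each (name, cap) bucket in priority order until the money runs out."""
    plan = []
    for name, cap in buckets:
        amount = min(total, cap)
        if amount > 0:
            plan.append((name, amount))
            total -= amount
    return plan

# Hypothetical caps, in the spirit of the five steps:
steps = [
    ("401(k) up to employer match", 3_000),
    ("Roth IRA", 5_000),
    ("401(k) pre-tax remainder", 14_000),
    ("Traditional IRA (deductible)", 0),      # e.g., ineligible this year
    ("Traditional IRA (nondeductible)", 5_000),
]
plan = allocate(20_000, steps)
# With $20,000 to save, the match and Roth fill first and the
# remaining $12,000 goes to pre-tax 401(k) contributions.
```

Swapping step 5 for mortgage prepayment is just a matter of replacing the last bucket.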

So the conundrum is whether I should replace step 5 (or even step 4) with “Next, make prepayments against your mortgage principal.”  Arguably I have until April 15 to decide, although if I choose prepayment then every month’s delay costs me $500 more in interest paid over the life of the loan.

January 25, 2013

The 5 P’s of cybersecurity

Filed under: Opinions,Work — JLG @ 12:00 AM

Earlier this month I had the privilege of speaking at George Mason University’s cybersecurity innovation forum.  The event was billed as a “series of ten-minute presentations by cybersecurity experts and technology innovators from throughout the region. Presentations will be followed by a panel discussion with plenty of opportunity for discussion and discovery. The focus of the evening will be on cybersecurity innovations that address current and evolving challenges and have had a real, measurable impact.”

(How does one prepare for a 10-minute talk?  The Woodrow Wilson quote came to mind: “If I am to speak ten minutes, I need a week for preparation; if fifteen minutes, three days; if half an hour, two days; if an hour, I am ready now.”)

Given my experience with network security job training here at TCS, I decided to talk about the approach we take to prepare students for military cybersecurity missions.  It turned out to be a good choice:  The topic was well received by the audience and provided a nice complement to the other speakers’ subjects (botnet research, security governance, and security economics).

My talk had the tongue-in-cheek title The 5 P’s of cybersecurity: Preparing students for careers as cybersecurity practitioners.  I first learned of the 5 P’s from my college roommate who captained the Auburn University rowing team.  He used the 5 P’s (a reduction of the 7 P’s of the military) to motivate his team:

Poor Preparation = Piss Poor Performance

In the talk I asserted that this equation holds as true for network security jobs as it does for rowing clubs.  A cybersecurity practitioner who is not well prepared—in particular, one who does not understand the “why” of things happening on their network—will perform neither effectively nor efficiently at their job.  And as with rowing, network security is often a team sport:  One ill-prepared team member will often drag down the rest of the team.

I mentioned how my colleagues at TCS (and many of our competitors and partners in the broad field of “advanced network security job training”) also believe in the equation, perhaps even more so given that many of them are former or current practitioners themselves.  I have enjoyed working alongside instructors who are passionate about doing the best job they can.  Many subscribe to an axiom that my father originally used to describe his work as a high-school teacher:

“If my student has failed to learn, then I have failed to teach.”

After presenting this axiom I discussed several principles TCS has adopted to guide our advanced technical instruction, including:

  1. Create mission-derived course material with up-to-date exercises and tools.  We hire former military computer network operators to develop our course content, in part to ensure that what we teach in the classroom matches what’s currently being used in the field.  When new tools are published, or new attacks make the news, our content creators immediately start modifying our course content—not simply replacing the old content with the new, but highlighting trends in the attack space and involving students in speculating on what they will encounter in the future.
  2. Engage students with hands-on cyber exercises.  Death by PowerPoint is useless for teaching technical skills.  Even worse for technical skills (in my opinion, not necessarily shared by TCS) is computer-based training (CBT).  Our Art of Exploitation training is effective because we mix brief instructor-led discussions with guided but open-ended hands-on exercises using real attacks and real defensive methodologies on real systems.  The only way to become a master programmer is to author a large and diverse body of software; the only way to become a master cybersecurity practitioner is to encounter scenarios, work through them, and be debriefed on your performance and what you overlooked.
  3. Training makes a practitioner better, and practitioners make training better.  A critical aspect of our training program is that our instructors aren’t simply instructors who teach fixed topics.  Our staff regularly rotate between jobs where they perform the cybersecurity mission—for example, on our penetration testing and malicious software analysis teams—and jobs where they teach that mission using the skills they maintain in the first role.  Between our mission-relevant instructors and a training environment set up to emulate on-the-job activities, our students’ classroom experience builds toward what they will encounter months later on the job.

The audience turned out to be mostly non-technical but I still threw in an example of the “why”-oriented questions that I’ve encouraged our instructors to ask:

The first half of an IPv6 address is like a ZIP code.  The address simply tells other Internet computers where to deliver IPv6 messages.  So the IPv6 address/ZIP code for George Mason might be 12345.

Your IPv6 address is typically based on your Internet service provider (ISP)’s address.  In this example, George Mason’s ISP’s IPv6 address is 1234.  (Continuing the example, another business in Fairfax, Virginia, served by the same ISP might have address 12341; another might have 12342; et cetera.)

However, there is a special kind of address—a provider-independent address—that is not based on the ISP.  George Mason could request the provider-independent address 99999.  Under this scheme GMU would still use the same ISP (1234), they would just use an odd-duck address (99999 instead of 12345).

Question A:  Why is provider-independent addressing good for George Mason?

Question B:  Why is provider-independent addressing hard for the Internet to support?
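The analogy above can be made concrete with Python’s standard ipaddress module, using the documentation prefix 2001:db8::/32 in place of the made-up numbers (these prefixes are illustrative, not George Mason’s real allocations):

```python
import ipaddress

# The ISP's allocation ("1234" in the analogy): a /32 block.
isp_block = ipaddress.ip_network("2001:db8::/32")

# A provider-assigned customer prefix ("12345"): carved out of the ISP's block.
customer = ipaddress.ip_network("2001:db8:1::/48")

# A provider-independent prefix ("99999"): unrelated to the ISP's block.
pi_block = ipaddress.ip_network("2001:db9::/48")

print(customer.subnet_of(isp_block))  # True: covered by the ISP's aggregate
print(pi_block.subnet_of(isp_block))  # False: must be announced separately
```

That last line hints at Question B: a provider-independent prefix can’t be aggregated into the ISP’s single routing announcement, so each one adds a separate entry to the global routing tables.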

Overall I had a great evening in Virginia and I am thankful to the staff at George Mason for having extended an invitation to speak.
