Jagged Thoughts | Dr. John Linwood Griffin

January 20, 2019

ShmooCon 2018

Filed under: Opinions,Reviews,Work — JLG @ 11:51 PM

Note: This report is one year out of date. I attended ShmooCon in Washington, DC, last year, wrote up my thoughts, decided to wait until the videos were available before posting, and then plumb forgot about it. I see that this year’s convention was this weekend, so now seems like a good opportunity to finally make good on posting…

In general I was underwhelmed. This was my second time attending, and ShmooCon seems to be more thought experiment (“here’s some things I thought of, and some first steps in that direction”) than hey-look-at-this-cool-result-you-can-learn-from. But most attendees seemed to enjoy the content — and for me, talking one-on-one to folks was the highlight of the con (as usual) — and it was as hard as ever to score tickets — so ShmooCon is clearly not waning in popularity.

There were some cool takeaways for me, including:

  • A talk by someone who set up ~$400/mo worth of cloud instances (~80 nodes across ~8 hosting providers, one in each zone they provide) to collect metrics on things-that-scan-the-entire-IPv4-address-space (#noisynet). It was nice to see some quantitative numbers on the minimum cost of entry for rolling your own round-the-world distributed collection of servers; $400 sounded surprisingly affordable. He also pointed out that the upstream bandwidth cost of sending a SYN packet to every IPv4 address is around 280 GB, which also sounds surprisingly affordable (see the back-of-the-envelope sketch after this list).
  • A talk by a lawyer (#blinkblink) on how in the US there is a different legal standard (so far) for requiring you to unlock your phone with a passcode (something you know) vs requiring you to unlock with your thumbprint or face (something you are). In particular, if you use these technologies, you should probably also learn how to shut them off in a hurry (such as triggering iOS Emergency Mode by rapidly pressing the lock button 5 times).
  • A talk on robot attack and defense (#robotsattack) that was much more subtle than the kinds of attacks you’d expect. For example psychological attacks (the speaker retested the infamous Milgram experiment and found that people are pretty susceptible to being railroaded by a robot) and social engineering (she found that robots-that-pretend-to-deliver-boxes-of-cookies were much more likely to be allowed unescorted into locked spaces).
  • I didn’t see the talk, but I heard people discussing EFF’s talk about their investigation into a malware espionage campaign (#duckduckapt). Their whitepaper (info at https://www.eff.org/press/releases/eff-and-lookout-uncover-new-malware-espionage-campaign-infecting-thousands-around) has interesting details such as how they pinpointed a physical building that appears to be responsible for the malware’s command-and-control (C2) infrastructure by identifying the C2 test nodes, probing which SSIDs were visible from those nodes, and cross-referencing those SSIDs on a Wi-Fi geolocation service.
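
As a sanity check on that bandwidth figure, here’s a quick back-of-the-envelope sketch (my own arithmetic, not the speaker’s; the 66-byte packet size is an assumption covering a minimal 40-byte SYN plus typical TCP options and framing):

    # Rough upstream cost of SYN-scanning the entire IPv4 address space.
    total_addresses = 2 ** 32   # every IPv4 address
    bytes_per_syn = 66          # assumed: 20 (IP) + 20 (TCP) + options/framing
    total_gb = total_addresses * bytes_per_syn / 1e9
    print(f"{total_gb:.0f} GB")  # ~283 GB, consistent with the ~280 GB quoted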

One regret is that I missed the plenary session in which the future of cryptocurrency was debated. In one cryptocurrency conversation I had over the weekend, my observation that serious companies are putting serious money into serious blockchain R&D was countered by someone else’s observation that there may not be enough power in the world to run all the blockchain infrastructure if it continues to grow uncontained. I can’t yet predict what blockchain has in store for us but I do wish that I received 0.01 BTC each time someone voices a strong opinion about blockchain.

Finally, if you only watch one talk from ShmooCon (when they’re up on the tubes, in ~3 weeks?), I recommend:

  • A somewhat extemporaneous talk by a seemingly-well-known exploiter/0day developer (#forging). Listening to the speaker talk casually about how everything-you-thought-was-pure-and-good-in-the-world-actually-isn’t was jaw-dropping, even for someone who was a computer security researcher in a previous life. Example: He discovered a way to turn a laptop camera on and off so quickly that the charging circuit driving the indicator LED never energized, so the light never came on. He also passed along an anecdote of how widely an organization opens its wallet once he uses that trick to provide a screenshot of their target, taken from the target’s own computer.

November 7, 2015

Every Eighteen Months, part 2: Career coaching

Filed under: Opinions,Work — JLG @ 5:42 PM

It can be an uphill battle to get personalized advice about your job performance and your career progress.  Earlier this week a friend and I texted about the challenge of extracting good feedback from management in our modern litigation-averse corporate America.

He noted that in his job he hasn’t been getting the feedback he’s been asking for:

“I like the idea of knowing how I’m actually doing.  I credit honest feedback at every stage of my life where I claim to have grown.”

I replied:

“Hard to get that from an employer.  Good to try to cultivate a mentor wherever you go—someone senior but not in your org chart—but even then I’ve never been able to get the kind of constructive criticism that I’d like from anyone other than same-level colleagues.”

So the trick is to find those mentors.  I appreciate constructive criticism from anybody willing to lob it my way, but I especially appreciate getting it from successful entrepreneurial types (S.E.T.) who are living their dream.

Along those lines I recently had a great discussion with another S.E.T. acquaintance about my career goals, plans, and aspirations.  I described how my long-term goal has been to found and lead my own company (preferably successfully, that second time around).  I explained that as a step in that direction, my current dream job would have me being an entrepreneurial engineer.  I noted that I’ve been seeking out jobs where I am able to come up with ideas, pursue the good ones, and (if they fail) come up with more ideas and pursue them—jobs where I can:

  • have a product vision unique in the industry,
  • execute independently on my vision by applying my (and my colleagues’) unique and most prominent skills, and my interesting background, toward realizing the vision, and
  • increase my employer’s profits by the boatload (not all ideas are big wins, but all big wins come from ideas).

Unfortunately, such jobs are few and far between.  While there are some great aspects of my current job (it’s challenging and fast-paced, exposes me to new-to-me parts of the business, and immerses me in new technologies), I’m not in a role in which I am expected to—or in which I or my peers have largely been able to—have a broad degree of product, business, and technical influence.  So I asked my S.E.T. friend:  How do I sell myself and my capability to advance into more senior roles—what do I need to convey about myself, and to whom?

The answer was to talk with a career coach.

Great idea!  One I’d never considered before.  And via a friend I came across a good coach willing to squeeze me in for one session.  (Protip: If your prospective coach suggests that you meet over a slice of pizza and a glass of beer, you know you’ve found a good one.)

Here were my takeaways from my chat with a career coach:

Do I need more experience in startups to do another startup?

He suggests not.  If I have an idea and can grab people and start pursuing it, the sink-or-swim-myself model is apparently just as good as watching someone else sink or swim.

He was definitely big on the idea of me going to startup(s) though.  He noted that if I feel that my career has stalled, that’s because it has stalled, and at mid-sized companies filled with relatively young people there just aren’t going to be opportunities that open up for me for vertical movement.

He also cautioned me to do my due diligence before going to a startup.  Know the founders, and know whether they’re in good shape technically—are they hiring me to grow or to survive?

How do I market myself to land the opportunities I crave?

Change my resume from a laundry list of things I’ve done to an expression of who I want to be.  De-technicalize it, replacing the jargon with evidence of leadership—in general, present myself in my capacity to lead.  Clarify through the top-of-page-1 material that I’m looking to be considered for executive leadership, and that (separately) I have the credentials of coming from a solid technical background.

As an example, if I’m being evaluated for a CTO role then a CEO is going to look at whether I led a team through something instead of just the individual things I’ve done.  Don’t make the reader search for it; put it clearly and up front.  If you want to be an executive you need to tell the reader that you’re an executive.

Simultaneously, update my LinkedIn profile to be a resume supplement, focusing on brief, powerful statements in the “summary” section at the top as an attention-grabber.  Also create an AngelList profile.

What might I get out of business school?  (The coach works frequently with current and former business school students, so it seemed apropos to ask.)

If I want to inject speed into an executive career path—moving through a career at a faster pace than I could expect by simply working through promotions—business school can provide that.  But don’t look at school as all you need or as the last piece of the puzzle; much of what’s taught in the classroom is what you’d have learned in the workplace anyway.

There are too many B-school graduates already and they’re mostly in their mid-20s.  It’s less of a unique credential/ticket than it used to be.

He espouses a go-big-or-go-home philosophy: If you’re gonna go to business school, and if you can afford it, go full time so that you get an intense, immersive, fast-paced experience.

What are my next steps?

Make an effort to land that next job that gets you on the right track to find the position you want.  Look for director or VP roles.  Don’t move horizontally.

Always be job hunting; never settle down.  Submit your resume early and often.  Write your CV and cover letter in such a way that you can recycle them in 5 minutes for any new opportunity you see.

(When asked about whether I should try to find a longer-term coach to work with.)  Don’t worry about finding a regular coach for now.  He doesn’t know anyone who does coaching on the side (except coaching for C-level executives) but in any event he feels that being in the game (circulating, getting interviews under my belt) would be more useful for me than being on the sideline (talking to a coach).

Overall a great conversation.  The main takeaway matches something I’ve heard before (though it’s hard to do):  Market yourself by presenting your past in the context of the job you want, not necessarily by conveying the minutiae of the job you had.  Describe your previous work by highlighting the things you did that you did well, that you got something out of, that you enjoyed, and that are most relevant to the position you seek.

April 8, 2013

A survey of published attacks against M2M devices

Filed under: Opinions,Work — JLG @ 12:00 AM

Last year I became interested in working with M2M (machine to machine) systems.  M2M is the simple idea of two computers communicating directly with each other without a human in the loop.

As an example of M2M, consider a so-called smart utility meter that is able both to transmit load information in real time to a server at the power company, and to receive command and control instructions in return from the server.  (The actual communications could take place over a cellular network, a powerline network, or perhaps even over the Internet using a telephony or broadband connection.)  An excerpt from Wikipedia’s smart meter article demonstrates the types of new functionality that are enabled through real-time bidirectional communications with utility meters:

The system [an Italian smart meter deployment] provides a wide range of advanced features, including the ability to remotely turn power on or off to a customer, read usage information from a meter, detect a service outage, change the maximum amount of electricity that a customer may demand at any time, detect unauthorized use of electricity and remotely shut it off, and remotely change the meter’s billing plan from credit to prepay, as well as from flat-rate to multi-tariff.

Of course, with great power come great opportunities for circumventing the security measures engineered into M2M components.  In an environment where devices are deployed for years, where device firmware can be difficult to update, and where devices are often unattended and not physically well secured—meaning potential attackers may have complete physical access to your hardware—it can be very challenging to implement low-impact, cost-effective protections.

Responding to this challenge, several researchers have given presentations or released papers that describe fascinating attacks against the security components of M2M systems.  In one well-known example, Barnaby Jack explained the technical details behind several attacks he created that reprogram Automated Teller Machines (and demonstrated live attacks against two real ATMs on stage) in a presentation at the Black Hat USA 2010 security conference.  In another, Jerome Radcliffe described at Black Hat USA 2011 how he reverse engineered the communication protocols that are used to configure an insulin pump and to report glucose measurements to the pump.

In reviewing these published attacks, I’ve developed a threefold taxonomy to help M2M engineers consider and mitigate risks to the security architectures they develop.  In each category I list three examples of published attacks:

1. Attacks against M2M devices

A. Use a programming or debugging interface to read or reprogram a device.
B. Extract information from the device by examining buses or individual components.
C. Replace or bypass hardware or software pieces on the device in order to circumvent policy. 

2. Attacks against M2M services

A. Inject false traffic into the M2M network in order to induce a desired action.
B. Analyze traffic from the M2M network to violate confidentiality or user protection.
C. Modify component operation to fraudulently receive legitimate M2M services. 

3. Attacks against M2M infrastructure

A. Extract subscriber information from M2M infrastructure control systems.
B. Identify and map M2M network components and services.
C. Execute denial-of-service (DoS) attacks against infrastructure or routing components.

I’ve written a whitepaper that explores the technical details of three published attacks in the first category:  A survey of published attacks against machine-to-machine devices, services, and infrastructure—Part 1: Devices.  (TCS intends to publish parts 2 and 3 later this year, covering attacks against M2M services and infrastructure.)

My goal with the whitepapers is to illustrate the hacker methodology—the clever, creative, and patient techniques an adversary may use to attack, bypass, or circumvent your M2M security infrastructure.  (As a side note, I am grateful to the M2M security researchers and hackers who have been willing to share their methodology and results publicly.)

The key takeaway is to think like an attacker: prepare in advance for when and how security systems fail.  A Maginot Line strategy for M2M may not be effective in the long term.  I often recommend that such planning include (a) a good security posture before you’re attacked, (b) good logging, auditing, and detection for when you’re attacked, and (c) a good forensics and remediation capability for after you’re attacked.

January 25, 2013

The 5 P’s of cybersecurity

Filed under: Opinions,Work — JLG @ 12:00 AM

Earlier this month I had the privilege of speaking at George Mason University’s cybersecurity innovation forum.  The format was a “series of ten-minute presentations by cybersecurity experts and technology innovators from throughout the region. Presentations will be followed by a panel discussion with plenty of opportunity for discussion and discovery. The focus of the evening will be on cybersecurity innovations that address current and evolving challenges and have had a real, measurable impact.”

(How does one prepare for a 10-minute talk?  The Woodrow Wilson quote came to mind: “If I am to speak ten minutes, I need a week for preparation; if fifteen minutes, three days; if half an hour, two days; if an hour, I am ready now.”)

Given my experience with network security job training here at TCS, I decided to talk about the approach we take to prepare students for military cybersecurity missions.  It turned out to be a good choice:  The topic was well received by the audience and provided a nice complement to the other speakers’ subjects (botnet research, security governance, and security economics).

My talk had the tongue-in-cheek title The 5 P’s of cybersecurity: Preparing students for careers as cybersecurity practitioners.  I first learned of the 5 P’s from my college roommate who captained the Auburn University rowing team.  He used the 5 P’s (a reduction of the 7 P’s of the military) to motivate his team:

Poor Preparation = Piss Poor Performance

In the talk I asserted that this equation holds equally true for network security jobs as it does for rowing clubs.  A cybersecurity practitioner who is not well prepared—in particular who does not understand the “why” of things happening on their network—will perform neither effectively nor efficiently at their job.  And as with rowing, network security is often a team sport:  One ill-prepared member can drag down the rest of the team.

I mentioned how my colleagues at TCS (and many of our competitors and partners in the broad field of “advanced network security job training”) also believe in the equation, perhaps even more so given that many of them are former or current practitioners themselves.  I have enjoyed working alongside instructors who are passionate about the importance of doing the best job they can.  Many subscribe to an axiom that my father originally used to describe his work as a high-school teacher:

“If my student has failed to learn, then I have failed to teach.”

After presenting this axiom I discussed several principles TCS has adopted to guide our advanced technical instruction, including:

  1. Create mission-derived course material with up-to-date exercises and tools.  We hire former military computer network operators to develop our course content, in part to ensure that what we teach in the classroom matches what’s currently being used in the field.  When new tools are published, or new attacks are put in the news, our content-creators immediately start modifying our course content—not simply to replace the old content with the new, but rather to highlight trends in the attack space & to involve students in speculating on what they will encounter in the future.
  2. Engage students with hands-on cyber exercises. Death by PowerPoint is useless for teaching technical skills.  Even worse for technical skills (in my opinion, not necessarily shared by TCS) is computer-based training (CBT).  Our Art of Exploitation training is effective because we mix brief instructor-led discussions with guided but open-ended hands-on exercises using real attacks and real defensive methodologies on real systems.  The only way to become a master programmer is to author a large and diverse body of software; the only way to become a master cybersecurity practitioner is to encounter scenarios, work through them, and be debriefed on your performance and what you overlooked.
  3. Training makes a practitioner better, and practitioners make training better.  A critical aspect of our training program is that our instructors aren’t simply instructors who teach fixed topics.  Our staff regularly rotate between jobs where they perform the cybersecurity mission—for example, by participating in our penetration testing and malicious software analysis teams—and jobs where they teach that mission using the skills they maintain in the first role.  Between our mission-relevant instructors and our training environment set up to emulate on-the-job activities, our students’ experience in the classroom builds toward what they will experience months later on the job.

The audience turned out to be mostly non-technical but I still threw in an example of the “why”-oriented questions that I’ve encouraged our instructors to ask:

The first half of an IPv6 address is like a ZIP code.  The address simply tells other Internet computers where to deliver IPv6 messages.  So the IPv6 address/ZIP code for George Mason might be 12345.

Your IPv6 address is typically based on your Internet service provider (ISP)’s address.  In this example, George Mason’s ISP’s IPv6 address is 1234.  (Continuing the example, another business in Fairfax, Virginia, served by the same ISP might have address 12341; another might have 12342; et cetera.)

However, there is a special kind of address—a provider-independent address—that is not based on the ISP.  George Mason could request the provider-independent address 99999.  Under this scheme GMU would still use the same ISP (1234), they would just use an odd-duck address (99999 instead of 12345).

Question A:  Why is provider-independent addressing good for George Mason?

Question B:  Why is provider-independent addressing hard for the Internet to support?

Overall I had a great evening in Virginia and I am thankful to the staff at George Mason for having extended an invitation to speak.

December 14, 2012

I can’t keep on renaming my dog

Filed under: Opinions,Work — JLG @ 12:00 AM

A clever meme hit the Internet this week:

“Stop asking me to change my password. I can’t keep on renaming my dog.”

If you (or the employees you support) aren’t using a password manager, clear off your calendar for the rest of the day and use the time to set one up.  It’s easy security.  Password managers make it simple to create good passwords, to change your passwords when required, to use different passwords on every site, and to avoid reusing old passwords.

The upside to using a password manager:

  • You only need to remember two strong passwords.  (One to log into the computer running the password manager, and one for the password manager itself.)

The downside to using a password manager:

  • All your eggs are in one basket.  (Therefore you need to pay close attention to choosing a good master password, protecting that password, and backing up your stored passwords.)

Generally speaking a password manager works as follows:

  1. You provide the password manager with a master passphrase.
  2. The password manager uses your master passphrase to create (or read) an encrypted file that contains your passwords and other secrets.

(For deeper details, see KeePass’s FAQ for a brief technical explanation or Dashlane’s whitepaper for a detailed technical explanation.  For example, in the KeePass FAQ the authors describe how the KeePass product derives 256-bit Advanced Encryption Standard [AES] keys from a user’s master passphrase, how salt is used to protect against dictionary attacks, and how initialization vectors are used to protect multiple encrypted files against known-plaintext attacks.  Other products likely use a similar approach to deriving and protecting keys.)
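
As a concrete (and much simplified) sketch of that second step, here’s roughly how a manager might derive an encryption key from a master passphrase; this illustrates the general PBKDF2-with-salt approach described in those documents, not any particular product’s implementation:

    import hashlib, os

    def derive_key(master_passphrase, salt, iterations=600_000):
        # Stretch the passphrase into a 256-bit key; the salt and iteration
        # count are stored alongside the encrypted password file.
        return hashlib.pbkdf2_hmac("sha256", master_passphrase.encode(),
                                   salt, iterations, dklen=32)

    salt = os.urandom(16)   # new random salt when the vault is created
    key = derive_key("correct horse battery staple", salt)
    # 'key' would then serve as the AES-256 key that encrypts the vault.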

Password managers often also perform useful convenience functions for you—inserting stored passwords into your web browser automatically; generating strong passwords of any desired length; checking your usernames against hacker-released lists of pwned websites; evaluating the strength of your existing passwords; leaping tall buildings in a single bound; etc.

The root of security with password managers is in protecting your master password.  There are three main considerations to this protection:

(A) Choose a good passphrase. 

I’m intentionally using the word “passphrase” instead of “password” to highlight the need to use strong, complex, high-entropy text as your passphrase.  (Read my guidance about strong passphrases in TCS’s Better Passwords, Usable Security whitepaper.  Or if you don’t read that whitepaper, at least read this webcomic.)

Your master passphrase should be stronger than any password you’re currently using—stronger than what your bank requires, stronger than what your employer requires.  (However, it shouldn’t be onerously long—you need to memorize it, you will need to type it every day, and you will likely need to type it on mobile devices with cramped keyboards.)  I recommend a minimum of 16 characters for your master passphrase.
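
For a sense of why 16 characters:  a 16-character passphrase drawn truly at random from the ~95 printable ASCII characters carries about 16 × log2(95) ≈ 105 bits of entropy, comfortably beyond brute-force reach.  (Human-chosen passphrases have far less entropy per character, which is exactly why the recommendation errs on the long side.)  A one-line check of that arithmetic:

    import math
    print(16 * math.log2(95))   # ~105.1 bits, assuming truly random characters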

(Side note:  For similar reasons, another place where you should use stronger-than-elsewhere passphrases is with full-disk encryption products, such as TrueCrypt or FileVault, where you enter in a password at boot time that unlocks the disk’s encryption key.  As Microsoft’s #7 immutable law of security states, encrypted data is only as secure as its decryption key.)

(B) Don’t use your passphrase in unhygienic environments.

An interesting concept in computer security is the $5 wrench.  Attackers, like electricity, follow the path of least resistance.  If they’ve chosen you as their target, and if they aren’t able to use cryptographic hacking tools to obtain your passwords, then they’ll try other approaches—perhaps masquerading as an IT administrator and simply asking you for your password, or sending you a malicious email attachment to install a keylogger onto your computer, or hiding a pinhole spy camera in the light fixture above your desk.  So even with strong encryption you are still at risk to social engineering attacks targeting your passwords and password manager.

One way to reduce the risk of revealing your passphrase is to avoid typing it into computer systems over which you have neither control nor trust, such as systems in Internet cafes, or at airport kiosks, or at your Grandma Edna’s house.  To paraphrase public-service messages from the 1980s, when you give your passphrase to an untrusted computer you could be giving that passphrase to anyone who used that computer before you.

For situations where you simply must use a computer of dubious provenance—say, you’re on vacation, you take a wrong turn at Albuquerque, your wallet and laptop get stolen, and you have to use your password manager at an Internet cafe to get your credit card numbers and bank contact information—some password managers provide features like one-time passwords, screen keyboards, multifactor authentication, and web-based access to help make lemonade out of life’s little lemons.

(C) Make regular backups of your encrypted file.

If you have a strong passphrase [(A)] and you keep your passphrase secret [(B)] then it doesn’t matter where copies of your encrypted file are stored.  The strong encryption means that your file won’t be susceptible to a brute-force or password-guessing attack even if an attacker obtains a copy of your file.  (Password management company LastPass had a possible security breach of their networks in 2011.  Even so, users with strong passphrases had “no reason to worry.”)  As such you are safely able to make backup copies of your encrypted file and to store those backups in a convenient manner.

Some password managers are designed to store your encrypted file on your local computer.  Other managers (notably LastPass) store your encrypted file on cloud servers managed by the same company, making it easier to synchronize the password file across all devices you use.  Still other managers integrate easily with third-party cloud storage providers (notably Dropbox) for synchronization across multiple devices, or support direct synchronization between two devices over a Wi-Fi network.  (In all remote-storage cases I’ve found, the file is always encrypted locally before any portion of the file is uploaded into the cloud.)

Whichever type of manager you use, be aware that that one file holds your only copy of all of your passwords—it is critical that you not lose access to the contents of the file.  Computers have crashed.  Password management companies have disappeared (Logaway launched on May 4, 2010, and ceased operations on February 2, 2012).  Cloud services have lost data and have experienced multi-day disruptions.  Protect yourself by regularly backing up your encrypted file, for example by copying it onto a USB dongle (whenever you change or add a password) or by printing a hard copy every month to stash in a safe deposit box.

If you maintain a strict separation between your home accounts and your work accounts—for example to keep your employer from snooping and obtaining your Facebook password—simply set up two password managers (one on your home laptop, the other on your work PC) using two unique passphrases as master keys.

Password manager software is easy to set up and use.  The biggest problem you’ll face is choosing from among the cornucopia of password managers.  A partial list I just compiled, in alphabetical order, includes: 1Password, AnyPassword, Aurora Password Manager, Clipperz, DataVault, Dashlane, Handy Password, Kaspersky Password Manager, KeePass, Keeper, LastPass, Norton Identity Safe, Paranotic Password Manager, Password Agent, Password Safe, Password Wallet, PINs, RoboForm, Secret Server, SplashID, Sticky Password, TK8 Safe, and Universal Password Manager.  There is even a hardware-based password manager available.

Your top-level considerations in choosing a password manager are:

  1. Does it run on your particular OS or mobile device?  (Note that some password managers charge, or charge extra, to support synchronization with mobile devices.)
  2. Do you already use Dropbox on all your devices?  If not, consider a manager that provides its own cloud storage (LastPass, RoboForm, etc.).  If so, and only if you would prefer to manage your own encrypted file, choose a service that supports Dropbox (1Password, KeePass, etc.).

I don’t recommend or endorse any particular password manager.  I’ve started using one of the “premium” (paid) password managers and am astonished at how much better any of the managers are over what I’d been using before (an unencrypted manual text-file-based system that I’d hacked together last millennium).

November 20, 2012

Gabbing to the GAB

Filed under: Opinions,Work — JLG @ 12:00 AM

Earlier this month the (ISC)² U.S. Government Advisory Board (GAB) invited me to present my views and opinions to the board.  What a neat opportunity!

The GAB is a group of mostly federal agency Chief Information Security Officers (CISOs) or similar executives.  Officially it comprises “10-20 senior-level information security professionals in their respective region who advise (ISC)² on industry initiatives, policies, views, standards and concerns” and whose goals include offering deeper insights into the needs of the information security community and discussing matters of policy or initiatives that drive professional development.

In terms of content, in addition to discussing my previous work on storage systems with autonomous security functionality, I advanced three of my personal opinions:

  1. Before industry can develop the “cybersecurity workforce of the future” it needs to figure out how to calculate the return on investment (ROI) for IT/security administration.  I suggested a small initial effort to create an anonymized central database for security attacks and the real costs of those attacks.  If such a database were widely available at nominal cost (or free) then an IT department could report on the value of its actions over the past year: “we deployed such-and-such a protection tool, which blocks against this known attack that caused over $10M in losses to a similar organization.”  Notably, my suggested approach is constructive (“here’s what we prevented”) rather than negative (“fear, uncertainty, and doubt / FUD”).  My point is that coming at the ROI problem from a positive perspective might be what makes it work.
  2. No technical staff member should be “just an instructor” or “just a developer.”  Staff hired primarily as technical instructors should (for example) be part of an operational rotation program to keep their skills and classroom examples fresh.  Likewise, developers/programmers/etc. should spend part of their time interacting with students, or developing new courseware, or working with the sales or marketing team, etc.  I brought up the 3M (15%) / Hewlett-Packard Labs (10%) / Google (20%) time model and noted that there’s no reason that a practical part-time project can’t also be revenue-generating; it just should be different (in terms of scope, experience, takeaways) from what the staff member does the rest of their time.  My point is that treating someone as “only” an engineer (developer, instructor, etc.) does a disservice not just to that person, but also to their colleagues and to their organization as a whole.
  3. How will industry provide the advanced “tip-of-the-spear” training of the future?  One curiosity of mine is how to provide on-the-job advanced training.  Why should your staff be expected to learn only when they’re in the classroom?  Imagine if you could provide your financial team with regular security conundrums — “who should be on the access control list (ACL) for this document?” — that you are able to generate, monitor, and control.  Immediately after they take an action (setting the ACL) then your security system provides them with positive reinforcement or constructive criticism as appropriate.  My point is that if your non-security-expert employees regularly deal with security-relevant problems on the job, then security will no longer be exceptional to your employees.

I had a blast speaking.  The GAB is a group of great folks and they kept me on my toes for most of an hour asking questions and debating points.  It’s not every day that you get to engage high-level decision makers with your own talking points, so my hope is that I gave them some interesting viewpoints to think about — and perhaps some new ideas on which to take action inside their own agencies and/or to advise the government.

November 1, 2012

2012 Conference on Computer and Communications Security

Filed under: Reviews,Work — JLG @ 12:00 AM

In October I attended the 19th ACM Conference on Computer and Communications Security (CCS) in Raleigh, North Carolina.  It was my fourth time attending (and third city visited for) the conference.

Here are some of my interesting takeaways from the conference:

  • Binary Stirring:  The point of Binary Stirring is to end up with a completely different (but functionally equivalent) executable code segment each time you load a program.  The authors double “each code segment into two separate segments—one in which all bytes are treated as data, and another in which all bytes are treated as code.  …In the data-only copy (.told), all bytes are preserved at their original addresses, but the section is set non-executable (NX).  …In the code-only copy (.tnew), all bytes are disassembled into code blocks that can be randomly stirred into a new layout each time the program starts.”  (The authors measured about a 2% performance penalty from mixing up the code segment.)

But why mix the executable bytes at all?  Binary Stirring is intended to protect against clever “return-oriented programming” (ROP) attacks by eliminating all predictable executable code from the program.  If you haven’t studied ROP (I hadn’t before I attended the talk) then it’s worth taking a look, just to appreciate the cleverness of the attack & the challenge of mitigating it.  Start with last year’s paper Q: Exploit Hardening Made Easy, especially the related work survey in section 9.

  • Oblivious RAM (ORAM):  Imagine a stock analyst who stores gigabytes of encrypted market information “in the cloud.”  In order to make a buy/sell decision about a particular stock (say, NASDAQ:TSYS), she would first download a few kilobytes of historical information about TSYS from her cloud storage.  The problem is that an adversary at the cloud provider could detect that she was interested in TSYS stock, even though the data is encrypted in the cloud.  (How?  Well, imagine that the adversary watched her memory access patterns the last time she bought or sold TSYS stock.  Those access patterns will be repeated this time when she examines TSYS stock.)  The point of oblivious RAM is to make it impossible for the adversary to glean which records the analyst downloads.

  • Fully homomorphic encryption:  The similar concept of fully homomorphic encryption (FHE) was discussed at some of the post-conference workshops.  FHE is the concept that you can encrypt data (such as database entries), store them “in the cloud,” and then have the cloud do computation for you (such as database searches) on the encrypted data, without decrypting.

When I first heard about the concept of homomorphic encryption (circa 2005, from some of my excellent then-colleagues at IBM Research) I felt it was one of the coolest things I’d encountered in a company filled with cool things.  Unfortunately FHE is still somewhat of a pipe dream — like ORAM, it’ll be a long while before it’s efficient enough to solve any practical real-world problems — but it remains an active area of interesting research.
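
To make the compute-on-encrypted-data idea concrete, here’s a toy illustration (mine, not from the workshops) of a partial homomorphism:  textbook RSA is multiplicatively homomorphic, meaning the product of two ciphertexts decrypts to the product of the two plaintexts.  FHE generalizes this property to arbitrary computation, which is what makes it so hard:

    # Toy textbook RSA (insecure demo parameters; illustration only).
    p, q, e = 61, 53, 17
    n = p * q                             # public modulus (3233)
    d = pow(e, -1, (p - 1) * (q - 1))     # private exponent

    enc = lambda m: pow(m, e, n)
    dec = lambda c: pow(c, d, n)

    a, b = 6, 7
    product_cipher = (enc(a) * enc(b)) % n    # multiply ciphertexts only
    assert dec(product_cipher) == a * b       # decrypts to 42 without ever
                                              # decrypting a or b individually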

  • Electrical network frequency (ENF):  In the holy cow, how cool is that? category, the paper “How Secure are Power Network Signature Based Time Stamps?” introduced me to a new forensics concept: “One emerging direction of digital recording authentication is to exploit an potential time stamp originated from the power networks. This time stamp, referred to as the Electrical Network Frequency (ENF), is based on the fluctuation of the supply frequency of a power grid.  … It has been found that digital devices such as audio recorders, CCTV recorders, and camcorders that are plugged into the power systems or are near power sources may pick up the ENF signal due to the interference from electromagnetic fields created by power sources.”  Wow!

The paper is about anti-forensics (how to remove the ENF signature from your digital recording) and counter-anti-forensics (how to detect when someone has removed the ENF signature).  The paper’s discussion of ENF analysis reminded me loosely of one of my all-time favorite papers, also from CCS, on remote measurement of CPU load by measuring clock skew as seen through TCP (transmission control protocol) timestamps.

  • Resource-freeing attacks (RFA):  I also enjoy papers about virtualization, especially regarding the fair or unfair allocation of resources across multiple competing VMs.  In the paper “Resource-Freeing Attacks: Improve Your Cloud Performance (at Your Neighbor’s Expense)”, the authors show how to use antisocial virtual behavior for fun and profit: “A resource-freeing attack (RFA) [improves] a VM’s performance by forcing a competing VM to saturate some bottleneck. If done carefully, this can slow down or shift the competing application’s use of a desired resource. For example, we investigate in detail an RFA that improves cache performance when co-resident with a heavily used Apache web server. Greedy users will benefit from running the RFA, and the victim ends up paying for increased load and the costs of reduced legitimate traffic.”

A disappointing aspect of the paper is that they don’t spend much time discussing how one can prevent RFAs.  Their suggestions are (1) use a dedicated instance, or (2) build better hypervisors, or (3) do better scheduling.  That last suggestion reminded me of another of my all-time favorite research results, from last year’s “Scheduler Vulnerabilities and Attacks in Cloud Computing”, wherein the authors describe a “theft-of-service” attack:  A virtual machine calls Halt() just before the hypervisor timer fires to measure resource use by VMs, meaning that the VM consumes CPU resources but (a) isn’t charged for them and (b) is allocated even more resources at the expense of other VMs.

  • My favorite work of the conference:  The paper is a little hard to follow, but I loved the talk on “Scriptless Attacks – Stealing the Pie Without Touching the Sill”.  The authors were interested in whether an attacker could still perform “information theft” attacks once all the XSS (cross-site scripting) vulnerabilities are gone.  Their answer: “The surprising result is that an attacker can also abuse Cascading Style Sheets (CSS) in combination with other Web techniques like plain HTML, inactive SVG images or font files.”

One of their examples is that the attacker can very rapidly shrink then grow the size of a text entry field.  When the text entry field shrinks to one pixel smaller than the width of the character the user typed, the browser automatically creates a scrollbar.  The attacker can note the appearance of the scrollbar and infer the character based on the amount the field shrank.  (The shrinking and expansion takes place too fast for the user to notice.)  The data exfiltration happens even with JavaScript completely disabled.  Pretty cool result.

Finally, here are some honorable-mention papers in four categories — work I enjoyed reading, and that you might enjoy too:

Those who cannot remember the past, are condemned to repeat it:

Sidestepping institutional security:

Why be a white hat?  The dark side is where all the money is made:

Badware and goodware:

Overall I enjoyed the conference, especially the “local flavor” that the organizers tried to inject by serving stereotypical southern food (shrimp on grits, fried catfish) and hiring a bluegrass band (the thrilling Steep Canyon Rangers) for a private concert at Raleigh’s performing arts center.

October 15, 2012

Public-key cryptography & certificate chaining

Filed under: Opinions,Work — JLG @ 12:00 AM

Of the many marvelous Calvin and Hobbes cartoons by Bill Watterson, one of the most marvelous (and memorable) is The Horrendous Space Kablooie.  Quoth Calvin, “That’s the whole problem with science.  You’ve got a bunch of empiricists trying to describe things of unimaginable wonder.”

I feel the same way about X.509, the name of the international standard defining public key certificates.  X.509?  It’s sort of hard to take that seriously — “X.509” feels better suited as the name of an errant asteroid or perhaps a chemical formula for hair restoration.

But I digress.  X.509 digital certificates are exchanged when you create a “secure” connection on the Internet, for example when you read your webmail using HTTPS.  The exchange happens something like this:

  • Your computer:  Hi, I’m a client.
  • Webmail server:  Howdy, I’m a server.  Here’s my X.509 certificate, including the public key you’ll use in the next step.
  • Your computer:  Fabulous.  I’ve calculated new cryptographic information that we’ll use for this session, and I’ve encrypted it using your public key; here it is.
  • (Further traffic is encrypted using the session cryptographic information.)

Several things happen behind the scenes to provide you with security:

  1. Your computer authenticates the X.509 certificate(s) provided by the server.  It checks that the server uses the expected web address.  It also verifies that a trusted third party vouches for the certificate (by checking the digital signature included in the certificate).
  2. Your computer verifies that there is no “man in the middle” attack in progress.  It does this by ensuring that the server has the private key associated with its certificate:  because the session cryptographic information was encrypted with the server’s public key, a server lacking the matching private key couldn’t recover it, and therefore couldn’t encrypt or decrypt any further traffic.

Unfortunately the system isn’t perfect.  The folks who programmed your web browser included a set of trusted root certificates with the browser.  Those root certificates were issued by well-known certificate authorities [CAs] such as Verisign and RSA.  If an attacker breaches security at either a root CA or an intermediate CA, as happened with the 2011 Comodo and DigiNotar attacks, then an attacker could silently insert himself into your “secure” connection.  Yikes!  Efforts like HTTPS Everywhere and Convergence are trying to address this problem.

Public-key cryptography is pretty neat.  When you use public-key cryptography you generate two keys, a public key (okay to give out to everyone) and a private key (not okay).  You can use the keys in two separate ways:

  • When someone wants to send you a private message, they can encrypt it using your public key.  The encrypted message can only be decrypted using your private key.
  • When you want to publish a message, you can encrypt (sign) it using your private key.  Anyone who has your public key can decrypt (validate) your message.

In a public key infrastructure, a root CA (say, Verisign) uses its private key to sign the public-key certificates of intermediate certificate authorities (say, Thawte).  The intermediate CAs then use their private key to sign the public-key certificates of their customers (say, www.google.com).  When you visit Google’s site using HTTPS, Google provides you both their certificate and Thawte’s certificate.  (The chained relationship Verisign-Thawte-Google is sometimes called the “chain of trust”.)  Your browser uses the certificates provided by Google, plus the Verisign root certificate (bundled with the browser), to verify that the chain of trust is unbroken.

[I use Google as the example here, since you can visit https://www.google.com and configure your browser to show the certificates that Google provides.  However, I have no knowledge of Google’s contractual relationship with Thawte.  My assertions below about Google are speculative, but the overall example is valid.]
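
If you’d rather poke at certificates from a script than from browser menus, Python’s standard ssl module can fetch the certificate a server presents (a quick sketch; note it retrieves only the leaf certificate, not the intermediates):

    import ssl

    # Fetch the PEM-encoded end-entity certificate from the server.
    pem = ssl.get_server_certificate(("www.google.com", 443))
    print(pem[:120], "...")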

Recently I was asked “We have been trying to understand Certificate Chaining and Self Signing.  Would a company [like Google] be allowed to purchase one certificate from a Certificate issuer like Verisign and then issue its own signed additional certificates for additional use?”

Great question!  (Where “great question” is defined as “um, I don’t know, let me check into that.”)  It turns out the answer is no, a company’s certificate(s) cannot be used to sign other certificates.

Using Google as an example, the principal reason is that neither Verisign nor Thawte let Google act as an “intermediate certificate authority.”  It’s (1) likely against the license agreement under which Thawte signed Google’s certificate, and (2) prohibited by metadata fields inside both Thawte’s certificate and Google’s certificate:

  • Google’s certificate is prohibited from signing other ones because of a flag inside the certificate metadata.  (Specifically, their Version 3 certificate has an Extension called Certificate Basic Constraints that has a flag Is not a Certificate Authority.)  And Google can’t modify their certificate to change this flag, because then signature validation would fail (your browser would detect that Google’s modified certificate doesn’t match the original certificate that Thawte signed).
  • Certificates signed by Thawte’s certificate cannot themselves be used as Certificate Authorities (CAs) because of a flag inside Thawte’s certificate.  (Specifically, their Version 3 certificate has an Extension called Certificate Basic Constraints with a field Maximum number of intermediate CAs that’s set to zero, meaning that no verification program should accept any certificate that was in turn signed using a Thawte customer’s key.)

If your company needs to issue its own signed certificates, for example to protect your internal servers, it’s relatively easy to do.  All you have to do is run a program that generates a root certificate.  You would then be like Verisign in that you could issue and sign as many other certificates as you wanted.  (The downside of your “private PKI” is that none of your users’ browsers would initially recognize your root certificate as valid.  For example, anyone surfing to a web page protected by certificates you signed would get a big warning page every time, at least until they imported your root certificate into their trusted-certificates list.)
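
As a sketch of that private-PKI setup, here’s roughly how you could mint a self-signed root certificate with the third-party Python cryptography package (the name and validity period are arbitrary placeholders); note that the Basic Constraints extension is exactly the is-a-CA flag discussed above:

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                         "Example Internal Root CA")])
    now = datetime.datetime.utcnow()
    root_cert = (
        x509.CertificateBuilder()
        .subject_name(name)          # self-signed: subject == issuer
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        # The flag discussed above: this certificate IS a certificate authority.
        .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                       critical=True)
        .sign(key, hashes.SHA256())
    )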

The article I found most helpful in digging up this answer is here:
http://unitstep.net/blog/2009/03/16/using-the-basic-constraints-extension-in-x509-v3-certificates-for-intermediate-cas/

(The full name of the X.509 standard is the far worse ITU-T Recommendation X.509: Information technology – Open systems interconnection – The Directory: Public-key and attribute certificate frameworks.  One name with four hyphens, two colons, and the hyphenated equivalent of comma splicing?  Clearly rigorous scientific work.)

September 25, 2012

Better living through IPv6-istry

Filed under: Opinions,Work — JLG @ 12:00 AM

There have been many, many words written about the IPv4-to-IPv6 transition — probably around 340 undecillion words at this point — but perhaps my favorite words came in a recent Slashdot comment by AliasMarlowe:

I believe in the incremental approach to updates; it’s so much safer and usually easier.
So it’s going to be IPv5 for me, while you suckers make a mess of IPv6!

I’ve long been a fan of IPv6.  Deploying IPv6 has the obvious benefit of solving the IPv4 address exhaustion problem, and it also simplifies local subnetting, site network architecture, and (to some degree) Internet-scale routing.

But perhaps the greatest benefit of deploying IPv6 is the restoration of end-to-end transparency.  IPv6 obviates the need for network address translation (NAT).  With IPv6, when your Skype application wants to initiate a call to my Skype application, the apps can address each other directly without relying on hole punching, third-party relaying, or other “clever” NAT-circumvention techniques.

(End-to-end addressing may sound unimportant, but if we could restore this critical Internet design goal to practice then we could party like it’s 1979!)

I recently spoke with some of TCS’s computer network operations students about security considerations for IPv6 deployments.  They were surprised when I claimed that NAT is not needed in an IPv6 security plan; several students commented that the NAT on their home network router was the only thing protecting their computers from the evils of the Internet.

A common misperception!  There are generally two functions performed by your home network router (or your corporate upstream router, if so configured):

  1. Firewalling / stateful packet inspection.  This is a security function.
  2. IP masquerading / network address [and port] translation.  This is not a security function; it simply allows all the devices on your internal network to share a single external network (IP) address.

With IPv6 you can (and should) still deploy inline firewall appliances to perform function #1.  But with the plethora of available addresses in IPv6 — 18,446,744,073,709,551,616 (2^64) globally routable addresses per standard local subnet — there is no overt need for masquerading.

Of course, masquerading provides ancillary benefits:  It somewhat hinders external traffic analysis, such as network mapping, by obfuscating the internal source and destination of traffic.  Combining masquerading with private IPv4 addressing also prevents internal addresses from being externally routable.

But similar benefits can be realized in IPv6 without masquerading and therefore without losing the benefits of end-to-end transparency.  For example IPv6 privacy extensions can obfuscate your internal network architecture and IPv6 unique local addresses can be used to isolate systems that shouldn’t be visible on external networks.

August 30, 2012

High-sodium passwords

Filed under: Opinions,Work — JLG @ 12:00 AM

Recently I’ve had some interesting conversations about passwords and password policies.

In general I despise password policies, or at least I despise the silly requirements made by most policies.  As I wrote in TCS’s recent Better Passwords, Usable Security white paper, “Why do you require your users’ passwords to look as though somebody sneezed on their keyboard? … Is your organization really better protected if you require your users to memorize a new 14-character password every two months? I argue no!”

In the BPUS white paper — which is behind a paywall, and I understand how that means it’s unlikely you’ll ever read it — I argue for three counterintuitive points:

  1. Password policies should serve your users’ needs, not vice versa.
  2. Passwords shouldn’t be your sole means of protection.
  3. Simpler passwords can be better than complex ones.

Beyond these points, it is also important to implement good mechanisms for storing and checking passwords.

Storing passwords: In June of this year there was a flurry of news articles about password leaks, including leaks at LinkedIn, eHarmony, and Last.fm.  The LinkedIn leak was especially bad because they didn’t “salt” their stored password hashes.  Salting works as follows:

  • An authentication system typically stores hashes of passwords, not cleartext passwords themselves.  Storing the hash originally made it hard for someone who stole the “password file” to actually obtain the passwords.
  • When you type in your password, the authentication system first takes a hash of what you typed in, then compares the hash with what’s stored in the password file.  If your hash matches the stored hash, you get access.
  • But attackers aren’t dumb.  An attacker can create (or obtain) a “rainbow table” containing reverse mappings of hash value to password.  For example, the SHA-1 hash of “Peter” is “64ca93f83bb29b51d8cbd6f3e6a8daff2e08d3ec”.  A rainbow table would map “64ca93f83bb29b51d8cbd6f3e6a8daff2e08d3ec” back to “Peter”.
  • Salt can foil this attack.  Salt is random characters that are appended to a password before the hash is taken.  So, using the salt “89h29348U#^^928h35”, your password “Peter” would be automatically extended to “Peter89h29348U#^^928h35”, which hashes to “b2d58c2785ada702df68d32744811b1cfccc5f2f”.  For large truly-random salts, it is unlikely that a rainbow table already exists for that salt — taking the reverse-mapping option off the table for the attacker.
  • Each user is assigned a different set of random characters for generating the salted hash, and these would be stored somewhere in your authentication system.  Nathan’s set of random characters would be different from Aaron’s.
  • A big win of salt is that it provides compromise independence.  Even if an attacker has both the password/hash file and the list of salts for each user, the attacker still has to run a brute-force attack against every cleartext password he wants to obtain.

If you don’t salt your passwords, then anyone who gets access to the leaked file can likely reverse many of the passwords, very easily.  This password recovery is especially problematic since many users reuse passwords across sites (I admit that I used to do this on certain sites until fairly recently).
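
Putting the list above into code, here’s a minimal sketch of salted password storage: a per-user random salt stored next to the hash.  (A production system should also use a deliberately slow hash, which is where the next section goes.)

    import hashlib, hmac, os

    def store_password(password):
        salt = os.urandom(16)     # unique random salt per user
        digest = hashlib.sha256(salt + password.encode()).digest()
        return salt, digest       # both go into the password file

    def check_password(password, salt, digest):
        candidate = hashlib.sha256(salt + password.encode()).digest()
        return hmac.compare_digest(candidate, digest)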

Checking passwords: But it turns out that salt may no longer be solving the world’s password woes.  A colleague sent me a link to a post-LeakedIn interview arguing that cryptographic hashes are passé.  At first I felt that the interviewee was blowing smoke, and wrote the following observations to my colleague:

He confuses the notion of “strong salt” with “strong hash”.

(A) strong salt: you add a lot of random characters to your password before hashing…as a result the attacker has to run a brute force attack against the hash for a looooong time (many * small effort) in order to crack the password.

(B) strong hash: you use a computationally-intensive function to compute the hash…as a result the attacker has to run a brute force attack against the hash for a looooong time (few * large effort) in order to crack the password.

In both cases you get the desirable “looooong time” property.  You can also combine (A) and (B) for an even looooonger time (and in general looooonger is better, though looooong is often long enough).

There can be some problems with approach (B) — the biggest is non-portability of the hash (SHA-1 is supported by pretty much everything; bcrypt isn’t necessarily), another could be remote denial-of-service attacks against the authentication system (it will have a much higher workload because of the stronger hash algorithm, and if you’re LinkedIn you have to process a lot of authentications per second).

Conclusion: The problem with LinkedIn was the lack of salted passwords.

But I kept thinking about that article, and related posts, and eventually had to eat some of my words (though not all of them).  Looooong is often not long enough.

The best discussion I found on the topic was in the RISKS digest, especially this post and its two particularly interesting references.

My point (A) above may be becoming increasingly less valid due to the massive increases in cracking speed made possible by running crackers directly on GPUs.  Basically, using a salt + password means that you should be using a large/strong enough salt to evade brute force attacks.  So that raises the concern that some people aren’t using a large/strong enough salt.
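
One practical response to GPU-accelerated cracking is an adaptive, deliberately slow hash.  A minimal sketch using the third-party bcrypt package, which generates and embeds a salt internally and lets you ratchet the work factor up as hardware gets faster:

    import bcrypt

    # Each unit increase in 'rounds' doubles the hashing work, which hurts a
    # brute-forcing attacker far more than it hurts a login server.
    hashed = bcrypt.hashpw(b"hunter2", bcrypt.gensalt(rounds=12))
    assert bcrypt.checkpw(b"hunter2", hashed)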

Beyond salting there are always ways to increase the security of a password-based authentication system.  Instead of a stronger hash, you could require users to type 20 character passwords, or you could require two passwords, etc.

But back to my original point, longer or complex passwords aren’t always the best choice.  That is especially the case when you have two-factor authentication (or other protection mechanisms) — as long as you use the two factors everywhere.  (For example, one company I recently talked with deployed a strong two-factor authentication system for VPN connections but mistakenly left single-factor password authentication enabled on their publicly-accessible webmail server.)
