Jagged Thoughts | Dr. John Linwood Griffin

October 9, 2008

NSRC industry day

Filed under: Reviews — JLG @ 10:12 PM

This week I attended the 5th annual industry day at the Networking and Security Research Center (NSRC) at Penn State University. The event was similar in format to other industry days I’ve attended (CMU, Stony Brook) but with a more focused core of industry guests, primarily from telecom companies and large government contractors.

My main interest was in the work of professors Trent Jaeger and Patrick McDaniel of the Systems and Internet Infrastructure Security (SIIS) laboratory. Their students are working on several projects of interest to Jagged.

Another NSRC focus is wireless networking research (cellular, sensor, 802.11, vehicular, you name it). An upside of this work is that it is strongly focused on real-world problems reported by companies, such as CDMA2000-WiMAX internetworking. A related downside is that it wasn’t always clear what academic (basic research) lessons could be drawn from some of the work; some of the results felt limited in scope and applicable only to a specific problem.

All the posters from the industry day are available here:

http://nsrc.cse.psu.edu/id08.html

The most interesting and controversial talk at the event was a keynote by Mr. Steven Chabinsky, the deputy director of the Joint Interagency Cyber Task Force. He advanced the idea that we as a nation have let ourselves be “seduced” by technology, plowing ahead with deployments of untested and unreliable technology at critical infrastructure points without first fully understanding (or mitigating) the risks and consequences of failure. He called on us as researchers and companies to consider the full spectrum of threat, vulnerability, and consequence in our technological innovations. A lively discussion ensued after the talk regarding the economic incentives to deploy unreliable technology; two of the topics were:

  • Will better policy decisions be made when cyber risks are better understood? The speaker described a current lack of capabilities to quantify risk, either as an absolute or as a comparative measurement. This is especially true in low-probability but extremely high-consequence scenarios such as directed attacks against components of the power grid. I felt this observation made an excellent point: it highlights a mental gap between the way that engineers think about technology and the way that decisionmakers compare among technologies. Perhaps the government should fund some new studies along these lines? (For a concrete sense of the quantification problem, see the sketch after this list.)
  • Where should the government draw the line between regulation and deregulation? There are several non-regulatory actions the government could take to constructively assist companies in developing hardened products (say, ones that control water processing plants), such as making supplemental development grants available to companies whose technology will be used in critical infrastructure. On the one hand, I feel that government should more actively oversee, regulate, and pay for these kinds of technologies. But perhaps the problem is more complex than I realize; for example, perhaps open-market competition yields a qualitatively better product than contract specification and regulatory compliance would. Anyone have an opinion on this?
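
To make that quantification gap concrete, here is a minimal sketch (in Python, with entirely hypothetical scenarios and dollar figures) of the textbook annualized loss expectancy calculation, ALE = ARO × SLE, where ARO is the annualized rate of occurrence and SLE is the single loss expectancy. The arithmetic is trivial; the hard part, as the speaker noted, is that for rare, high-consequence events the inputs themselves are little more than guesses.

    # Minimal sketch: annualized loss expectancy (ALE = ARO * SLE).
    # All scenario values below are hypothetical, for illustration only.

    def ale(aro: float, sle: float) -> float:
        """Expected yearly loss: annualized rate of occurrence times
        single loss expectancy (dollars lost per incident)."""
        return aro * sle

    # A frequent, low-damage threat versus a rare, extremely damaging one.
    commodity_malware = ale(aro=12.0, sle=50_000)    # ~12 incidents per year
    grid_attack = ale(aro=0.01, sle=5_000_000_000)   # ~1-in-100-year event

    print(f"Commodity malware ALE: ${commodity_malware:,.0f}")  # $600,000
    print(f"Grid attack ALE:       ${grid_attack:,.0f}")        # $50,000,000
    # Both point estimates compute cleanly, but neither conveys how
    # uncertain ARO and SLE are for the rare event; that uncertainty is
    # the measurement gap described above.

Note that the two bottom-line numbers compare as if they were equally trustworthy, which is precisely the trap: a decisionmaker reading only the final figures has no way to know that one estimate rests on actuarial data and the other on speculation.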

Mr. Chabinsky’s point was underscored later in the day in a talk on the Ohio EVEREST voting study. Patrick McDaniel discussed how the Help America Vote Act effectively caused an insufficiently tested prototype technology (electronic voting machines), built for a low-profit-margin customer (the government), to be thrust into mandatory and widespread use in a critical environment (the legitimacy of our democracy) in only a few years. He concluded, as have Avi Rubin and others, that current systems are fundamentally flawed and unsecurable. In light of the above discussion, these fundamental flaws represent a failure of technologists (among many others), both (a) in our inability to architect reliable systems and (b) in our inability to adequately inform public policy officials of the true readiness of proposed technologies.

This latter problem, coherently describing and conveying the capabilities and limitations of computer systems in a manner that non-experts can understand, is one of the topics that has long interested me, especially in the context of information sharing in sensitive or classified environments. Anyone want to join us in working on this problem?