Jagged Thoughts | Dr. John Linwood Griffin

November 20, 2012

Gabbing to the GAB

Filed under: Opinions,Work — JLG @ 12:00 AM

Earlier this month the (ISC)² U.S. Government Advisory Board (GAB) invited me to present my views and opinions to the board.  What a neat opportunity!

The GAB is a group of mostly federal agency Chief Information Security Officers (CISOs) or similar executives.  Officially it comprises “10-20 senior-level information security professionals in their respective region who advise (ISC)² on industry initiatives, policies, views, standards and concerns”, with goals that include offering deeper insights into the needs of the information security community and discussing matters of policy or initiatives that drive professional development.

In terms of content, in addition to discussing my previous work on storage systems with autonomous security functionality, I advanced three of my personal opinions:

  1. Before industry can develop the “cybersecurity workforce of the future” it needs to figure out how to calculate the return on investment (ROI) for IT/security administration.  I suggested a small initial effort to create an anonymized central database of security attacks and the real costs of those attacks.  If such a database were widely available at nominal cost (or free) then an IT department could report on the value of its actions over the past year: “we deployed such-and-such a protection tool, which blocks against this known attack that caused over $10M in losses to a similar organization.”  Notably, my suggested approach is constructive (“here’s what we prevented”) rather than negative (“fear, uncertainty, and doubt / FUD”).  My point is that coming at the ROI problem from a positive perspective might be what makes it work.
  2. No technical staff member should be “just an instructor” or “just a developer.”  Staff hired primarily as technical instructors should (for example) be part of an operational rotation program to keep their skills and classroom examples fresh.  Likewise, developers/programmers/etc. should spend part of their time interacting with students, or developing new courseware, or working with the sales or marketing team, etc.  I brought up the 3M (15%) / Hewlett-Packard Labs (10%) / Google (20%) time model and noted that there’s no reason that a practical part-time project can’t also be revenue-generating; it just should be different (in terms of scope, experience, takeaways) from what the staff member does the rest of their time.  My point is that treating someone as “only” an engineer (developer, instructor, etc.) does a disservice not just to that person, but also to their colleagues and to their organization as a whole.
  3. How will industry provide the advanced “tip-of-the-spear” training of the future?  One curiosity of mine is how to provide on-the-job advanced training.  Why should your staff be expected to learn only when they’re in the classroom?  Imagine if you could provide your financial team with regular security conundrums — “who should be on the access control list (ACL) for this document?” — that you are able to generate, monitor, and control.  Immediately after they take an action (setting the ACL) then your security system provides them with positive reinforcement or constructive criticism as appropriate.  My point is that if your non-security-expert employees regularly deal with security-relevant problems on the job, then security will no longer be exceptional to your employees.

I had a blast speaking.  The GAB is a group of great folks and they kept me on my toes for most of an hour asking questions and debating points.  It’s not every day that you get to engage high-level decision makers with your own talking points, so my hope is that I gave them some interesting viewpoints to think about — and perhaps some new ideas on which to take action inside their own agencies and/or to advise the government.

October 15, 2012

Public-key cryptography & certificate chaining

Filed under: Opinions,Work — JLG @ 12:00 AM

Of the many marvelous Calvin and Hobbes cartoons by Bill Watterson, one of the most marvelous (and memorable) is The Horrendous Space Kablooie.  Quoth Calvin, “That’s the whole problem with science.  You’ve got a bunch of empiricists trying to describe things of unimaginable wonder.”

I feel the same way about X.509, the name of the international standard defining public key certificates.  X.509?  It’s sort of hard to take that seriously — “X.509” feels better suited as the name of an errant asteroid or perhaps a chemical formula for hair restoration.

But I digress.  X.509 digital certificates are exchanged when you create a “secure” connection on the Internet, for example when you read your webmail using HTTPS.  The exchange happens something like this:

  • Your computer:  Hi, I’m a client.
  • Webmail server:  Howdy, I’m a server.  Here’s my X.509 certificate, including the public key you’ll use in the next step.
  • Your computer:  Fabulous.  I’ve calculated new cryptographic information that we’ll use for this session, and I’ve encrypted it using your public key; here it is.
  • (Further traffic is encrypted using the session cryptographic information.)

Several things happen behind the scenes to provide you with security:

  1. Your computer authenticates the X.509 certificate(s) provided by the server.  It checks that the server uses the expected web address.  It also verifies that a trusted third party vouches for the certificate (by checking the digital signature included in the certificate).
  2. Your computer verifies that there is no “man in the middle” attack in progress, by ensuring that the server holds the private key associated with its certificate.  Since the session cryptographic information is encrypted with the server’s public key, a server without the matching private key couldn’t decrypt it, and therefore couldn’t encrypt or decrypt any further traffic.  (A code sketch of this exchange follows this list.)
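To make the exchange concrete, here’s a minimal sketch of the client side using Python’s standard ssl module (the hostname is just an example; any HTTPS site will do):

    import socket
    import ssl

    # Build a context that trusts the root certificates bundled with the
    # system, much as a browser trusts the roots bundled by its developers.
    context = ssl.create_default_context()

    with socket.create_connection(("www.google.com", 443)) as sock:
        # wrap_socket performs the exchange described above: it validates
        # the server's certificate chain against the trusted roots, checks
        # the web address, and establishes the session keys.
        with context.wrap_socket(sock, server_hostname="www.google.com") as tls:
            cert = tls.getpeercert()
            print(tls.version())    # negotiated protocol version
            print(cert["subject"])  # whom the certificate identifies
            print(cert["issuer"])   # the third party vouching for it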

Unfortunately the system isn’t perfect.  The folks who programmed your web browser included a set of trusted root certificates with the browser.  Those root certificates were issued by well-known certificate authorities (CAs) such as Verisign and RSA.  If an attacker breaches security at either a root CA or an intermediate CA, as happened in the 2011 Comodo and DigiNotar attacks, then he could silently insert himself into your “secure” connection.  Yikes!  Efforts like HTTPS Everywhere and Convergence are trying to address this problem.

Public-key cryptography is pretty neat.  When you use public-key cryptography you generate two keys, a public key (okay to give out to everyone) and a private key (not okay).  You can use the keys in two separate ways:

  • When someone wants to send you a private message, they can encrypt it using your public key.  The encrypted message can only be decrypted using your private key.
  • When you want to publish a message, you can encrypt (sign) it using your private key.  Anyone who has your public key can decrypt (validate) your message.
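Here’s a small sketch of both uses, via the third-party Python cryptography package (the key size and padding choices are illustrative defaults; note that real signature schemes are implemented as sign/verify operations rather than literal encryption):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Use 1: anyone encrypts to you with your public key...
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = public_key.encrypt(b"a private message", oaep)
    # ...and only your private key can decrypt the result.
    assert private_key.decrypt(ciphertext, oaep) == b"a private message"

    # Use 2: you sign with your private key...
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(b"a published message", pss, hashes.SHA256())
    # ...and anyone validates with your public key (verify raises an
    # InvalidSignature exception if the message or signature was altered).
    public_key.verify(signature, b"a published message", pss, hashes.SHA256())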

In a public key infrastructure, a root CA (say, Verisign) uses its private key to sign the public-key certificates of intermediate certificate authorities (say, Thawte).  The intermediate CAs then use their private key to sign the public-key certificates of their customers (say, www.google.com).  When you visit Google’s site using HTTPS, Google provides you both their certificate and Thawte’s certificate.  (The chained relationship Verisign-Thawte-Google is sometimes called the “chain of trust”.)  Your browser uses the certificates provided by Google, plus the Verisign root certificate (bundled with the browser), to verify that the chain of trust is unbroken.

[I use Google as the example here, since you can visit https://www.google.com and configure your browser to show the certificates that Google provides.  However, I have no knowledge of Google’s contractual relationship with Thawte.  My assertions below about Google are speculative, but the overall example is valid.]

Recently I was asked “We have been trying to understand Certificate Chaining and Self Signing.  Would a company [like Google] be allowed to purchase one certificate from a Certificate issuer like Verisign and then issue its own signed additional certificates for additional use?”

Great question!  (Where “great question” is defined as “um, I don’t know, let me check into that.”)  It turns out the answer is no: a company’s certificate(s) cannot be used to sign other certificates.

Using Google as an example, the principal reason is that neither Verisign nor Thawte lets Google act as an “intermediate certificate authority.”  It’s (1) likely against the license agreement under which Thawte signed Google’s certificate, and (2) prohibited by metadata fields inside both Thawte’s certificate and Google’s certificate:

  • Google’s certificate is prohibited from signing other certificates by a flag inside the certificate metadata.  (Specifically, their Version 3 certificate has an Extension called Certificate Basic Constraints that has a flag Is not a Certificate Authority.)  And Google can’t modify their certificate to change this flag, because then signature validation would fail (your browser would detect that Google’s modified certificate doesn’t match the original certificate that Thawte signed).
  • Certificates signed by Thawte’s certificate cannot themselves act as Certificate Authorities (CAs) because of a field inside Thawte’s certificate.  (Specifically, their Version 3 certificate has an Extension called Certificate Basic Constraints with a field Maximum number of intermediate CAs that’s set to zero, meaning that no verification program should accept a certificate that was signed by one of Thawte’s customers rather than by Thawte itself.)  Both flags are easy to inspect, as the sketch after this list shows.
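Here’s a rough sketch of inspecting that extension with the Python cryptography package (the file name is hypothetical; you can export a certificate from your browser):

    from cryptography import x509

    # Load a PEM-encoded certificate, e.g. one exported from your browser.
    with open("google.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    bc = cert.extensions.get_extension_for_class(x509.BasicConstraints).value
    print(bc.ca)           # False for a leaf certificate ("is not a CA")
    print(bc.path_length)  # for a CA certificate, the maximum number of
                           # intermediate CAs allowed beneath it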

If your company needs to issue its own signed certificates, for example to protect your internal servers, it’s relatively easy to do.  All you have to do is run a program that generates a root certificate.  You would then be like Verisign in that you could issue and sign as many other certificates as you wanted.  (The downside of your “private PKI” is that none of your users’ browsers would initially recognize your root certificate as valid.  For example, anyone surfing to a web page protected by certificates you signed would get a big warning page every time, at least until they imported your root certificate into their trusted-certificates list.)
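As an illustration, here’s a rough sketch of generating such a root certificate with the Python cryptography package (the name and lifetime are placeholders; OpenSSL’s command-line tools are the more traditional route):

    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Root CA")])
    now = datetime.datetime.utcnow()

    root = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer and subject are the same
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        # The flag discussed above: this certificate IS a certificate
        # authority, so it may sign other certificates.
        .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                       critical=True)
        .sign(key, hashes.SHA256())  # signed with its own private key
    )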

The article I found most helpful in digging up this answer is here:
http://unitstep.net/blog/2009/03/16/using-the-basic-constraints-extension-in-x509-v3-certificates-for-intermediate-cas/

(The full name of the X.509 standard is the far worse ITU-T Recommendation X.509: Information technology – Open systems interconnection – The Directory: Public-key and attribute certificate frameworks.  One name with four hyphens, two colons, and the hyphenated equivalent of comma splicing?  Clearly rigorous scientific work.)

September 25, 2012

Better living through IPv6-istry

Filed under: Opinions,Work — JLG @ 12:00 AM

There have been many, many words written about the IPv4-to-IPv6 transition — probably around 340 undecillion words at this point — but perhaps my favorite words came in a recent Slashdot comment by AliasMarlowe:

I believe in the incremental approach to updates; it’s so much safer and usually easier.
So it’s going to be IPv5 for me, while you suckers make a mess of IPv6!

I’ve long been a fan of IPv6.  Deploying IPv6 has the obvious benefit of solving the IPv4 address exhaustion problem, and it also simplifies local subnetting, site network architecture, and to some degree Internet-scale routing.

But perhaps the greatest benefit of deploying IPv6 is the restoration of end-to-end transparency.  IPv6 obviates the need for network address translation (NAT).  With IPv6, when your Skype application wants to initiate a call to my Skype application, the apps can address each other directly without relying on hole punching, third-party relaying, or other “clever” NAT-circumvention techniques.

(End-to-end addressing may sound unimportant, but if we could restore this critical Internet design goal to practice then we could party like it’s 1979!)

I recently spoke with some of TCS’s computer network operations students about security considerations for IPv6 deployments.  They were surprised when I claimed that NAT is not needed in an IPv6 security plan; several students commented that the NAT on their home network router was the only thing protecting their computers from the evils of the Internet.

A common misperception!  There are generally two functions performed by your home network router (or your corporate upstream router, if so configured):

  1. Firewalling / stateful packet inspection.  This is a security function.
  2. IP masquerading / network address [and port] translation.  This is not a security function; it simply allows all the devices on your internal network to share a single external network (IP) address.

With IPv6 you can (and should) still deploy inline firewall appliances to perform function #1.  But with the plethora of available addresses in IPv6 — 18,446,744,073,709,551,616 globally routable addresses per standard local subnet — there is no overt need for masquerading.
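You can check that arithmetic with Python’s ipaddress module (2001:db8::/64 is the documentation prefix, used here as an example):

    import ipaddress

    # A standard local subnet in IPv6 is a /64; the other 64 bits identify hosts.
    subnet = ipaddress.ip_network("2001:db8::/64")
    print(subnet.num_addresses)  # 18446744073709551616, i.e. 2**64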

Of course, masquerading provides ancillary benefits:  It somewhat hinders external traffic analysis, such as network mapping, by obfuscating the internal source and destination of traffic.  Combining masquerading with private IPv4 addressing also prevents internal addresses from being externally routable.

But similar benefits can be realized in IPv6 without masquerading, and therefore without losing the benefits of end-to-end transparency.  For example, IPv6 privacy extensions can obfuscate your internal network architecture, and IPv6 unique local addresses can be used to isolate systems that shouldn’t be visible on external networks.
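As a quick illustration of the unique-local idea (the address below is a made-up example from the fc00::/7 block reserved for unique local addresses):

    import ipaddress

    ula = ipaddress.ip_address("fd12:3456:789a::1")
    print(ula.is_global)   # False: not routable on the public Internet
    print(ula.is_private)  # True: analogous to RFC 1918 space in IPv4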

August 30, 2012

High-sodium passwords

Filed under: Opinions,Work — JLG @ 12:00 AM

Recently I’ve had some interesting conversations about passwords and password policies.

In general I despise password policies, or at least I despise the silly requirements made by most policies.  As I wrote in TCS’s recent Better Passwords, Usable Security white paper, “Why do you require your users’ passwords to look as though somebody sneezed on their keyboard? … Is your organization really better protected if you require your users to memorize a new 14-character password every two months? I argue no!”

In the BPUS white paper — which is behind a paywall, and I understand how that means it’s unlikely you’ll ever read it — I argue for three counterintuitive points:

  1. Password policies should serve your users’ needs, not vice versa.
  2. Passwords shouldn’t be your sole means of protection.
  3. Simpler passwords can be better than complex ones.

Beyond these points, it is also important to implement good mechanisms for storing and checking passwords.

Storing passwords: In June of this year there was a flurry of news articles about password leaks, including leaks at LinkedIn, eHarmony, and Last.fm.  The LinkedIn leak was especially bad because they didn’t “salt” their stored password hashes.  Salting works as follows:

  • An authentication system typically stores hashes of passwords, not cleartext passwords themselves.  Storing the hash originally made it hard for someone who stole the “password file” to actually obtain the passwords.
  • When you type in your password, the authentication system first takes a hash of what you typed in, then compares the hash with what’s stored in the password file.  If your hash matches the stored hash, you get access.
  • But attackers aren’t dumb.  An attacker can create (or obtain) a “rainbow table” containing reverse mappings of hash value to password.  For example, the SHA-1 hash of “Peter” is “64ca93f83bb29b51d8cbd6f3e6a8daff2e08d3ec”.  A rainbow table would map “64ca93f83bb29b51d8cbd6f3e6a8daff2e08d3ec” back to “Peter”.
  • Salt can foil this attack.  Salt is a string of random characters that is appended to a password before the hash is taken.  So, using the salt “89h29348U#^^928h35”, your password “Peter” would be automatically extended to “Peter89h29348U#^^928h35”, which hashes to “b2d58c2785ada702df68d32744811b1cfccc5f2f”.  For a large truly-random salt, it is unlikely that a rainbow table already exists for that salt — taking the reverse-mapping option off the table for the attacker.
  • Each user is assigned a different set of random characters for generating the salted hash, and these would be stored somewhere in your authentication system.  Nathan’s set of random characters would be different from Aaron’s.
  • A big win of salt is that it provides compromise independence.  Even if an attacker has both the password/hash file and the list of salts for each user, the attacker still has to run a brute-force attack against every cleartext password he wants to obtain.
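Here’s a minimal sketch of that scheme in Python (SHA-1 matches the post’s example above; in practice you’d choose a slower construction, as discussed further below):

    import hashlib
    import os

    def salted_hash(password: str, salt: bytes) -> str:
        # Extend the password with the user's salt, then hash the result.
        return hashlib.sha1(password.encode() + salt).hexdigest()

    # Unsalted: a rainbow table can map this hash straight back to "Peter".
    print(hashlib.sha1(b"Peter").hexdigest())

    # Salted: each user gets their own random salt, stored alongside the hash.
    salt = os.urandom(16)
    entry = (salt, salted_hash("Peter", salt))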

If you don’t salt your passwords, then anyone who gets access to the leaked file can likely reverse many of the passwords very easily.  This recovery is an especially serious problem because many users reuse passwords across sites (I admit that I used to do this on certain sites until fairly recently).

Checking passwords: But it turns out that salt may no longer be solving the world’s password woes.  A colleague sent me a link to a post-LeakedIn interview arguing that cryptographic hashes are passé.  At first I felt that the interviewee was blowing smoke, and wrote the following observations to my colleague:

He confuses the notion of “strong salt” with “strong hash”.

(A) strong salt: you add a lot of random characters to your password before hashing…as a result the attacker has to run a brute force attack against the hash for a looooong time (many * small effort) in order to crack the password.

(B) strong hash: you use a computationally-intensive function to compute the hash…as a result the attacker has to run a brute force attack against the hash for a looooong time (few * large effort) in order to crack the password.

In both cases you get the desirable “looooong time” property.  You can also combine (A) and (B) for an even looooonger time (and in general looooonger is better, though looooong is often long enough).

There can be some problems with approach (B) — the biggest is non-portability of the hash (SHA-1 is supported by pretty much everything; bcrypt isn’t necessarily), and another is remote denial-of-service attacks against the authentication system (it will have a much higher workload because of the stronger hash algorithm, and if you’re LinkedIn you have to process a lot of authentications per second).

Conclusion: The problem with LinkedIn was the lack of salted passwords.
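For reference, approach (B) is available in the Python standard library via PBKDF2 (the iteration count below is purely illustrative; bcrypt and scrypt are common alternatives):

    import hashlib
    import os

    # A deliberately slow hash: each guess now costs the attacker hundreds
    # of thousands of SHA-256 evaluations instead of one.
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", b"Peter", salt, 600_000)
    print(dk.hex())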

But I kept thinking about that article, and related posts, and eventually had to eat some of my words (though not all of them).  Looooong is often not long enough.

The best discussion I found on the topic was in the RISKS digest, especially this post and its two particularly interesting references.

My point (A) above may be becoming less valid due to the massive increases in cracking speed made possible by running crackers directly on GPUs.  Basically, salting only protects you if the salt is large and random enough to defeat brute-force attacks, and the concern is that some deployments aren’t using salts that strong.

Beyond salting there are always ways to increase the security of a password-based authentication system.  Instead of a stronger hash, you could require users to type 20 character passwords, or you could require two passwords, etc.

But back to my original point, longer or complex passwords aren’t always the best choice.  That is especially the case when you have two-factor authentication (or other protection mechanisms) — as long as you use the two factors everywhere.  (For example, one company I recently talked with deployed a strong two-factor authentication system for VPN connections but mistakenly left single-factor password authentication enabled on their publicly-accessible webmail server.)

August 6, 2012

Why you should consider graduate school

Filed under: Opinions — JLG @ 11:43 PM

Are you interested in graduate school?  Here’s an hour’s worth of reasons you should consider going:

I gave this talk, “Why You Shouldn’t Write Off Higher Education, Young Grasshopper,” at the H.O.P.E. (Hackers On Planet Earth) Number 9 conference in New York City on July 13, 2012.

My abstract was:

This talk is addressed to that kid in the back who’s wearing a Utilikilt and a black t-shirt that says “I Hack Charities,” who asks, “Why would I bother going to grad school? I’m self-taught, college was a waste of my time, and universities only exist to train wage slaves.” John will draw from personal experience to describe how in graduate school:

1. You get to do what you love.
2. You get to make large structured contributions to the community.
3. You experience personal growth while surrounded by amazing people.
4. You’re part of a meritocracy and a close-knit social circle.
5. The door is open for interesting opportunities afterward.

Included will be a discussion on how hackers can get in.

This talk is one of a series of talks I’ve given about the post-secondary experience, especially as it relates to computer engineering and related disciplines:

  • Life after high school.  Since 1994 I’ve annually visited my Dad’s high school mathematics classes in Alabama to talk with his students about what job opportunities, college opportunities, and travel opportunities are available in the years to come.  I’ve also spoken with middle school students in Maryland and elementary school students in Pennsylvania.
  • Why you shouldn’t write off higher education, young grasshopper.  A talk oriented towards hackers but applicable to anyone considering graduate school (masters or doctoral level), especially in a technical field.  First delivered in 2012.
  • Through the looking glass: What’s next after a systems Ph.D.  A talk for doctoral students who are curious about what general opportunities are available in the years to come.  I’ve given this talk at Carnegie Mellon, at Johns Hopkins, and at the University of North Carolina.  First delivered in 2004.  (See also my computer systems Ph.D. job search page.)
  • What’s next after a systems Ph.D.: A six-month retrospective on corporate research.  A surprisingly bitter talk for doctoral students who are curious about jobs in corporate research laboratories (they’re great jobs but are in many ways not what I expected…don’t let this talk convince you not to take a corporate job).  I’ve given this talk at Carnegie Mellon.  First delivered in 2005.

If you are interested in having me talk with your students (or friends, grandnieces, etc.) on any of these (or related) topics, you are very welcome to contact me.  See my contact information on my home page.

Mad props to Brendan for (a) convincing me to submit my name to the H.O.P.E. speaker committee in the first place, (b) rewriting my abstract so it would be interesting to the speaker committee, and (c) helping shape my talk so it would be interesting to the H.O.P.E. attendees.  The talk was well attended, I think I provided some valuable information to some interested folks, and I had a great set of interested folks come up and talk one-on-one with me in the Q&A session after the talk.

Thanks also to the many people who helped me prepare for H.O.P.E. by talking with me about their own perspectives on graduate school, especially Steve Bellovin, Randal Burns, Angelos Keromytis, Fabian Monrose, Drew Morin, Margo Seltzer, Andreas Terzis, and the anonymous students and industry colleagues who shared their experiences and/or plans.  I also benefited greatly from reading Mor Harchol-Balter’s advice on applying to Ph.D. programs in computer science.

April 14, 2012

Higher quality spam

Filed under: Opinions — JLG @ 6:44 PM

Although I’ve blogged in various forms since 1996 or so, I first set up a WordPress blog in 2008.  That blog was hosted on the Jagged Technology website and was intended to convey information of interest to Jagged and its customers — the idea being that if I provided a high signal-to-noise ratio of useful technical content then it might help my sales figures.  Within a few days I started receiving spam comments on the blog, to which my heavy-handed solution was to disable comments altogether.

Earlier this year I set up a new WordPress blog here on my personal website, in order to have somewhere to post my aviation experiences as I experienced them. Given that I was decommissioning the Jagged website I decided to move my old posts to this site (a process that you’d think would be simple — export from a WordPress site and import into a WordPress site — but wasn’t; in the end an old-fashioned copy-and-paste between browser windows gave the best results in the shortest amount of time).

My good friend Jay asked about the conference reports:

Do you keep the notes public to force yourself to write? a form of self-promotion? or what.

Yes to all three.  My primary motivation for putting the conference reports up is as an archival service to the community; there aren’t that many places you can go to learn about CCS 2009, for example, and since I write the reports anyway (for my own reference and for distribution to my colleagues) I post them in case anyone now or in the future might find them useful.  Everybody has a mission in life, and mine is apparently to provide useful summary content for search engines and Internet archives.

With the new blog I decided to keep comments enabled, first out of curiosity about spam (during a visit to Georgia Tech a few years ago, one of the researchers asked why I didn’t spend more time analyzing spam instead of simply deleting it) and second on the off chance that somebody wanted to reply to one of my aviation posts with, say, suggested VFR sightseeing routes in the greater Massachusetts area.

And wow, has my curiosity about spam been piqued.  I created the blog at 12:16am on February 4, 2012; the first spam message arrived at 2:16am on February 9.  The second arrived at 12:42pm that day.  Recognizing a trend, I hustled to enable the Akismet anti-spam plugin.  Akismet works in part by crowdsourcing:  If someone else on another site marks a WordPress comment as spam, and the comment later gets posted on my site, Akismet automatically marks it as spam.  Since enabling the plugin sixty-eight days ago:

  • Number of spam comments posted on Jagged Thoughts: 312
  • Number of non-spam comments posted on Jagged Thoughts: 0
  • Number of false negatives (comments mistakenly marked as non-spam by Akismet): 1

So I’m averaging 4.6 spam comments per day.  That’s significantly fewer than I expected to receive, though perhaps this site hasn’t yet been spidered by enough search engines to be easily found when spam software searches for WordPress sites.

I was prompted to write this post by an order-of-magnitude improvement in spam quality in a couple of messages I received yesterday.  To date, most of the spam has fit into one of these three categories:

  1. Do you need pharmaceuticals?  We can help!
  2. Would you like more visitors to your site?  We can help!
  3. Are you dissatisfied with who’s hosting your site?  We can help!

Even without Akismet it is easy to identify spam simply by looking at (a) the “Website” link provided by the commenter or (b) any links included inside the comment.  Such links invariably point to an online pharmacy, or to a Facebook page with a foreign name but a profile picture of Miley Cyrus, or to a provider of virtual private servers, or to some other such site.  Also, almost none of the spam comments are attached to the most recent post.  My theory here is that comments on older posts are less likely to be noticed by site admins but are still clearly visible to search engines.  (There’s an option in WordPress to disable comments on posts more than a year old; now I understand why it’s there.)

There are spam comments about my compositional prowess:

This design is wicked! You definitely know how to keep a reader entertained. Between your wit and your videos, I was almost moved to start my own blog (well, almost…HaHa!) Great job. I really enjoyed what you had to say, and more than that, how you presented it. Too cool!

Comments that are clearly copied from elsewhere on the Internet:

In the pre-Internet age, buying a home was a long and arduous task. But the Internet of today helps the buyer to do their own preliminary work-researching neighborhoods, demographics, general price ranges, characteristics of homes in certain areas, etc. Now with a simple click, home buyers can access whole databases featuring statistics about neighborhoods and properties before they have even met the realtor.

Comments that are WordPress-oriented:

Howdy would you mind stating which blog platform you’re using? I’m going to start my own blog in the near future but I’m having a hard time deciding between BlogEngine/Wordpress/B2evolution and Drupal. The reason I ask is because your layout seems different then most blogs and I’m looking for something unique. P.S Sorry for getting off-topic but I had to ask!

And there are comments written in foreign languages, comments that are nothing but long lists of pharmaceutical products with links, and comments that are gibberish.

Once per week I’ve gone through and skimmed the comments marked as spam, just to make sure that I didn’t miss someone’s useful post debating, say, the merits of purchasing personal aviation insurance or always renting from flying clubs that provide insurance to their members.  Over the past week I’ve received three spam comments containing information that clearly relate to the text of the post.  For example, this comment on my Discovering flight post:

Absolutely but the overall senfuleuss is tied to the complexity of the simulator and cost of the simulator Airlines and places like Flight Safety use large simulators with exact replicas of the cockpits of the specific plane being simulated, mounted on hydraulic systems that provide 3 degrees of motion, and video displays for each window providing outside views. These have become so realistic that you can do most of the flying required for a type certifcate on them, and airlines use them for aircrew checkrides. Moving downward from these multimillion dollar systems, there are aircraft specific sims that have the full cockpit, but without the 3 axis motion, all the way down to the cheapest flight training devices recognized by the FAA. These are not much different than MS Flight Simulator, but have an physical replica of a radio stack, throttle, yoke and rudder pedals. You can used these type of devices to log a small portion of the instrument time required for your instrument rating. One problem common to most simulators is that they tend to be harder to hand fly than an actual airplane is, particularly the lower end sims. If you are refering to a non-FAA approved simulator, like MS Flight sim, it provides no help in learning how to handle a plane. When flying real plane the forces on the controls provide an immense amount of feed back to the pilot that is missing from a PC simulator. The other problem with a PC sim is that you can not easily look around and maintain control trying to fly a proper traffic pattern on FSX is almost impossible. A home sim can be helpfull in practicing rarely used instrument procedures, things like an NDB approach or a DME arc, but it of course it does not count to your instrument currency in any way. I have also used FSX to check out airports that I will be visiting in real life for the first time. It does an accurate enough representation of geographic features that can help you place the airport in relationship to terrain in advance of the flight.

I am thrilled that the spam software authors have started performing analytics to ensure that I receive relevant and topical spam comments!  The above comment includes genuinely useful observations about using home flight simulation software to augment pilot training:

  1. When flying real plane the forces on the controls provide an immense amount of feed back to the pilot that is missing from a PC simulator.
  2. The other problem with a PC sim is that you can not easily look around and maintain control trying to fly a proper traffic pattern [] is almost impossible.
  3. A home sim can be helpfull in practicing rarely used [] procedures
  4. It does an accurate enough representation of geographic features that can help you place the airport in relationship to terrain in advance of the flight.

Early in my own flight training I tried using Microsoft Flight Simulator 2004 along with a USB flight yoke and USB foot pedals (all of which I’d bought back in 2006) to recreate my training flights at home and to squeeze in some extra practice.  For the most part I found the simulator ineffective in improving basic piloting skills — as examples, the simulator did nothing to help me with memorizing the correct relationship between the airplane nose and the horizon when attempting to transition from climb to level flight at 110 knots, and it did not display useful real-world visual references as I flew traffic patterns around area airports.  However, I found the simulator very useful in practicing the preflight and in-flight checklists, in memorizing which instruments were in which location on the Skyhawk’s control panel, in practicing taxi procedures around Hanscom airport given various wind conditions, and in reviewing the directions and speeds my instructor chose when we flew between KBED and KLWM airports.

Of course it’s not surprising that the comment contained insightful and critical commentary, given that it’s taken verbatim from Yahoo! Answers (“Is flying with simulators help in real flight training?”).  What’s surprising — and exciting — is that I’ve started receiving higher quality, targeted, and relevant spam based on the topics I post!  Randall Munroe would be proud.  Hopefully this trend will continue and spam software will provide me with similarly useful, carefully selected, topically relevant information, helping me to become a better pilot.  (Note to spam software authors:  Just kidding.  Please don’t target this site for extra spam.)

EDIT (August 2, 2012):  I apologize to the spam software authors.  For the past two months this article has received an exponentially increasing amount of spam, currently about 300 comments per day.

Look, folks, I apologize.  I wasn’t trying to piss you off.

I assume your motivation is economic.  (I may be wrong; perhaps you’re nihilistic, anarchistic, or simply interested in chaos theory.)  Spam is a lucrative business.  What I’m saying is that with a few small changes it can be even more lucrative.  Given the equation:

more approved comments = more planted links to your SEO and pharmacy sites = more revenue for you

Your economic goal is therefore to get more comments approved.  You’ve already taken the first step of copying paragraphs of user-generated text from Wikipedia, Yahoo, and the like, instead of relying on stock phrases such as “payday loans uk”.  I bet that simple change significantly increased both your approval percentage and your profit.

The next step is to be more selective in what content your bots copy-and-paste as spam.  Given a blog post about buying a house, you have a greater chance of having your spam comment approved if you include real-estate-oriented (“higher quality”) text rather than, say, unrelated passages about hair loss or railway construction in China.

Beyond that, socialbots (see for example Tim Hwang’s talk I’m not a real friend, but I play one on the Internet) show promise for spammers.  It’s one thing to trick an author into approving your spam comment; it would be another level of efficacy altogether to trick a site’s user community into having a comment-based conversation with your spambot.

So don’t shoot (or spam) the messenger; instead consider using my thoughts as inspiration to step up your game.
