Jagged Thoughts | Dr. John Linwood Griffin

September 10, 2012

Left-brained flying

Filed under: Aviation — JLG @ 11:30 PM

I’m almost finished with flight training for my private pilot certificate!  That’s actually a little disappointing, because I’ve been enjoying the training so much; I probably won’t fly nearly this much after the lessons are over.

Two weeks ago I performed one of my two required solo cross-country flights.  When I first heard about this requirement I hoped that cross-country meant a flight from Boston to Seattle (cross country) or even to Winnipeg (cross countries), but it turns out it means, in the exciting parlance of Title 14 of the Code of Federal Regulations, Part 61.109(a)(5)(ii), simply:

one solo cross country flight of 150 nautical miles total distance, with full-stop landings at three points, and one segment of the flight consisting of a straight-line distance of more than 50 nautical miles between the takeoff and landing locations.

I’d also hoped that I would be able to pick the airports for my cross-country training flights, but that choice too is regulated, this time by the flight school.  For my first flight my instructor assigned one of two approved routes, in this case KBED-KSFM-KCON-KBED — Bedford, MA, to Sanford, ME, to Concord, NH, and back to Bedford.

(Astute readers will calculate that route as only covering 143 nautical miles.  Although that distance doesn’t meet the requirement above, it’s okay since the school requires that students perform a minimum of two solo cross-country routes; the next one will be a 179-mile trip to Connecticut and back.  I suspect the school chose those airports for the first “solo XC” because SFM and CON are both non-towered airports that are easy to find from the air — meaning good practice and confidence-building for the student — as well as because the student has flown round-trip to SFM with an instructor at least once.)

The most memorable aspect of my first solo cross-country flight was that everything happened so quickly!  Without an instructor around to act as a safety net, I had an ever-present feeling that I was forgetting something or missing something:

  • Oh shoot I forgot to reset the clock when I passed my last checkpoint; where exactly am I right now?
  • Oh shoot I haven’t heard any calls for me on this frequency lately, did I miss a message from air traffic control to switch to a different controller’s frequency?
  • Oh shoot I’ve been so busy with the checklist that it’s been over a minute since I looked outside the cockpit…is someone about to hit me?  Am I about to collide with a TV tower?

There were two genuine wide-eyed moments on the flight:

  1. Traffic on a collision course.  While flying northeast at 3,500 feet, air traffic control informed me that there was another plane, in front of me, headed towards me, at my altitude.  Yikes.  I hesitated while looking for the plane, until ATC notified me again, a little more urgently, that there was a plane in front of me at my altitude — except that it was a lot closer than it had been a moment before.  I (a) asked ATC for a recommendation, (b) heard them recommend that I climb 500 feet, (c) did so, forthwith.  Moments later I saw the plane, passing below me and just to my left — we would have missed each other, but not by much.
    Lesson learned:  What I should have done was change altitude and heading immediately, as soon as I got the first notification from ATC.  I delayed because I didn’t comprehend the severity of the situation; it’s pretty rare for someone to be coming right at you — this was the first time it’s happened to me.  Given the other pilot’s magnetic heading, that plane was flying at an altitude contrary to federal regulations, which would have been small consolation if we’d collided. (Sub-lesson learned, as my Dad taught me when I was learning to drive:  Think of the absolutely dumbest, stupidest, most idiotic thing that the other pilot could possibly do, and prepare for him to do exactly that.)
  2. Flight into a cloud.  During the SFM-CON leg I flew (briefly and unintentionally) into a cloud.  Yikes.  The worst part is I didn’t even see the cloud coming; visibility was slowly deteriorating all around me, so I was focusing mostly on the weather below and to my left, trying to determine when I should turn left to get away from the deteriorating weather.  All of a sudden, wham, white-out.  At the time I was flying at 4,500 feet with a ceiling supposedly at 6,500 feet in that area — at least according to my pre-flight weather briefing — so I’d expected to be well clear of the clouds.  (The clouds probably had been at 6,500 feet two hours before when I got the weather briefing.)
    Flying into clouds without special training for “instrument meteorological conditions” is (a) prohibited by the FAA and (b) a bad idea; without outside visual references to stay straight-and-level you can pretty quickly lose your spatial orientation and crash.  During flight training you’re taught what to do if you unintentionally find yourself in a cloud; Turn around! is usually your best option:  Check your current heading, make a gentle 180-degree turn, keep watching your instruments to make sure you’re not gaining or losing altitude or banking too steeply, exit the cloud, unclench.  Fortunately, an opening quickly appeared in the cloud below me, so I immediately heaved the yoke forward and flew down out of the cloud, then continued descending to a safe altitude (safe above the ground and safe below the clouds).
    Lesson learned: I should have changed my flight plan to adapt as soon as I noticed the weather start to deteriorate.  First, I should have stopped climbing once I noticed that visibility was getting worse the more I climbed.  Second, given that my planned route was not the most direct route to the destination airport, I should have diverted directly toward the destination (where the weather looked okay) as soon as the weather started getting worse instead of continuing to fly my planned route.

Despite these eye-opening moments, the flight went really well.  My landings were superb — I am pleased to report that, now that I have over 150 landings in my logbook, I can usually land the plane pretty nicely — and once I settled into the swing of things I had time to look out the window, enjoy the view, and think about how much fun I’m having learning to fly.

Unfortunately I haven’t yet had the opportunity to repeat the experience.  I was scheduled to fly the longer cross-country flight the next weekend, but the weather didn’t cooperate.  I was then scheduled to fly the longer cross-country flight the next next weekend, but the weather didn’t cooperate.  So I’m hoping that next weekend (the next next next weekend, so to speak) will have cooperative weather.  Once I finish the second cross-country flight I will then pass the written exam, then take “3 [or more] hours of flight training with an authorized instructor in a single-engine airplane in preparation for the practical test,” then pass the practical test.  Then I’ll be a pilot!  Meanwhile, whenever I fly I am working on improving the coordination of my turns (using rudder and ailerons in the right proportions), making sure to clear to the left or right before starting a turn, and remembering to execute the before landing checklist as soon as I start descending to the traffic pattern altitude.

Overall, flying is easier than I expected it to be.  The most important rule always is fly the airplane.  No matter what is happening — if the propeller shatters, just as lightning strikes the wing causing the electrical panel to catch fire, while simultaneously your passenger begins seizing and stops breathing — maintain control of the airplane!

  • First, the propeller shattering means that you’ve lost your engine; follow the engine failure checklist.  In the Cessna Skyhawk, establish a 68-knot glide speed; this best-glide speed gives you the most distance (and the most options) for finding a place to land.  Then, look for the best place to land [you should already have a place in mind, before the emergency happens] and turn towards that place.
  • Now, attend to the electrical fire.  First, fly the airplane — maintain 68 knots, maintain a level heading, continue heading toward your best place to land.  Meanwhile, follow the electrical fire in flight checklist by turning off the master switch to turn off power to the panel.  Are you still flying the airplane?  Good, now turn off all the other switches (still flying?), close the vents (fly), activate the fire extinguisher (fly), then get back to flying.  (It may sound like I’m trying to be obtuse here, but that’s really the thought process you’re supposed to follow — if you don’t fly the airplane, it won’t matter in the end what you do to extinguish the fire.)
  • Now, ignore your passenger.  Your job is to get the airplane on the ground so that you can help or call for help.  Unfortunately you don’t have a radio anymore — you lost it when you flipped the master switch — so once you’re within range of your best place to land, execute an emergency descent and get down as quickly as possible.

My new instructor often tells me that I’m flying too tensely, especially on the approach to landing — he remarks that I tighten my shoulders, look forward with intense concentration, make abrupt control movements, and maintain a death grip on the steering wheel.  This tenseness is what I think of as “left-brained flying”:  I am too cerebral, utilitarian, and immediate in my approach to maneuvering and in handling problems in the air; it gets the job done (I fly, I turn, I land, etc.) but doesn’t result in a very artistic (or comfortable) flight.  I am working to be more of a “right-brained pilot,” reacting to the flow of events instead of to single events, making small corrections to the control surfaces and waiting to see their effect on my flight path; and in general relaxing and enjoying the flight instead of obsessing over the flight parameters.

August 30, 2012

High-sodium passwords

Filed under: Opinions, Work — JLG @ 12:00 AM

Recently I’ve had some interesting conversations about passwords and password policies.

In general I despise password policies, or at least I despise the silly requirements made by most policies.  As I wrote in TCS’s recent Better Passwords, Usable Security white paper, “Why do you require your users’ passwords to look as though somebody sneezed on their keyboard? … Is your organization really better protected if you require your users to memorize a new 14-character password every two months? I argue no!”

In the BPUS white paper — which is behind a paywall, and I understand how that means it’s unlikely you’ll ever read it — I argue for three counterintuitive points:

  1. Password policies should serve your users’ needs, not vice versa.
  2. Passwords shouldn’t be your sole means of protection.
  3. Simpler passwords can be better than complex ones.

Beyond these points, it is also important to implement good mechanisms for storing and checking passwords.

Storing passwords: In June of this year there was a flurry of news articles about password leaks, including leaks at LinkedIn, eHarmony, and Last.fm.  The LinkedIn leak was especially bad because they didn’t “salt” their stored password hashes.  Salting works as follows:

  • An authentication system typically stores hashes of passwords, not cleartext passwords themselves.  Storing the hash originally made it hard for someone who stole the “password file” to actually obtain the passwords.
  • When you type in your password, the authentication system first takes a hash of what you typed in, then compares the hash with what’s stored in the password file.  If your hash matches the stored hash, you get access.
  • But attackers aren’t dumb.  An attacker can create (or obtain) a “rainbow table” containing reverse mappings of hash value to password.  For example, the SHA-1 hash of “Peter” is “64ca93f83bb29b51d8cbd6f3e6a8daff2e08d3ec”.  A rainbow table would map “64ca93f83bb29b51d8cbd6f3e6a8daff2e08d3ec” back to “Peter”.
  • Salt can foil this attack.  Salt is random characters that are appended to a password before the hash is taken.  So, using the salt “89h29348U#^^928h35”, your password “Peter” would be automatically extended to “Peter89h29348U#^^928h35”, which hashes to “b2d58c2785ada702df68d32744811b1cfccc5f2f”.  For large truly-random salts, it is unlikely that a rainbow table already exists for that salt — taking the reverse-mapping option off the table for the attacker.
  • Each user is assigned a different set of random characters for generating the salted hash, and these would be stored somewhere in your authentication system.  Nathan’s set of random characters would be different from Aaron’s.
  • A big win of salt is that it provides compromise independence.  Even if an attacker has both the password/hash file and the list of salts for each user, the attacker still has to run a brute-force attack against every cleartext password he wants to obtain.  (A minimal sketch of this flow appears just after this list.)
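Here is a minimal sketch of that salting flow, assuming Python and mirroring the example values above (a real system would also prefer a slower, dedicated password-hashing function over bare SHA-1):

    import hashlib
    import os

    def new_user_record(password: str) -> dict:
        """Create a per-user record holding a random salt and the salted hash."""
        salt = os.urandom(16).hex()          # a different random salt for every user
        digest = hashlib.sha1((password + salt).encode()).hexdigest()
        return {"salt": salt, "hash": digest}

    def check_password(record: dict, attempt: str) -> bool:
        """Re-hash the attempt with the stored salt and compare to the stored hash."""
        digest = hashlib.sha1((attempt + record["salt"]).encode()).hexdigest()
        return digest == record["hash"]

    # Nathan and Aaron happen to pick the same password, but their salts differ,
    # so their stored hashes differ; a precomputed rainbow table for unsalted
    # SHA-1 is useless against either record.
    users = {"nathan": new_user_record("Peter"), "aaron": new_user_record("Peter")}
    print(users["nathan"]["hash"] != users["aaron"]["hash"])   # True
    print(check_password(users["nathan"], "Peter"))            # True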

If you don’t salt your passwords, then anyone who can get access to the leaked file can likely reverse many of the passwords very easily.  This password recovery is an especially serious problem since many users reuse passwords across sites (I admit that I used to do this on certain sites until fairly recently).

Checking passwords: But it turns out that salt may no longer be solving the world’s password woes.  A colleague sent me a link to a post-LeakedIn interview arguing that cryptographic hashes are passé.  At first I felt that the interviewee was blowing smoke, and wrote the following observations to my colleague:

He confuses the notion of “strong salt” with “strong hash”.

(A) strong salt: you add a lot of random characters to your password before hashing…as a result the attacker has to run a brute force attack against the hash for a looooong time (many * small effort) in order to crack the password.

(B) strong hash: you use a computationally-intensive function to compute the hash…as a result the attacker has to run a brute force attack against the hash for a looooong time (few * large effort) in order to crack the password.

In both cases you get the desirable “looooong time” property.  You can also combine (A) and (B) for an even looooonger time (and in general looooonger is better, though looooong is often long enough).

There can be some problems with approach (B) — the biggest is non-portability of the hash (SHA-1 is supported by pretty much everything; bcrypt isn’t necessarily), and another is the potential for remote denial-of-service attacks against the authentication system (it will have a much higher workload because of the stronger hash algorithm, and if you’re LinkedIn you have to process a lot of authentications per second).
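To make the difference between (A) and (B) concrete, here is a hedged sketch using only Python’s standard library; the salt lengths and the PBKDF2 iteration count are illustrative assumptions, not recommendations:

    import hashlib
    import os

    def strong_salt_hash(password: str):
        """(A) Strong salt: a long random salt plus one pass of a fast hash.
        Precomputation becomes useless, but each individual guess stays cheap."""
        salt = os.urandom(32).hex()          # 64 hex characters of salt
        digest = hashlib.sha1((password + salt).encode()).hexdigest()
        return salt, digest

    def strong_hash(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
        """(B) Strong hash: PBKDF2 makes every single guess expensive for the attacker."""
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

    # Both force the attacker into a "looooong time": (A) via many cheap guesses,
    # (B) via fewer but expensive guesses; the two can also be combined.
    salt = os.urandom(16)
    stored = strong_hash("correct horse battery staple", salt)
    print(stored == strong_hash("correct horse battery staple", salt))   # True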

Conclusion: The problem with LinkedIn was the lack of salted passwords.

But I kept thinking about that article, and related posts, and eventually had to eat some of my words (though not all of them).  Looooong is often not long enough.

The best discussion I found on the topic was in the RISKS digest, especially this post and its two particularly interesting references.

My point (A) above may be losing validity due to the massive increases in cracking speed made possible by running crackers directly on GPUs.  Basically, if you rely on salt + password, the salt needs to be large and strong enough that brute-force attacks remain impractical, which raises the concern that some people aren’t using a large/strong enough salt.

Beyond salting there are always ways to increase the security of a password-based authentication system.  Instead of a stronger hash, you could require users to type 20-character passwords, or you could require two passwords, etc.

But back to my original point, longer or more complex passwords aren’t always the best choice.  That is especially the case when you have two-factor authentication (or other protection mechanisms) — as long as you use the two factors everywhere.  (For example, one company I recently talked with deployed a strong two-factor authentication system for VPN connections but mistakenly left single-factor password authentication enabled on their publicly-accessible webmail server.)

August 6, 2012

Why you should consider graduate school

Filed under: Opinions — JLG @ 11:43 PM

Are you interested in graduate school?  Here’s an hour’s worth of reasons you should consider going:

I gave this talk, “Why You Shouldn’t Write Off Higher Education, Young Grasshopper,” at the H.O.P.E. (Hackers On Planet Earth) Number 9 conference in New York City on July 13, 2012.

My abstract was:

This talk is addressed to that kid in the back who’s wearing a Utilikilt and a black t-shirt that says “I Hack Charities,” who asks, “Why would I bother going to grad school? I’m self-taught, college was a waste of my time, and universities only exist to train wage slaves.” John will draw from personal experience to describe how in graduate school:

1. You get to do what you love.
2. You get to make large structured contributions to the community.
3. You experience personal growth while surrounded by amazing people.
4. You’re part of a meritocracy and a close-knit social circle.
5. The door is open for interesting opportunities afterward.

Included will be a discussion on how hackers can get in.

This talk is one of a series of talks I’ve given about the post-secondary experience, especially as it relates to computer engineering and related disciplines:

  • Life after high school.  Since 1994 I’ve annually visited my Dad’s high school mathematics classes in Alabama to talk with his students about what job opportunities, college opportunities, and travel opportunities are available in the years to come.  I’ve also spoken with middle school students in Maryland and elementary school students in Pennsylvania.
  • Why you shouldn’t write off higher education, young grasshopper.  A talk oriented towards hackers but applicable to anyone considering graduate school (masters or doctoral level), especially in a technical field.  First delivered in 2012.
  • Through the looking glass: What’s next after a systems Ph.D.  A talk for doctoral students who are curious about what general opportunities are available in the years to come.  I’ve given this talk at Carnegie Mellon, at Johns Hopkins, and at the University of North Carolina.  First delivered in 2004.  (See also my computer systems Ph.D. job search page.)
  • What’s next after a systems Ph.D.: A six-month retrospective on corporate research.  A surprisingly bitter talk for doctoral students who are curious about jobs in corporate research laboratories (they’re great jobs but are in many ways not what I expected…don’t let this talk convince you not to take a corporate job).  I’ve given this talk at Carnegie Mellon.  First delivered in 2005.

If you are interested in having me talk with your students (or friends, grandnieces, etc.) on any of these (or related) topics, you are very welcome to contact me.  See my contact information on my home page.

Mad props to Brendan for (a) convincing me to submit my name to the H.O.P.E. speaker committee in the first place, (b) rewriting my abstract so it would be interesting to the speaker committee, and (c) helping shape my talk so it would be interesting to the H.O.P.E. attendees.  The talk was well attended, I think I provided some valuable information to interested folks, and a great set of them came up to talk one-on-one with me in the Q&A session afterward.

Thanks also to the many people who helped me prepare for H.O.P.E. by talking with me about their own perspectives on graduate school, especially Steve Bellovin, Randal Burns, Angelos Keromytis, Fabian Monrose, Drew Morin, Margo Seltzer, Andreas Terzis, and the anonymous students and industry colleagues who shared their experiences and/or plans.  I also benefited greatly from reading Mor Harchol-Balter’s advice on applying to Ph.D. programs in computer science.

August 3, 2012

Black Hat USA 2012 and DEF CON 20: The future of insecurity

Filed under: Reviews, Work — JLG @ 12:00 AM

I returned to the blistering dry heat of Las Vegas for a second year in a row to attend Black Hat and DEF CON.

The most interesting talk to me was a panel discussion at Black Hat that provided a future retrospective on the next 15 years of security.  Some of the topics discussed:

  • What is the role of the private sector in computer and network security?  One panelist noted that the U.S. Constitution specifies that the government is supposed “to provide for the common defense” — presumably including all domestic websites, commercial networks and intellectual property, and perhaps even personal computers — instead of only claiming to protect the .gov (DHS) and .mil (NSA) domains as they do today.  Another panelist suggested that, as in other sectors, the government should publish “standards” for network and communications security such that individual companies can control the implementation of those standards.
  • Social engineering and the advanced persistent threat.  At a BSidesLV party, someone I met asked whether I felt the APT was just a buzzword or whether it was real.  (My answer was “both”.)  Several speakers played with new views on the APT, such as “advanced persistent detection” (defenders shouldn’t be focused on vulnerabilities; rather they should look at an attacker’s motivation and objectives) and “advanced persistent fail” (real-world vulnerabilities survive long after mitigations are published).
  • How can you discover what evil lurks in the hearts of men and women?  One panelist speculated that we would see the rise of long-term [lifetime?] professional background checks for technological experts.  Current background checks for U.S. government national security positions use federal agents to search back 7-10 years.  I got the impression that the panelist foresees a rise in private-sector background checks (or checks against private databases of personal information) as a prerequisite for hiring decisions across the commercial sector.
  • How can you protect against a 120 gigabit distributed denial of service (DDoS) attack?  A panelist noted that a large recent DDoS hit 120 Gbit/sec, up around 4x from the largest DDoS from a year or two ago.  The panelist challenged the audience to think about how “old” attacks, which used to be easy to mitigate, become less so at global scale when the attacker leverages cloud infrastructure or botnet resources.
  • Shifting defense from a technical basis into a legal, policy, or contractual basis.  So far there hasn’t been an economically viable way to shift network security risks (or customer loss/damage liability) onto a third party — I believe many organizations would willingly exchange large sums of money to be released from these risks, but so far no third party seems willing to accept that bet.  The panel wondered whether (or when) the insurance industry will develop a workable model for computer security.
  • Incentives for computer security.  Following up on the point above, a panelist noted that it is difficult to incent users to follow good security practices.  The panelist asserted that E*TRADE gave away 10,000 security tokens but still had trouble convincing their users to use them as a second factor for authentication.  Another panelist pointed to incentives in the medical insurance industry — “take care of your body” and enjoy lower premiums — and wondered how to provide similar actionable incentives to take care of your network.
  • Maximizing your security return-on-investment (ROI).  A panelist asserted that the best ROI is money spent on your employees:  Developing internal experts in enterprise risk management, forensics and incident response skills, etc.
  • Assume you will be breached.  I’ve also been preaching that message: Don’t just protect, but also detect and remediate.  A panelist suggested you focus on understanding your network and your systems, especially with respect to configuration management and change management.

When asked to summarize the next 15 years of security in five words or fewer, the panelists responded:

  1. Loss of control.
  2. Incident response and cleaning up.
  3. Human factors.

Beyond the panel discussion, some of the work that caught my attention included:

  • Kinectasploit.  Jeff Bryner presented my favorite work of the weekend, on “linking the Kinect with Metasploit [and 19 other security tools] in a 3D, first person shooter environment.”  I have seen the future of human-computer interaction for security analysts — it is Tom Cruise in Minority Report — and the work on Kinectasploit is a big step toward getting us there.
  • Near field communications insecurity.  Charlie Miller (“An analysis of the Near Field Communication [NFC] attack surface”) explained that “through NFC, using technologies like Android Beam or NDEF content sharing, one can make some phones parse images, videos, contacts, office documents, even open up web pages in the browser, all without user interaction. In some cases, it is even possible to completely take over control of the phone via NFC, including stealing photos, contacts, even sending text messages and making phone calls” and showed a live demo of using an NFC exploit to take remote control of a phone.
  • Operating systems insecurity.  Rebecca Shapiro and Sergey Bratus from Dartmouth made the fascinating observation that the ELF (executable and linker format) linker/loader is itself a Turing-complete computer: “[we demonstrate] how specially crafted ELF relocation and symbol table entries can act as instructions to coerce the linker/loader into performing arbitrary computation. We will present a proof-of-concept method of constructing ELF metadata to implement [Turing-complete] language primitives as well as demonstrate a method of crafting relocation entries to insert a backdoor into an executable.”  The authors’ earlier white paper provides a good introduction to what they call “programming weird machines”.
  • Wired communications insecurity.  Collin Mulliner (“Probing mobile operator networks”) probed public IPv4 address blocks known to be used by mobile carriers and found a variety of non-phone devices, such as smart meters, with a variety of enabled services with obtainable passwords.
  • Governmental infrastructure insecurity.  My next-to-favorite work was “How to hack all the transport networks of a country,” presented by Alberto García Illera, where he described a combination of physical and electronic penetration vectors used “to get free tickets, getting control of the ticket machines, getting clients [credit card] dumps, hooking internal processes to get the client info, pivoting between machines, encapsulating all the traffic to bypass the firewalls” of the rail network in his home country.
  • Aviation communications insecurity.  There were three talks on aviation insecurity, all focused on radio transmissions or telemetry (the new ADS-B standard for automated position reporting, to be deployed over the next twenty years) sent from or to an aircraft.

Last year I tried to attend as many talks as I could but left Vegas disappointed — I found that there is a low signal-to-noise ratio when it comes to well-executed, well-presented work at these venues.  The “takeaway value” of the work presented is nowhere near as rigorous or useful as that at research/academic conferences like CCS or NDSS.  But it turns out that’s okay; these venues are much more about the vibe, and the sharing, and the inspiration (you too can hack!), than about peer-reviewed or archival-quality research.  DEF CON in particular provides a pretty fair immersive simulation of living inside a Neal Stephenson or Charlie Stross novel.

This year I spent more time wandering the vendor floor (Black Hat) and acquiring skills in the lockpick village (DEF CON), while still attending the most-interesting-looking talks and shows.  By lowering my “takeaway value” expectations a bit I ended up enjoying my week in Vegas much more than expected.

July 6, 2012

Leveling up in the American Dream MMORPG

Filed under: Homeownership — JLG @ 10:55 PM

We bought a house!  Or, as friend Brendan put it:

Grats on leveling up in the American Dream MMORPG!  Your character now has an improved credit score.

We are first-time homebuyers, so everything about the house feels surreal—from sitting in the yard (“this is our house?”) to reading mortgage documents that assert “Borrower has promised to…pay the debt in full not later than July 1, 2042.”  2042?

We moved to Boston in October 2011.  Our plan was to rent for a couple of years, decide whether we liked Boston, then maybe dip our toes in the housing market.  The plans changed when our landlord notified us that he would likely raise the rent when our lease renewed in Fall 2012.  Since we indeed like Boston, we decided to dip our toes in ahead of schedule.  And wow did things move fast at that point:

  • April 4, 2012: Started researching potential neighborhoods in Boston, plus looking at online listings to get a feel for prices and availability.
  • April 17: Met with a realtor, discussed what we were looking for, made an appointment to see houses two days later.
  • April 19: Toured eight houses, fell in love with #8, did some quick research, put in an offer, received a counteroffer, sent a counter-counteroffer. (Paid $1,000 deposit.)
  • April 20: Counter-counteroffer accepted by the seller!  At this point we’ve “bought” the house, so long as we are able to get a mortgage commitment, the inspection is satisfactory, and the bank’s appraisal is at least equal to the agreed sale price.
  • April 23: Mortgage application submitted.
  • April 24: Landlord (from whom we were renting) began looking for new tenants, in hopes of allowing us to break our lease early.
  • April 27: Home inspection and radon test completed. ($525)  The inspection revealed concerns about the roof, leading the seller to offer a $2,500 closing credit towards roof repairs.
  • May 1: Mortgage rate locked in with the bank.
  • May 3: Bank’s appraisal of the property completed. ($350)
  • May 4: Purchase and sale (P&S) agreement negotiated and signed. ($20,800 deposit.)
  • May 5: Mortgage underwriting paperwork submitted.
  • May 15: Moving company scheduled.
  • May 23: Mortgage commitment received from the bank, one day before deadline.
  • June 1: Homeowner’s insurance policy obtained.
  • June 5: Landlord signed lease with new tenants to start July 1, saving us $5,200 in rent!
  • June 8: Wire transfer to closing agent’s IOLTA of funds needed to close. ($74,000 deposit plus $20 wire fee.)
  • June 15: Closing.  At this point we took possession of the house.
  • June 20: Locksmith hired to change keys and replace some locks. ($190)
  • June 22: Moved into the house.
  • August 1: First mortgage payment due.

One of the secondary joys of house hunting was the chance to “geek out” and dig deeply into understanding how things like property assessments, securitizable mortgage loans, and purchase and sale agreements work.  For example:

It costs more to buy a house than the price of the house.

On the closing date (the date we signed all the documents and got the keys to the house, plus the date on which our deed to the house was recorded with the Suffolk County Registry of Deeds) we paid in full both the down payment ($87,200) plus a variety of miscellaneous loan-related fees, escrow and tax prepayments, and adjustments ($10,395.77).  Even if we had paid cash for the house we still would have paid at minimum $2,500 for legal representation, title insurance, inspection, appraisal, a homestead declaration, and government recording fees.

Of course, there’s also the interest paid on the mortgage over the life of the loan (we chose a 30-year fixed at 3.625% interest rate).  If we stick to our loan schedule—if we don’t refinance, make any principal prepayments, or sell the house—we will pay $223,853.59 in interest on the $348,800 loan.  (I’m not complaining, though.  Interest rates are at a historical low right now.  Five years ago the interest rate was 6.5%, which would have made the total interest paid $444,870.41.  However, if rates were that high then housing prices would probably be lower than they are now, in which case we could have gotten a smaller loan and owed less interest.)

Our loan requires us to make payments of about $225 monthly into an escrow account to cover property taxes and homeowner’s insurance.  (You may apply for a loan that doesn’t have an escrow requirement, but such loans are more expensive because of the risk to the lender that you won’t make the required payments.)  That amount, plus $1,590.71 monthly towards loan repayment, yields a housing cost of about $1,815/month—meaning that payments on a 4-bedroom house are less expensive than rent for 2-bedroom apartments where I lived in New York City, Arlington (VA), and Boston.  But mortgage interest is federally tax deductible:  In 2013 we will pay $12,436.27 in mortgage interest (netting a $290/month reduction in federal tax), so our effective monthly housing payment next year could be as low as $1,525—significantly cheaper than rent on equivalent detached single family houses in those locations.

…Of course, amortizing the roof (and other) repair/replacement costs will increase our effective monthly housing payment for a while.  As will amortizing the $10,395.77 mentioned above.
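For anyone who wants to check that arithmetic, here is a small sketch of how the monthly numbers above fit together; the 28% marginal federal tax rate used for the deduction estimate is my own assumption for illustration, not a figure from our loan paperwork:

    def monthly_payment(principal, annual_rate, years=30):
        """Standard fixed-rate amortization formula."""
        r = annual_rate / 12
        n = years * 12
        return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

    loan = 348_800
    payment = monthly_payment(loan, 0.03625)     # about $1,590.71 toward loan repayment
    escrow = 225                                 # property taxes + homeowner's insurance
    gross_monthly = payment + escrow             # about $1,815 per month

    first_year_interest = 12_436.27              # interest portion from the loan schedule
    assumed_marginal_rate = 0.28                 # assumption, for illustration only
    tax_savings_per_month = first_year_interest * assumed_marginal_rate / 12   # about $290
    print(round(gross_monthly - tax_savings_per_month, 2))   # roughly $1,525 effective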

Speaking of finance:  By far the most informative resource I found about home buying is The Mortgage Professor.  I recommend you start by reading the “Questions by Topic” on that site; before you know it you’ll have spent days reading almost every article on the entire site.  The site also solicits new questions from readers; I sent the professor (Prof. Jack Guttentag) a query about the effectiveness of principal prepayment and received an informative answer just four hours later.  Another useful site for understanding the mortgage process is Bankrate.com.

Real estate agents make 2.5% commission on each sale.

That is, the agent representing the buyer makes 2.5% and the agent representing the seller makes 2.5%.  So our agents walked away with $10,837.50, and the seller’s agent walked away with $10,837.50.  (In other parts of the country, 3% is common.)

Several websites (especially the interesting but self-serving REALTOR.com blog) claim that you as a buyer shouldn’t worry about agent fees because the seller pays the real estate agents’ commissions.  That’s technically true, but it’s also technically true that I had to eat the cost regardless—as the seller likely hiked up his asking price by 5% to pay the real estate commission.

How can a percentage-based sales commission possibly be construed as ethical by the real estate industry or legitimate by government regulations?  Especially given that one of the agents’ primary tasks is to negotiate the sales price—a price in which the agents have a conflicting compensatory interest?  And, perhaps controversially, assuming that people purchasing $100,000 houses deserve equal agent representation/engagement/effort as people purchasing million dollar homes?

Note that I’m not saying that real estate agents don’t work hard to earn their pay.  Case in point: my first email exchange with my realtors took place after 8pm; later in the process I received messages from them as late as 11:03pm and as early as 7:07am.  Our agents provided excellent advice and guidance throughout the whole process, and we ended up with a great house; I would not hesitate to use them again.

Just as I can’t imagine driving a motorcycle without first taking the MSF RiderCourse, I can’t imagine buying or selling a house (at least for the first time) without agent representation.  Agents certainly deserve professional compensation that is commensurate with the effort and time they expend on behalf of their clients.

However, the other major parties involved in the purchase—the mortgage loan originator and the lawyer serving as the closing agent—both received fixed compensation for their services.  Why are real estate agents paid on commission?

Points on a mortgage—I’d heard of them but had never sat down to figure out what’s the point.

Points are a wager you place with the lender:  If you pay points you’re betting that you’re going to keep the mortgage for longer than 3 years.  The lender makes money either way, but makes more money if you bet wrong.

One point is 1% of the price of the mortgage.  Our mortgage is for $348,800, so one point is $3,488.

When you “pay points,” you are essentially giving extra money at closing to the entity loaning you the money.  As described above, we paid $10,395.77 at closing in miscellaneous fees, prepayments, and adjustments.  Part of that amount was $2,441.60 we paid (equal to 0.7 point) to reduce our interest rate by 0.25%.

The advantages of paying points are:

    1. By paying points you reduce your interest rate.  In our case we could get a 30-year fixed mortgage at a 3.875% interest rate, or we could pay 0.7 point ($2,442) for a 3.625% interest rate, or we could pay 1.3 points ($4,482) for a 3.5% interest rate.
    2. By paying points you also reduce your monthly payment.  In our case we could make loan service payments of $1,640 (at zero points), or $1,591 (0.7 point), or $1,566 (1.3 points).
    3. Points you pay may be tax deductible. Our $2,441.60 payment of points effectively cost only $1,757.95 due to the reduction of our federal taxes.

The disadvantage is:

The money you spend on points doesn’t increase your equity in the house; it’s basically free money for the lender.  It takes 3 to 5 years to “break even” when you pay points [for the reduction in monthly payments to catch up to the amount spent on points]—for example, if we were to pay three points (~$10K) this year, and if we sell the house next year [or if we refinance next year at a lower interest rate] then we’ve effectively flushed $9K down the toilet.

With the 0.7 point we paid, if we sell or refinance the house in three years it’s a wash.  Beyond 3 years we save money; if we keep the loan for 30 years then we will save $17,812.50 in interest payments.
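Here is a rough sketch of that break-even arithmetic using the figures above; the exact crossover month depends on rounding and ignores the time value of the money spent on points:

    def monthly_payment(principal, annual_rate, years=30):
        # standard fixed-rate amortization formula
        r = annual_rate / 12
        n = years * 12
        return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

    loan = 348_800
    zero_points = monthly_payment(loan, 0.03875)    # about $1,640/month with no points
    with_points = monthly_payment(loan, 0.03625)    # about $1,591/month after paying 0.7 point
    cost_of_points = 2_441.60
    monthly_savings = zero_points - with_points     # about $49/month

    print(round(cost_of_points / monthly_savings))  # about 49 months to break even
    print(round(monthly_savings * 360, 2))          # about $17,800 less interest over 30 years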

The legal definition of a point is “prepaid interest.”  (That’s why it lowers the interest rate—you’ve already given the lender a guaranteed return on investment.)

There is also a concept of negative points where the lender pays some of your closing costs in exchange for a higher interest rate on the loan (and therefore higher monthly payments).  According to our loan originator at the bank, one scenario where you might agree to negative points is if you believe interest rates will fall in the near future—such that you will be able to refinance soon, at a lower rate, without “flushing” any money as described above.

Real estate case law can be fascinating.

See for example this case about adverse possession, and many of the other articles at that site.

Overall we found it really helpful to solicit advice from our friends who are homeowners.  Some of the best advice:

  • If you think you might buy a house in the next couple of years, go ahead and get your financial ducks in a row now.  I didn’t do this; instead I made a couple of rookie mistakes that could have ended up costing us extra money.  The first mistake was that I (unnecessarily) asked one of my credit card companies to increase my credit limit last fall, which caused an Inquiry to show up on my credit report, which reduced my credit score.  Lower credit scores can result in higher interest rates for mortgage loans.  The second mistake was that I kept some of our down-payment money in an “aggressive strategy” mutual fund until after our offer was accepted.  As a result I had to sell those fund shares immediately, at whatever the current price was, instead of (for example) setting up a systematic withdrawal plan over several months or trying to time the market for a favorable sale.  Also as a result we had to provide extra documentation to the mortgage underwriter to explain why all of this extra money suddenly showed up in our checking account right before we submitted our loan application.
  • Pay close attention to the recommendations and referrals made by your real estate agents.  We expected the house-hunting & offer & sale process to be long and arduous.  It turned out to be very easy because our agents did a lot of the “heavy lifting” for us—for example, instead of just advising us to search building permits before making an offer (to ensure all previous work was permitted and performed to code), our agents performed the search for us while showing us how we could repeat the search when we needed to.  We found our agents by lucky happenstance: They were the listing agents for one of the properties we found listed online, and when we contacted them (by that point the property was “under agreement” and no longer available) we were so impressed by their prompt and helpful responses—this was the after-8pm email exchange—that we asked if they also represented buyers.  Yes.  Soon thereafter we found ourselves putting in the offer on this house.
  • Once you have a signed offer, you don’t have to worry about the seller backing out.  For several weeks I expected someone to call us up with the grim news that the seller had decided that he didn’t want to sell the house after all.  It turns out I needn’t have worried; our lawyer explained that once an offer was signed by both parties, the buyer can sue for specific performance if the seller tries to back out of the deal.  There were also a few deadlines that felt unnecessarily nail-biting; for example, we only had seven days to inspect the house, but the inspectors we tried were all booked more than 7 days in advance.  (In the end our recommended inspector had someone else cancel on him, so we were able to squeeze the inspection in just in time.)  Also our mortgage underwriter waited until the last moment to approve our loan even though we’d submitted all required documents (multiple times for some documents) well before the deadline.
  • “Don’t get freaked out by the home inspection,” advises my colleague Jeremy.  He notes that it’s the inspector’s job to nitpick every small thing they find; during the inspection you’ll likely become afraid that the whole house is going to collapse around you any moment.  Listen for big-picture items and for the inspector’s overall assessment of the house at the end of the inspection.  Our inspector asked if we wanted to perform a $50 radon test; we weren’t sure at first whether we should, but in the end we were glad we chose to run the test.  (2 pCi/L: nothing to worry about.)
  • When you buy a condominium, understand that you’re entering into a business contract with the other condo owners—make very sure they’re the kind of people with whom you’re comfortable being in business.  Prior to this year I’d never wondered how condo associations work, although we got a taste of it last year watching our landlord argue with his condo association over who should pay for a plumber to track down a mystery leak in the building.  (After three plumbers in as many months, it was discovered that the overflow valve on our bathtub had rusted through…so our landlord ate the cost.)  As we looked at condo listings in our neighborhood I came across several articles warning about dysfunctional condo associations or despotic homeowners associations.  In the end we were happy to have found a place subject to neither type of association.
  • Understand your basis and other tax implications.  Basically, read IRS publications 530, 936, and 523, and possibly some other ones I haven’t quite gotten around to reading.  I was surprised to find some key differences between federal and state laws; for example, there is no deduction for mortgage interest on Massachusetts state taxes.
  • Don’t sweat the small stuff.  Our closing agent made a $15.67 error in the seller’s favor.  (Earlier this year the seller had paid the quarterly real estate taxes covering April 1 through June 30, so part of our closing costs on June 15 were an apportioned refund to the seller for June 15-30.  The closing agent miscalculated this amount.)  But so what?  The seller negotiated with us in good faith, he maintained the house in excellent condition, he gave us a few extra items for free (including a $500 portable air conditioner), and he even left us a nice bottle of wine for us to celebrate our new house!  We got a good deal on a house we already adore; we couldn’t be happier.

June 21, 2012

USENIX ATC 2012

Filed under: Reviews — JLG @ 11:47 PM

Last week I attended the USENIX Annual Technical Conference and its affiliated workshops, all here in sunny Boston, Massachusetts.  Here were my takeaways from the conference:

  • Have increases in processing speed and resources on individual computer systems changed the way that parallelizable problems should be split in distributed systems?  In a paper on the “seven deadly sins of cloud computing research”, Schwarzkopf et al. note (for sin #1) that “[even] If we satisfy ourselves that parallel processing is indeed necessary or beneficial, it is also worth considering whether distribution over multiple machines is required.  As Rowstron et al. recently pointed out, the rapid increase in RAM available in a single machine combined with large numbers of CPU cores per machine can make it economical and worthwhile to exploit local, rather than distributed, parallelism.”  The authors acknowledge that there is an advantage to distributed computation, but they note that “Modern many-core machines…can easily apply 48 or more CPUs to processing a 100+GB dataset entirely in memory, which already covers many practical use cases.”  I enjoyed this observation and I hope it inspires (additional) research on adaptive systems that can automatically partition a large cloud-oriented workload into a “local” component, sized appropriately for the resources on an individual node, and a “distributed” component for parallelization beyond locally-available resources.  Would it be worth pursuing the development of such an adaptive system, especially in the face of unknown workloads and heterogeneous resource availability on different nodes—or is the problem intractable relative to the benefits of local multicore processing?
  • Just because you have a bunch of cores doesn’t mean you should try to utilize all of them all of the time.  Lozi et al. identify two locking-related problems faced by multithreaded applications:  First, when locks are heavily contended (many threads trying to acquire a single lock) the overall performance suffers; second, each thread incurs cache misses when executing over the critical section protected by the lock.  Their interesting solution involves pinning that critical section onto a dedicated core that does nothing but run that critical section.  (The authors cite related work that does similar pinning of critical sections onto dedicated hardware; I found it especially useful to read Section 1 of Lozi’s paper for its description of related work.)  The authors further created a tool that modifies the C code of legacy applications “to replace lock acquisitions by optimized remote procedure calls to a dedicated server core.”  One of my favorite technical/research questions over the past decade is, given ever-increasing numbers of cores, “what will we do with all these cores?”—and I’ve truly enjoyed that the answer is often that using cores very inefficiently is the best thing you can do for overall application/system performance.  Ananthanarayanan et al. presented a similar inefficiency argument for small clustered jobs executing in the cloud: “Building on the observation that clusters are underutilized, we take speculation to its logical extreme—run full clones of jobs to mitigate the effect of outliers.”  One dataset the authors studied had “outlier tasks that are 12 times slower than that job’s median task”; their simulation results showed a 47% improvement in completion time for small jobs at a cost of just 3% additional resources.

Figure 5 from “netmap: a novel framework for fast packet I/O” [Rizzo 2012]

  • There are still (always?) plenty of opportunities for low-level optimization.  Luigi Rizzo presented best-paper-award-winning work on eliminating OS and driver overheads in order to send and receive packets at true 10 Gbps wire speed.  I love figures like the one shown above this paragraph, where a system both (a) blows away current approaches and (b) clearly maxes out the available resources with a minimum of fuss.  I found Figure 2 from this paper to be especially interesting:  The author measured the path and execution times for a network transmit all the way down from the sendto() system call to the ixgbe_xmit() function inside the network card driver, then used these measurements to determine where to focus his optimization efforts—those being to “[remove] per-packet dynamic memory allocations, removed by preallocating resources; [batch] system call overheads, amortized over large batches; and [eliminate] memory copies, eliminated by sharing buffers and metadata between kernel and userspace, while still protecting access to device registers and other kernel memory areas.”  None of these techniques are independently novel, but the author claims novelty in that his approach “is tightly integrated with existing operating system primitives, not tied to specific hardware, and easy to use and maintain.”  Given his previous success creating dummynet (now part of FreeBSD), I am curious to see whether his modified architecture makes its way into the official FreeBSD and Linux codebases.
  • Capturing packets is apparently difficult, despite the years of practice we’ve had in doing so.  Two papers addressed aspects of efficiency and efficacy for packet capture:  Taylor et al. address the problem that “modern enterprise networks can easily produce terabytes of packet-level data each day, which makes efficient analysis of payload information difficult or impossible even in the best circumstances.”  Taylor’s solution involves aggregating and storing application-level information (DNS and HTTP session information) instead of storing the raw packets and later post-processing them.  Section 5 of the paper presents an interesting case study of how the authors used their tool to identify potentially compromised hosts on the University of North Carolina at Chapel Hill’s computer network.  Papadogiannakis et al. assert that “intrusion detection systems are susceptible to overloads, which can be induced by traffic spikes or algorithmic singularities triggered by carefully crafted malicious packets” and designed a packet pre-processing system that “gracefully responds to overload conditions by storing selected packets in secondary storage for later processing”.
  • /bin/true can fail!  Miller et al. described their system-call-wrapping software that introduces “gremlins” as part of automated and deterministic software testing.  A gremlin causes system calls to return legitimate but unexpected responses, such as having the read() system call return only one byte at a time with each repeat call to read().  (The authors note that “This may happen if an interrupt occurs or if a slow device does not have all requested data immediately available.”)  During the authors’ testing they discovered that “the Linux dynamic loader [glibc] failed if it could not read an executable or shared library’s ELF header in one read().”  As a result, any program requiring dynamic libraries to load—including, apparently, /bin/true—fails whenever system calls are wrapped in this manner.  This fascinating result calls to mind Postel’s Law.  (A minimal sketch of the defensive read-in-a-loop pattern appears below.)
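The defensive pattern that this gremlin exposes is worth spelling out.  Here is a minimal sketch, in Python purely for illustration, of reading an exact byte count even when each underlying read() legally returns fewer bytes than requested:

    import os

    def read_exactly(fd, count):
        """Keep calling read() until count bytes arrive or EOF; never assume one call suffices."""
        chunks = []
        remaining = count
        while remaining > 0:
            chunk = os.read(fd, remaining)   # may legally return fewer bytes than requested
            if not chunk:                    # EOF before we collected everything
                raise EOFError("expected %d bytes, got %d" % (count, count - remaining))
            chunks.append(chunk)
            remaining -= len(chunk)
        return b"".join(chunks)

    # For example, an ELF64 file header is 64 bytes:
    # header = read_exactly(fd, 64)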

Table 1 from “Software Techniques for Avoiding Hardware Virtualization Exits” [Agesen 2012]

  • Switching context to and from the hypervisor is still (somewhat) expensive.  The table above shows the “exit latency” for hardware virtualization processor extensions, which I believe Agesen et al. measured as the round-trip time to trap from guest mode, to the virtual machine monitor (VMM), then immediately back to the guest.  I was surprised that the numbers are still so high for recent-generation architectures.  Worse, the authors verbally speculated that, given the plateau with the Westmere and Sandy Bridge architectures, we may not see much further reduction in exit latency for future processor generations.  To address this high latency, Agesen described a scheme where the VMM attempts to find clusters of instructions-that-will-exit, and where the VMM handles such clusters collectively using only a single exit (instead of returning to guest mode after each handled instruction).  One example of such a cluster: “Most operating systems, including Windows and Linux, [update] 64 bit PTEs using two back-to-back 32 bit writes to memory….This results in two costly exits”; Agesen’s scheme collapses these into a single exit.  The authors are also able to handle complicated cases such as control flow logic between clustered instructions that exit.  I like a lot of work on virtual machines, but I especially liked this work for its elegance and immediate practicality.
  • JLG’s favorite work: We finally have a usable interactive SSH terminal for cellular or other high-latency Internet connections.  Winstein et al. presented long-overdue work on making it easy to interact with remote computers using a cell phone.  The authors’ key innovation was to have the client (on the phone) do local echo, so that you see what you typed without having to wait 4 seconds (or 15 seconds or more) for the dang network to echo your characters back to you.  The authors further defined a communications protocol that preserves session state even over changes in the network (IP) address, meaning that your application will no longer freeze when your phone switches from a WiFi network to a cellular network.  Speaking metaphorically, this work is akin to diving into a pool of cool refreshing aloe cream after years of wading through hot and humid mosquito-laden swamps.  (As you can tell, I find high-latency networks troublesome and annoying.)  The authors themselves were surprised at how much interest the community has shown in their work: “Mosh is free software, available from http://mosh.mit.edu. It was downloaded more than 15,000 times in the first week of its release.”
  • Several papers addressed the cost savings available to clients willing to micromanage their cloud computing allocations or reservations; dynamic reduction of resources is important too.  Among the more memorable:  Ou et al. describe how to use microbenchmark and application benchmarks to identify hardware heterogeneity (and resulting performance variation) within the same instance types in the Amazon Elastic Compute Cloud.  “By selecting better-performing instances to complete the same task, end-users of Amazon EC2 platform can achieve up to 30% cost saving.”  I also learned from Ou’s co-author Prof. Antti Ylä-Jääski that it is permissible to chain umlauted letters; however, I’ve been trying to work out exactly how to pronounce them.  Zhu et al. challenge the assumption that more cache is always better; they show that “given the skewed popularity distribution for data accesses, significant cost savings can be obtained by scaling the caching tier under dynamic load patterns”—for example, a “4x drop in load can result in 90% savings” in the amount of cache you need to provision to handle the reduced load.

In addition to the USENIX ATC I also attended portions of several workshops held coincident with the main conference, including HotStorage (4th workshop on hot topics in storage and file systems), HotCloud (4th workshop on hot topics in cloud computing), Cyberlaw (workshop on hot topics in cyberlaw), and WebApps (3rd conference on web application development).

At HotStorage I had the pleasure of watching Da Zheng, a student I work with at Johns Hopkins, present his latest work on “A Parallel Page Cache: IOPS and Caching for Multicore Systems”.

Since 2010 USENIX has held a “federated conferences week” where workshops like these take place in parallel with the main conference & where your admission fee to the main conference covers your admission to the workshops.  This idea works well, especially since USENIX in its modern incarnations has only a single track of talks.  Unfortunately, however, I came to feel that the “hot” workshops are no longer very hot—I was surprised at the dearth of risky, cutting-edge, groundbreaking, not-fully-fleshed-out ideas presented and the resulting lack of heated controversial (interesting) discussion.  (I did not attend the full workshops, however, so it could simply be that I missed the exciting papers and the stimulating discussions.)

I also sat in on the USENIX Association’s annual membership meeting.  Attendance at this year’s conference was well down from what I remember of the USENIX conference I attended a decade ago (USENIX conferences have been running since the 1970s), and I got the sense that USENIX is scrambling to figure out how to stay relevant, funded/sponsored, attractive to potential authors, and interesting to potential attendees.  The newly-elected USENIX board asked for community feedback so I sent the following thoughts:

Hi Usenix board,

I appreciate the opportunity to participate in your annual meeting earlier today. I wasn’t even aware that the meeting was taking place until one of the board members walked around the hallways trying to get people to attend. I otherwise would have thought that the “Usenix annual meeting” was some boring event like voting for FY13 officers or something. Maybe I missed a memo? Perhaps next year mention the annual meeting in the email blast you sent out beforehand. (When I was at ShmooCon earlier this year the conference organizer actually ran a well-attended session, as part of the conference, about how they run a conference — logistics, costs, etc. — and in that session encouraged the same kind of feedback that you all solicited during the meeting.)

One of the points I brought up in the meeting is that Usenix should place itself at the forefront of cool technology. If there is something awesome that people are using, Usenix could be one of the first to adopt it — both in terms of using the technology during a session (live simulcast, electronic audience interaction, etc.) and in terms of the actual technology demonstrations at the conference. Perhaps you could have a “release event” where people agreed to escrow their latest cool online/network/storage/cloud/whatever tools for a few months and then release them all in bulk at the conference? You could make it into a media/publicity event and give awards for the most Usenix-spirity tools.

My wife suggests that you hold regional gatherings simultaneously as part of a large distributed conference. So perhaps it’s difficult/expensive for me to fly out to WA for Usenix Security, but maybe it’d be easy for me to attend the “northeast regional Usenix Security conclave” where all the sessions from WA are simulcast to MA (or NY or DC) — e.g. hosted at a university somewhere. Even better would be to have a few speakers present their work *here* and have that simulcast to the main conference in WA. [Some USENIX members] would complain about this approach, but think of all the news lately about the massive online university courses — perhaps Usenix could be a leader/trendsetter/definer in massive online conferences.

If I can figure out how, I’d like to be involved in bridging the gap between the “rigorous” academic conference community and the “practical” community conference community. I’m giving a talk next month at a community conference, so I will try to lay the groundwork there by describing to the audience what conferences like Usenix are like. You know, YOU could try to speak at these conferences — e.g. [the USENIX president] could arrange to give an invited talk at OScon, talking about how ordinary hacker/programmer types can submit interesting work to Usenix and eventually see their name at the top of a real conference paper.

Anyway, I appreciate the challenges you face in staying relevant in the face of change. I agree with the lady from the back of the room who expressed that it doesn’t matter what Usenix was 25 years ago, what’s important is what Usenix is now and in the next few years. Good luck!

Within hours the USENIX president replied:

Thanks for your thoughtful mail and especially your desire to be part of the solution — that’s what we love about our members!

I think we got a lot of good feedback last night and there are some great ideas here — the idea of simulcasting to specific remote locales where we have regional events is intriguing to me. You’ve given the board and the staff much to think about and I hope we don’t let you down!

May 19, 2012

Solo nobody can hear you

Filed under: Aviation — JLG @ 11:15 PM

JLG's first solo flight, Lawrence Municipal Airport, May 19, 2012

If you weren’t in sixth grade choir you might not have heard the joke:  “May I sing solo?”  “Sure, you may sing solo…solo nobody can hear you!”

I flew my first solo flight today!

My pilot friends made a big deal of the days leading up to my solo.  One friend said: “Mary told me you are about to solo.  Congrats.  Next to your wedding day and births, it is the best feeling in the world.”  Another: “Good luck Wednesday and with your solo.  Let me know when you solo. That’s a big deal.  Nothing like it.”

At first I wondered, “Is this really such a big deal?”  Before today, the upcoming solo flight just didn’t feel like it would be extraordinary.  One of the things I like about flying is that it’s not stressful, difficult, or onerous — I feel well prepared for each flight (a combination of the flying lessons, ground instruction, and my instructor’s preflight briefing) meaning that I feel comfortable with my decisions aloft and confident in my ability each time I emplane.  So I figured today would simply feel like another fun day of flying.

But when the time came, it was a big deal.  As I taxied N3509Q (“zero niner kaybeck” on radio calls) away from the hangar door, away from my instructor standing in the door waving, a wave of emotions surged through me and my heart started racing.  It was exciting!  It was thrilling!  I felt the pride of accomplishment, especially since it reflected my efforts over the first three-and-a-half months of lessons.  It was very strange being alone in a moving cockpit!  I wasn’t scared, but I knew I’d feel like a chump if I landed in the Merrimack River or pulled a ground loop or crushed the nosewheel and firewall or basically anything that would result in a timid call to the insurance company, so I wanted to avoid those outcomes.

In addition to the strangeness of being alone in the cockpit, it was also striking how differently the airplane handled without the extra weight of my 240-pound instructor — in short, the plane went up faster and came down more slowly.

It was a beautiful day to fly:  clear visibility, light winds, high pressure. My instructor prefers to have his students fly their first solo flight at Lawrence Municipal Airport (LWM), about 10 minutes northeast of Hanscom Field, to avoid the usual heavy weekend traffic flying the pattern at Hanscom.  (For later solo flights we’ll stay at Hanscom to practice solo flight under more hectic circumstances.)  So I pulled the Boston terminal area chart and LWM airport diagram out of my flight bag and flew us up to Lawrence at 2,500 feet.

One of the amazing things about flying is just how quickly you can get from point A to point B when you’re flying at 120 MPH in a straight line — the 45-minute drive from BED to LWM takes 10 minutes by air.  Of course, those 10 minutes don’t include the time to plan the flight, or to perform the preflight inspection, or to taxi from the parking ramp to the runway and test (runup) the engine, or the hours of delay you might have to wait to take off if the weather is uncooperative.
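A quick sanity check on that figure, using only the numbers in this post: $d = v \times t = 120\ \text{MPH} \times \tfrac{10}{60}\ \text{h} = 20\ \text{miles}$, so the hop works out to only about 20 miles of straight-line flying, which is why the airborne portion is so short compared with the drive.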

The first three solo flight lessons at my school are “supervised solos”: 3 or more touch-and-go landings with an instructor, then 3 landings solo.  The goal of these mixed dual/solo lessons is to bolster the student’s confidence (“I was just able to land with my instructor, so I should be able to land fine without my instructor”).  After taking these three supervised-solo flights I’ll be able to take a plane out anytime I want in order to practice landings at Hanscom, and later to practice maneuvers in the practice area.

My three pre-solo landings with my instructor were great; taken together they were easily my best landings to date.  Then my three solo landings were fine, not textbook but still good:

  • The first one was a little flat.  During the pre-touchdown flare you’re supposed to pitch back more and more (lifting the nosewheel higher in the air relative to the main wheels) before the mains touch the ground.  I didn’t pitch back far enough on this landing.  It wasn’t a bad landing — just not a textbook landing — and importantly I didn’t strike the nosewheel first, which would have been very bad.
  • The second one was a little fast.  When you cross the runway threshold in the Cessna 172 you should be flying about 65-70 knots.  I was probably going 75-80 knots on this landing, so I floated down the runway for a while, bleeding off speed, until the wheels finally sank down.
  • The third one was a little high.  As I was taking off for the third circuit the tower asked me to fly a right pattern because there was a banner-tow plane circling to the west of the runway.  While making the right turns I misjudged the base turn and turned too early, meaning I still hadn’t descended enough by the time I had to turn final.  At that point I had several options: I could panic, or I could execute a go-around, or I could execute a forward slip, or I could add more flaps in an effort to go down and slow down.  Since KLWM runway 5 is 5,001 feet long, I chose to add flaps, knowing I had plenty of distance in which to burn off the extra height.

So, on average, my first three solo landings were perfect.  And as my instructor points out, the real skill to landing is recognizing and correcting minor problems in order to prevent them from becoming major problems.  Despite flying flat, fast, and high, I felt both in control and comfortable at all times.  I truly enjoyed the experience of flying solo.  It was a big deal!

Each time I fly I find myself looking forward even more to the next flight.  What often strikes me most is just how beautiful the landscape looks just after takeoff — suddenly you can see for miles in every direction, and there’s just so much to see that you find yourself metaphorically gasping to take it all in.  And today, flying in the traffic pattern at 1,000 feet over the gorgeous buildings of the City of Lawrence, Massachusetts, with the town surrounding me like an impossibly exact hobbyist’s scale model, made for the perfect visual accompaniment to an already beautiful day.

After a few more solo flights we’ll start working on cross-country flight (“cross country” means greater than 50 miles), leading up to two supervised-solo cross-country flights and then two solo cross-country flights.  We’ll then do some night flying and some more flight by reference to instruments, I’ll practice the ground reference maneuvers, and pretty soon I’ll be ready to take the final knowledge and practical tests.  Of course, “pretty soon” is relative — I’d logged 22 hours of flight time before soloing, and I can expect to log 50 to 60 hours total by the time I take the certification tests.

EDIT (May 27): My friend Paul replies: “Remember, it’s OK to sing solo I can’t hear you, but don’t fly solo I can touch you.”

April 14, 2012

Higher quality spam

Filed under: Opinions — JLG @ 6:44 PM

Although I’ve blogged in various forms since 1996 or so, I first set up a WordPress blog in 2008.  That blog was hosted on the Jagged Technology website and was intended to convey information of interest to Jagged and its customers — the idea being that if I provided a high signal-to-noise ratio of useful technical content then it might help my sales figures.  Within a few days I started receiving spam comments on the blog, to which my heavy-handed solution was to disable comments altogether.

Earlier this year I set up a new WordPress blog here on my personal website, in order to have somewhere to post my aviation experiences as I experienced them. Given that I was decommissioning the Jagged website I decided to move my old posts to this site (a process that you’d think would be simple — export from a WordPress site and import into a WordPress site — but wasn’t; in the end an old-fashioned copy-and-paste between browser windows gave the best results in the shortest amount of time).

My good friend Jay asked about the conference reports:

Do you keep the notes public to force yourself to write? a form of self-promotion? or what.

Yes to all three.  My primary motivation for putting the conference reports up is as an archival service to the community; there aren’t that many places you can go to learn about CCS 2009, for example, and since I write the reports anyway (for my own reference and for distribution to my colleagues) I post them in case anyone now or in the future might find them useful.  Everybody has a mission in life, and mine is apparently to provide useful summary content for search engines and Internet archives.

With the new blog I decided to keep comments enabled, first out of curiosity about spam (during a visit to Georgia Tech a few years ago, one of the researchers asked why I didn’t spend more time analyzing spam instead of simply deleting it) and second on the off chance that somebody wanted to reply to one of my aviation posts with, say, suggested VFR sightseeing routes in the greater Massachusetts area.

And wow, has my curiosity about spam been piqued.  I created the blog at 12:16am on February 4, 2012; the first spam message arrived at 2:16am on February 9.  The second arrived at 12:42pm that day.  Recognizing a trend, I hustled to enable the Akismet anti-spam plugin.  Akismet works in part by crowdsourcing:  If someone else on another site marks a WordPress comment as spam, and the comment later gets posted on my site, Akismet automatically marks it as spam.  Since enabling the plugin sixty-eight days ago:

  • Number of spam comments posted on Jagged Thoughts: 312
  • Number of non-spam comments posted on Jagged Thoughts: 0
  • Number of false negatives (comments mistakenly marked as non-spam by Akismet): 1

So I’m averaging 4.6 spam comments per day.  That’s significantly fewer than I expected to receive, though perhaps this site hasn’t yet been spidered by enough search engines to be easily found when spam software searches for WordPress sites.

I was prompted to write this post by an order-of-magnitude improvement in spam quality in a couple of messages I received yesterday.  To date, most of the spam has fit into one of these three categories:

  1. Do you need pharmaceuticals?  We can help!
  2. Would you like more visitors to your site?  We can help!
  3. Are you dissatisfied with who’s hosting your site?  We can help!

Even without Akismet it is easy to identify spam simply by looking at (a) the “Website” link provided by the commenter or (b) any links included inside the comment.  Such links invariably point to an online pharmacy, or to a Facebook page with a foreign name but a profile picture of Miley Cyrus, or to a provider of virtual private servers, or to some other such site.  Also, almost none of the spam comments are attached to the most recent post.  My theory here is that comments on older posts are less likely to be noticed by site admins but are still clearly visible to search engines.  (There’s an option in WordPress to disable comments on posts more than a year old; now I understand why it’s there.)
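To make that rule of thumb concrete, here is a toy heuristic of my own (purely illustrative; it is not how WordPress or Akismet actually classify comments) that scores a comment by where its links point:

    # Toy link-based spam check (illustrative only; not Akismet's method).
    # Flag a comment if its author URL or any embedded link points at a host
    # that looks like a pharmacy, SEO shop, or hosting reseller.

    import re
    from urllib.parse import urlparse

    SUSPECT_WORDS = ("pharm", "pills", "seo", "vps", "hosting", "casino")

    def extract_links(comment_text, author_url=""):
        urls = re.findall(r"https?://\S+", comment_text)
        if author_url:
            urls.append(author_url)
        return urls

    def looks_spammy(comment_text, author_url=""):
        for url in extract_links(comment_text, author_url):
            host = urlparse(url).netloc.lower()
            if any(word in host for word in SUSPECT_WORDS):
                return True
        return False

    print(looks_spammy("Great post! Visit http://cheap-pills-pharmacy.example now"))     # True
    print(looks_spammy("Suggested VFR route: follow the coastline up past Gloucester"))  # False

A real filter obviously needs far more than a keyword list, which is exactly why I let Akismet do the work.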

There are spam comments about my compositional prowess:

This design is wicked! You definitely know how to keep a reader entertained. Between your wit and your videos, I was almost moved to start my own blog (well, almost…HaHa!) Great job. I really enjoyed what you had to say, and more than that, how you presented it. Too cool!

Comments that are clearly copied from elsewhere on the Internet:

In the pre-Internet age, buying a home was a long and arduous task. But the Internet of today helps the buyer to do their own preliminary work-researching neighborhoods, demographics, general price ranges, characteristics of homes in certain areas, etc. Now with a simple click, home buyers can access whole databases featuring statistics about neighborhoods and properties before they have even met the realtor.

Comments that are WordPress-oriented:

Howdy would you mind stating which blog platform you’re using? I’m going to start my own blog in the near future but I’m having a hard time deciding between BlogEngine/Wordpress/B2evolution and Drupal. The reason I ask is because your layout seems different then most blogs and I’m looking for something unique. P.S Sorry for getting off-topic but I had to ask!

There are also comments written in foreign languages, comments that are nothing but long lists of pharmaceutical products with links, and comments that are gibberish.

Once per week I’ve gone through and skimmed the comments marked as spam, just to make sure that I didn’t miss someone’s useful post debating, say, the merits of purchasing personal aviation insurance versus always renting from flying clubs that provide insurance to their members.  Over the past week I’ve received three spam comments containing information that clearly relates to the text of the post.  For example, this comment on my Discovering flight post:

Absolutely but the overall senfuleuss is tied to the complexity of the simulator and cost of the simulator Airlines and places like Flight Safety use large simulators with exact replicas of the cockpits of the specific plane being simulated, mounted on hydraulic systems that provide 3 degrees of motion, and video displays for each window providing outside views. These have become so realistic that you can do most of the flying required for a type certifcate on them, and airlines use them for aircrew checkrides. Moving downward from these multimillion dollar systems, there are aircraft specific sims that have the full cockpit, but without the 3 axis motion, all the way down to the cheapest flight training devices recognized by the FAA. These are not much different than MS Flight Simulator, but have an physical replica of a radio stack, throttle, yoke and rudder pedals. You can used these type of devices to log a small portion of the instrument time required for your instrument rating. One problem common to most simulators is that they tend to be harder to hand fly than an actual airplane is, particularly the lower end sims. If you are refering to a non-FAA approved simulator, like MS Flight sim, it provides no help in learning how to handle a plane. When flying real plane the forces on the controls provide an immense amount of feed back to the pilot that is missing from a PC simulator. The other problem with a PC sim is that you can not easily look around and maintain control trying to fly a proper traffic pattern on FSX is almost impossible. A home sim can be helpfull in practicing rarely used instrument procedures, things like an NDB approach or a DME arc, but it of course it does not count to your instrument currency in any way. I have also used FSX to check out airports that I will be visiting in real life for the first time. It does an accurate enough representation of geographic features that can help you place the airport in relationship to terrain in advance of the flight.

I am thrilled that the spam software authors have started performing analytics to ensure that I receive relevant and topical spam comments!  The above comment includes genuinely useful observations about using home flight simulation software to augment pilot training:

  1. When flying real plane the forces on the controls provide an immense amount of feed back to the pilot that is missing from a PC simulator.
  2. The other problem with a PC sim is that you can not easily look around and maintain control trying to fly a proper traffic pattern [] is almost impossible.
  3. A home sim can be helpfull in practicing rarely used [] procedures
  4. It does an accurate enough representation of geographic features that can help you place the airport in relationship to terrain in advance of the flight.

Early in my own flight training I tried using Microsoft Flight Simulator 2004 along with a USB flight yoke and USB foot pedals (all of which I’d bought back in 2006) to recreate my training flights at home and to squeeze in some extra practice.  For the most part I found the simulator ineffective in improving basic piloting skills — as examples, the simulator did nothing to help me with memorizing the correct relationship between the airplane nose and the horizon when attempting to transition from climb to level flight at 110 knots, and it did not display useful real-world visual references as I flew traffic patterns around area airports.  However, I found the simulator very useful in practicing the preflight and in-flight checklists, in memorizing which instruments were in which location on the Skyhawk’s control panel, in practicing taxi procedures around Hanscom airport given various wind conditions, and in reviewing the directions and speeds my instructor chose when we flew between KBED and KLWM airports.

Of course it’s not surprising that the comment contained insightful and critical commentary, given that it’s taken verbatim from Yahoo! Answers (“Is flying with simulators help in real flight training?”).  What’s surprising — and exciting — is that I’ve started receiving higher-quality, targeted, and relevant spam based on the topics I post!  Randall Munroe would be proud.  Hopefully this trend will continue and spam software will provide me with similarly useful, carefully selected, topically relevant information, helping me to become a better pilot.  (Note to spam software authors:  Just kidding.  Please don’t target this site for extra spam.)

EDIT (August 2, 2012):  I apologize to the spam software authors.  For the past two months this article has received an exponentially increasing amount of spam, currently about 300 comments per day.

Look, folks, I apologize.  I wasn’t trying to piss you off.

I assume your motivation is economic.  (I may be wrong; perhaps you’re nihilistic, anarchistic, or simply interested in chaos theory.)  Spam is a lucrative business.  What I’m saying is that with a few small changes it can be even more lucrative.  Given the equation:

more approved comments = more planted links to your SEO and pharmacy sites = more revenue for you

Your economic goal is therefore to get more comments approved.  You’ve already taken the first step of copying paragraphs of user-generated text from Wikipedia, Yahoo, and the like, instead of relying on stock phrases such as “payday loans uk”.  I bet that simple change significantly increased both your approval percentage and your profit.

The next step is to be more selective in what content your bots copy-and-paste as spam.  Given a blog post about buying a house, you have a greater chance of having your spam comment approved if you include real-estate-oriented (“higher quality”) text rather than, say, unrelated passages about hair loss or railway construction in China.
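For what it is worth, the kind of topical selection I am describing does not require anything fancy.  A toy word-overlap score (my own illustration, not anyone’s actual spam tooling) already separates on-topic from off-topic candidate text:

    # Toy topical-overlap score (illustrative only): how many distinctive words
    # a candidate paragraph shares with the post it would be attached to.

    import re

    def words(text):
        return {w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 3}

    def overlap(post, candidate):
        post_words, cand_words = words(post), words(candidate)
        return len(post_words & cand_words) / len(cand_words) if cand_words else 0.0

    post      = "In the pre-Internet age, buying a home was a long and arduous task."
    on_topic  = "Home buyers can research neighborhoods and price ranges online."
    off_topic = "Railway construction in China accelerated dramatically last year."

    print(overlap(post, on_topic) > overlap(post, off_topic))   # True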

Beyond that, socialbots (see for example Tim Hwang’s talk “I’m not a real friend, but I play one on the Internet”) show promise for spammers.  It’s one thing to trick an author into approving your spam comment; it would be another level of efficacy altogether to trick a site’s user community into having a comment-based conversation with your spambot.

So don’t shoot (or spam) the messenger; instead consider using my thoughts as inspiration to step up your game.

March 31, 2012

Keep the blue side up?

Filed under: Aviation — JLG @ 6:40 PM

JLG flying the Super Decathlon, March 30, 2012. Photo courtesy of Will McNamara.

Yesterday I flew upside down for the first time!

My instructor recommended that I take an optional “spin awareness training” flight sometime before my first solo flight.  The purpose of the spin training is:

  1. to experience and recover from actual spins, and
  2. to understand and recognize conditions that could lead to an unintentional spin.

The latter is the more important of the two.  For example, one such condition could happen during an approach to landing if you bank too steeply (and stall) while applying opposite rudder.  It’s bad enough to stall during landing — for example, by pulling back too far on the control wheel — because you will rapidly sink and possibly crash.  But if you spin during landing (a spin is the same as a stall, except you also start to spiral downward) then you will certainly crash.  Hence the training.

A neat aspect of the spin training is that it takes place in an aerobatic airplane.  So, for example:

  • “Would you like to try a loop?” asked the instructor.  Well, yes.  After checking for nearby airplanes, you loop by pitching down and building up speed to 160 MPH, then pulling back on the stick until you discover that you’re upside down.  Take a moment to appreciate the view.  Then keep pulling back on the stick until you’re upside up again.  Wow!
  • Fly a different airplane.  My first 12.6 hours of instruction have been in the Cessna Skyhawk.  It’s an interesting airplane, and you learn things like “check all four edges of your door to make absolutely sure it’s closed, instead of just verifying that the door latch is in the ‘locked’ position, or after takeoff you may discover that the door isn’t closed after all.”  The 1.4 hours of spin training were in the American Champion 8KCAB Super Decathlon.  The two planes have significantly different handling characteristics both on the ground and in the air; for example, we experienced a strong crosswind while taxiing and I had to push the right pedal basically to the floor to keep the plane taxiing straight, much more so than I would have needed to in the Skyhawk.  Also, the Decathlon uses a surprisingly responsive joystick-type control stick whereas the Skyhawk uses a control wheel.  I enjoyed the chance to fly something new.
  • Formation flight and wake turbulence training.  We arranged with another pilot to fly briefly in formation, with our airplane just behind the left wing of the lead plane, so I could experience wake turbulence firsthand.  The experience was fascinating — for one, it was mind-boggling to look out the window at 4,000 feet and see another full-sized airplane right there; usually planes in flight are tiny little things off in the distance.  But the wake turbulence itself was also fascinating; one moment you’d be right behind the lead plane, and the next you were smoothly but insistently kicked 50 feet to one side.  During ground school you learn techniques for avoiding the wake of large jets and other planes.  For example, stay above the glidepath (and therefore above the wake) of a large plane in front of you, and land farther down the runway than that plane, because the wake naturally sinks after it’s formed and it disappears once the wing no longer produces lift.
  • Keep your eyes outside the cockpit!   In the Skyhawk there is an array of cockpit instruments that you can fixate on instead of using outside visual references to make decisions.  For example, when flying the traffic pattern I often glance down at the attitude indicator (how much bank do I have in this turn?) and directional gyro (am I lined up 90 degrees from the runway heading bug after turning left base?)  Even while taxiing I often glance at the GPS to determine whether my taxi speed is too fast.  In the Decathlon these instruments aren’t available, so you’re forced to do what you’re supposed to be doing already — for example, figure out your bank by looking at the angle between the horizon and the airplane’s nose.  I am curious to see whether I’m better at controlling the Skyhawk during my next lesson thanks to the forced ‘purity’ of flying the Decathlon.
  • G forces add up.  I was surprised to find that, for all my bravado while walking out to the airplane, I did have a limit to how much aerobatics I could take.  After we did a bunch of spins, loops, and rolls, with me squeezing my stomach muscles as instructed to keep from greying out, I suddenly felt a warm flush throughout my upper body — OK, time to stop.  (I am pleased to report that I didn’t need to make use of the complimentary plastic airsickness bag the instructor handed me at the beginning of the lesson.)  Another aspect of comfort was the surprisingly uncomfortable parachute that we were required to wear by federal regulation — all occupants must wear a parachute in order for the pilot to exceed 60 degrees of bank relative to the horizon, or exceed 30 degrees of nose-up or nose-down attitude relative to the horizon.

I feel as though I’m doing well in my lessons, though there’s always room for improvement:

  • I generally apply far too little rudder when turning, resulting in uncoordinated flight.  To turn correctly you both rotate the control wheel (deflecting the ailerons) and simultaneously press one of the pedals (deflecting the rudder).  Pilots are expected to develop a ‘feel’ for the airplane — for example, subconsciously noting a slightly unbalanced feeling when not using enough rudder — so I’m concentrating on developing this ‘feel’.
  • During landing I often find myself flying too high and too fast as I approach the runway.  This symptom actually indicates two problems:  First, I often don’t reduce power enough, or early enough, in the descent.  Second, I often don’t trim the airplane quickly enough after making a pitch adjustment, resulting in the speed starting to creep up again as I unintentionally relax back pressure on the control wheel.  Practice, practice.
  • During crosswind landings I have had trouble keeping the airplane aligned precisely on the runway centerline.  Often I will apply too much rudder or aileron input, resulting in the airplane making large movements instead of the small corrections needed to stay aligned.  I landed very well during yesterday’s flight, so I’m hoping to perform as well in my upcoming lessons.

Overall I’m having a ton of fun.  My pre-solo checkride is scheduled for May 2, so my first solo flight will likely be in just over a month!

February 29, 2012

Pitch, power, and trim

Filed under: Aviation — JLG @ 10:43 PM

I made my first three landings today!  (That’s three separate landings, not one landing where I bounced twice.)

The plan for today’s flight was to fly over to the practice area (the Wachusett Reservoir, north of Worcester, MA) and have me work on ground reference maneuvers.  These maneuvers include:

  • Turns around a point.  Here you fly 1-mile-diameter circles around a fixed point on the ground while holding an altitude of exactly 1000 feet above ground level.  Flying in a circle is harder than it sounds because you have to constantly adjust your bank angle to account for wind; imagine trying to drive a motorboat in a perfect circle while on a river flowing at 15 MPH.  The technique you use is (a) make steeper turns when you are in the part of a circle where the wind is behind you, and (b) make shallower turns when the wind is ahead of you:

    Figure 6-6 (Turns around a point) from the FAA's Airplane Flying Handbook

  • S-turns across a road.  Again, the key point is correcting for wind by changing the steepness of your turns throughout the maneuver:

    Figure 6-5 (S-Turns) from the FAA's Airplane Flying Handbook

  • The rectangular course.  This maneuver is especially important because it mimics the turns you (and other airplanes) make when flying around a runway before landing.  Not surprisingly, the key point is correcting for wind:

    Figure 6-4 (Rectangular course) from the FAA's Airplane Flying Handbook

For longer descriptions of the maneuvers, and the theory behind them, see chapter 6 of the FAA’s Airplane Flying Handbook.
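A rough way to see why the bank angle has to keep changing during turns around a point (my own back-of-the-envelope reasoning, not the handbook’s derivation): in a coordinated level turn the radius over the ground is approximately $r \approx \frac{v_g^{2}}{g \tan\phi}$, where $v_g$ is groundspeed and $\phi$ is bank angle.  Holding the radius $r$ constant while the wind changes $v_g$ around the circle therefore forces $\tan\phi$ to scale roughly with $v_g^{2}$: steepest bank at the point where the tailwind makes you fastest over the ground, shallowest where the headwind makes you slowest, which is exactly the correction described above.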

We also practiced steep turns with bank angles greater than 30 degrees.  At 45 degrees of bank we experienced about 1.4G, and at 60 degrees a full 2G.  WOW was that fun!  (My instructor knows of my interest in aerobatic flying and mentioned that I will eventually have even more fun pulling 3G and up during aerobatic maneuvers.)  The objective of a steep turn is to make a 360 degree turn, exiting the turn on the same heading you started with, all while maintaining exactly the same altitude (3,000 feet for today’s practice).  Maintaining altitude is especially challenging during a steep turn; the plane naturally wants to descend whenever you turn because the lift from the wings no longer points straight up.
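For the curious, those load factors come straight from the standard relation for a coordinated level turn (a quick back-of-the-envelope sketch): the vertical component of lift is $L\cos\phi$, so holding altitude requires $L = W/\cos\phi$, giving a load factor of $n = \frac{1}{\cos\phi}$, with $n(45^\circ) \approx 1.41$ and $n(60^\circ) = 2$.  The same equation is why the G forces, and the work of holding altitude, climb so quickly as the bank steepens.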

When we took off the weather was good but snow was quickly moving in on the horizon, making us worry that the flight would have to be cut short.  My instructor even made sure to bring the materials he needed in case visibility dropped too far, in which case he would take over and make an instrument approach and landing at the airfield.  After an hour of practice the weather conditions were still okay, but my instructor gave me the choice of continuing to fly in circles or heading back to the airport to try a few landings.  Well, duh!

Part of flight training is learning how to interact with air traffic control, so as we neared the airport I made initial contact with the tower:  “Hanscom Tower, Cessna one one five four golf, ten miles west, for touch-and-go, with charlie”.  This phrase identifies who we’re contacting (the control tower), who we are (a Cessna brand aircraft with tail number N1154G), where we are (about 10 miles west of the airport), and what we wanted to do (land at the airport then immediately take off again for another practice landing).  After this initial contact the instructor took over the rest of the conversation with ATC so that I could focus on flying the airplane and preparing to land without being distracted by the radio.

Neat tidbit: To return to the airport from the practice area we use Walden Pond (yes, the Walden Pond) as a visual navigation guide — we fly east until we’re directly over the pond, then make a left turn toward the airport’s control tower.  The pond is easy to identify from the air and lines us up at the perfect angle for how ATC wants us to approach the airport.

When we approached the airport we didn’t fly directly towards the runway; instead we flew a partial loop around the runway called the traffic pattern.  The traffic pattern helps organize multiple planes when they are all trying to land at the same time.  In the figure below, we started at the “Entry” label (flying in from the left-hand side of the picture), then flew a U-shaped loop while slowly descending (“Downwind”, “Base”, and “Final”) until we were right above the numbers at the end of the runway:

Excerpt from Figure 7-1 (Traffic patterns) from the FAA's Airplane Flying Handbook

As we got closer and closer to the numbers I kept expecting my instructor to announce “I have the airplane” and take over the controls from me.  But he never did!  He made occasional suggestions at various points — reduce power to increase the descent rate, pitch the nose of the airplane down to gain a little extra airspeed — then all of a sudden he said “okay, start the flare now” and moments later we sank down onto the main wheels in what I consider to be one heck of a pretty darned good landing for my first attempt.

After we settled down onto the runway he instructed me to add full power and we took off again for another circuit around the traffic pattern and another landing.  I found it surprisingly challenging to try to fly a rectangle that looks exactly like the diagram above; for example, my turns ended up not being sharp 90-degree angles, and instead of a nice smooth descent during the “Base” and “Final” phases of the traffic pattern my descents tended to alternate between “not descending” and “descending too quickly”.  Nonetheless, we made it around the pattern and, with my instructor’s coaching, I executed another pretty good landing.  I made a small error during the flare — I pulled back too abruptly on the control stick — meaning that we floated up too high and had to correct for that (by adding a little power while holding a pitch-up attitude) to keep from coming down too hard on the wheels.

The third and final landing wasn’t great per se but we and the plane all made it down intact.  I made several errors:

  • I came in too high for the landing.  Coming in too high was a problem in all three of my landings; each time I started my descent at what I thought was the correct time, but I still ended up much higher than I expected to be by the time we crossed the threshold of the runway.  As I tried to correct our height on the third landing I ended up causing the airplane to go faster than the desired airspeed (the desired speed is about 80 MPH) as we neared touchdown.  We were ultimately able to burn off the excessive speed before landing (runway 29 at KBED is 7,011 feet long and the Cessna 172 only needs a fraction of that to land, so we just floated above the runway for a few moments while the plane slowed down) but I definitely need to work on controlling my altitude on future flights.
  • By this time the meteorological conditions were deteriorating and there was an increasing amount of crosswind (wind across the runway) that I was supposed to correct for using the rudder, in order to make sure we were lined up exactly with the runway at the moment we touched down.  I didn’t use the rudder correctly so we weren’t exactly lined up straight when we touched down.
  • At the moment that the main (rear) wheels touch down, any pilot will be pulling backwards quite a bit on the control stick in order to keep the airplane’s nose pitched up.  This pitch-up attitude allows the plane to slow down naturally and causes the nose wheel to slowly lower to the ground.  After touching down the final time I released my back-pressure too quickly, meaning that the nose came down more quickly than desired.
  • As we slowed down on the runway I applied the brakes too aggressively and not completely in tandem, causing us to swerve a little bit as we continued down the runway.  The gold standard is to apply gentle pedal inputs whenever you’re careening down the runway at 75 MPH.

Still, we and the plane all made it down intact.  For the next hour I grinned from ear-to-ear from having flown the maneuvers and especially from having landed the airplane.  During the post-flight brief my instructor conveyed that he was impressed with my performance throughout the flight (although he pointed out the above problems, and also noted that I need to work on applying better back-pressure to the control stick near the end of the takeoff roll).  Overall it was a great day.
