Wednesday, November 22, 2017

The Path to Becoming a STEM Teacher.

Those who've seen my recent Facebook posts know I finally decided to become a triathlon coach, with the intent to focus on beginners and data-driven coaching.  Literally days after making that decision I received a newsletter from Code.org mentioning that EnCorps was recruiting STEM teachers from the sci/tech community.

I immediately thought: "Woah. Teachers get summers off.  I could coach more during race season!"

Then I thought about the state of my career, and that it may be time for a major change.  Over the past few years I've been encountering significant ageism now that I've become an "older" engineer seeking permanent or contract work. It's certainly not as easy for me to find new business as it used to be!  I call it ageism because I know for a fact my skills are relevant in the market: Some recent job descriptions look as though they were pulled from my resume!  Yet I'm not getting many interviews, not even phone interviews.

EnCorps gave me a phone interview last Monday, a few days after I completed the online application, and they've scheduled an in-person interview for next Wednesday.  The feedback I've received from the EnCorps SoCal recruiter has been totally enthusiastic, despite the fact that only about 18% of EnCorps applicants make it to the classroom.

Still, it is nice to be wanted. I had almost forgotten what it felt like.

I started down this path primarily out of curiosity. I'm getting more excited with each passing day, but also more aware of the huge amount of work ahead and the great responsibilities to come.

But why leave engineering?  It is what I've loved doing for over 30 years, and it forms a core part of my identity.  It has also been the most fun I could ever imagine getting paid for!  Being an engineer has been the perfect fit for me.

I've had to look very closely at my motivations and the downsides.  It could be that I'll be a terrible teacher, though I honestly believe I'll do fine. I've had several teaching experiences during my career, and they all turned out well.  I greatly enjoyed them, and my students did too.

Truth be told, I have hobby projects that will keep me neck-deep in hands-on engineering for years to come. Many of these projects were started so I could learn and apply new technologies, both for fun and for professional development.

I've also been advising crowdfunding projects, participating in several science and tech forums, and answering questions on some of the StackExchange sites.  Which, when you think about it and squint just right, could look a bit more like teaching than engineering.  I wonder if I've been on this path for a while, and simply failed to see it for what it was?  Perhaps, but I suspect it's simply how I like to fill my time. Still, it is relevant.

I'm moving forward with a career switch to STEM teaching.  Wish me luck!

Sunday, November 12, 2017

Failure Modes for Self-Driving Cars: It's All About "Situational Awareness"!

There has been lots of recent discussion concerning when and how self-driving cars should return control to the driver, and how this process should work in a variety of scenarios.

I won't be discussing truly autonomous vehicles, which by definition have only passengers, not drivers.  Self-driving cars, in my use of the term here, always require the presence of a licensed driver, and completely support operation as conventional cars.  I'll use the term "autopilot" (as in the aircraft and Tesla sense) to more clearly distinguish "autonomous" from "self-driving" vehicles.

The first and most important scenario concerns the rapid and total failure of the autopilot system, where control of the vehicle suddenly shifts to the driver.

Even if the car has independent and hardened emergency systems to help out when the autopilot ceases to function normally (either because of damage or exceeding its capabilities), there is always the (low) chance that such backup systems will all fail when the autopilot does.

I remember well the first time I bought an older luxury car with all the nifty powered accessories.  It was also my first car with working A/C.  I was so proud of it, as it was a huge step up from the junkers I had been driving and endlessly fixing.

Late one evening while driving on the highway at speed, the battery cable fell off and hit the body, shorting the entire electrical system to ground.  (I later found the entire battery post had fallen off!)

The headlights and dash lights went out, and I initially felt blinded.  Cruise control cut out and the car started slowing.  I had no power steering and the car started drifting out of its lane.  Gas pedal response was sluggish and the engine started running rough.  The automatic transmission wouldn't shift automatically.

It was only my experience with a series of junker cars that saved me.  While I never before had everything die all at once, it wasn't rare for one thing or another to go wrong for me during a drive.  Pretty much every car system had failed for me at least once.  In the back of my mind I was always running sub-conscious "what if" scenarios, and adjusting my driving to avoid traffic situations that could make a failure worse.

I firmly gripped the steering wheel and started "driving by Braille" while my eyes adjusted.  Fortunately I was in California, which has "Botts' Dots" bumps and reflectors glued between the lanes and at the outer edges. While the headlights of the cars near me helped, it still took about a full second for my eyes to adapt to the metropolitan sky glow well enough to see the road immediately in front of me.

I had no brake lights; I knew the greatest hazard was the cars behind me and next to me, so I didn't want to slow down too quickly.  I applied some gas and tried to get the transmission to shift (it did respond to manual input).  Only after I reached the shoulder did I apply the brakes and come to a complete stop.

Now, let's instead say I was in a Tesla, with full Autopilot Mode active, when the battery pack suddenly became completely disabled (not really possible, but work with me here).  This raises two main questions:
1) What parts of this scenario can or should be handled by the "dead" self-driving system?
2) How can the driver be kept ready to cope with the "total failure" situation?

Let's discuss the first one first:  Before we can trust a self-driving system to self-drive, we must first trust the self-driving system to exit self-drive mode and bring the car to a safe stop, even without driver help.

This means the car will likely need two separate systems:  The self-drive system, and a separate emergency system that monitors both the self-drive system and the driver and on its own can safely bring the car to a halt.  This emergency system must:
- Have its own controller, wiring and power source, separate from the rest of the car.
- Be able to take steering, propulsion and brake control away from a failed self-drive system.
- Work long enough to get the car from speed down to a safe stop, preferably at a safe location.
- Allow the driver to take control at any time.
- Encourage (not force) the driver to take control when the emergency system itself lacks control of steering and/or brakes (electrical regen and/or mechanical).

There are many other things such an emergency system must do, but they are at a lower priority than the above.  For example, such a system should also snug the seatbelts to ensure the driver is in the right position to take control and (worst case) be ready for airbag deployment.  The system should also pose minimal risk to other traffic by doing its maneuvers in ways that enable other drivers to safely respond (avoid causing accidents).
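The requirements above amount to an independent watchdog supervising the self-drive system. As a rough illustration only (all names here are hypothetical, not any real vehicle's API), a minimal sketch of that watchdog logic might look like this: the autopilot must send periodic heartbeats, the supervisor seizes control and initiates a stop if the heartbeats stop, and direct driver input always wins.

```python
import time
from enum import Enum, auto

class CarState(Enum):
    AUTOPILOT = auto()
    EMERGENCY_STOP = auto()     # supervisor is bringing the car to a safe halt
    DRIVER_CONTROL = auto()     # driver has taken over

class EmergencySupervisor:
    """Hypothetical independent watchdog for a self-drive system.

    Intended to run on its own controller and power source.  If the
    autopilot misses its heartbeat deadline, the supervisor takes over
    steering/propulsion/brakes and executes a controlled stop, while
    always yielding immediately to direct driver input.
    """
    def __init__(self, heartbeat_timeout_s=0.2):
        self.heartbeat_timeout_s = heartbeat_timeout_s
        self.last_heartbeat = time.monotonic()
        self.state = CarState.AUTOPILOT

    def heartbeat(self):
        # Called periodically by a healthy self-drive system.
        self.last_heartbeat = time.monotonic()

    def driver_input(self):
        # The driver may take control at any time, in any state.
        self.state = CarState.DRIVER_CONTROL

    def tick(self, now=None):
        # Periodic check, running on the supervisor's own hardware.
        now = time.monotonic() if now is None else now
        if self.state is CarState.DRIVER_CONTROL:
            return self.state
        if now - self.last_heartbeat > self.heartbeat_timeout_s:
            # Autopilot has gone silent: take over and stop the car.
            self.state = CarState.EMERGENCY_STOP
        return self.state
```

The point of the sketch is the priority ordering, not the details: driver input overrides everything, and a silent autopilot is treated as a failed autopilot.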

Such systems already exist and are in common use in other industries.  For example, virtually all industrial robots have independent safety monitoring systems that prevent the robot from harming itself or its environment, especially people nearby.  And NASA has for over half a century pioneered such emergency control systems for aircraft and spacecraft.

Now let's look at the second situation: Even the best emergency backup systems can fail.  Fortunately, old technologies (and existing regulations) ensure the driver can establish emergency control over steering and brakes.  The problem now becomes ensuring the driver is ready to take control.

The emergency backup system is a form of "active" safety.  Before digging deeper, let's talk about "passive" safety systems:  When all else goes wrong (but no collision has occurred), the mechanical systems themselves can provide safer vehicle behavior.  The most familiar examples of this are:
1. The mechanical design and construction of the steering system, where the wheels gradually come to center when the driver (or autopilot) is not exerting direct control.
2. The design of the accelerator and brakes, so that neither engages without the driver (or autopilot) exerting direct control:  The vehicle passively glides to a stop when active control is absent.

Clearly, every self-driving car must preserve all existing passive safety features.  That's actually a significant design complication: the self-driving actuators must, by default, be safely inactive whenever power or positive control is removed.

Very few drivers today have any experience with unreliable cars.  Cars built over the past 30 years have amazingly low failure rates (assuming you promptly handle all recalls), leading to exceptionally high reliability and driver confidence.

Can we maintain driver confidence, and create such confidence for autopilot systems, while simultaneously keeping the driver ready to take over during a total system failure?

Here's where we finally discuss the title of this post, "situational awareness".  In this case, situational awareness means the driver is continuously informed about, and consciously aware of, the status of the car and the state of the current driving environment.  This level of awareness must especially be maintained while in self-driving mode, when the driver may be focused on other activities.

It is important to understand that awareness is always changing; it fades with time and must be actively refreshed.  The best possible awareness comes only when in full manual control: In all other fully- or semi-automated driving modes, the driver will inherently and inevitably have a significantly reduced level of situational awareness.

The goal then becomes keeping the driver at a "good enough" level of situational awareness that will enable prompt switching to the full, manual control level of situational awareness.

Increasing our level of situational awareness is perhaps one of the hardest tasks for the human mind to do in real-time. It involves not only refocusing our senses, but also activating our musculature, and even changing our posture.

Here's the worst case, the stuff of nightmares: Imagine being asleep, then waking up in the cockpit of a race car in the middle of a race.  Your ears are filled not just with the noises of the car, but also of the other race cars and maybe even the crowd.  Your eyes are assaulted by the brightly lit race course filled with weaving cars, as well as a dash filled with a huge number of gauges.  Your hands feel the shake of the steering wheel as you compulsively tighten your grip.  And who knows what your legs and feet are doing!

Clearly, the first thing is to not make the situation worse.  There must be no visual or audible distractions that get in the way of dealing with the situation: Alarms must be very noticeable, but not shockingly loud or bright.

OK, so that's the worst case when a loss of autopilot happens.  How can we best be prepared for it?  And do so without removing the benefits of self-driving?

Well, obviously the driver must be awake.  Not only that, but the driver must also be alert enough to take control.  The only way I know of to positively ensure this with any degree of reliability is by interactive testing (not via passive monitoring, as others have suggested).  The driver must occasionally take actual full control of the vehicle, or at least demonstrate a precisely equivalent level of readiness by other means.

More importantly, this is not just about manual driving: It is about ensuring the driver is capable of smoothly and safely transitioning from self-driving mode into manual driving.  It's about the process of taking control, a precursor step to the process of manual driving.

To me, this means the modes of the autopilot can't be simply "on" and "off".  It should have the initial mode of "taking automatic control" and the final mode of "surrendering automatic control".  This last mode should, to the greatest extent possible, also be part of the emergency system.

If the driver can't successfully follow the "surrendering automatic control" process, the system should not turn control over to the driver, and should instead perform an alternative action (continue driving, pull over safely, etc.).
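The modes just described form a small state machine. As a sketch under my own assumptions (the mode names and readiness checks are illustrative, not any manufacturer's design), control passes to the driver only after an interactive readiness test succeeds; a failed test routes to a fallback action instead:

```python
from enum import Enum, auto

class Mode(Enum):
    SELF_DRIVING = auto()
    SURRENDERING_CONTROL = auto()   # transition: verifying driver readiness
    MANUAL = auto()
    FALLBACK = auto()               # e.g. keep driving, or pull over safely

class AutopilotModes:
    """Hypothetical handover state machine: the autopilot surrenders
    control only after an interactive readiness check succeeds."""
    def __init__(self):
        self.mode = Mode.SELF_DRIVING

    def request_handover(self):
        # Begin the "surrendering automatic control" process.
        if self.mode is Mode.SELF_DRIVING:
            self.mode = Mode.SURRENDERING_CONTROL

    def readiness_check(self, hands_on_wheel, eyes_on_road, acknowledged_prompt):
        # An interactive test of the driver, not passive monitoring.
        if self.mode is not Mode.SURRENDERING_CONTROL:
            return self.mode
        if hands_on_wheel and eyes_on_road and acknowledged_prompt:
            self.mode = Mode.MANUAL
        else:
            # Driver not ready: never dump control on them.
            self.mode = Mode.FALLBACK
        return self.mode
```

The design choice being illustrated: "off" is not a single step but a gated transition, and the gate fails safe toward the automation rather than toward an unready human.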

Sunday, November 5, 2017

Writing Documentation That Doesn't Suck

I've often had to write manuals for the products I've developed. The Technical/Maintenance manual is easiest (because the audience is technical), and the User/Operations Manual is by far the hardest (anyone can be a user).

Like many engineers, I took the minimum number of required writing classes.  So it was not a huge surprise that my initial attempts at product documentation were terrible.  Over time I finally became "not horrible" at documentation, and the main path to success was to avoid saying too much!

There are many technical writing guidelines online, but I find most are too narrowly focused to be of general use. A few simple guidelines are generally enough to avoid documentation disaster.

Here are some guidelines that have served me well:

  • "Don't write so that you can be understood, write so that you can't be misunderstood." William Howard Taft
This is partly about getting inside your reader's head, and partly about getting out of your own. It is all too easy to write for people who are near-clones of yourself, and forget the wide range of other folks on the planet.

It's about making your writing as "simple and obvious" as possible. Avoid long-winded explanations when a couple short, carefully-crafted sentences will do the job. That said, always use however many words are needed to make each point clearly and concisely.

Important things may need to be said more than once. What I typically do is "say it once", then show an illustration, then explain the illustration, and finally summarize what was just done (what success looks like).

  • Have some "fresh eyes" available.
Given that we can't understand all possible readers, we must remember that we only truly care about the first-time reader. That means at least some of the folks who review our work must also be as close to a first-time reader as possible.

In particular, this means it must be simple and easy for actual users to provide feedback on the manual itself. Encourage each customer to print and mark-up the instructions, and tell them how to get their input back to you (email, forum, etc.).

  • Include a glossary.
It is way too easy to use too many technical terms, and too hard to get rid of them. Having a glossary and always keeping it current is a great way to track specialty terms and language.

  • Don't get tied down to a Table of Contents: Make it the last thing you generate.
Too many folks start with a Table of Contents as the plan for the document. This is backwards! The document should have whatever organization and structure it needs to get its job done, and the flow is expected to change with time.

That said, it is important to have a "ToDo List" for the document, a detailed set of goals for what must be included and not left out.

Of course, organization is needed, but primarily at the lower levels:
o What is the purpose of this step?
o What tools and parts are needed to accomplish this step?
o What things must I do?
o How can I verify that I did it correctly?

  • There is no such thing as too many good illustrations.
However, there is such a thing as too many bad illustrations! The old saying "A picture is worth a thousand words" isn't totally wrong, but having a picture doesn't mean words aren't necessary. Illustrations should add context and meaning to words, not replace them.

For something like kit assembly, there are going to be situations that words can't express in an understandable way. This is when illustrations matter most, so take the time to create lots of candidates and choose the best. Try to avoid the "one and done" attitude to images or drawings.

  • Layout matters! But not until close to the end.
One important goal is to not force the reader to have to flip back and forth between pages to understand what's going on. Text mixed with images? Images and text in separate columns? Size? Pagination? These are all important to the reader, but not unless and until the needed information is already present in the document.

  • Documentation is really about "teaching", not "telling".
The user has goals, and the documentation must ensure the user will meet those goals with minimal confusion, and minimal need to ask for help. For kit assembly, the initial steps should train the user to become a good assembler, not merely get things put together.

Not all of us learn in the same way, and there are a number of ways by which we learn. These ways are called "learning modalities" (or "learning styles"), and while we all have access to all of them, some work much better than others, and which ones work best will vary between individuals.

It is often necessary to say a thing in different ways (words, pictures) in order to engage multiple modalities. It is also important to help the user sharpen the modalities that will be most useful, and that's where training comes in. Take time at the start to build the skills the user will need before making use of them. Even the fundamentals matter:
o What is an "M4 screw"? Is a screw different than a bolt?
o What does it mean to "tighten a screw"? How tight is tight enough? How tight is too tight?
o What does it mean to "crimp a connection"? How can I tell if I did it right?