Thursday, December 7, 2017

Puck Fatreon!

I'm boiling mad at Patreon.  They just shifted their fee structure in a way that increased the cost of my pledges by a whopping 30%, while simultaneously charging Creators a flat 5% fee. That's 35% of my contribution that does NOT make it to the Creators I sponsor!
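For the curious, here's a back-of-the-envelope sketch of how a per-pledge fee hammers small pledges.  I'm assuming the widely reported new service fee of 2.9% + $0.35 per pledge plus the flat 5% Creator fee; these numbers are my own assumptions for illustration, not Patreon's official accounting.

def fee_overhead(pledge, pct_fee=0.029, flat_fee=0.35, platform_cut=0.05):
    """Return (patron pays, creator receives, overhead as a fraction of the pledge)."""
    patron_pays = pledge * (1 + pct_fee) + flat_fee   # assumed new patron-side service fee
    creator_gets = pledge * (1 - platform_cut)        # flat 5% Creator fee
    overhead = (patron_pays - creator_gets) / pledge
    return patron_pays, creator_gets, overhead

for pledge in (1.00, 2.00, 5.00, 10.00):
    pays, gets, ovh = fee_overhead(pledge)
    print(f"${pledge:5.2f} pledge: patron pays ${pays:5.2f}, "
          f"creator gets ${gets:5.2f}, overhead {ovh:.0%}")

Run it and a $1 pledge loses roughly 43% of its nominal amount to overhead, while a $10 pledge loses only about 11%: the smaller the pledge, the worse the hit.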

I prefer to give many small pledges to a large set of Patreon Creators, rather than larger amounts to a few. I feel this best represents my interests, and also ensures I support the less popular of the Creators I admire.

Until today, I have heard nothing negative from the Creators I sponsor about the cut Patreon takes to process and deliver pledges.

I'm listening, and am acting.

I have now decided Patreon has become a vampire, a leech on the system, and is no longer a suitable platform for supporting Creators.

I have halted all my Patreon pledges.

I hope to soon hear what new funding mechanisms my favored Creators prefer, and I will follow them there.

Go to Hell, Patreon.



OK, I feel better now.  The underlying problem would appear to be that the US (and the world?) lacks a cost-effective micro-payments system. Which in turn sets the stage for vampires like Patreon to appear and flourish.

Everything is strangled by the four major credit card networks, whose fees keep growing even as their cost of doing business (as a percentage of money transferred) keeps falling, and their per-transaction costs fall even faster.

The excesses in the credit card system are made perfectly evident by the presence of so many "Cash Rewards" cards, which are simply refunding some of the excess to their customers.  If you don't have one, get one soon!

A better route may be to use the ACH (Automated Clearing House), the network used to process EFTs (Electronic Funds Transfers) such as checks, which has vastly lower transaction fees (though with weaker guarantees from the network itself, those guarantees instead being taken up by the participating financial institutions).

The best route may be to build a new micro-payments network from scratch, or to expand and improve an existing one.

The Patreon debacle should hopefully cause some engagement on this important financial infrastructure issue.

Friday, December 1, 2017

How and Why Did I Become an Engineer?

During a recent interview I was asked why I had become an engineer.  In that context I gave a few key reasons.  Here's the full story.

In high school back in the early 1970's I loved all things Science and Math, but of them all I liked Biology best, by far.  When I got to take a computer programming class (using FORTRAN IV), the first program I wrote on my own was to help me calculate the metabolic rates of the rats we were raising in the Biology lab.  At that point, I saw programming as a useful tool to know, but not as anything I'd want to pursue as a career.

After high school I chose to join the US Navy rather than go directly to college.  There were many reasons encouraging me toward this path, and it turned out to be fantastically right for me.  While in the Navy I was trained in several areas, including Nuclear Propulsion. I got to operate a nuclear reactor at the tender age of 19!  I also learned lots about gyrocompasses, jet engine control systems, other types of electronics, mechanical and hydraulic systems, and a ton of engineering in general.

As I was nearing 6 years in the Navy, I realized I was more than ready to start college.  During my last year in the Navy I bought an Apple ][+ with the Language Card and UCSD Pascal. My experience with Pascal was so different from my days with FORTRAN that I immediately knew I wanted to write software for a living.  I chose to attend UCSD because I wanted to be associated with a school that not only developed useful technology, but also got it into people's hands.

While applying to UCSD I learned about their Computer Engineering degree, which combined all of a Computer Science degree with the digital half of an Electrical Engineering degree.  It let me combine my desire to write software with my military engineering experience.

Upon arriving at UCSD I was immediately overwhelmed.  Six years away from high school had taken a toll, and I was far from ready for the rigor of an academic environment.  As a Freshman I was struggling to get through the brick wall of the Math sequence, though I had lots of fun with the Physics sequence, particularly the associated lab class.

In my Sophomore year I got to start on some of my technical electives, so I decided to revisit my first science love, Biology.  I was extraordinarily fortunate to be taught by Dr. Paul Saltman, a world-renowned molecular biologist who loved to teach undergraduates.  I took to the class like a fish to water, my mind absorbing the concepts like a dry sponge, and I aced nearly every test.

Dr. Saltman was unusual in that he created his post-test answer key using not his own answers, but the best of the student answers.  I didn't know this at the time of the first mid-term exam, and was surprised to hear my name and several others were called to meet with the professor one day after class.  He told us of his answer key approach, and asked if he could use our answers.  Of course we all agreed!  He then went around the group asking what year we were and what our majors were.  It went something like: "Biology", "Bio-Chemistry", "Molecular Biology", "Biological Physics", and when he got to me, I said "Computer Engineering", the only non-bio major in the group.

The same thing happened again for the answer key on the second mid-term, and Dr. Saltman started trying to convince me to switch majors.  This was a two-class sequence, and in the second class he really turned up the pressure, even inviting me to work in his lab if I changed my major!

I kept saying no, but I didn't really have a set of reasons he would understand, much less accept.  Then I finally came up with an analogy that worked!

I told him that every time I tested a program I was developing, it could crash and burn in any of a long list of ways, not only ending the program, but sometimes also causing the host system to lockup and reboot.  Were I to do similar experimentation in a biology lab, I told him I'd need a Biosafety Level Five containment system. Unfortunately, the Biosafety levels top out at 4.  He agreed his profession, and likely the planet, would be safer were I to stay with software, where at least we can pull the plug on the computer.

My engineering experience pointed me toward writing software that interfaced directly with the real world via sensors and actuators, rather than interacting with "users".  These are called "embedded" systems, and often require the software to work as fast as things happen in the real-world, called "real-time".  So I called myself an embedded/real-time systems developer.

My education, experience and career desires came together in my first job after college: I was hired by General Atomics to write software for radiation monitoring systems for commercial nuclear power plants.  I next worked on nuclear reactor monitoring and control systems for a new Navy nuclear submarine. At my next job I worked on automated X-Ray inspection systems for munitions, and on a neutron beam system used to inspect aircraft wings for corrosion.

I've also worked on ultra-high-speed digital video cameras (100,000 fps), on instruments for aircraft, on satellite electronics (for a mission that unfortunately never launched), communication systems for small UAVs, and on many other fascinating systems.

I had many opportunities to step into management roles, but I always chose, perhaps selfishly, to remain a software developer specializing in instrumentation and embedded/real-time systems.

My focus on instrumentation meant I did only a minimum of user interface and web development, and no mobile development at all (other than writing the occasional device driver). I wrote no business or enterprise software, and had only a minimal grasp of IT principles.

While there will always be a need for instrumentation software, my specialization forms an ever smaller fraction of the overall software development landscape. Which has made job searches increasingly difficult, and interviews more frustrating, as fewer HR people know how to specify and fill positions for instrumentation developers.

That's not to say there's no hope!  The explosion of the Arduino and now the Raspberry Pi into the hobbyist market bodes well for a healthy population of embedded/real-time system developers.

After 30 years I'm finding that choosing to stay out of management has made me a relative fossil among the applicants for instrumentation/embedded/real-time developer positions.  Rather than beat my head against the wall, I've instead decided to take my skills and experience in a new direction: I'm going to become a STEM teacher, and see how many folks I can convince to become the next generation of engineers!

Wednesday, November 22, 2017

The Path to Becoming a STEM Teacher.

Those who've seen my recent Facebook posts know I finally decided to become a triathlon coach, with the intent to focus on beginners and data-driven coaching.  Literally days after making that decision I received a newsletter from Code.org mentioning that EnCorps was recruiting STEM teachers from the sci/tech community.

I immediately thought: "Woah. Teachers get summers off.  I could coach more during race season!"

Then I thought about the state of my career, and that it may be time for a major change.  Over the past few years I've been encountering significant ageism now that I've become an "older" engineer seeking permanent or contract work. It's certainly not as easy for me to find new business as it used to be!  I call it ageism because I know for a fact my skills are relevant in the market: Some recent job descriptions look as though they were pulled from my resume!  Yet I'm not getting many interviews, not even phone interviews.

EnCorps gave me a phone interview last Monday, a few days after I completed the online application, and they've scheduled an in-person interview for next Wednesday.  The feedback I've received from the EnCorps SoCal recruiter has been totally enthusiastic, despite the fact that only about 18% of EnCorps applicants make it to the classroom.

Still, it is nice to be wanted. I had almost forgotten what it felt like.

I started down this path primarily out of curiosity. I'm getting more excited with each passing day, but also more aware of the huge amount of work ahead and the great responsibilities to come.

But why leave engineering?  It is what I've loved doing for over 30 years, and it forms a core part of my identity.  It has also been the most fun I could ever imagine getting paid for!  Being an engineer has been the perfect fit for me.

I've had to look very closely at my motivations and the downsides.  It could be that I'll be a terrible teacher, though I honestly believe I'll do fine. I've had several teaching experiences during my career, and they all turned out well.  I greatly enjoyed them, and my students did too.

Truth be told, I have hobby projects that will keep me neck-deep in hands-on engineering for years to come. Many of these projects were started so I could learn and apply new technologies, both for fun and for professional development.

I've also been advising crowdfunding projects, participating in several science and tech forums, and answering questions on some of the StackExchange sites.  Which, when you think about it and squint just right, could look a bit more like teaching than engineering.  I wonder if I've been on this path for a while, and simply failed to see it for what it was?  Perhaps, but I suspect it's simply how I like to fill my time. Still, it is relevant.

I'm moving forward with a career switch to STEM teaching.  Wish me luck!

Sunday, November 12, 2017

Failure Modes for Self-Driving Cars: It's All About "Situational Awareness"!

There has been lots of recent discussion concerning when and how self-driving cars should return control to the driver, and how this process should work in a variety of scenarios.

I won't be discussing truly autonomous vehicles, which by definition have only passengers, not drivers.  Self-driving cars, in my use of the term here, always require the presence of a licensed driver, and completely support operation as conventional cars.  I'll use the term "autopilot" (as in the aircraft and Tesla sense) to more clearly distinguish "autonomous" from "self-driving" vehicles.

The first and most important scenario concerns the rapid and total failure of the autopilot system, where control of the vehicle suddenly shifts to the driver.

Even if the car has independent and hardened emergency systems to help out when the autopilot ceases to function normally (either because of damage or exceeding its capabilities), there is always the (low) chance that such backup systems will all fail when the autopilot does.

I remember well the first time I bought an older luxury car with all the nifty powered accessories.  It was also my first car with working A/C.  I was so proud of it, as it was a huge step up from the junkers I had been driving and endlessly fixing.

Late one evening while driving on the highway at speed, the battery cable fell off and hit the body, shorting the entire electrical system to ground.  (I later found the entire battery post had fallen off!)

The headlights and dash lights went out, and I initially felt blinded.  Cruise control cut out and the car started slowing.  I had no power steering and the car started drifting out of its lane.  Gas pedal response was sluggish and the engine started running rough.  The automatic transmission wouldn't shift automatically.

It was only my experience with a series of junker cars that saved me.  While I never before had everything die all at once, it wasn't rare for one thing or another to go wrong for me during a drive.  Pretty much every car system had failed for me at least once.  In the back of my mind I was always running sub-conscious "what if" scenarios, and adjusting my driving to avoid traffic situations that could make a failure worse.

I firmly gripped the steering wheel and started "driving by Braille" while my eyes adjusted.  Fortunately I was in California, which has "Botts' Dots" bumps and reflectors glued between the lanes and at the outer edges. While the headlights of the cars near me helped, it still took about a full second for my eyes to adapt to the metropolitan sky glow well enough to see the road immediately in front of me.

I had no brake lights; I knew the greatest hazard was the cars behind me and next to me, so I didn't want to slow down too quickly.  I applied some gas and tried to get the transmission to shift (it did respond to manual input).  Only after I reached the shoulder did I apply the brakes and come to a complete stop.

Now, let's instead say I was in a Tesla, with full Autopilot Mode active, when the battery pack suddenly became completely disabled (not really possible, but work with me here).  This raises two main questions:
1) What parts of this scenario can or should be handled by the "dead" self-driving system?
2) How can the driver be kept ready to cope with the "total failure" situation?

Let's discuss the first one first:  Before we can trust a self-driving system to self-drive, we must first trust the self-driving system to exit self-drive mode and bring the car to a safe stop, even without driver help.

This means the car will likely need two separate systems:  The self-drive system, and a separate emergency system that monitors both the self-drive system and the driver and on its own can safely bring the car to a halt.  This emergency system must:
- Have its own controller, wiring and power source, separate from the rest of the car.
- Be able to take steering, propulsion and brake control away from a failed self-drive system.
- Work long enough to get the car from speed down to a safe stop, preferably at a safe location.
- Allow the driver to take control at any time.
- Encourage (not force) the driver to take control when the emergency system itself lacks control of steering and/or brakes (electrical regen and/or mechanical).

There are many other things such an emergency system must do, but they are at a lower priority than the above.  For example, such a system should also snug the seatbelts to ensure the driver is in the right position to take control and (worst case) be ready for airbag deployment.  The system should also pose minimal risk to other traffic by doing its maneuvers in ways that enable other drivers to safely respond (avoid causing accidents).
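To make the division of responsibility concrete, here's a minimal sketch of such an independent emergency monitor.  Everything in it (the names, the timings, the thresholds, and the toy vehicle interface) is my own illustrative assumption, not any vendor's actual design.

import time

HEARTBEAT_TIMEOUT_S = 0.2   # self-drive system must check in at least this often
SAFE_DECEL_MPS2 = 2.0       # gentle braking so traffic behind can react


class EmergencyMonitor:
    """Runs on its own controller and power source, per the list above."""

    def __init__(self, car):
        self.car = car                          # hypothetical actuator interface
        self.last_heartbeat = time.monotonic()
        self.engaged = False

    def heartbeat(self):
        # Called by the healthy self-drive system on every control cycle.
        self.last_heartbeat = time.monotonic()

    def step(self):
        # Called periodically by the monitor itself, independent of the autopilot.
        if self.car.driver_has_control:
            self.engaged = False                # driver input always wins
            return
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            self.engaged = True                 # autopilot presumed dead
        if self.engaged:
            self.car.hold_lane()                # keep the car in its lane
            self.car.brake(SAFE_DECEL_MPS2)     # bleed off speed, don't slam to a stop
            self.car.alert_driver("Autopilot failed -- please take control")


class DemoCar:
    """Toy stand-in for the vehicle so the sketch runs as-is."""

    def __init__(self):
        self.speed = 30.0                       # meters/second
        self.driver_has_control = False

    def hold_lane(self):
        pass                                    # a real system would steer here

    def brake(self, decel, dt=0.1):
        self.speed = max(0.0, self.speed - decel * dt)

    def alert_driver(self, msg):
        print(f"{msg} (speed now {self.speed:.1f} m/s)")


if __name__ == "__main__":
    car = DemoCar()
    monitor = EmergencyMonitor(car)
    time.sleep(0.3)                             # simulate the autopilot going silent
    for _ in range(5):
        monitor.step()

The important design choice is that the monitor never needs the autopilot's cooperation: it runs on its own controller, all it watches for is the heartbeat going stale, and driver input always overrides it.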

Such systems already exist and are in common use in other industries.  For example, virtually all industrial robots have independent safety monitoring systems that prevent the robot from harming itself or its environment, especially people nearby.  And NASA has for over half a century pioneered such emergency control systems for aircraft and spacecraft.

Now let's look at the second situation: Even the best emergency backup systems can fail.  Fortunately, old technologies (and existing regulations) ensure the driver can establish emergency control over steering and brakes.  The problem now becomes ensuring the driver is ready to take control.

The emergency backup system is a form of "active" safety.  Before digging deeper, let's talk about "passive" safety systems:  When all else goes wrong (but no collision has occurred), the mechanical systems themselves can provide safer vehicle behavior.  The most familiar examples of this are:
1. The mechanical design and construction of the steering system, where the wheels gradually come to center when the driver (or autopilot) is not exerting direct control.
2. The design of the accelerator and brakes, so that neither engages without the driver (or autopilot) exerting direct control:  The vehicle passively glides to a stop when active control is absent.

Clearly, every self-driving car must preserve all existing passive safety features.  That's actually a significant design constraint: the self-driving actuators must default to being safely inactive whenever power or positive control is removed.

Very few drivers today have any experience with unreliable cars.  Cars built over the past 30 years have amazingly low failure rates (assuming you promptly handle all recalls), leading to exceptionally high reliability and driver confidence.

Can we maintain driver confidence, and create such confidence for autopilot systems, while simultaneously keeping the driver ready to take over during a total system failure?

Here's where we finally discuss the title of this post, "situational awareness".  In this case, situational awareness means the driver is continuously informed about, and consciously aware of, the status of the car and the state of the current driving environment.  This level of awareness must especially be maintained while in self-driving mode, when the driver may be focused on other activities.

It is important to understand that awareness is always changing; it fades with time and must be actively refreshed.  The best possible awareness comes only when in full manual control: In all other fully- or semi-automated driving modes, the driver will inherently and inevitably have a significantly reduced level of situational awareness.

The goal then becomes keeping the driver at a "good enough" level of situational awareness that will enable prompt switching to the full, manual control level of situational awareness.

Increasing our level of situational awareness is perhaps one of the hardest tasks for the human mind to do in real-time. It involves not only refocusing our senses, but also activating our musculature, and even changing our posture.

Here's the worst case, the stuff of nightmares: Imagine being asleep, then waking up in the cockpit of a race car in the middle of a race.  Your ears are filled not just with the noises of the car, but also of the other race cars and maybe even the crowd.  Your eyes are assaulted by the brightly lit race course filled with weaving cars, as well as a dash filled with a huge number of gauges.  Your hands feel the shake of the steering wheel as you compulsively tighten your grip.  And who knows what your legs and feet are doing!

Clearly, the first thing is to not make the situation worse.  There must be no visual or audible distractions that get in the way of dealing with the situation: Alarms must be very noticeable, but not shockingly loud or bright.

OK, so that's the worst case when a loss of autopilot happens.  How can we best be prepared for it?  And do so without removing the benefits of self-driving?

Well, obviously the driver must be awake.  Not only that, but the driver must also be alert enough to take control.  The only way I know of to positively ensure this with any degree of reliability is by interactive testing (not via passive monitoring, as others have suggested).  The driver must occasionally take actual full control of the vehicle, or at least demonstrate a precisely equivalent level of readiness by other means.

More importantly, this is not just about manual driving: It is about ensuring the driver is capable of smoothly and safely transitioning from self-driving mode into manual driving.  It's about the process of taking control, a precursor step to the process of manual driving.

To me, this means the modes of the autopilot can't be simply "on" and "off".  It should have the initial mode of "taking automatic control" and the final mode of "surrendering automatic control".  This last mode should, to the greatest extent possible, also be part of the emergency system.

If the driver can't successfully follow the "surrendering automatic control" process, the system should not turn control over to the driver, and should instead perform an alternative action (continue driving, pull over safely, etc.).
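Here's a small sketch of what that mode machine might look like, with an explicit "surrendering automatic control" state that only hands over once the driver has demonstrated readiness.  The state names, the timeout, and the hands-on-wheel readiness test are all my own illustrative assumptions.

from enum import Enum, auto


class Mode(Enum):
    MANUAL = auto()
    ENGAGING = auto()          # "taking automatic control"
    AUTOPILOT = auto()
    SURRENDERING = auto()      # "surrendering automatic control"
    SAFE_STOP = auto()         # fallback when the driver never becomes ready


HANDS_ON_WHEEL_NEEDED_S = 3.0  # assumed readiness test: steady hands on the wheel


def next_mode(mode, driver_requests_manual, hands_on_wheel_s, autopilot_healthy):
    """One step of the mode machine; returns the new mode."""
    if mode is Mode.AUTOPILOT and driver_requests_manual:
        return Mode.SURRENDERING
    if mode is Mode.SURRENDERING:
        if hands_on_wheel_s >= HANDS_ON_WHEEL_NEEDED_S:
            return Mode.MANUAL                   # driver proved readiness
        if not autopilot_healthy:
            return Mode.SAFE_STOP                # can't wait: pull over instead
        return Mode.SURRENDERING                 # keep driving, keep prompting
    if mode is Mode.MANUAL and not driver_requests_manual:
        return Mode.ENGAGING
    if mode is Mode.ENGAGING:
        return Mode.AUTOPILOT if autopilot_healthy else Mode.MANUAL
    if mode is Mode.AUTOPILOT and not autopilot_healthy:
        return Mode.SURRENDERING                 # start the handoff, don't just quit
    return mode


# Example: the driver asks for control but keeps their hands off the wheel.
mode = Mode.AUTOPILOT
for hands_on in (0.0, 0.5, 1.0, 4.0):
    mode = next_mode(mode, driver_requests_manual=True,
                     hands_on_wheel_s=hands_on, autopilot_healthy=True)
    print(hands_on, mode)

If the driver never passes the readiness check and the autopilot can no longer keep driving, the machine falls through to a safe stop rather than dumping control on someone who isn't ready, which is exactly the behavior argued for above.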

Sunday, November 5, 2017

Writing Documentation That Doesn't Suck

I've often had to write manuals for the products I've developed. The Technical/Maintenance manual is easiest (because the audience is technical), and the User/Operations Manual is by far the hardest (anyone can be a user).

Like many engineers, I took the minimum number of required writing classes.  So it was not a huge surprise that my initial attempts at product documentation were terrible.  Over time I finally became "not horrible" at documentation, and the main path to success was to avoid saying too much!

There are many technical writing guidelines online, but I find most are too narrowly focused to be of general use. A few simple guidelines are generally enough to avoid documentation disaster.

Here are some guidelines that have served me well:

  • "Don't write so that you can be understood, write so that you can't be misunderstood." William Howard Taft
This is partly about getting inside your reader's head, and partly about getting out of your own. It is all too easy to write for people who are near-clones of yourself, and forget the wide range of other folks on the planet.

It's about making your writing as "simple and obvious" as possible. Avoid long-winded explanations when a couple short, carefully-crafted sentences will do the job. That said, always use however many words are needed to make each point clearly and concisely.

Important things may need to be said more than once. What I typically do is "say it once", then show an illustration, then explain the illustration, and finally summarize what was just done (what success looks like).

  • Have some "fresh eyes" available.
Given that we can't understand all possible readers, we must remember that we only truly care about the first-time reader. That means at least some of the folks who review our work must also be as close to a first-time reader as possible.

In particular, this means it must be simple and easy for actual users to provide feedback on the manual itself. Encourage each customer to print and mark-up the instructions, and tell them how to get their input back to you (email, forum, etc.).

  • Include a glossary.
It is way too easy to use too many technical terms, and too hard to get rid of them. Having a glossary and always keeping it current is a great way to track specialty terms and language.

  • Don't get tied down to a Table of Contents: Make it the last thing you generate.
Too many folks start with a Table of Contents as the plan for the document. This is backwards! The document should have whatever organization and structure it needs to get its job done, and the flow is expected to change with time.

That said, it is important to have a "ToDo List" for the document, a detailed set of goals for what must be included and not left out.

Of course, organization is needed, but primarily at the lower levels:
o What is the purpose of this step?
o What tools and parts are needed to accomplish this step?
o What things must I do?
o How can I verify that I did it correctly?

  • There is no such thing as too many good illustrations.
However, there is such a thing as too many bad illustrations! The old saying "A picture is worth a thousand words" isn't totally wrong, but having a picture doesn't mean words aren't necessary. Illustrations should add context and meaning to words, not replace them.

For something like kit assembly, there are going to be situations that words can't express in an understandable way. This is when illustrations matter most, so take the time to create lots of candidates and choose the best. Try to avoid the "one and done" attitude to images or drawings.

  • Layout matters! But not until close to the end.
One important goal is to not force the reader to have to flip back and forth between pages to understand what's going on. Text mixed with images? Images and text in separate columns? Size? Pagination? These are all important to the reader, but not unless and until the needed information is already present in the document.

  • Documentation is really about "teaching", not "telling".
The user has goals, and the documentation must ensure the user will meet those goals with minimal confusion, and minimal need to ask for help. For kit assembly, the initial steps should train the user to become a good assembler, not merely get things put together.

Not all of us learn in the same way, and there are a number of ways by which we learn. These ways are called "learning modalities" (or "learning styles"), and while we all have access to all of them, some work much better than others, and which ones work best varies between individuals.

It is often necessary to say a thing in different ways (words, pictures) in order to engage multiple modalities. It is also important to help the user sharpen the modalities that will be most useful, and that's where training comes in. Take time at the start to build the skills the user will need before making use of them. Even the fundamentals matter:
o What is an "M4 screw"? Is a screw different than a bolt?
o What does it mean to "tighten a screw"? How tight is tight enough? How tight is too tight?
o What does it mean to "crimp a connection"? How can I tell if I did it right?

Sunday, October 22, 2017

SexyCyborg: Causing Continual Cultural Conflict

I want to talk a bit about Naomi Wu, a Shenzhen freelance programmer, a Maker, a model and a vlogger, who added an enormous pair of 800 ml breast implants to her already abundant good looks and then established her YouTube persona as SexyCyborg.

When appearing in public, she often wears the tiniest of nano-shorts and a crop-top revealing a slice of under-boob.  At pool parties, her bikini consists of little more than strings.

Naomi is intentionally stirring the interfaces between Makers, engineers, culture, sexism and more.  She has many goals in her life, but the one that fascinates me most is her goal to level the playing field for everyone, female and male, young and old, newbie and expert, both as Makers and in life in general.

I support her on Patreon, and I want to be clear about the reasons why.
  1. She's a female Maker pushing her way into a global phenomenon still dominated by white Western sexist male culture.
  2. She's a talented self-taught Maker who works in multiple areas, from software, to 3D design and printing, to wearables, to work tables and shelves.
  3. She's a Maker on the inside of the Great Chinese Firewall.
Certainly, the above characteristics alone are worthy of support.  The fact that Naomi is also the SexyCyborg really has relevance to only two items in the above list: Maker sex discrimination and her wearables.

There is one enormously important item missing from the above list:
  1. Inspiring Makers of all ages, especially female Makers, to pursue their interests despite gender-based or age-based resistance.
This is actually what I most want to support, and would most like to see succeed on a global basis.



I'm making it sound like it's all about Naomi Wu, SexyCyborg.  It isn't.  It's also about me.  It's about how I view Makers and how I view women, and how confused I used to get when both appeared in a single package.

I always thought of myself as an unbiased person, free from common prejudice.  But Naomi arrived like a stick in my eye, jumbling my perceptions, causing me to uncomfortably flip between "Wow, cool Maker" and "Wow, hot woman" as if they were two different things.

I can't count the times Naomi has made very clear the differences between how people in Shenzhen respond to her appearance and how Western males do.  The charming videos with kids and "Aunties" were one thing, but the lack of drooling, leering looks from Shenzhen males when she walked in public finally made the point clear to me.  The few leering looks I can recall in any of her videos came primarily from the Western expats at her pool party modeling gigs.

So, what's the difference between Shenzhen men and Western men?  Is there a difference I can find, understand, learn from, and put to good use?  Well, I'm far from Shenzhen, and don't speak either Mandarin or Cantonese, so direct research is difficult.  I started by rewatching Naomi's videos, as well as videos by other Asian vloggers, both locals and expats.

I was most impressed with the videos by Western expats who had chosen to live in China long-term; had built a career; had married a local; had started a family.  How their videos changed from the earliest to the most recent.  And how they and their spouses reacted during trips to the West.

Then I viewed Naomi's videos again, especially her 360 videos, where I could look at all the folks around her.  I noticed how she got about as many looks from Chinese women as she did from Chinese men, and for about the same brief length of time, with no major facial reaction other than, at most, a small smile, never a frown.  Many of the expats had the same behavior, though there were several very obvious exceptions who stared and leered, turning their bodies when their necks reached a limit.

After a while I finally started to understand something. Perhaps it's not the full picture, but I think Chinese and Americans view beauty, especially female beauty, in very different ways.  It may come down to the perception of beauty itself.

For example, to a Westerner, a beautiful building, a beautiful garden, a beautiful song, and a beautiful woman are likely thought of as representing distinctly different kinds of beauty.  To the Chinese, I believe they are seen as parts of a continuous whole, free of sharp distinctions, sharing more commonality than differences.

Western males judge women according to their perceived beauty, attributing to them qualities that are unrelated to appearance, such as personality or intelligence.  Western women are not immune to similar judgments concerning each other and men.

I believe the Chinese view physical attributes as simply a means to recognize someone from a distance, and not as a representation of who they "are".  The Chinese also view identity a bit differently, not residing entirely within the individual, but also being diffused into family, friends, associates and even society.  To get to know someone in China I believe it must be just as important to spend time with others who know them in addition to individual time.

I believe it does come down to identity, both its definition and perception, particularly in how and when appearance becomes a factor.  And, perhaps, how there is an intentional Chinese cultural emphasis on similarities over differences.

China is a communist country, a "People's Republic", that has been undergoing great social and economic upheavals over the past 40 years, causing stresses that, after a period of easing, now push the government to become more authoritarian, with increasingly invasive public and private monitoring.  The underlying ethos was and is: "we are all in this together, and must work for the common good".

China is also a region with a long dynastic history massively dominated by the Han culture and race.  In China, all of the non-Han taken together are a surprisingly small part of the population (under 10% of the total), with explicit government policies having the goal to absorb and diffuse formerly separate races and cultures into the Han-dominated society.  An example of this is the ongoing government-supported Han migration into Tibet.

Taken together, these factors provide a context that encourages specific social traits, especially among the overwhelming Han majority.  While Westerners may view some of these traits with disdain, the simple fact is that equality is much more a default assumption within China, particularly among the Han.  And I'm saying "more equal", not "totally equal": China still has many cultural stereotypes related to sex, age, and particularly race, but they are much less noticeable than the equivalent Western traits.

I believe this Chinese equality particularly extends to sexism and judgments based on appearance.  For example, personal style is both accepted and appreciated in China, but not judged by its absence or extremeness. My perception is that fashion and style are much more about expression and entertainment than about identity.

A Western comparison may be our taste in music:  We sometimes want to listen to Jazz, other times Pop, but our favorite may be Country.  We seldom judge people by what they are listening to in the moment or by their general music tastes, at least not in any way close to how we judge people based on their appearance.

So, I've chosen to try to view the external physical beauty of people more like how I suspect the Chinese do, more like a beautiful song or flower, rather than something that should inflame my hormones.

And it's working!

But I must admit I did have a bit of a head start:  Decades ago I was a semi-pro photographer (which means I took gigs only when I wanted a new piece of equipment).  As a photographer, I saw whatever was in the viewfinder as part of the picture, something to be properly framed, lighted, and composed.  This applied equally to people, places and objects.  I was photographing beautiful things, but it was more about the beauty they shared, an abstract beauty.  I took particular joy in revealing the beauty in things often not thought of as beautiful.

I gave up photography because I had let the camera isolate me from life on the other side of the lens.  I had become increasingly shy in social situations.  Things got quite bad before I realized there was a problem; it got to the point where I rarely went anywhere without my camera.  I quit cold-turkey, which helped immediately, but only decades later did I realize I had also given up the equality of the viewfinder.

The more I try to see "beauty as beauty", the more I see the world as potential photographs.

When I see images of Naomi in minimal clothes, I find I now look first to see if a Maker project is also in the image.  Because that is who Naomi is to me.  See the list at the beginning of this post.

Don't get me wrong: I don't appreciate Naomi's physical beauty any less than I did before!  I now appreciate it as part of the beauty of the greater whole; the Maker, the programmer, the model, and the many other attributes of Naomi Wu, SexyCyborg.

I also view those around me differently.  I like having less of a Pavlovian response to attractive women, less being shy and tongue-tied in their presence, more interested in the rest of who they are.  Most notably, this affects how I interact with female bar and restaurant staff, whom I now seem to adopt as sisters independent of their attractiveness.

And, finally, I must admit to the changes being ongoing and incomplete.  For example, I have come to fully understand just how rude it is to openly stare at people.  So I now do it covertly, behind sunglasses, from the corner of my eye, with my nose pointed straight ahead, with my face neutral.

My journey may not yet be complete, but I can at least try to act as though I'm further along the path.  As most Shenzhen folks do.

One day, one step, one person at a time.

Saturday, October 21, 2017

Cybersecurity *IS* "Defense in Depth".

I'm not any kind of security expert.  I'm just a real-time/embedded instrumentation developer who needs, from a security perspective, to Lock Shit Down.

No matter how much you know about your hardware and software, you really know very little of use until you get down to "Formal Proofs of Correctness" for all parts of the system.  Which are scarce, to say the least, presently limited to academic exercises or special-case military-type implementations.

When I came across anything that could use a network to affect the boot process, I'd ask IT to firewall it in our routers (in case the on-system configuration of that capability failed or was overridden), and write a test to show that the firewall was at least trivially working to meet that need (which the customer would also run to help ensure system security).

Then we came across firewall vendors who explicitly prevented closing such channels, or didn't explicitly show they were open.  We needed to assume our external firewalls would lie to us.

So I started preparing for "Defense in Depth", to have a series of simple firewalls present at every opportunity that wouldn't destroy latency or throughput (such as within network interface hardware and drivers). This worked well enough when on a wired interface, but proved inadequate when our systems started supporting wireless interfaces.

I finally had to move my embedded applications into VMs (at a significant increase in platform hardware cost), and implement firewalls at every opportunity both within the VM and between the VM and the host OS/hypervisor.

All these firewalls primarily concerned blocking traffic that wasn't related to the instrument functionality itself, traffic that was out-of-band relative to the application.  To keep things small and fast, we avoided stateful firewalls.

As the platform hardware capabilities grew (to multicore ARM), we shifted from tiny "secure" RTOSes to Embedded Linux, which meant we also needed to address hundreds of CVEs on our platform if we wanted to sell into certain markets.  This pushed us to consider using stateful firewalls.  Not to detect traffic related to a CVE, but instead to block everything that wasn't valid application traffic.

From a black-box perspective, our system eventually became immune to all known attacks, including fuzzing.  What it took to finally get us there was a formal, provable specification of ONLY our instrumentation application protocol.  This specification took the form of a lightly modified version of the firewall rule syntax: Our specification was executable!  We used it not just to initialize the firewall, but also to generate the application interface code.  A completely separate version of the specification was used for testing and validation.

Best of all, this made the instrument interface-agnostic and also agnostic to higher-level protocols: We no longer cared where the application traffic was coming from, wired or wireless LAN, even including everything from serial interfaces to cellular gateways (including SMS!).

Bottom line:  1) Use simple firewalls to block all-but-application traffic from the platform.  2) Use stateful application protocol firewall(s) to permit only valid application traffic.
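Here is a toy sketch of that two-layer idea, driven from a single declarative protocol spec.  The port, the command grammar, and the generated rule strings are all illustrative assumptions of mine, not the actual product's specification or firewall.

APP_SPEC = {
    "port": 5555,                           # assumed instrument control port
    "proto": "tcp",
    # Stateful grammar: which commands are legal in which session state.
    "states": {
        "idle":      {"HELLO": "ready"},
        "ready":     {"START": "measuring", "BYE": "idle"},
        "measuring": {"READ": "measuring", "STOP": "ready"},
    },
}


def packet_filter_rules(spec):
    """Layer 1: default-deny rules that admit only the application port."""
    return [
        "iptables -P INPUT DROP",
        "iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT",
        f"iptables -A INPUT -p {spec['proto']} --dport {spec['port']} -j ACCEPT",
    ]


class ProtocolFirewall:
    """Layer 2: stateful filter that permits only valid application traffic."""

    def __init__(self, spec):
        self.states = spec["states"]
        self.state = "idle"

    def allow(self, command):
        transitions = self.states[self.state]
        if command not in transitions:
            return False                    # out-of-grammar traffic is dropped
        self.state = transitions[command]
        return True


if __name__ == "__main__":
    print("\n".join(packet_filter_rules(APP_SPEC)))
    fw = ProtocolFirewall(APP_SPEC)
    for cmd in ("HELLO", "READ", "START", "READ", "STOP"):
        print(cmd, "->", "accept" if fw.allow(cmd) else "drop")

The point of keeping the spec declarative is the one made above: the same data structure can both emit the packet-filter rules and drive the stateful application filter, and a separate copy of it can drive testing, which is roughly what made the executable-specification approach pay off.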

Unfortunately, while this works well for "simple" M2M instrumentation interfaces, it is extraordinarily difficult to scale to more complex and versatile environments, such as a web browser on a PC (or even a Raspberry Pi).

However, it SHOULD scale well to the IoT world.  But it MUST do so in a way that consumers will accept, which means keeping security holes small in both number and duration, especially for conveniences like DHCP and semi-automated WiFi configuration.

To do this, the common internet protocols relevant to IoT must be recast in minimal forms that will support the required IoT functionality and nothing more, and sets of stateful firewall rules must be generated for these protocols.  Then every IoT device must include internal stateful firewalls to execute them.

To do so with any hope of both system-level and application-level security, the IoT application itself must run in a VM. Which means only processors capable of VM support will be useful for "Secure IoT".  At present this rules out the tremendously popular ARM Cortex-M family of embedded processors, but there is hope that upcoming members of this family will inherit the VM capabilities of their larger brothers.

Tuesday, August 15, 2017

About the James Damore screed...

Many have read or at least heard about the 10-page paper by James Damore and his subsequent firing by Google.

What most have failed to note is simply that the workplace performance differences attributable to sex are of the same magnitude as, or smaller than, those attributable to other group classifiers such as age, race, education, economic history, and so on.  There are quantifiable differences everywhere you look.  Few are relevant to anything.

Damore seems to argue that because such differences exist and are measurable, they impose a burden or create negative impact.  Nothing could be further from the truth.  Damore is literally using his numbers backwards.

First, the differences between individuals easily overwhelm the differences between sub-groups, especially in tech, and particularly in software development.  So anything that may appear to be relevant at some level of classification fails miserably when applied to individuals.

Second, Damore conveniently ignores the evidence that diversity (of all types, not just sex) is a net positive in technical environments.  Working with people different from ourselves can literally make the best even better.

I prefer to think of our differences as the aggregate in the concrete that binds us together as a team, making it far stronger than the cement alone.

Personally, I've done some of my best work while part of diverse teams.  In particular, design and code reviews in a diverse environment are much more dynamic, creative and productive.

I've seen the petty passive-aggressive discrimination some express, such as consistently claiming they can't understand a coworker's accent, when in fact they understand just fine.  Or by snubbing others at social team or group activities.  Or by making snide comments behind their back.  Or by doing that fakey "Hello..." subtly sexist greeting.

Most of this is done by folks who look like me, a white male.  But some is also done by members of other various minorities trying to "fit in", which perhaps is the greatest tragedy of all.

This makes me both angry and sad.  My anger keeps me vigilant, always willing to redirect or defuse a situation, and to take more direct action in private.  My sadness keeps me open and sensitive, seeking out my quieter colleagues, looking for signs of exclusion.

I need all my colleagues.  I work best as a member of a team.  Yes, I do have my individual "rockstar" moments, and I treasure them, but they don't happen every day.  My team is what happens every day.  It is what enables my best moments.

Damore simply doesn't get that.  I can only wonder how well he works on any team, much less a diverse team.

Wednesday, July 12, 2017

BuildOne $99 3D Printer on Public Pre-order!

Just got the word that the Build One 3D Printer with Automatic Bed Leveling is now available for public pre-order, with delivery in 2017.

I got in on the Kickstarter for this printer, with mine expected to arrive in September.  The Creator of this printer has had many prior successful crowdfunding campaigns, so I have no doubt he will deliver, and do so very close to his stated schedule (something relatively few tech campaigns manage to do).

This printer costs only $30 more than the 101Hero I backed last year, and is a massively better piece of hardware, with printing speeds 7x to 10x faster than my 101Hero!

Now, I'm very glad to have my 101Hero, mainly because it is a Delta printer and has been a fantastic learning experience.  But it's time for me to do some real printing without breaking the budget.

Saturday, June 24, 2017

What a life!

This is a retrospective describing how fortunate I've been since first arriving in San Diego in 1975, focusing primarily on the technical aspects of my career.  I'll try to make this more of a time-line, with just enough detail to stand on its own.  I can always add more detailed posts later, if needed.

After graduating High School in the Midwest in 1974, I had no burning desire to immediately start college.  There were many issues involved, but the main one was my having no idea of what long-term career I wanted.  I needed an income, so I got some typical low-wage jobs suitable for folks without a degree, and within months decided I needed something more.

I joined the US Navy in February 1975, enlisting under the Nuclear Power Program (NPP).  At that time, four enlisted ratings (job categories) existed in the NPP: Electronics Technician (ET), Interior Communications Electrician (IC), Electrician's Mate (EM) and Machinist's Mate (MM).  My oldest brother was a HAM radio operator and had been an ET in the Navy, and I very much wanted to learn electronics.  Though I had qualified for ET, the most technically advanced of the four ratings, I was told that I would not be assigned to a rating until after I started boot camp.

I was assigned to the IC rating, which was initially a severe disappointment.  However, two factors about the IC rating combined to ease my disappointment:  First, the IC rating was responsible for equipment located throughout the ship, in literally every single part of it, covering a wide range of technologies.  Perhaps none were at the level ETs worked on, but the scope and breadth was very intriguing.  Second, IC school was in San Diego, a place of fables this Midwestern boy had never seen.

I arrived in San Diego in the spring of 1975, seeing my first palm trees as I exited the airport terminal.  I sailed through the IC coursework and totally fell in love not only with electronics, but also with electromechanical systems and the technologies of sensors and actuators.

A few months later I was sent to Vallejo, California for 6 months to attend Nuclear Power School (NPS) at the Mare Island Naval Shipyard (MINSY).  Here I fell in love with applied physics, especially nuclear physics, thermodynamics and hydrodynamics.

After that came 6 months attending Nuclear Prototype at the Idaho National Engineering Laboratory (INEL) reservation west of Idaho Falls.  I was assigned to the S5G prototype, the newest one there, and also the most technically interesting.  Unfortunately, the continuous intense effort required exhausted me just before the end, and I was unable to meet graduation requirements.

As my friends and classmates moved on to their new duty stations, I stayed behind while a place in the regular (non-nuclear) fleet was found for me.  During this time I learned about non-destructive testing (NDT) and the differences between quality assurance (QA) and quality control (QC).

I was overjoyed when my first choice of duty station, San Diego, was granted.  Unfortunately, the ship I was assigned to, the USS Blue Ridge (LCC-19), was on deployment in the western Pacific at the time, and I wouldn't get back to San Diego for several months.  This was not a bad thing!  I flew out to meet the ship in Japan, and had lots of shipboard time underway without the distraction of trying to have a life ashore.  This let me focus on quickly learning my new responsibilities.

I also had the opportunity to learn about many of the ship's other ratings and their equipment.  The Blue Ridge was a command ship, and had a large computing suite that was exceeded only by those on aircraft carriers.  I learned hands-on programming of the CP-642B mainframe system, and I instantly knew I wanted my future professional career to include computers.

At the time, the IC rating seemed more like a "catch all" rating for equipment that didn't quite fit within the responsibilities of other ratings.  Two critical pieces of equipment we were responsible for were the ship's gyrocompasses.  Our current gyro technician was scheduled to leave the ship, and since a replacement was not readily available, I was selected to attend Gyrocompass "C" school, exposing me to yet more theory and its application.

While at gyro school, I learned about the Gas Turbine Controls school, which was training technicians to maintain and operate the Spruance class of jet-powered destroyers (the same technology used today in the Arleigh Burke and Ticonderoga classes of ships).  I applied to change over to the "Gas Turbine Systems Technician (Electrical)" (GSE) rating.

I was selected, and I was exposed to yet more new technologies.  I did very well, and was selected to the pre-commissioning crew of a brand new destroyer that would be based in San Diego.  Getting a new ship from the shipyard into active service is very demanding, and by the time all of our shakedown trials and refits and updates had been completed it was late 1979, and I had only about a year left on my 6-year enlistment.

The Navy had been exceedingly good to me, and I seriously considered becoming a "lifer", staying in until retirement at 20-30 years of service.  However, I had advanced through the ranks very quickly, and my next promotion would have been to Chief Petty Officer, a paygrade focused more on managerial, administrative and training duties than on hands-on equipment operation and maintenance.

I really enjoyed being a hands-on technician and operator, and didn't want to give it up.

Since I had the GI Bill available to me, along with some savings I had accumulated over the years, I decided to let my enlistment expire, so I could take a closer look at my future civilian career path.  I would stay in the Navy Reserves, so I could easily return to active duty should I choose to do so.

During this last year of active duty, I bought my first PC, an Apple ][+ complete with the 48K Language Card and UCSD Pascal, a massive $2000 investment in 1980 dollars (about $6000 in 2017 dollars).  Most of my programming was done in BASIC: I was impressed with UCSD Pascal, but was having problems learning it on my own.

Being in San Diego, and with the University of California at San Diego (UCSD) right here, I immediately decided that, should I choose to go to college, UCSD would get my first application.

I left the Navy and within months was making use of my nuclear and electronics training at General Atomics, performing factory calibration of radiation detection systems used in many commercial nuclear power plants.  I was soon working on debugging new prototype instrumentation, and soon after that I became an R&D (research and development) technician assigned to work with the division's chief researcher, who had a PhD.

Working shoulder-to-shoulder with a PhD made one thing very clear to me:  We were both equally smart, but his education permitted him to work at an amazingly higher level.  I immediately submitted applications to 6 of the top engineering universities (Cal Tech, UC Berkeley, UC San Diego, Carnegie Mellon, MIT, Champaign-Urbana) and by mid-summer I had been accepted by all but one of them.  Most importantly I was accepted by UCSD, and that's the offer I accepted.

I majored in Computer Engineering, an overloaded degree program that included all of a Computer Science (CS) degree along with the digital half of an Electrical Engineering (EE) degree.  Needless to say, I soon knew I was on the "5 year plan".

I continued to work at General Atomics (GA) during school: Full-time during summers and breaks, but also part-time when classes permitted.  GA was extremely supportive: Every time I learned something useful, they'd find ways to let me use it, simultaneously giving me a promotion.

I graduated from college wealthier than when I started!  This was thanks primarily to the combination of the GI Bill, the Navy Reserves, and General Atomics.  And also to UCSD, who gave me opportunities to be a paid tutor and lab proctor for lower-level Physics courses.

Leading up to my graduation in 1986, I was eagerly recruited by several top tech companies, receiving some great job offers.  Fortunately, the best offer by far was also the only one that would keep me in San Diego:  My offer from General Atomics was generous to the point of embarrassment, 30% higher than my next highest offer (which was also very generous).  GA management repeatedly assured me they felt they were getting a bargain. So of course I accepted their offer.

I continued to design and implement radiation detection instruments, and even got to get into the Navy side of things by working on the reactor control and monitoring systems for the Navy's next-generation nuclear attack submarine.

During this time I tried to get involved in work being done in San Diego by other parts of GA.  The San Diego Supercomputer Center (SDSC) was created just as I graduated, and for the next year I tried to get a position there to help bring up their new Cray X-MP.  In late 1990 GA's expertise in fusion technologies led to San Diego becoming the first home for the ITER (International Thermonuclear Experimental Reactor) project, and again I tried hard to join their early staff, without success.

Then our division's top-level management started making changes that made my job much more difficult.  Perhaps I had been spoiled by having "too much fun" in my career, but I decided to move on.  I first tried to transfer to another division within GA, but there were few openings at the time, so I decided to leave GA for another company that had been started by GA veterans: SAIC (Science Applications International Corporation, now Leidos).

There my radiation instrumentation experience was leveraged to build inspection systems using X-Ray and neutron beams.  In particular, I got to work on bleeding-edge technology for real-time automated video inspection systems.

By 1991 SAIC was "strongly encouraging" (pushing) me to move into technical management. I gave it a try and did well at it, but it gave me little joy. The experience convinced me I was happiest when doing engineering myself, rather than enabling others to do it.  However, once I had entered the ranks of management, SAIC was reluctant to let me switch back to being an engineer.

Given my wide and deep experience, I decided to become an independent contractor.  I was soon working with yet more technologies and targets for embedded systems, including satellites, cable boxes, and security systems.

I was doing well, but I soon realized I sucked at marketing myself: 100% of my contracts came from referrals.  Within four years I started to encounter significant gaps without a contract.  I took some work through temp agencies to fill these gaps, but just before deciding to throw in the contracting towel and return to a "regular" job, in 1998 the "dotcom bubble" came to my rescue.

While the primary heat was up in Silicon Valley, San Diego became known as "Silicon Beach".  Our relaxed atmosphere and diverse tech community attracted many startups, and I got to help a few of them, working on an ever-increasing array of new technologies.  By late 2000 it was clear the bubble had popped, and my career as an independent contractor evaporated.

One boom I missed was San Diego's biotech explosion. Another boom I missed was the explosion in digital cellular phone technology centered around San Diego's Qualcomm.  But I did catch another important wave: High-speed digital photography.

I became part of a team designing a digital video camera capable of 100,000 frames per second.  I had two different areas of responsibility, color processing and the low-level camera command interface, both of which exposed me to yet more new theory, technology and applications.

Immediately after releasing our new camera, the company was purchased and relocated to Arizona.  I chose to stay in San Diego and was soon working for an aircraft instrument company.  While the underlying technologies were not new to me, the process of getting an instrument through FAA certification certainly was.  My prior experience in nuclear systems was primarily focused on industrial and operator safety.  Now I was directly affecting human safety: If my instruments malfunctioned, people could easily die.

A few years later I was asked to help a startup making a new radiation detection system for Homeland Security applications, so I left the aircraft instrument company.  Unfortunately, the startup folded 6 months later.

I next worked at a maker of surveillance equipment, most of whose customers were government agencies known as "TLAs" (Three-Letter Agencies, such as the FBI).  Here I got my first exposure to digital radios, and helped design and implement a broadband point-to-point communication system for smaller UAVs, giving them the ability to handle the same sensors as the "big boys" (such as the Predator and Global Hawk), and to do so without the need for expensive satellite uplinks.

Since then I've gone back into contracting, both for myself and through temp agencies, and have worked in areas as diverse as cybersecurity and underwater navigation.

All this was done within San Diego, actually within a 30-minute commute from my home.  That's not a bad lifestyle!  I'm also a triathlete (San Diego is the birthplace of the modern triathlon), a volunteer swim instructor, a wanna-be musician, a volunteer supporter of local live theater, and an inveterate hacker on my home automation system and 3D printer.

If you want to find that precious intersection of a fulfilling and diverse technical career with a rich and abundant lifestyle, San Diego is tough to beat!

Tuesday, June 20, 2017

101Hero - Now Less Bendy!

As I experimented with increasing print speed and acceleration, my 101Hero would noticeably shake, twist and flex.

I looked at some of the stiffening and support solutions tried by other 101Hero owners, and to me they all felt like overkill in engineering and/or cost.

I imposed some restrictions on my solution:

  1. It must not require modifying the 101Hero itself.  No new holes, no glue, no bolts.  The solution must be completely removable.
  2. Cheap.  Like the 101Hero.
  3. It must be truly rigid, and not require fussing to make the printer geometry correct.
My solution was simple and provided the extra benefit of also serving as most of an enclosure: clear acrylic panels slid into the outer grooves between the pillars/pylons.

(Photos: my setup, the acrylic panels, and the printer.)
The acrylic panels had to be about 3 mm thick to be strong enough not to bend and actually add rigidity to the printer.  That's also about all that will fit against the pylons and still leave room for the slides to travel freely.  But away from the ends, the grooves are only about 1.5 mm wide.

So I made three 304.8 mm x 18.5 mm panels from 3 mm acrylic.  Then I reduced the edge thickness to 1.5 mm, and added a 60° bevel on each long edge.

The above pictures aren't very good (I need a macro attachment for my phone), but they should get the point across.

The printer is now amazingly rigid!

You'll also notice the $5 Walmart fan up against the pillar: It provides less cooling than it did before adding the panels, but it seems to be enough.

Saturday, June 17, 2017

A Better Temperature Tower

I finally printed a custom temperature tower that shows the useful "indicated" temperature range for my Ziro Gold PLA filament.  The following images run from the high temperatures on the right, indicated by the grainy surface, to the low temperatures on the left, indicated by the under-extrusion.




The "Sweet Spot" for this filament is an indicated temperature of 180C.  I did some test prints at 175C and 170C, and the layer adhesion was detectably weaker at 170C.  And the stringiness at 180C was much reduced compared to that seen at 185C.

I want to stay far away from possible layer adhesion issues, so I picked 180C as my new default temperature for this filament.

And here's my Benchy printed at 180C with a 0.4 mm nozzle and 0.18 mm layer height, next to the $5 WalMart 5" desk fan I had blowing on it during the print:


The Benchy had some cotton-candy strings on it, most of which I removed prior to taking the picture.  The biggest differences compared to my prior Benchy are that 1) the stringiness is massively reduced, 2) much more detail is present around all holes and openings, and 3) the smoke stack is just about perfect, which is due to the fan.

While this is a massive improvement, all is not yet perfect.  Tall, thin items, such as the Eiffel Tower, still have a bit more stringiness and a bit less detail than I'd prefer, but I'm attributing that to not having a fan right at the nozzle.  An off-printer fan certainly helps, but it doesn't fix everything.

I really need to dig into the 101Hero Marlin firmware to fix a few things, most importantly the temperature sensor calibration, and also some minor delta geometry tweaks.

Unfortunately, the 101Hero folks have so far failed to identify the Marlin version they are using or to release their configuration files.  That's not only a violation of the GPL, but also a PITA for 101Hero users who simply want to make the printer perform better.

Thursday, June 8, 2017

Moron, the Little 3D Printer

Wait.  Did I mean to say "More on the Little 3D Printer"?

No.  No I did not.

My 101Hero has no idea what its extruder temperature actually is.

The highest possible indicated extruder temperature is 208C. That's as hot as it goes when I set a temperature of 208C or higher.

The internal (firmware) low-temperature cutoff is set to about 178C, meaning you get a "Low temperature extrusion prevented" message when trying to print at that temperature or lower.

The Ziro gold PLA filament I'm using has a recommended temperature range of 190C-220C, a range of 30C.  The test prints for the 101Hero all seem to use a temperature of 203C, and with the Ziro filament, the test prints I tried all printed with a dull, sandy finish.

From what Google tells me, dull PLA means over-temperature, essentially cooking it into a runny, stringy mess.

By experimentation, I found that a setting of 185C yields much better-looking prints.

I wanted to understand the accuracy of the temperature indication on my 101Hero 3D printer.  So I chose the Customizable Temperature Calibration Tower from Thingiverse, configured it to cover the range from 208C down to 164C by steps of 4C in a height of 96 mm, which is 12 steps of 8 mm each.
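
If you want to sanity-check that configuration before slicing, the step schedule is easy to tabulate.  Here's a minimal Python sketch (the variable names are mine, not anything from the Thingiverse model or the Cura plugin):

    # Sketch of the temperature-tower schedule described above:
    # 12 steps of 8 mm each, starting at an indicated 208C and dropping 4C per step.
    START_TEMP_C = 208
    TEMP_STEP_C = 4
    STEP_HEIGHT_MM = 8.0
    NUM_STEPS = 12

    for step in range(NUM_STEPS):
        z_from = step * STEP_HEIGHT_MM
        temp = START_TEMP_C - step * TEMP_STEP_C
        print(f"Z {z_from:5.1f} mm and up: {temp}C")

    # Total height: 12 * 8 mm = 96 mm; the final (topmost) step is 208 - 11*4 = 164C.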

I downloaded the customized model and installed the "Vary Temperature With Height" Cura plugin that came with it.  I then opened the STL file in Cura, selected the plugin and set its values to match the model configuration.

I set the layer thickness to 0.18 mm, with no infill and no top, with only a one-layer base, with a wall thickness of 0.80 mm (2 walls), and with a starting temperature of 208C.

Then I saved the GCode and started the print.  Which abruptly ended when it tried to print the step at 176C.  Because that value is less than 178C. So instead of printing, it generated countless "low temperature" error messages.

I aborted the print, selected Start/End-GCode -> end.gcode, then inserted an "M302" command to be the next-to-last command in the file.  This command disables the "low extrusion temperature" logic.
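
If you find yourself making that edit for every tower print, a small post-processing script can splice the command in after Cura saves the file.  Here's a minimal sketch that just mirrors the manual edit (the file name is illustrative; a bare "M302" worked on my firmware, though if I recall correctly newer Marlin wants "M302 P1" or "M302 S0" instead):

    # Minimal sketch: insert a bare "M302" as the next-to-last line of an
    # already-sliced GCode file, mirroring the manual end.gcode edit above.
    from pathlib import Path

    gcode_path = Path("temperature_tower.gcode")  # hypothetical Cura output file

    lines = gcode_path.read_text().splitlines()
    lines.insert(len(lines) - 1, "M302 ; disable the low-temperature extrusion check")
    gcode_path.write_text("\n".join(lines) + "\n")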

I then saved the GCode again and restarted the print, with the following results:

(Photos: the temperature scale, with hot on the left/bottom and cold on the right/top; the legend side, which reads "101Hero / Ziro Gold PLA"; the step-test side; and the smooth side.)
First, it is important to note that all layers in this test were well bonded together: I tried twisting the tower, and none of the layers separated.  This could be due to having a 2-layer wall - perhaps a single-layer wall would have been a better test, but I wanted to allow lots of time for the temperature to settle early in each step.

Update: I did a single-wall print, and it was very similar to the double-wall, but easier to test the wall quality. Again it was remarkably consistent, with the 184C region maybe being slightly superior.

It is clear that the left end is rough and grainy, a sign of overheated PLA filament. The graininess is not present at (indicated) temperatures of 192C and below.  Ideally, the coolest layers of a temperature tower should show some blobbiness due to incomplete filament melting; since I don't see any of that here, I need to repeat the test down to even lower (indicated) temperatures.

The step test side looks best at 184C, though it really doesn't look bad in any of the other steps.  This is the only clue I have that my guessed temperature of 185C is anywhere close to ideal.

If we say an indicated temperature of 185C is near the middle of the range given on the spool label, then the actual extruder temperature is closer to 205C, an error of 20C!

Bottom line, the indicated temperature reading of my 101Hero is insane.

Which doesn't really matter: It's just a number!  So long as I know the right number to use for my prints, it's not a problem.
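
If you want that rule of thumb as code, here's a trivial helper.  It assumes the roughly 20C offset estimated above is constant across the usable range, which I have not verified:

    # Trivial helper, assuming an (unverified) constant ~20C offset between the
    # 101Hero's indicated temperature and the true extruder temperature.
    def estimated_actual_temp_c(indicated_c, offset_c=20.0):
        """Estimate the real extruder temperature from the indicated setting."""
        return indicated_c + offset_c

    print(estimated_actual_temp_c(185))  # ~205C, mid-range of the 190C-220C spool label
    print(estimated_actual_temp_c(180))  # ~200C, my new default for this filament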

The lesson here is to always print a temperature tower for each new filament.

Next test: A Speed Calibration Tower.  I've been printing everything at a rate of 10 mm/s because that's what most users have recommended.  But how fast can the 101Hero go, and how bad does it get with increasing speed?

Monday, June 5, 2017

Tiny, Cheap 3D Printer Working!

My ultra-inexpensive 101Hero 3D printer arrived about a month ago with a bad motor.  Fortunately, another owner had started a group-buy for better replacement motors, which I joined, and which had arrived just before the printer.  I replaced the bad motor, though I had to swap the pink and orange wires in the connector to get it to rotate in the right direction.  I probably should replace them all, but I want to see how long the other two original motors will last.

I covered the build plate with the horrible yellow masking tape that came with the printer, because the blue tape I had was dried out and I didn't want to wait to get more.

I set up Cura using the 101Hero config file provided by the manufacturer.

Next, I followed the abundant setup advice on the 101Hero User Forum (the site is independent of the 101Hero manufacturer).

I added rubber bands to the arms to reduce the looseness/shakiness of the print carriage.  The arms are so flexible that putting the rubber bands in the middle caused bending (which would throw off the geometry), so I put them up near the top of each arm pair.  The rubber band tension was as low as I could get it and still have them stay in place.

I did a few 1-3 layer prints to calibrate the printer (and use up some of the questionable white filament that came with it).

Despite the print bed being level and at the right height, I still had adhesion issues, so I made the first layer 50% thicker (rather than do a brim or raft).

I then switched to this inexpensive PLA filament from Amazon because I could get it delivered the same day for free (via Amazon Prime).  It came well packaged, including a zip-lock storage bag.

Then I tried my first "real" print, a tiny phone stand.  The print is completely functional, but is far from perfect.

I saved the GCode, primed the extruder until the new color came out, then started the print.  I waited four long hours, hovering over the machine like an expectant father during a delivery.

The resulting print was strong (good layer adhesion), but does have issues.  Here's what it looks like:

  

   


Here's what I observed, and what I think it means.
  • Not in photo: The skirt (the outline printed around the part to prime the extruder) was almost completely missing, and the little bit that was there was thread-thin.  This happened despite priming the extruder moments before starting the print.  Extrusion rate too low?  Blocked extruder nozzle?
  • Print is fuzzy. I removed most of the fuzz prior to taking the photos, but some is still visible in the interior.  Too little retraction?  Temperature too high?
  • Bottom has a combination of adhesion failure and Elephant's Foot.  Easy: I screwed up the calibration.  And I really should get some blue tape.  Any other factors?
  • Fill isn't tightly joined to outer wall.  Under-extrusion?  Need to increase fill overlap?
  • One layer about 1/4" from the start is totally squished.  Easy: I bumped the printer, hard.
  • About halfway up, the whole print steps over a bit.  I didn't have a spool stand and the filament was getting pulled tighter and tighter.  This is when I moved the spool onto a stick.  Any other possible factors?
  • There are small "waves" crossing multiple layers in the latter half of the print.  My guess is I still have filament tension problems that will be fixed by getting a real spool holder.  Are there other causes?

Those are all the defects I see (as a 3D printing newbie).  Are there more?  Are the photos good enough to tell?

I then put blue tape on the print bed and rigged an emergency spool holder using a paint roller (which works awesomely):




Print quality immediately improved to an amazing degree.  I then printed the other stand included in the above link (visible above), and then printed a Star Trek TNG communicator badge (which was printing when I took the above photo).  Here's a close-up of it:



The upper surface is a bit rough, but that's no surprise given the 0.18 mm layer thickness.  I may try it again at 0.10 mm.

My next plan is to do some prints for dimensional (geometry) calibration and for temperature calibration (should be done for each filament spool).