Wednesday, March 25, 2020

Working from Home without a Home Office? Think Big!

I've been WFH for nearly a month, with less than two days total at the office.  My home office has been unused for over a decade, and is totally unable to support anything close to the three 24" 1080p monitors I use at work.

The only suitable surface at home was the large table presently occupied by my 3D printers, and the laptop I use to run Cura to prepare the models for printing.

I first planned to duplicate my work setup and buy three 24" 1080p monitors for under $100 each.  But what about after Personal Isolation and Social Distancing are over?  Will I still want those three monitors?

I dashed into work (which was nearly deserted), grabbed the monitors from my desk, and gave it a try at home.  It was totally impractical: I'd need to take my desk and chair too.  Cloning work wouldn't even work for work!

I had been wanting to learn 3D CAD, but it was an exercise in frustration on my laptop screen, even though it is a large 15.6" display.  And it wasn't any better on three 24" screens.

So I bought a 4K monitor, which is equivalent to four 1080p screens.  I actually got a curved 55" 4K TV, since I normally angled my smaller monitors into an arc.

My first test was to get some information from each of nearly two dozen schematic drawings.  At my desk, I had been continuously panning, scrolling and zooming to find the information needed.  On the big TV, with the schematic filling the screen, I could easily read all of it directly, with no movements or adjustments required.  Win!

That evening I went through a beginner tutorial for Fusion 360.  Having a massive screen gave me tons of work area in the center for the model, while having all the menus and lists I needed conveniently displayed around the edges.  By the end of that one video I understood the basic Fusion 360 workflow, something that had eluded me with all prior attempts, including other programs such as TinkerCAD, SketchUp, 3D Builder and more.

I suppose I may look weird sitting 4 feet away from a 55" TV, but it is one heck of a productivity device.

Sunday, September 15, 2019

PCA and Me.

I recently watched Computerphile's video sequence on Data Analysis, which prompted me to share my own experience with PCA:

Early in my engineering career I worked in a field where we had to measure certain quantities with extraordinarily high accuracy.  When we found ourselves in a situation where we needed to buy an instrument that cost well over $1,000,000 (the instrument was so rare it was impossible to rent one), management suggested we start a side project to build our own instrument that would at least meet our immediate needs, and if we did it well, we could go on to sell it to compete against that million dollar instrument.  Our physicists and other scientists immediately dreamed up a "novel physics" sensor they predicted would be both more sensitive and less expensive.  They then built a "Proof of Concept" device in the lab, and its performance looked very promising.

My job was first to turn that table-covering lab experiment into something useful to our engineers, then determine if it could be manufactured and sold.  The lab device functioned horribly when removed from the lab.  The lab was temperature controlled, vibration isolated (optical table), light controlled (dark), sound controlled (anechoic wall coverings), EM controlled (Faraday cage, shields), and so on.

What they did in the lab was expose the sensor to known levels of the stimulus we wanted to detect and measure, then develop algorithms to map the raw sensor signal to the applied stimulus, then do several test runs to gather enough data to determine accuracy, precision and repeatability.  My job was to determine what would be needed to build a device that worked outside the lab, with enough quality and performance to meet our needs.

My first test was simply to repeat the lab test on my engineering workbench.  As previously mentioned, the results were horrible:  A quick plot instantly showed the value from the device to be utterly unrelated to the applied stimulus, even after the raw output was post-processed with the algorithms used in the lab.  In fact, the raw output looked more like random noise.

This was no surprise!  Few sensors, if any, ever measure only one thing.  For example, the voltage sensor in a common hand-held multimeter is a circuit that is affected by many environmental stimuli other than the voltage present on the probes, such as temperature, electrical noise, pressure, humidity, and so on.  Yet portable multimeters with 6-digit accuracy can be had for only a few hundred dollars: Clearly, these other stimuli can be engineered out of the final product to the extent that 6-digit precision is achieved.

The lab environment is what's called a "single variable system": Everything but the desired stimulus was held constant.  My workbench was far "noisier".  The next step was to intentionally vary as many environmental factors as possible, and see how the sensor responded.  Ideally, only one environmental factor would be varied at a time, but that's simply not practical outside a far larger lab.  So you go the opposite way: take data with as much held stable as possible, then vary the factors one at a time or in combination, whichever is most practical (fast, easy, cheap), the primary requirement being to measure everything that can be measured in parallel with the desired applied stimulus.

The "pièce de résistance" of this effort was a long data set taken over days while simultaneously (and very carefully) varying as many environmental factors as possible, which in this case took place inside a temperature+humidity chamber that contained a miniature shake table (basically, a speaker with a plate on top instead of a cone), to which I added accelerometers to measure motion, and whatever other instruments I could find that measured anything and everything else to as high a precision as possible.

This setup made Frankenstein's Monster look pretty, and Rube Goldberg's devices look simple, elegant and sensible.   Getting all this data correctly gathered and recorded was its own nightmare, the most critical item being correctly tagging each and every measurement with the precise time at which it was taken.  (Timing deserves its own separate post.)

Each data point contains the time at which the data was taken, the value for each environmental parameter being measured (including the desired stimulus), and finally the raw value output by the sensor itself.  The correct term for a data point created from multiple measurements is a "sample", a term chosen specifically to remind us that we aren't seeing "actual physics", but only what our instruments are revealing to us.
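In code form, a single sample might look something like this (a sketch only; the field names are illustrative, not the real parameters):

    from dataclasses import dataclass

    @dataclass
    class Sample:
        """One time-tagged sample; the field names here are illustrative only."""
        timestamp_utc: float    # when the measurements were taken (the critical tag)
        temp_C: float           # environmental measurements...
        accel_g: float
        humidity_pct: float
        stimulus: float         # the known stimulus applied to the sensor
        raw_sensor: float       # what the sensor itself reported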

Note: What I've described above is "time-series" data, which enables many additional analytical techniques to be applied, because time connects adjacent data points in ways few other parameters permit.  Most importantly, time-series data can be analyzed in both the time domain (much as is done in the video series) and also in the frequency (or complex) domain.  The most well-known tool connecting these domains is the FFT (Fast Fourier Transform), though there are others.
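As a toy illustration (assuming NumPy and a uniformly sampled signal), hopping between the two domains takes a single call, and a weak periodic disturbance buried in noise pops right out:

    import numpy as np

    # Invented example: 10 minutes of samples at 100 Hz, with a weak 0.5 Hz
    # disturbance buried in much larger random noise.
    fs = 100.0
    t = np.arange(0, 600, 1 / fs)
    signal = 0.2 * np.sin(2 * np.pi * 0.5 * t) + np.random.normal(0, 1, t.size)

    # Hop to the frequency domain: the 0.5 Hz line stands out above the noise floor.
    spectrum = np.abs(np.fft.rfft(signal)) / t.size
    freqs = np.fft.rfftfreq(t.size, d=1 / fs)
    print(f"strongest line at {freqs[1:][np.argmax(spectrum[1:])]:.2f} Hz")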

At the end, what you have is a truckload of data.  Several truckloads.  Millions of data points, each with up to a dozen attributes.   At that point, the data collection stops and the analysis starts.  The best place to start is with the largest, messiest data set.  First you condition the data as described in the videos.  Then the best tool to apply is PCA.

I followed an iterative process:
1. Run PCA.
2. Determine which environmental factor best correlates to PC1.
3. Remove that factor from the sensor data.
4. Repeat from Step 1 until PC1 no longer correlates with any of the environmental factors.
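A rough sketch of that loop, assuming scikit-learn and a pandas frame of samples (the column names, the correlation threshold and the remove_factor() helper are illustrative placeholders, not the real thing):

    import numpy as np
    from sklearn.decomposition import PCA

    def remove_factor(sensor, factor):
        """Hypothetical stand-in for Step 3: here just a linear de-trend against
        the factor, though in practice an exponential or polynomial model may
        be what's actually needed."""
        coeffs = np.polyfit(factor, sensor, deg=1)
        return sensor - np.polyval(coeffs, factor)

    def iterate_pca(df, sensor_col="raw_sensor",
                    env_cols=("temp_C", "accel_g", "humidity_pct"),
                    corr_floor=0.3, max_passes=10):
        """df is a pandas DataFrame of samples; column names are illustrative."""
        env_cols = list(env_cols)
        sensor = df[sensor_col].to_numpy(dtype=float)
        for _ in range(max_passes):
            # Step 1: PCA over the standardized samples (env factors + current sensor).
            X = df[env_cols].assign(sensor=sensor)
            X = (X - X.mean()) / X.std()
            pc1 = PCA(n_components=1).fit_transform(X.to_numpy())[:, 0]

            # Step 2: find the environmental factor that best correlates with PC1.
            corrs = {c: abs(np.corrcoef(pc1, df[c].to_numpy())[0, 1]) for c in env_cols}
            best = max(corrs, key=corrs.get)

            # Step 4: stop once PC1 no longer correlates with any factor.
            if corrs[best] < corr_floor:
                break

            # Step 3: remove that factor's influence from the sensor data.
            sensor = remove_factor(sensor, df[best].to_numpy(dtype=float))
        return sensor

All of the real difficulty hides inside remove_factor(), which is exactly where the next paragraph picks up.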

Step 3 may sound simple, but it is very complex to do correctly.  Simply subtracting a normalized value from the raw sensor data is seldom useful, as the effect is seldom purely additive or purely linear.  We must determine if there are known/common transformations that will permit the environmental factor to account for most of PC1.  Temperature, for example, often has an exponential factor in its effect.

"But wait!" I hear you say. "Isn't PCA strictly a linear process?  How can you use it to derive an exponential correction?"  The simple answer is you can't, not directly.  So you cheat.  Given enough data, PCA can be applied to shorter chunks, permitting piece-wise linear corrections to be determined, from which the governing non-linear (exponential or polynomial) correction may be derived.  That's why multiple millions of samples are taken.

Not surprisingly, the first PC1 correlated with temperature, validating the truism "All sensors are thermometers".  Which is why every measurement instrument applies at least one, and often multiple, temperature corrections.

Next was vibration, with the matching truism "All sensors are microphones", which explains the shock mounts used within many instruments.  Just rapping your knuckle on the case of a $20K oscilloscope will often be enough to cause it to trigger due to piezo-electric effects present in the MLC capacitors used in the sensitive input amplifiers.  (See the EEVBlog videos on this.)

The above process has one huge, massive, terrible downside: It accumulates/amplifies all noise present in the data.  In my case, Step 4 was reached even before the stimulus of interest was matched by correlation with PC1!  The noise was dominant and correlated with nothing.

Which means we toss out all the data and start over, this time directly removing the environmental factors having the highest correlations.  There are two ways to remove the effect of an environmental factor from a sensor: Either hold it constant, or generate a matching signal that cancels its effect.

For the example of temperature, the sensor could be actively heated/cooled to keep it at a known temperature, something commonly done for precision time references such as crystals and atomic clocks (called putting them in an "oven").  This is called "effect prevention", and it is relatively expensive to implement within an instrument, so it's avoided unless absolutely necessary.  There are some sensor materials that work best only at a single temperature, so an oven is the only choice if that sensor is to be used.

The other alternative is to reduce the effect as best we can, then generate a signal that matches the remaining effect and remove it from the sensor value.  This is called "effect compensation", and is relatively cheap to implement, though it is always preferred to find sensor materials that don't need compensation.  For temperature, it can be as simple as wrapping the sensor in fiberglass with a temperature sensor inside.  That sensor could be an RTD, diode or thermocouple, whichever best matches the behavior observed in the raw sensor signal.  Then that signal is subtracted from (or divided out of) the main sensor signal.
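In software terms the compensation can be as simple as a model fitted once during a calibration run and then applied to every live sample.  A minimal sketch (the zero-stimulus calibration run and the quadratic order are assumptions, not the real recipe):

    import numpy as np

    def fit_temperature_compensation(cal_raw, cal_temp_C, order=2):
        """Fit a polynomial model of the temperature effect from a calibration
        run (hypothetically done with the stimulus of interest held at zero)."""
        return np.polyfit(cal_temp_C, cal_raw, order)

    def compensate(raw, temp_C, coeffs):
        """Subtract the modeled temperature effect from the live sensor signal."""
        return raw - np.polyval(coeffs, temp_C)

    # Usage sketch: fit once from calibration data, then correct every live
    # sample using the co-located temperature sensor's reading.
    #   coeffs = fit_temperature_compensation(cal_raw, cal_temp)
    #   corrected = compensate(live_raw, live_temp, coeffs)

Raising order is one way the same factor ends up being "removed" more than once.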

Then it's time to repeat the data gathering run in the exact same way as before, and repeat the above analysis.  We repeat the process of correlating and removing effects and doing the data gathering until as many correlations as practically possible have been removed.  It should be no surprise that temperature had to be "removed" multiple times, generally by adding higher-order terms to the correction.

We soon reached the point where we could reliably extract a useful measurement for the desired stimulus from the single system sitting on my workbench.  It performed significantly worse than the million dollar instrument, but it met our immediate needs.  That device was more properly called an "escaped lab rat", in that while it definitely worked outside the lab, it wasn't anything close to being a commercial instrument.

A commercial instrument has two key features:  It can be calibrated, and once calibrated it provides useful results for an extended period of time.  In the example of the million dollar instrument, it had to be calibrated every time it was turned on, which meant it could never be turned off!  (This is not uncommon for ultra-high-end instruments.)  So part of the purchase price was an uninterruptible power supply.

Fortunately, our "escaped lab rat" could be turned on and off as needed, requiring only a 5-10 minute stabilization period before producing usable measurements.  Which gave us a very good reason to keep working, and management agreed, giving us a generous budget.  The larger budget was needed because this effort would be far greater than a single engineer at a workbench.

The project started just as did my prior efforts, with taking lots of data and doing lots of analysis.  Only this time the goal was to optimize everything to find the best solution for each correction, which meant testing multiple alternatives, and sometimes combining them.  This is when R&D (Research and Development) becomes "Product Development".  Being primarily an R&D engineer, I stayed with the project long enough to share my work, then moved on to other projects.

That instrument did make it to market, then went on to completely take over that market.  I'd love to say what that instrument was, or who made it, or what it sensed, but none of it was patented: The technologies used were considered so bleeding-edge that filing patents would expose them to the world, encouraging others to engineer around the patents rather than create something from first-principles as we did.  That makes what we did a Trade Secret, something I can't share until it otherwise becomes common knowledge (and is the reason why some NDAs have no expiration date).

The process shared above is common practice in sensor R&D, and is the best reason to become multidisciplinary:  While my university degree is in Computer Engineering (100% CS + 30% EE), I was an electronics lab technician before and during college, and also did well in physics, math and statistics.  I'm now a Systems Engineer, where I get to work at the highest product and technology levels, yet I'm still able to get some lab time in when a knotty problem arises.

Of all the skills I've accumulated, the most useful has been knowing when and how to use PCA, and knowing what to do with the results.  Statistics and data analysis methods allow us to tame the chaos of the real world, and to make sense of it.

Thursday, May 9, 2019

Moving Bash scripts to Python...

So, I have a bunch of hardware test scripts that became a bit more than Bash could comfortably handle (e.g., I needed n-dimensional arrays, when Bash stops at 1), so I decided to start moving things to Python, truly my favorite "Git 'er done" language on the planet.

But I quickly hit a block.  Many of my scripts need to check the hardware state every second, mainly to check the "I'm On Fire" bit.  To do this, I use Bash's read -rs -t1 -n1 command, which waits up to one second to grab a character into the $REPLY variable.

Remarkably, Python has nothing like this simple function!  WTF?  Not even in a standard package?

I went on a quest to find this missing link, even going to the Hive Mind at StackExchange for advice.  And I wound up answering my own question.  It turns out I had entered Python's "Dead Zone", asking for things that looked easy, yet were too hard to include in either the language or a standard package included with the language.

If you didn't click the link above, please do so now.  The "correct" answer (IMHO) is to simply let Bash do what Bash does best.
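For reference, here's a minimal sketch of that idea (the helper is my own naming, and it assumes Python 3.7+ running in an interactive terminal):

    import subprocess

    def read_key(timeout_s: float = 1.0):
        """Wait up to timeout_s for one keypress, by borrowing Bash's read built-in.

        Bash inherits the terminal on stdin, so read -rs -t -n1 behaves exactly as
        it does in the original scripts; the captured stdout carries the key back.
        """
        result = subprocess.run(
            ["bash", "-c", f'read -rs -t {timeout_s} -n 1 key && printf "%s" "$key"'],
            capture_output=True, text=True,
        )
        return result.stdout if result.returncode == 0 else None

    # Hypothetical polling loop: check the hardware every second, quit on 'q'.
    #   while read_key(1.0) != "q":
    #       check_hardware_state()

When no key arrives within the timeout, read exits non-zero and the helper returns None.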

I suppose I really should generate a PEP for this, but now that I have my answer, I'm just too lazy to change it: It works!

And doesn't that really make it abide by the Zen of Python?

Wednesday, October 31, 2018

LED Headlight Bulbs

At 6 years and 95K miles on my car, the first bulb (of any type) finally burned out: The driver's side low-beam.  My car lacks DRLs, so I drive with my low-beams on all the time, meaning the bulbs had been operating for every one of those 95K miles driven over those 6 years.

Not bad for a tungsten-halogen bulb, right?

Leading up to this I had been thinking I was losing my night vision, as the headlights seemed dimmer and more orange.  Turns out a large part of that was the bulb actually dimming and turning more orange: While the halogen gas within the bulb massively reduces the rate at which tungsten vaporizes off the hot filament, it doesn't zero it.  And what does come off makes its way to the cooler glass walls, slowly making them darker and acting as a filter that preferentially blocks shorter wavelengths, leading to both the dimming and the orange-ing.

I jumped onto Amazon and ordered a pair of replacement halogen bulbs, knowing they'd be good to use.  But a minute after pressing "Buy" I decided to check out LED replacements, just for comparison.

What I found surprised me, though it shouldn't have: LED bulbs putting out the same light as 55 watt halogen bulbs use 1/10 the power!  At the other extreme, I could get 10x the light for the same power!

I canceled my order to give me time to think about this.  Would LED bulbs be a better way to go?

The first decision concerned what power level to get.  While the halogen bulbs used 55 watts, none of the LED bulbs got close to that, maxing out at about 40 watts for ones that could illuminate the moon.  And some very powerful, popular, and highly-rated bulbs were available for barely over $20 per pair.

But in the pictures they all looked way bigger and bulkier than I expected, making me wonder if I'd have enough room to install them.

It turns out they were big because all had fans in them, to keep the LEDs at a safe operating temperature.  I immediately knew there was no way such tiny high-speed fans would last the next 95K miles.  Some of the bulbs even advertised having fans that were "easy-to-replace".  Not really a good sign.

So I searched for "fanless LED headlight bulbs", and the results were more to my liking.  I chose the highest-rated of the mid-priced ones, which cost $31 for the pair, compared to the $11 for the halogen replacement bulbs.  The LED bulbs consumed only 8 watts each, but put out nearly 3x the light of the original halogen bulbs.  So, 3x the light for 3x the price (and 1/7 the power) sounded like a good value to me.

I placed the order, they arrived the next day, and I installed them that evening.  The difference was, well, like night and day.  The higher color temperature reveals much more color at night, and the reflective paint on the road and signs was visible at least 4x further away.

I immediately felt safer.  And my eyes worked just fine.

If you're still driving on tungsten-halogen low-beams, please stop!  The upgrade to LED bulbs is well worth it from a safety standpoint alone.

Saturday, September 15, 2018

Yes, my new curved 65" 4K TV is also my HTPC monitor!

My old 42" TV/monitor was fading.  Literally.  It had a cold-cathode fluorescent backlight that was showing its age.  Even at full brightness, it was becoming an issue in a day-lit room.

So I measured the place it occupied, and decided to get the biggest TV that would fit, which turned out to be 65" (1.65m) diagonal.

I also wanted to get a curved screen, because I sit relatively close to my TV/monitor (2-3 meters), and the angle to the corners of a flat screen had a clearly different appearance (a combination of dimming and color shift), so I wanted the corners to be aimed more toward me.

That was over two years ago, and the old TV had only gotten worse since.  But I'm a cheap bas, uh, person, and I didn't want to pay the prices for even the cheapest curved 65" TVs available.  I also didn't want or need a "smart" TV, but that was a basic feature of all curved TVs.

So I waited.  And waited.  About a year ago it became clear that the curved screen "fad" had peaked, and fewer models were being offered.  Then the generic brands started to release curved TVs, but their initial efforts were truly terrible.

Earlier this year Sceptre introduced their second-generation curved screen (the C650 series), and it seemed to be the best of all the generic brands.  I gave it a few months for reviews from reputable review sites to be posted, but none ever were!  It was as if this TV was too cheap to be worth their time.

The largest distributor was Walmart, which only sells things that make them money, meaning few returns for any reason.  So I started reading user reviews, and was impressed by how few flaws were revealed.

I continued to wait, hoping for a sale of some kind.  The price gradually drifted lower, but only a little.  When it didn't go on sale for Labor Day, I finally decided to pull the trigger, and ordered the Sceptre C658 for US$500.  I bought it on Amazon, but from Walmart, since Amazon also offered an awesome 4-year warranty extension for $25:  The cheaper something is, the more important it becomes to get an extended warranty.

I ordered not only the TV, but also an Amazon Fire TV Cube, which was on sale for $79.  The main reason for that was I had no source of 4K content: My HTPC is a powerful but rather old i5 with internal graphics and a video resolution that tops out at 1080p60.  And I'm too cheap to upgrade it with a new video card, when I really should upgrade the whole machine.

I got a text Friday that FedEx had made the delivery, so I headed home during lunch to get it indoors and off the driveway.  It was more bulky than heavy.  I immediately unboxed it and plugged it in to the Fire TV Cube that had arrived the day before, and was greeted with wonderful 4K video, with a very bright screen and zero bad pixels.

The bloody thing is huge in size, but doesn't weigh all that much more than the 42" TV it replaced.

I spent Friday evening getting everything connected and tested, with no problems whatsoever.

Best of all, it finally made all my other equipment play nice together: My old TV had an early version of CEC, and didn't really play well with my other equipment.  Now my Sony AV receiver, LG DVD player and Dell HTPC all happily play with this Sceptre TV.

What's weird is I can use any remote to control everything.  The Fire TV Cube is the hardest to use, because I still don't have the right Alexa skills configured.  But the Sony AV remote can control the TV, and the TV remote can control both the Sony and the LG.  I haven't yet tried using the LG remote because I can't find it.

As for the video quality, I'm not much of a judge, but it looks beyond awesome to me.  The TV does very good upscaling of 1080p to 4K, with only a few tiny motion artifacts occasionally being visible.  Of course, those artifacts could have been there all along, but my old TV was unable to let me see them.

I've been watching some 4K content on Amazon Prime, and I have to say, bigger is better.  The 4K resolution itself isn't a big deal for me where movies are concerned, only looking slightly better than 1080p to me.  But sitting so close to that huge curved screen makes movies more immersive, and much more watchable and enjoyable.

Surfing the web is also much easier, as the bigger and brighter screen is much easier on the eyes.  I hadn't realized just how small and dim my old TV really was.

This is SO much better, and a very worthy investment that I hope will last longer than the extended warranty!

Saturday, May 26, 2018

Monoprice Mini Delta 3D Printer

There I was, stranded between two sources of 3D printing pain.

First, my only 3D printer was the horribly slow 101Hero, which insists on revealing more of its limitations as my skills have grown.

Second, nearly a year ago I backed the BuildOne 3D printer Kickstarter campaign.  Production and delivery are 6 months late at this point (due to bad part vendors that in turn caused a complete printer redesign), though I expect to have it before the end of this year.

But that's way too long to persist with the 101Hero, which was only meant to be a disposable "training wheels" printer.

Last week I snapped.  I desperately needed to print faster than the 10 mm/sec (yes, that's right) best rate I was seeing on a printer that had lousy positioning accuracy.  It was ruining over half of my prints, and that's accounting for the great strides I've made in getting past many of the 101Hero's other limitations.

I really wanted my next printer to be a Creality CR-10, but I didn't want to order one just yet.  I really wanted another small printer, just one that would print faster and better.

On paper, the $159 Monoprice Mini Delta (MPMD) was exactly the machine I wanted: A small fast delta that included a heated bed, a color LCD control panel and even WiFi support.  It arrived 3 days after I placed my order, and I had started the test print provided on the uSD card less than 30 minutes after first touching the box.  There was literally no setup to do other than getting it out of the box, loading the filament,  inserting the provided uSD card, and plugging it in.

The test print, a waving cat (that didn't wave), came out perfect!

My next task was to configure Cura for the MPMD.  This proved to be unexpectedly cumbersome, since no Cura configuration was provided that would work with the latest Cura (3.3.1).  Cura version 15.04.6 is included on  the uSD card, but it lacks features I've come to rely upon.  The Cura configuration was soon complete, thanks to the abundant community help online, which is fortunate since Monoprice support seems not to exist (or, more likely, is overwhelmed).

I loaded up the STL for the single-layer delta calibration print I've been using with the 101Hero, scaled it to fit the MPMD, saved it to the uSD card, then started the print.

I watched the system first warm up the bed then the hot-end (done in sequence to limit power supply loading).  Next I saw the system do its auto-leveling process (double taps near each tower).  Then I watched the effector slam into the print bed and start digging gouges into the bed's plastic surface.  It took me a moment to figure out how to abort the print, during which time more ruts were dug.

Were I thinking a little faster, I would have simply cut the power instead.  Which would have required pulling the cord, since the MPMD has no power switch.  Something that even the crippled 101Hero has!

A tiny bit more research revealed the MPMD "auto-leveling" feature is completely broken.  There is a work-around, but it requires repositioning the end-stops to be within a fraction of a millimeter of each other.  Which I did, over and over again, for nearly an hour, with much cursing.

After which I had to update the Cura printer start gcode to use a slightly different auto-level command that was far more reliable than the one recommended by Monoprice.

And that was the end of the major drama.  My MPMD has been printing nearly non-stop since then, with great results.

But not perfect!  Most of the print defects are due to the MPMD's floating print bed: It literally rests on top of three spring switches that are used for auto-leveling, which, despite having alignment pegs, means the bed has nearly 1 mm of sideways slop, which is enough to create a noticeable layer shift.

The MPMD has other minor imperfections.  The first is the noise.  Even when not running, the fan in the bottom is surprisingly loud.  I modified a 60 mm fan mount design that is in the queue for printing.  The noise gets far worse when printing, mainly due to the bushings on the steel rods.  My plan is to clean them then lubricate with lithium grease.

The final significant noise source is conducted sound from the steppers.  I haven't yet selected a remedy for that, mainly because there are so many to choose from!  There are sound-isolating mounts, stepper smoothers, and switching to improved driver chips, such as the Trinamic drivers.  There are also things the slicer can do to reduce noise, starting with the acceleration, but also by limiting travel speed.  I'll try those first.

As I learn more about 3D printing, I want to twist more of the knobs.  On a Marlin system I can permanently tweak the configurations in the source code, and can tweak settings stored in EEPROM, as well as tweak settings during each individual print.  The MPMD doesn't use Marlin, though the gcode does appear to be very compatible.

The display on a Marlin system can be used to tweak many settings while the printer is printing, making it much faster and easier to get things dialed-in.  The MPMD display does permit a few things to be tweaked, but almost none compared to Marlin.  Still, the MPMD display does all the basics quite well, and does so while looking fantastic.  It's really great to have a printer display I can read from across the room!

One odd thing is the location of the spool holder on the back of the printer: The filament has to take a needlessly tortuous path to the extruder.  I printed a quick bracket that lets me use a paint roller as a spool holder, placing the spool above and in line with the extruder, as well as closer to the printer's center of gravity.  A paint roller really is an awesome spool holder.

That loud fan in the base also causes another problem: It blows air directly on the bed heater, meaning it takes longer for the bed to heat up.  The hole under the bed heater is easily filled by printing a cup designed for the job, which I found on Thingiverse.  The performance difference was immediately noticeable.

Once you adjust the limit switches and get auto-leveling to behave, the MPMD is truly a fantastic value, despite the other minor annoyances (all of which have solutions, or at least work-arounds).

Then again, I'm coming from a 101Hero: ANYTHING else would be a huge step up for me!

Sunday, April 29, 2018

Creating an External Power Monitoring app for Android

I needed to make a power monitoring system to let people know when the AC power to a critical pump had failed.

I realized an old cell phone would be ideal for this task, so long as it ran Android 4.4 (Kit Kat) or later.  I found several phones between US$20 and US$30 that would do, and these were either fully-functional used phones, or in one case, a new phone!  That's less expensive than a Raspberry Pi, and far more capable, as the phone already includes the cellular modem, the battery system, the display and the touchscreen, not to mention a whole bunch of other sensors and capabilities.

A cheap cell phone is an awesome platform!

The phone would always be plugged into its charger, and the charger would be plugged in to the AC power system to be monitored.  When the phone charger loses power, a text would be sent, and another would be sent when power was restored. The only UI needed was for the user to enter the phone number(s) to which the texts would be sent.

Being a Python rapid-development fanboi, I installed SL4A (via the QPython package in Google Play) and soon had a short console script running that did the basics of what was needed.
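It boiled down to something like the sketch below (reconstructed from memory, so the exact SL4A facade calls and battery-status keys are assumptions worth double-checking):

    import time
    import android  # SL4A's Python facade (installed via QPython)

    ALERT_NUMBER = "+15555551234"  # hypothetical; the real app asks the user for this

    droid = android.Android()
    droid.batteryStartMonitoring()
    time.sleep(1)  # give the battery monitor a moment to report

    was_powered = True
    while True:
        data = droid.readBatteryData().result
        powered = bool(data.get("plugged"))  # 0 / None when the charger has no AC
        if powered != was_powered:
            msg = "AC power restored" if powered else "AC POWER FAILED at the pump!"
            droid.smsSend(ALERT_NUMBER, msg)
            was_powered = powered
        time.sleep(1)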


Simple, right?

Unfortunately, providing this simple script with a GUI and turning it into an installable Android application package (apk) was frustratingly difficult, so I decided to look elsewhere. But at least I now knew it was truly a trivial app, and I expected no significant barriers on the development side.

I installed Android Studio, the standard Android integrated development environment (IDE), and was overwhelmed by the infrastructure needed to create even the simplest app.  I'm primarily an embedded/real-time algorithm developer, so I'm used to programming very close to the hardware.  There was nothing "simple" here!  The Android platform is extremely capable, and is also quite complex.

One very pleasant surprise using Android Studio was my introduction to Kotlin, a language that pretty much eliminates the boilerplate verbosity and bloat of Java without losing any significant features, while delivering an elegant high-productivity language. I want more of this!

For fun, I also installed Visual Studio for Android, mainly to see if Xamarin's Mono would let me use Microsoft's excellent C# language to quickly develop my app.  Again, I was unable to get even the simplest demo to load, much less build.  And the full installation was nearly 60GB!

I had thought the best way forward would be to get a minimal demo app loaded and built, then modify it to meet my needs.  I was beginning to think I had a real problem trying to do even a "simple" thing with these heavy-weight programming environments.

The other recommended Android development alternative was Eclipse for Android.  But I've had an ambivalent off-and-on relationship with Eclipse for over a decade.  When it works, it's sensational.  But when it doesn't work it can be frustrating to remedy.  So, no, I didn't bother to install it.

I really wanted a straightforward tool that would take code as simple as the Python above, then with a single click would generate a fully functional apk. I had no need to see behind the curtain, so I wanted a wizard to handle all that stuff for me.

A quick search brought me to App Inventor, the ex-Google now MIT project that used the Scratch programming environment that also has a simple drag'n'drop GUI editor.  I had been wanting to play with Scratch anyway, since I plan to help with the local Scratch Day in May.  This seemed like an excellent opportunity to meet multiple goals with a single project!

While everything initially went as planned, I soon found there were features of my app that App Inventor would not support.  I then learned that App Inventor has not been receiving very much love, and there was little hope the shortcomings would be addressed any time soon.

I was delighted to find that the App Inventor code base (it's Open Source) had been cloned and improved by several groups, all of which could import App Inventor's "aia" project file format.  A quick search brought me to Thunkable, where my app both built and ran as expected.

Here's the mockup of the app from the Thunkable GUI editor:

[Image: the Power Monitor app mocked up in the Thunkable GUI editor]
And here's the code that drives the above GUI, with some features added since the Python prototype.  Yes, the code is an image: The Scratch language is itself graphical.

[Image: the Scratch-style block code behind the Power Monitor GUI]
App Inventor and its clones all share multiple ways to test your app:  Via USB Debugging, via a WiFi interface app, and via a generated apk that can be installed by scanning a QR Code.

What could be easier!

The last piece of the puzzle was to get the Power Monitoring app to always launch when Android started.  The Startup Manager app in Google Play was the right tool for the job, as well as also being able to prevent a slew of default phone apps from starting.