Multitasker Or Many Monotaskers?

In Al Williams’s marvelous rant he points out a number of the problems with speaking to computers. Obvious problems with voice control include things like multiple people talking over each other, discerning commands from background conversations, and so on. Somehow, unlike on the bridge in Star Trek, where the computer seems to understand everyone just fine, Al sometimes can’t even get the darn thing to play his going-to-sleep playlist, which should be well within the device’s capabilities.

In the comments, [rclark] suggests making a single button that plays his playlist, no voice interaction required, and we have to admit that it’s a great solution to this one particular problem. Heck, the “bedtime button” would make a fun project in and of itself, and its scope is so limited that it would probably be only a weekend’s work for anyone who has touched the internals of their home automation system, as Al certainly has. We love the simplicity of the idea.
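
Just to make the idea concrete, here’s roughly what such a bedtime button could look like as an ESP8266 sketch that fires off a single MQTT message. The Wi-Fi credentials, broker address, topic, and pin are all placeholders, and whatever automation system is listening on the other side would do the actual playlist-starting – consider it a napkin sketch, not Al’s actual setup.

// Hypothetical "bedtime button": one press publishes one MQTT message.
// Assumes an ESP8266 and the PubSubClient library; all names below are
// placeholders for whatever your own home automation expects.
#include <ESP8266WiFi.h>
#include <PubSubClient.h>

const char* WIFI_SSID  = "home-automation";   // placeholder credentials
const char* WIFI_PASS  = "hunter2";
const char* BROKER     = "192.168.1.2";        // placeholder MQTT broker
const int   BUTTON_PIN = 14;                   // D5 on a NodeMCU, for example

WiFiClient wifi;
PubSubClient mqtt(wifi);

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED) delay(100);
  mqtt.setServer(BROKER, 1883);
  mqtt.connect("bedtime-button");
}

void loop() {
  if (!mqtt.connected()) mqtt.connect("bedtime-button");
  mqtt.loop();
  if (digitalRead(BUTTON_PIN) == LOW) {        // button pressed
    mqtt.publish("bedroom/playlist", "sleep"); // the automation does the rest
    delay(500);                                // crude debounce
  }
}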

But it ignores the biggest potential benefit of a voice control system: that it’s a one-size-fits-all solution for everything. Imagine how many other use cases Al would need to make a single-button device for, and how many coin cell batteries he’d be signing himself up to change out over the course of the year. The trade-off is that the general-purpose solution tends not to be as robust as a single-tasker like the button, but also that it can potentially simplify the overall system.

I suffer this in my own home. It’s much more a loosely-coupled web of individual hacks than an overall system, and that has pros and cons. Each individual part is easier to maintain and hack on, but the overall system is less coordinated than it could be. If we change the WiFi password on the home automation router, for instance, I’m going to have to individually log into about eight ESP8266s and change their credentials. Yuck!

It’s probably a matter of preference, but I’ll still take the loose, MQTT-based system that I’ve got now over an all-in-one. Like [rclark], I value individual device simplicity and reliability above the overall system’s simplicity, but because our stereo isn’t even hooked up to the network, I can’t play myself to sleep like Al can. Or at least like he can when the voice recognition is working.


Valentine’s Day…Hacks?

How do you reconcile your love of hacking projects with your love for that someone special? By making him or her a DIY masterpiece of blinking red LEDs, arranged in the shape of a heart. Maybe with some custom animations, and in a nice frame with a capacitive touch sensor to turn it on or off.
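
The electronics for something like this can be almost embarrassingly simple. Here’s an illustrative Arduino-style sketch in the same spirit: a handful of red LEDs doing a “lub-dub” heartbeat, toggled by a capacitive touch module that pulls a pin high when touched. The pin numbers, LED count, and animation are made up for the example – this isn’t my exact build.

// Illustrative only: red LEDs arranged in a heart, blinking a heartbeat,
// toggled on and off by a capacitive touch breakout (e.g. one that drives
// its output pin HIGH while touched). Adjust pins and timing to taste.
const int TOUCH_PIN  = 2;
const int LED_PINS[] = {3, 4, 5, 6, 7, 8, 9, 10};   // LEDs around the heart
const int NUM_LEDS   = sizeof(LED_PINS) / sizeof(LED_PINS[0]);

bool on = true;
bool lastTouch = false;

void setAll(bool state) {
  for (int i = 0; i < NUM_LEDS; i++) digitalWrite(LED_PINS[i], state);
}

void setup() {
  pinMode(TOUCH_PIN, INPUT);
  for (int i = 0; i < NUM_LEDS; i++) pinMode(LED_PINS[i], OUTPUT);
}

void loop() {
  bool touch = digitalRead(TOUCH_PIN);
  if (touch && !lastTouch) on = !on;    // toggle on each new touch
  lastTouch = touch;

  if (on) {                             // "lub-dub" heartbeat animation
    setAll(HIGH); delay(120);
    setAll(LOW);  delay(120);
    setAll(HIGH); delay(120);
    setAll(LOW);  delay(600);
  } else {
    setAll(LOW);
    delay(50);
  }
}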

Or at least, that’s what I did. The good news is that my girlfriend, now wife, understands that this sort of present comes from a place of love. And it probably didn’t hurt that I also picked up some flowers to frame it with, and cooked her favorite lunch later that afternoon.

But if I’m 100% frank with myself, I’d have to admit that this was about 50% “present” and 50% “project”. Of course it also helps that she gets me, and that she knows that I put a bunch of effort into making it look as good as it did, and maybe because of that she forgives the 50% project.

Valentine’s Day projects are a high-wire balancing act. If any other project fails, you can just try again. But here, the deadline is firm. Cosmetics matter a lot more on Valentine’s Day than on the other 364 days of the year, too. And finally, you really have to know the gift-receiver, and be sure that you’re not falling deeper into the excuse-for-a-cool-project trap than I did. And don’t forget the flowers.

I pulled it off with this one, at least, but I do feel like it was close, even today. Have you ever made a Valentine’s hacking project? How’d it go?

(Note: The featured image isn’t my project – it’s a lot more colorful!)

Software In Progress

Open source software can be fantastic. I run almost exclusively open software, and have for longer than I care to admit. And although I’m not a serious coder by any stretch, I file bug reports when I find bugs, and poke at edge cases to help the people who do the real work.

For 3D modeling, I’ve been bouncing back and forth between OpenSCAD and FreeCAD. OpenSCAD is basic, extensible, and extremely powerful in the way that a programming language is, and consequently it’s reliably bug-free. But it also isn’t exactly user friendly, unless you’re a user who likes to code, in which case it’s marvelous. FreeCAD is much more of a software tool than a programming language, and is a lot more ambitious than OpenSCAD. FreeCAD is also a program in a different stage of development, and given its very broad scope, it has a lot of bugs.

I kept running into some really serious bugs in a particular function – thickness, for what it’s worth – which the FreeCAD community knows to be glitchy. Indeed, the last time I kicked the tires on thickness, it was almost entirely useless, and there’s been real progress in the past couple of years. It works at least sometimes now, on super-simple geometries, and this promise led me to find out where it still doesn’t work. So I went through the forums to see what I could do to help, and it struck me that some people – mostly those who come to FreeCAD from commercial programs that were essentially finished a decade ago – have different expectations about the state of the software than I do, and are a lot grumpier.

Open source software is working out its bugs in public. Most open source is software in development. It’s growing and changing, and you can help it grow or just hang on for the ride. Some open-source userland projects are mature enough that they’re pretty much finished, but the vast majority are being coded in public – software in progress.

It seems to me that people who expect software to be done already are frustrated by this. And when we promote super-star open projects like Inkscape or Blender, which are essentially finished, we do a disservice to the vast majority of useful, but still-in-progress, applications out there that can get the job done anyway, but might require some workarounds. It’s exactly these projects that need our help and our bug-hunting, but if you go into them with the “finished” mentality, you’re setting yourself up for frustration.

Time Vs Money, 3D Printer Style

A few months ago, Hackaday’s own Al Williams convinced me to buy a couple of untested, returned-to-manufacturer 3D printers. Or rather, he convinced me to buy one, and the incredible success of the first printer spurred me on to the second. TL;DR: Lightning didn’t strike twice, but I’d still rate it as worth my time. This probably isn’t a good choice for your first printer, but if you’ve done the regular maintenance on your first printer already, I’d recommend it for your second or twelfth.

As background, Al has been volunteering with local schools to teach a 3D printing summer class, and this means outfitting them with a 3DP lab on the cheap. His secret is to buy last year’s model, which has all of the features he needs – most importantly for the kids, automatic bed height probing – but to buy it from the scratch-and-dent shelf at Creality. Why? Because they are mid-grade printers, relatively new, but on deep discount.

How deep? I found an essentially endless supply of printers that retail for $300 on discount for $90 each. The catch? It might work, it might not. I bought my son one, because I thought that it would at least make a good project for us to work on together. Those plans were spoiled – it worked absolutely flawlessly from the moment we bolted it together, and he runs 24-hour jobs on the thing without fear. From the look of the build plate, it had been used exactly once and returned for whatever reason. Maybe the owner just didn’t want a 3D printer?

The siren song of straightforward success was too much for me to resist, and I picked another one up to replace my aging A8, which was basically a kit for a 3D printer, and not a particularly good one at that, but could be made to work. My scratch-and-dent Creality came with a defective bed-touch sensor, which manifested itself as a random, absolute refusal to print.

I took it apart, but the flaw is in the design of the V1 touch sensors – the solenoid requires more current to push down than the 3DP motherboard can reliably deliver. It works 100% of the time on my bench power supply, but in situ it fails about 30% of the time, even after hitting it with graphite and making sure everything is mechanically sound. Creality knows this and offers a free trade-in, just not for me. The new version of the Creality probe costs $50 new, but you can get cheap knock-off BL Touch models for $14. Guess what I did?

And guess what bit me? The cheapo touch probe descends a bit more slowly than the genuine Creality version, and the firmware is coded to time out in an extra-short timeframe. Thankfully, Creality’s modifications to Marlin are all open source, and I managed to tweak and flash a new firmware that made it work 100% of the time, but this came at a cost of probably eight hours of bug-hunting, part-ordering, and firmware-compiling. That said, I got some nice extra features along the way, which is the advantage of a printer running open-source firmware.
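
I won’t pretend to remember the exact diff, but the change lives in Marlin’s configuration headers and boils down to giving the probe more time before the firmware gives up on it. Something in this spirit – these symbols exist in mainline Marlin, but the right names and values depend on your Marlin version and on Creality’s fork, so treat it as a sketch rather than a recipe:

// Configuration_adv.h -- illustrative values, check your own firmware's options
#define BLTOUCH_DELAY 1000        // ms between BLTouch commands; the stock value
                                  // is shorter, and the slow clone needs headroom
//#define BLTOUCH_HS_MODE         // high-speed probing: best left off for clones

// Configuration.h
#define SERVO_DELAY { 500 }       // extra settling time on the probe's servo channel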

So my $300 printer cost me $105, plus eight hours of labor. I only charge one coffee per hour for fun hardware debugging tasks, but you may have a different valuation. Taken together with my son’s printer, we have $600 worth of printer for under $200 plus labor, though, which starts to sound a little better.

Is gambling on an untested return 3D printer worth it? For us, I would say it was, and I’d do it again in a few years. For now, though, we’ve got three printers running and that’s all we need. Have you gone down this perilous path?

Networking History Lessons

Do they teach networking history classes yet? Or is it still too soon?

I was reading [Al]’s first installment of the Forgotten Internet series, on UUCP. The short summary is that it was a system for sending files between computers that were connected, intermittently, by point-to-point phone lines. Each computer knew the phone numbers of a few others, but none of them had anything like a global routing map, and IP addresses were still in the future. Still, it enabled file transfer and even limited remote access across the globe. And while some files contained computer programs, other files contained more human messages, which also makes UUCP a precursor to e-mail.

What struck me is how naturally many of this system’s conditions and limitations led to the way we network today. From phone numbers came the need for IP addresses. And from the annoyance of having to know how the computers were connected, and of having to use the bang notation to route a message from one computer to another through intermediaries, would come our modern routing protocols, simply because computer nerds like to automate hassles wherever possible.
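
If you’ve never seen a bang path, the address itself was the route. Mail for a user three hops away looked something like this (the host names here are made up, of course):

bigsite!midsite!theirbox!user

It was up to you – or your mail configuration – to know that bigsite talked to midsite, and midsite talked to theirbox. Every hassle in that line is something we’ve since automated away.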

But back to networking history. I guess I learned my networking on the mean streets, by running my own Linux system, web servers, and mail servers. I knew enough networking to get by, but that knowledge mostly focused on current-day applications, and my beard is not quite grey enough to have been around for the UUCP era. So I’m only realizing now that knowing how the system evolved over time helps a lot in understanding why it is the way it is, and thus how it functions. I had a bit of a “eureka” moment reading about UUCP.

In physics or any other science, you learn not just the status quo in the field, but also how it developed over the centuries. It’s important to know something about the theory of the aether to know what special relativity was up against, for instance, or the various historical models of the atom, to see how they inform modern chemistry and physics. But these are old sciences with a lot of obsolete theories. Is computer science old enough that they teach networking history? They should!

Learn New Tools, Or Hone Your Skill With The Old?

Buried in a talk on AI from an artist who is doing cutting-edge video work was the following nugget that entirely sums up the zeitgeist: “The tools are changing so fast that artists can’t keep up with them, let alone master them, before everyone is on to the next.” And while you might think that this concern is only relevant to those who have to stay on the crest of the hype wave, the deeper question resonates with every hacker.

When was the last time you changed PCB layout software or refreshed your operating system? What other tools do you use in your work or your extra-curricular projects, and how long have you been using them? Are you still designing your analog front-ends with LM358s, or have you looked around to see that technology has moved on since the 1970s? “OMG, you’re still using STM32F103s?”

It’s not a simple question, and there are no good answers. Proficiency with a tool, like for instance the audio editor with which I crank out the podcast every week, only comes through practice. And practice simply takes time and effort. When you put your time in on a tool, it really is an investment in that it helps you get better. But what about that newer, better tool out there?

Some of the reluctance to update is certainly sunk-cost fallacy – after all, you’ve put so much sweat and tears into the current tool – but there is also a real cost to overcome to learn the new hotness, and that’s no fallacy. If you’re always trying to learn a new way of doing something, you’re never going to get good at doing it, and that’s the lament of our artist friend. Honing your craft requires focus. You won’t know the odd feature set of that next microcontroller as well as you know old faithful’s, at least not without sitting down, reading the datasheet, and doing a couple of finger-stretching projects first.

Striking the optimal balance here is hard. On a per-project basis, staying with your good old tool or swapping to the new hotness is a binary choice, but across your projects, you can do some of each. Maybe it makes sense to budget some of your hacking time into learning new tools? How about ten percent? What do you think?

In Praise Of Simple Projects

Hackaday was at Chaos Communication Congress last week, and it’s one of those big hacker events that leaves you with so much to think about that I’m still processing it. Just for scope, the 38th CCC is a hacker event with about 15,000 attendees from all around Europe, and many from even further. If I were to characterize the crowd on a hardware-software affinity scale, I would say that it skews heavily toward the software side of the hacker spectrum.

What never ceases to amaze me is that there are a couple of zones centered on simple beginner soldering and other PCB art projects, and that they are completely full 20 hours of the day. It always makes me wonder how it is possible to have this many hackers who haven’t yet picked up a soldering iron. Where do all these first-timers come from? I think I’m in a Hackaday bubble, where not only does everyone solder at least three times a day, some of us do it with home-made reflow ovens or expensive microscopes.

But what this also means is that there’s tremendous reach for interesting, inviting, and otherwise cool beginner hardware projects. Hands-on learning is incredibly addictive, and the audience for beginner projects is probably ten times larger than that for intermediate or advanced builds. Having watched my own son put together one of these kits, I understand personally the impact they can have, but it’s worth noting that the guy next to him was certainly in his mid-30s, and the girl across the way was even a few years younger than my son.

So let’s see some cool beginner projects! We’d love to feature more projects that could lure future hackers to the solder-smoky side.