Operation Backfire: Witness To The Rocket Age

As the prospects for Germany during the Second World War began to look increasingly grim, the Nazi war machine pinned its hopes on a number of high-tech “superweapons” it had in development. Ranging from upgraded versions of the already devastatingly effective U-boats to tanks large enough to rival small ships, the projects ran the gamut from practical to fanciful. After the fall of Berlin, there was a mad scramble by the Allied forces to get into what was left of Germany’s secretive development facilities, each country hoping to recover as much of this revolutionary technology for itself as possible.

V-2 launch during Operation Backfire

One of the most coveted prizes was the Aggregat 4 (A4) rocket. Better known to the Allies as the V-2, it was the world’s first liquid-fueled guided ballistic missile and the first man-made object to reach space. Most of this technology, and a large number of the engineers who designed it, ended up in the hands of the United States as part of Operation Paperclip. This influx of practical rocketry experience helped kick-start the US space program, and its influence could be seen all the way up to the Apollo program. The Soviet Union also captured V-2 hardware and production facilities, which went on to shape their own early rocket designs as well. In many ways, the V-2 was the spark that ignited the Space Race between the two countries.

With the United States and the Soviet Union taking the majority of V-2 hardware and personnel, little was left for the British. Accordingly, their program, known as Operation Backfire, ended up being much smaller in scope. Rather than trying to bring V-2 hardware back to Britain, they decided to learn as much as they could about it in Germany, from the men who had used it in combat. This study of the rocket and the soldiers who operated it remains the most detailed account of how the weapon functioned, and provides a fascinating look at the incredible effort Germany was willing to expend on just one of its “superweapons”.

In addition to a five-volume written report on the V-2 rocket, the British Army Kinematograph Service produced “The German A.4 Rocket”, a 40-minute film showing how a V-2 was assembled, transported, and ultimately launched. Though operating under the direction of the British government, the German soldiers appear in the film wearing their own uniforms, which gives the documentary a surreal quality. It could easily be mistaken for actual wartime footage, but these rockets weren’t aimed at London. They were fired to serve as a historical record of the birth of modern rocketry.

Continue reading “Operation Backfire: Witness To The Rocket Age”

What Happened To The 100,000-Hour LED Bulbs?

Early adopters of LED lighting will remember 50,000-hour or even 100,000-hour lifetime ratings printed on the box. But during a recent trip to the hardware store, the longest advertised lifetime I found was 25,000 hours. Others claimed only 7,500 or 15,000 hours. And yes, these are brand-name bulbs from Cree and GE.
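
To put those ratings in perspective, here’s a quick back-of-the-envelope calculation in Python. The three-hours-per-day figure is my own assumption, a commonly cited duty cycle for residential bulbs, not something taken from any manufacturer’s datasheet.

    # Convert a rated lifetime in hours into rough calendar years.
    # ASSUMPTION: 3 hours of use per day, a typical residential duty cycle.
    HOURS_PER_DAY = 3

    for rated_hours in (7_500, 15_000, 25_000, 50_000, 100_000):
        years = rated_hours / (HOURS_PER_DAY * 365)
        print(f"{rated_hours:>7,} h rating -> about {years:.1f} years of service")

At that usage, a 100,000-hour rating works out to more than 90 years of service, which should already raise an eyebrow. Even today’s more modest 25,000-hour bulbs claim over two decades.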

So, what happened to those 100,000-hour residential LED bulbs? Were the initial estimates just over-optimistic? Was it all marketing hype? Or did we not know enough about LED aging to predict the true useful life of a bulb?

I put these questions to the test. Join me after the break for some background on the light bulb cartel from the days of incandescent bulbs (not a joke, a cartel controlled the life of your bulbs), and for the destruction of some modern LED bulbs to see why their lifetimes are clocking in a lot lower than those of the original wave of LED replacements.

Continue reading “What Happened To The 100,000-Hour LED Bulbs?”

NVIDIA’s A.I. Thinks It Knows What Games Are Supposed To Look Like

Videogames have always existed in a weird place between high art and cutting-edge technology. Their consumer-facing nature has always forced them to be both eye-catching and affordable, while remaining tasteful enough to sit on retail shelves (both physical and digital). Running in real-time is a necessity, so it’s not as if game creators are able to pre-render the incredibly complex visuals found in feature films. These pieces of software constantly ride the line between exploiting the hardware of the future and supporting the hardware of the past, where their true user base resides. Every pixel formed and polygon assembled draws from the finite supply of floating-point operations that today’s silicon can deliver. Compromises must be made.

One of the first areas in a game to fall victim to compromise is often the environmental model textures. Maintaining a viable framerate is paramount to a game’s playability, and elements of the background can end up getting pushed to “the background”. The resulting environments look somewhat blurrier than they would have if artists had been given more time, or more computing resources, to optimize their creations. But what if you could update that ten-year-old game to take advantage of today’s processing capabilities and screen resolutions?

NVIDIA is currently using artificial intelligence to revise textures in many classic videogames to bring them up to spec with today’s monitors. Their neural network is able to fundamentally alter how a game looks without any human intervention. Is this a good thing?
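
The article doesn’t detail NVIDIA’s pipeline, but the underlying idea, single-image super-resolution, is easy to experiment with yourself. Below is a minimal sketch using OpenCV’s contrib dnn_superres module with a pre-trained ESPCN network; the file names are placeholders, and this illustrates the general technique rather than NVIDIA’s actual tooling.

    # Single-image super-resolution with OpenCV's dnn_superres module.
    # Requires opencv-contrib-python and a pre-trained model file
    # (here ESPCN_x4.pb, available from the OpenCV model zoo).
    import cv2

    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    sr.readModel("ESPCN_x4.pb")   # pre-trained 4x upscaling network
    sr.setModel("espcn", 4)       # model name and scale factor

    texture = cv2.imread("old_texture.png")   # low-res game texture
    upscaled = sr.upsample(texture)           # 4x the resolution
    cv2.imwrite("new_texture.png", upscaled)

The learned model hallucinates plausible detail instead of merely interpolating pixels, which is exactly why the results can look both impressive and subtly “wrong”.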

Continue reading “NVIDIA’s A.I. Thinks It Knows What Games Are Supposed To Look Like”

FAA Proposes Refined Drone Regulations

The wheels of government move slowly, far slower than the pace at which modern technology evolves. So it’s not uncommon for laws and regulations to significantly lag behind the technology they’re aimed at reining in. This can lead to something of a “Wild West” situation, which can be seen as either a good or a bad thing depending on which side of the fence you’re on.

In the United States, it’s fair to say that we’ve officially moved past the “Wild West” stage when it comes to drone regulations. That’s not to say remotely controlled (RC) aircraft were previously unregulated, but the rules that governed them simply couldn’t keep up with the rapid evolution of the technology we’ve seen over the last few years. The previous FAA regulations for remotely operated aircraft were written in an era when RC flights were lower and slower, and long before remote video technology moved the operator out of the line of sight of their craft.

To address the spike in not only the capability of RC aircraft but also their popularity, the Federal Aviation Administration was finally given the authority to oversee what are officially known as Unmanned Aerial Systems (UAS) with the repeal of Section 336 in the FAA Reauthorization Act of 2018. Section 336, known as the “Special Rule for Model Aircraft”, had been put in place to ensure the FAA’s authority was limited to “real” aircraft, and that small hobby RC aircraft would not be subject to the same scrutiny as their full-size counterparts. With Section 336 gone, one could interpret the new FAA directives as holding manned and unmanned aircraft and their operators to the same standards: an unreasonable position that many in the hobby strongly rejected.

At the time, the FAA argued that repealing Section 336 would allow them to create new UAS regulations from a position of strength. In other words, start with harsh limits and regulations, and whittle them down until a balance is found that everyone can live with. U.S. Secretary of Transportation Elaine L. Chao has revealed that the first of these refined rules are being worked on, and while they aren’t yet official, it seems the FAA is keeping to its word of trying to find a reasonable middle ground for hobby fliers.

Continue reading “FAA Proposes Refined Drone Regulations”

Digital License Plates Are Here, But Do We Need Them?

It’s a story as old as time: you need to swap between your custom license plates, but you can’t find a screwdriver and you’re already running late for a big meeting at the Business Factory. You called AAA to see if they could come out and do it for you, but as luck would have it something must be wrong with your phone because the line was disconnected as soon as you explained the situation. As if life in the First World couldn’t get any more difficult.

Luckily, a company called Reviver Auto has come up with a thoroughly modern solution to this age-old problem. Assuming you live in Arizona, California, or Michigan and are willing to pay $800 USD (plus a small monthly service fee), you can join the Rplate revolution! Less a license plate and more of a “cool-looking, multi-functional digital display and connected vehicle platform”, the Rplate will ensure you never again find yourself stuck on the side of the road with an unfashionable license plate.

What’s that? You’ve had the same license plate for years, possibly decades, and have never given it much thought? Well, in that case the Rplate might be sort of a tough sell. Did we mention that someday you might be able to display the current weather on it while your car is parked? Of course, if you can see the license plate you’re already outside, so…

This all might sound like an out-of-season April Fools’ joke, but as far as I can tell from reading the Reviver Auto site and watching their promotional videos, this is essentially the value proposition of their line of Rplate digital license plates. There are some admittedly interesting potential extensions of the technology if they can convince other companies and systems to plug into their ecosystem, but given the cost of the Rplate and the few states in which it’s currently legal, that seems far from a given at this point.

But of course we’re fans of weird and wonderful technology here at Hackaday, so we should give this device a fair shake. On the surface it might seem to be a solution looking for a problem, but that’s often said of technology ahead of its time. So what exactly is the Rplate, how does it work, and where does it go from here?

Continue reading “Digital License Plates Are Here, But Do We Need Them?”

Linux Fu: Easier File Watching

In an earlier installment of Linux Fu, I mentioned how you can use inotifywait to efficiently watch for file system changes. The comments had a lot of alternative ways to do the same job, which is great. But there was one very easy-to-use tool that didn’t show up, so I wanted to talk about it. That tool is entr. It isn’t as versatile as some of those alternatives, but it is easy to use and covers a lot of common cases where you want some action to occur when a file changes.
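
To give a flavor of it, here are a couple of typical invocations. entr reads the list of files to watch from standard input and runs the given command whenever one of them changes; the flags shown are from entr’s manual, and the file names are placeholders.

    # Re-run the test suite whenever a C source file changes:
    ls *.c | entr make test

    # Restart a long-running process on change (-r) and clear
    # the screen before each run (-c):
    find src/ -name '*.py' | entr -rc python3 server.py

Compare that to hand-rolling the equivalent loop around inotifywait, and the appeal is obvious.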

Continue reading “Linux Fu: Easier File Watching”

AI On Raspberry Pi With The Intel Neural Compute Stick

I’ve always been fascinated by AI and machine learning. Google TensorFlow offers tutorials and has been on my ‘to-learn’ list since it was first released, although I always seem to neglect it in favor of the shiniest new embedded platform.

Last July, I took note when Intel released the Neural Compute Stick. It looked like an oversized USB stick, and acted as an accelerator for local AI applications, especially machine vision. I thought it was a pretty neat idea: it allowed me to test out AI applications on embedded systems at a power cost of about 1W. It requires pre-trained models, but there are enough of them available now to do some interesting things.

You can add a few of them in a hub for parallel tasks. Image credit: Intel Corporation.

I wasn’t convinced I would get great performance out of it, and forgot about it until last November when they released an improved version. Unambiguously named the ‘Neural Compute Stick 2’ (NCS2), it was reasonably priced and promised a 6-8x performance increase over the last model, so I decided to give it a try to see how well it worked.

I took a few days off work around Christmas to set up Intel’s OpenVINO Toolkit on my laptop. The installation script provided by Intel wasn’t particularly user-friendly, but it worked well enough, and it included several example applications I could use to test performance. I found that face detection was possible with my webcam in near real-time (something like 19 FPS), and pose detection ran at about 3 FPS. So in accordance with the holiday spirit, it knows when I am sleeping, and knows when I’m awake.
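
For reference, the core of those example applications boils down to surprisingly little Python. The sketch below shows the rough OpenVINO inference flow against the stick; I’m using API names as they appear in later (circa 2020) releases, and the model files are placeholders for an IR model fetched with Intel’s model downloader, so treat it as illustrative rather than canonical.

    # Rough sketch of the OpenVINO Python inference flow on the NCS2.
    # ASSUMPTIONS: circa-2020 inference engine API; .xml/.bin is an IR
    # model (e.g. face-detection-adas-0001 from the Open Model Zoo).
    import cv2
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="face-detection-adas-0001.xml",
                          weights="face-detection-adas-0001.bin")
    # "MYRIAD" targets the Neural Compute Stick's Myriad X VPU.
    exec_net = ie.load_network(network=net, device_name="MYRIAD")

    input_blob = next(iter(net.input_info))
    _, _, h, w = net.input_info[input_blob].input_data.shape

    frame = cv2.imread("webcam_frame.jpg")
    blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)  # HWC -> CHW
    result = exec_net.infer({input_blob: blob[None, ...]})

    out_blob = next(iter(net.outputs))
    detections = result[out_blob]  # [1, 1, N, 7] for this model family

All the heavy lifting happens on the stick itself; the host just shuffles frames in and detections out, which is what makes a Raspberry Pi a plausible host in the first place.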

That was promising, but the NCS2 was marketed as allowing AI processing on edge computing devices. I set about installing it on the Raspberry Pi 3 Model B+ and compiling the application samples to see if it worked better than previous methods. This turned out to be more difficult than I expected, and the main goal of this article is to share the process I followed and save some of you a little frustration.

Continue reading “AI On Raspberry Pi With The Intel Neural Compute Stick”