Crew Dragon’s Short Hop Begins The Era Of Valet Parking At The ISS

They weren’t scheduled to return to Earth until April 28th at the earliest, so why did NASA astronauts Michael Hopkins, Victor Glover, and Shannon Walker, along with Japan Aerospace Exploration Agency (JAXA) astronaut Soichi Noguchi, suit up and climb aboard the Crew Dragon Resilience on April 5th? Because a previously untested maneuver meant that after they closed the hatch between their spacecraft and the International Space Station, there was a chance they weren’t going to be coming back.

On paper, moving a capsule between docking ports seems simple enough. All Resilience had to do was undock from the International Docking Adapter 2 (IDA-2) located on the front of the Harmony module, itself attached to the Pressurized Mating Adapter 2 (PMA-2) that was once the orbital parking spot for the Space Shuttle, and move over to the PMA-3/IDA-3 on top of Harmony. It was a short trip through open space, and when the crew exited their craft and reentered the Station at the end of it, they’d only be a few meters from where they started out approximately 45 minutes prior.

The maneuver was designed to be performed autonomously, so technically the crew didn’t need to be on Resilience when it switched docking ports. But allowing the astronauts to stay aboard the station while their only ride home undocked and flew away without them was a risk NASA wasn’t willing to take.

What if the vehicle had some issue that prevented it from returning to the ISS? A relocation of this type had never been attempted by an American spacecraft before, much less a commercial one like the Crew Dragon. So while the chances of such a mishap were slim, the crew still treated this short flight as if it could be their last day in space. All of the necessary checks and preparations had been made so that, should the need arise, the vehicle could safely bring its occupants back to Earth.

Thankfully, that wasn’t necessary. The autonomous relocation of Crew Dragon Resilience went off without a hitch, and SpaceX got to add yet another “first” to their ever-growing list of accomplishments in space. But this first relocation of an American spacecraft at the ISS certainly won’t be the last, as the comings and goings of commercial spacecraft will only get more complex in the future.

Continue reading “Crew Dragon’s Short Hop Begins The Era Of Valet Parking At The ISS”

History Of Closed Captions: The Analog Era

Closed captioning on television and subtitles on DVD, Blu-ray, and streaming media are taken for granted today. But it wasn’t always so. In fact, it was quite a struggle for captioning to become commonplace. Back in the early 2000s, I unexpectedly found myself involved in a variety of closed captioning projects, both designing hardware and consulting with engineering teams at various consumer electronics manufacturers. I may have been the last engineer working with analog captioning as everyone else moved on to digital.

But before digging in, let’s establish some definitions, since there is a lot of confusing and imprecise language floating around on this topic. I often use the word captioning, which encompasses both closed captions and subtitles:

Closed Captions: Transmitted in a non-visible manner as textual data. Usually they can be enabled or disabled by the user. In the NTSC system, this is often referred to as Line 21, since the data was transmitted on video line number 21 in the Vertical Blanking Interval (VBI); a minimal decoding sketch follows these definitions.
Subtitles: Rendered in a graphical format and overlaid onto the video / film. Usually they cannot be turned off. Also called open or hard captions.
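
To put the “textual data” part in concrete terms, here’s a minimal Python sketch of decoding a single CEA-608 byte pair of the kind carried on Line 21. It assumes the two raw bytes have already been recovered from the VBI waveform, and it glosses over control codes, caption channels, and the handful of characters that don’t map straight to ASCII, so treat it as an illustration of the format rather than a working decoder.

```python
# A minimal, illustrative decoder for one CEA-608 ("Line 21") byte pair.
# Assumes the two raw bytes have already been recovered from the VBI;
# control codes, caption channels, and the few non-ASCII characters in
# the basic set are glossed over here.

def odd_parity_ok(byte: int) -> bool:
    """Each Line 21 byte is 7 data bits plus an odd-parity bit."""
    return bin(byte & 0xFF).count("1") % 2 == 1

def decode_pair(b1: int, b2: int) -> str:
    """Return printable text from one byte pair, or '' for control/invalid data."""
    if not (odd_parity_ok(b1) and odd_parity_ok(b2)):
        return ""                       # parity error: drop the pair
    d1, d2 = b1 & 0x7F, b2 & 0x7F       # strip the parity bits
    if 0x10 <= d1 <= 0x1F:
        return ""                       # control code pair (position, color, ...)
    # The basic character set maps closely (though not exactly) to ASCII
    return "".join(chr(d) for d in (d1, d2) if 0x20 <= d <= 0x7F)

print(decode_pair(0xC8, 0xE5))          # -> "He"
```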

The text contained in captions generally falls into one of three categories. Pure dialogue (nothing more) is often the style of captioning you see in subtitles on a DVD or Blu-ray. Ordinary captioning includes the dialogue, but with the addition of occasional cues for music or a non-visible event (a doorbell ringing, for example). Finally, “Subtitles for the Deaf or Hard-of-hearing” (SDH) is a more verbose style that adds even more descriptive information about the program, including the speaker’s name, off-camera events, etc.

Roughly speaking, closed captions target deaf and hard-of-hearing viewers, while subtitles target viewers who can hear the program but want to see the dialogue for some reason, like understanding a foreign movie or learning a new language.

Continue reading “History Of Closed Captions: The Analog Era”

Zinc Fever: A Look At The Risks Of Working With Hot Metal

For as raucous as things can get in the comments section of Hackaday articles, we really love the give and take that happens there. Our readers have an astonishing breadth of backgrounds and experiences, and the fact that everyone so readily shares those experiences and the strongly held opinions that they engender is what makes this community so strong and so useful.

But with so many opinions and experiences being shared, it’s sometimes hard to cut through to the essential truth of an issue. This is particularly true where health and safety are at issue, a topic where it’s easy to get bogged down by an accumulation of anecdotes that mask the underlying biology. Case in point: I recently covered a shop-built tool cabinet build and made an off-hand remark about the inadvisability of welding zinc-plated drawer slides, having heard about the dangers of inhaling zinc fumes once upon a time. That led to a discussion in the comments section on both sides of the issue that left the risks of zinc-fume inhalation somewhat unclear.

To correct this, I decided to take a close look at the risks involved with welding and working zinc. As a welding wannabe, I’m keenly interested in anything that helps me not die in the shop, and as a biology geek, I’m also fascinated by the molecular mechanisms of diseases. I’ll explore both of these topics as we look at the dreaded “zinc fever” and how to avoid it.

Continue reading “Zinc Fever: A Look At The Risks Of Working With Hot Metal”

Code Your Own Twitch Chat Controls For Robots — Or Just About Anything Else!

Twitch Plays Pokemon burst onto the then-nascent livestreaming scene back in 2014, letting Twitch viewers take command of a Game Boy emulator running Pokemon Red via simple chat commands. Since then, the same concept has been applied to everything under the sun. Other video games, installing Linux, and even trading on the New York Stock Exchange have all been gamified through Twitch chat.

TwitchPlaysPokemon started a craze in crowdsourced control of video games, robots, and just about everything else.

You, thirsty reader, are wondering how you can get a slice of this delicious action. Fear not, for with a bit of ramshackle code, you can let Twitch chat take over pretty much anything in, on, or around your computer.

It’s Just IRC

The great thing about Twitch chat is that it runs on vanilla IRC (Internet Relay Chat). The protocol has been around forever, and libraries exist to make interfacing easy. Just like the original streamer behind Twitch Plays Pokemon, we’re going to use Python because it’s great for fun little experiments like these. With that said, any language will do fine — just apply the same techniques in the relevant syntax.
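
To make that concrete, here’s a bare-bones sketch of reading Twitch chat with nothing but Python’s standard library. The server and port are Twitch’s published IRC endpoint; the OAuth token, nick, and channel are placeholders you’d fill in yourself, and a real bot would want reconnection and rate-limit handling on top of this.

```python
# A bare-bones Twitch chat reader using only the Python standard library.
# The OAuth token, nick, and channel below are placeholders.
import socket

SERVER, PORT = "irc.chat.twitch.tv", 6667
TOKEN   = "oauth:your_token_here"       # placeholder: get one from Twitch
NICK    = "your_bot_nick"               # placeholder
CHANNEL = "#your_channel"               # placeholder, lowercase

sock = socket.socket()
sock.connect((SERVER, PORT))
sock.send(f"PASS {TOKEN}\r\n".encode())
sock.send(f"NICK {NICK}\r\n".encode())
sock.send(f"JOIN {CHANNEL}\r\n".encode())

buffer = ""
while True:
    buffer += sock.recv(2048).decode("utf-8", errors="ignore")
    *lines, buffer = buffer.split("\r\n")    # keep any partial line for later
    for line in lines:
        if line.startswith("PING"):          # answer keepalives or get dropped
            sock.send("PONG :tmi.twitch.tv\r\n".encode())
        elif "PRIVMSG" in line:              # an actual chat message
            user = line.split("!", 1)[0][1:]
            message = line.split("PRIVMSG", 1)[1].split(":", 1)[1]
            print(f"{user}: {message}")
```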

SimpleTwitchCommander, as I’ve named it on GitHub, assumes some familiarity with basic Python programming. The code will allow you to take commands from chat in two ways. Commands from chat can be tabulated, and only the one with the most votes executed, or every single command can be acted on directly. Actually getting this code to control your robot, video game, or pet viper is up to you. What we’re doing here is interfacing with Twitch chat and pulling out commands so you can make it do whatever you like. With that said, for this example, we’ve set up the code to parse commands for a simple wheeled robot. Let’s dive in.
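
The real SimpleTwitchCommander code is on GitHub, but the two modes boil down to something like the simplified sketch below. Here get_message() stands in for whatever hands you the next chat line (the socket loop above, say), and execute_command() is a stub where your robot, game, or viper-feeding hardware would plug in; the command names just follow the wheeled-robot example.

```python
# Simplified illustration of the two control modes: act on every command
# immediately, or tally votes for a window and run only the winner.
# get_message() and execute_command() are stand-ins for your own I/O.
from collections import Counter
import time

VALID_COMMANDS = {"forward", "back", "left", "right"}   # wheeled-robot example

def execute_command(cmd: str) -> None:
    print(f"Executing: {cmd}")          # replace with motor control, keypresses, etc.

def direct_mode(get_message) -> None:
    """Act on every valid chat command as soon as it arrives."""
    while True:
        msg = get_message().strip().lower()
        if msg in VALID_COMMANDS:
            execute_command(msg)

def vote_mode(get_message, window_seconds: float = 10.0) -> None:
    """Tally commands for a fixed window, then run only the most popular one."""
    while True:
        votes = Counter()
        deadline = time.time() + window_seconds
        while time.time() < deadline:
            msg = get_message().strip().lower()
            if msg in VALID_COMMANDS:
                votes[msg] += 1
        if votes:
            winner, _count = votes.most_common(1)[0]
            execute_command(winner)
```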

Continue reading “Code Your Own Twitch Chat Controls For Robots — Or Just About Anything Else!”

Field Guide To Shipping Containers

In the 1950s, trucking magnate Malcom McLean changed the world when he got frustrated enough with the speed of trucking and traffic to start a commercial shipping company in order to move goods up and down the eastern seaboard a little faster. Within ten years, containers were standardized, and the first international container ship set sail in 1966. The cargo? Whisky for the U.S. and guns for Europe. What was once a slow and unreliable method of moving all kinds of whatever in barrels, bags, and boxes became a streamlined operation — one that now moves millions of identical containers full of unfathomable miscellany each year.

When I started writing this, there was a container ship stuck in the Suez Canal that had been blocking it for days. Just like that, a vital passage became completely clogged, halting the shipping schedule of everything from oil and weapons to ESP8266 boards and high-waist jeans. The incident really highlights the fragility of the whole intermodal system and makes us wonder if anything will change.

A rainbow of dry storage containers. Image via xChange

Setting the Standard

We are all used to seeing the standard shipping container that’s either a 10′, 20′, or 40′ long box made of steel or aluminum with doors on one end. These are by far the most common type, and are probably what comes to mind whenever shipping containers are mentioned.

These are called dry storage containers, and per ISO container standards, they are all 8′ wide and 8′ 6″ tall. There are also ‘high cube’ containers that are a foot taller, but otherwise share the same dimensions. Many of these containers end up as some type of housing, either as stylish studios, post-disaster survivalist shelters, or construction site offices. As the pandemic wears on, they have become so sought-after that prices have surged in the last few months.

Although Malcom McLean did not invent container shipping, the strict containerization standards that followed in his wake prevent issues during stacking, shipping, and storing, and allow any container to be handled safely at any port in the world, or loaded onto any rail car with ease. Every bit of the container is standardized, from the dimensions to the way the container’s information is displayed on the end. At most, the difference between any two otherwise identical containers is the number, the paint job, and maybe a few millimeters in one dimension.

Standard as they may be, these containers don’t work for every type of cargo. There are quite a few more types of shipping containers out there that serve different needs. Let’s take a look at some of them, shall we?

Continue reading “Field Guide To Shipping Containers”

Death Of The Turing Test In An Age Of Successful AIs

IBM has come up with an automatic debating system called Project Debater that researches a topic, presents an argument, listens to a human rebuttal and formulates its own rebuttal. But does it pass the Turing test? Or does the Turing test matter anymore?

The Turing test was first introduced in 1950, often cited as year one for AI research. It asks, “Can machines think?” Today we’re more interested in machines that can intelligently make restaurant recommendations, drive our car along the tedious highway to and from work, or identify the surprising-looking flower we just stumbled upon. These all fit the definition of AI as a machine that can perform a task normally requiring the intelligence of a human. Though as you’ll see below, Turing’s test wasn’t really a test of intelligence, or even of thinking, but rather a way to determine a test subject’s sex.

Continue reading “Death Of The Turing Test In An Age Of Successful AIs”

AI Upscaling And The Future Of Content Delivery

The rumor mill has recently been buzzing about Nintendo’s plans to introduce a new version of their extremely popular Switch console in time for the holidays. A faster CPU, more RAM, and an improved OLED display are all pretty much a given, as you’d expect for a mid-generation refresh. Those upgraded specifications will almost certainly come with an inflated price tag as well, but given the incredible demand for the current Switch, a $50 or even $100 bump is unlikely to dissuade many prospective buyers.

But according to a report from Bloomberg, the new Switch might have a bit more going on under the hood than you’d expect from the technologically conservative Nintendo. Their sources claim the new system will utilize an NVIDIA chipset capable of Deep Learning Super Sampling (DLSS), a feature which is currently only available on high-end GeForce RTX 20 and GeForce RTX 30 series GPUs. The technology, which has already been employed by several notable PC games over the last few years, uses machine learning to upscale rendered images in real-time. So rather than tasking the GPU with producing a native 4K image, the engine can render the game at a lower resolution and have DLSS make up the difference.
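
DLSS itself is proprietary and leans on NVIDIA’s tensor cores, but the core idea of rendering small and letting a trained network fill in the missing pixels can be sketched in a few lines of PyTorch (assumed here purely for illustration). The toy model below uses a sub-pixel convolution (PixelShuffle) layer to turn a 960×540 frame into a 1920×1080 one; unlike DLSS it has untrained weights and no motion-vector input, so it only demonstrates the shape of the technique.

```python
# Toy example of learned upscaling in PyTorch. This is NOT DLSS: the
# weights are untrained and there is no temporal/motion-vector input;
# it just shows the render-small-then-upscale idea.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),     # rearrange channels into a larger image
        )

    def forward(self, x):               # x: (N, 3, H, W) low-res frame
        return self.body(x)             # -> (N, 3, scale*H, scale*W)

frame = torch.rand(1, 3, 540, 960)      # pretend this is a 960x540 rendered frame
print(TinyUpscaler()(frame).shape)      # torch.Size([1, 3, 1080, 1920])
```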

The current model Nintendo Switch

The implications of this technology, especially for computationally limited devices, are immense. For the Switch, which doubles as a battery-powered handheld when removed from its dock, the use of DLSS could allow it to produce visuals similar to the far larger and more expensive Xbox and PlayStation systems it’s in competition with. If Nintendo and NVIDIA can prove DLSS to be viable on something as small as the Switch, we’ll likely see the technology come to future smartphones and tablets to make up for their relatively limited GPUs.

But why stop there? If artificial intelligence systems like DLSS can scale up a video game, it stands to reason the same techniques could be applied to other forms of content. Rather than saturating your Internet connection with a 16K video stream, will TVs of the future simply make the best of what they have using a machine learning algorithm trained on popular shows and movies?

Continue reading “AI Upscaling And The Future Of Content Delivery”