The news was recently abuzz with stories of how the Mars 2020 mission, which launched from Cape Canaveral at the end of July, had done something that no other spacecraft had done before: it had successfully charged the batteries aboard a tiny helicopter that is hitching a ride in the belly of the Mars 2020 rover, Perseverance.
Although the helicopter, aptly named Ingenuity, is only a technology demonstrator, and flight operations will occupy but a small fraction of the time Mars 2020 is devoting to its science missions, it has still understandably captured the popular imagination. This will be humanity’s first attempt at controlled, powered flight on another planet, after all, and that alone is enough to spur intense interest in what amounts to a side-project for NASA. So here’s a closer look at Ingenuity, and what it takes to build a helicopter that will explore another world.
Lithium-ion batteries have been a revolutionary technology. Their high energy and power density have made the electric car a practical reality, enabled grid storage for renewable energy, and put powerful computers in the palm of the hand. However, if there’s one thing humanity is known for, it’s always wanting more.
Potential contenders for the title of ultimate battery technology are out there, but it will take a major shift to dethrone lithium-ion from the top of the tree.
A heat wave spreading across a large portion of the west coast of the United States is not surprising for this time of year, but the frequency and severity of these heat waves have been getting worse in recent years as the side effects from climate change become more obvious. In response to this, the grid operators in California have instituted limited rolling blackouts as electricity demand ramps up.
This isn’t California’s first run-in with elective blackouts, either. The electrical grid in California is particularly prone to issues like this, both from engineering shortcomings and from other, less obvious problems.
The Raspberry Pi was initially developed as an educational tool. With its bargain price and digital IO, it quickly became a hacker favorite. It also packed just enough power to serve as a compact emulation platform for anyone savvy enough to load up a few ROMs on an SD card.
Video game titans haven’t turned a blind eye to this, realising there’s still a market for classic titles. Combine that with the Internet’s love of anything small and cute, and the market was primed for the release of tiny retro consoles.
Often selling out quickly upon release, the devices have met with a mixed reception at times due to the quality of the experience and the games included in the box. With so many people turning the Pi into a retrogaming machine, these mini-consoles, purpose-built for exactly that, should have been immediately loved by hardware hackers, right? So what happened?
In 2020, our digital world and the software we use to create it are a towering structure, built upon countless layers of abstraction and building blocks — just think about all the translations and interactions that occur when loading a webpage. Whilst abstraction is undoubtedly a great thing, it only works if we’re building on solid ground; if the lower levels are stable and fast. What does that mean in practice? It means low-level, compiled languages, which can be heavily optimised to make the most of computer hardware. One of the giants in this area was Frances Allen, who passed away in early August. Described by IBM as “a pioneer in compiler organization and optimization algorithms,” she made numerous significant contributions to the field.
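To give a feel for what an optimising compiler actually does, here is a minimal, purely illustrative sketch of constant folding, one of the classic transformations in the tradition of optimisation work Allen helped establish. The Python below is our own toy example, not anything drawn from her papers:

```python
# A toy constant-folding pass: the sort of transformation optimising
# compilers perform so programmers can write readable arithmetic without
# paying for it at runtime. Illustrative sketch only, not Allen's work.
import ast

class ConstantFolder(ast.NodeTransformer):
    """Collapse arithmetic on literal numbers at 'compile' time."""
    OPS = {ast.Add: lambda a, b: a + b,
           ast.Sub: lambda a, b: a - b,
           ast.Mult: lambda a, b: a * b}

    def visit_BinOp(self, node):
        self.generic_visit(node)          # fold children first (bottom-up)
        if (isinstance(node.left, ast.Constant) and
                isinstance(node.right, ast.Constant) and
                type(node.op) in self.OPS):
            value = self.OPS[type(node.op)](node.left.value, node.right.value)
            return ast.copy_location(ast.Constant(value=value), node)
        return node

source = "y = x * (60 * 60 * 24)   # seconds per day, written out longhand"
tree = ast.fix_missing_locations(ConstantFolder().visit(ast.parse(source)))
print(ast.unparse(tree))                  # -> y = x * 86400
```

Real compilers perform this and far more sophisticated transformations, guided by the kind of program analysis Allen helped formalise.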
It’s true what they say — you never know what you can do until you try. Russell Kirsch, who developed the first digital image scanner and subsequently invented the pixel, was a firm believer in this axiom. And if Russell had never tried to get a picture of his three-month-old son into a computer back in 1957, you might be reading Hackaday in print right now. Russell’s work laid the foundation for the algorithms and storage methods that make digital imaging what it is today.
Russell reads SEAC’s last printout. Image via TechSpot
Russell A. Kirsch was born June 20, 1929 in New York City, the son of Russian and Hungarian immigrants. He got quite an education, beginning at the Bronx High School of Science. He went on to earn a bachelor’s degree in Electrical Engineering from NYU and a Master of Science from Harvard, and also attended American University and MIT.
In 1951, Russell went to work for the National Bureau of Standards, now known as the National Institute of Standards and Technology (NIST). He spent nearly 50 years at NIST, and started out working with one of the first programmable computers in America, SEAC (the Standards Eastern Automatic Computer). This room-sized computer, built in 1950, was developed as an interim solution for the Census Bureau to do research (PDF).
The Standards Eastern Automatic Computer (SEAC) was the first fully operational stored-program electronic computer in the United States. Credit: NIST via Wikimedia
Like the other computers of its time, SEAC spoke the language of punch cards, mercury memory, and wire storage. Russell Kirsch and his team were tasked with finding a way to feed pictorial data into the machine without any prior processing. Since the computer was supposed to be temporary, its use wasn’t as tightly controlled as that of other machines, and although it ran 24/7 and got plenty of use, SEAC remained accessible enough to leave room for bleeding-edge experimentation. NIST ended up keeping SEAC around for the next thirteen years, until 1963.
The Original Pixel Pusher
This photo of Russell’s son Walden is the first digitized image. Public Domain via Wikimedia
The term ‘pixel’ is a shortened portmanteau of picture element. Technically speaking, the pixel is the smallest addressable element of a digital image. Pixels are the building blocks for anything that can be displayed on a computer screen, so they’re kind of the first addressable blinkenlights.
The scanner Russell and his team built held the photograph on a rotating drum. As the drum slowly rotated, a photomultiplier moved back and forth, scanning the image through a square viewing hole in the wall of a box. The tube digitized the picture by transmitting ones and zeros to SEAC that described what it saw through the square viewing hole — 1 for white, and 0 for black. The digital image of Walden is 76 x 76 pixels, which was the maximum allowed by SEAC.
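For the curious, here’s a minimal sketch (in modern Python, nothing like the original SEAC code) of what that scan boils down to: sample a greyscale image on a 76 x 76 grid and keep a single bit per sample, 1 for white and 0 for black. The synthetic photograph below is just a bright disc, standing in so the example runs without any input file:

```python
# A minimal sketch of a 1-bit, 76 x 76 scan -- not NIST's actual code.
SIZE = 76          # SEAC's memory limited the scan to 76 x 76 samples
THRESHOLD = 128    # anything brighter than this counts as "white"

# Stand-in for the photograph: a synthetic greyscale image (0-255) with a
# bright disc in the middle, so the 1-bit scan has something to show.
def brightness(x, y):
    cx, cy, r = SIZE / 2, SIZE / 2, SIZE / 3
    return 255 if (x - cx) ** 2 + (y - cy) ** 2 < r ** 2 else 40

# "Scan" row by row, as the drum rotates and the photomultiplier steps along.
bitmap = [[1 if brightness(x, y) > THRESHOLD else 0 for x in range(SIZE)]
          for y in range(SIZE)]

# Print every fourth row and column so the result fits in a terminal.
for row in bitmap[::4]:
    print("".join("#" if bit else "." for bit in row[::4]))
```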
In the video below, Russell discusses the idea and proves that variable pixels make a better image with more information than square pixels do, and with significantly fewer pixels overall. It takes some finagling, as pixel pairs of triangles and rectangles must be carefully chosen, rotated, and mixed together to best represent the image, but the image quality is definitely worth the effort. Following that is a video of Russell discussing SEAC’s hardware.
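To give a flavour of the idea, here’s a simplified sketch of our own devising, not Kirsch’s actual algorithm: each cell is represented either as one flat square or as two triangles split along a diagonal, whichever reconstructs the underlying block with less squared error. A block containing a hard diagonal edge, which a single square pixel would smear into grey, is captured exactly by the triangle pair:

```python
# A rough illustration of variable-shaped pixels -- NOT Kirsch's algorithm.
# Each block becomes either one flat square or two triangles along a
# diagonal, whichever has the lower squared reconstruction error.
def block_error(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values), mean

def encode_block(block):
    """block: a square 2D list of greyscale values, at least 2 x 2."""
    n = len(block)
    flat = [v for row in block for v in row]
    square_err, square_mean = block_error(flat)

    best = ("square", square_mean, square_err)
    for rising in (True, False):                      # try both diagonals
        upper, lower = [], []
        for y, row in enumerate(block):
            for x, v in enumerate(row):
                above = (x >= y) if rising else (x + y < n)
                (upper if above else lower).append(v)
        err_u, mean_u = block_error(upper)
        err_l, mean_l = block_error(lower)
        if err_u + err_l < best[2]:
            best = ("triangles", (mean_u, mean_l, rising), err_u + err_l)
    return best

# A block with a hard diagonal edge is captured perfectly by two triangles.
edge_block = [[255 if x >= y else 0 for x in range(4)] for y in range(4)]
print(encode_block(edge_block))   # -> ('triangles', (255.0, 0.0, True), 0.0)
```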
Russell retired from NIST in 2001 and moved to Portland, Oregon. As of 2012, he could be found in the occasional coffeehouse, discussing technology with anyone he could engage. Unfortunately, Russell developed Alzheimer’s and died from complications on August 11, 2020. He was 91 years old.
The crown jewels of the Earth’s mountain ranges, the Himalayas, are unsurpassed in their beauty, their height, and their deadly attraction to adventurers, both professional and amateur. The gem of the Himalayas is, of course, Mount Everest, known as Sagarmatha to the Nepalis and Chomolungma to the Tibetans. At 8,848 meters (29,029 ft) — or more; it’s a geologically young mountain that’s still being thrust upward by tectonic activity — it’s a place so forbidding that, as far as we know, the summit wasn’t reached until 1953, despite at least 30 years of previous attempts, many of which ended in death.
The conquest of Everest remains a bucket list challenge for many adventurers, and despite advances in technology that have made the peak accessible to more people — or perhaps because of that — more than 300 corpses litter the mountain, testament to what can happen when you take the power of Mother Nature for granted.
To get better data on the goings-on at the Roof of the World, an expedition recently sought to install five weather stations across various points on the route up Mount Everest, including one at its very peak. The plan was challenging, both from a mountaineering perspective and in terms of the engineering required to build something that would be able to withstand some of the worst conditions on the planet, and to send valuable data back reliably. It didn’t all go exactly to plan, but it’s still a great story about the intersection of science and engineering.