Morse code — that series of dots and dashes — can be useful in the strangest situations. I remember an original Star Trek episode in which an injured [Christopher Pike] could only blink a light once for yes and twice for no. Even as a kid, I thought, “Too bad they didn’t think to teach him Morse code.” Of course, odd uses of Morse aren’t just for TV and movies. Perhaps the strangest real-life use was the Colombian government hiding Morse code in pop music to send messages to hostages.
In 2010, [Jose Espejo] was close to retirement from the Colombian army, but he was bothered by the fact that some of his comrades were hostages of FARC (the Revolutionary Armed Forces of Colombia, the anti-government guerrillas), some for as long as ten years. A massive effort to free the hostages was underway, and the army wanted them to know about it, both to boost morale and so they’d be ready to escape. But how do you send a message to people in captivity without alerting their captors?
The image of the crackpot inventor, disheveled, disorganized, and surrounded by the remains of his failures, is an enduring Hollywood trope. While a quick look around one’s own shop will probably reveal how such stereotypes get started, the image is largely an unfair characterization of the creative mind and how it works, and it shortchanges those who struggle daily to push the state of the art into uncharted territory.
That said, plenty of wacky ideas have come down the pike, most of which mercifully fade away before attracting undue attention. In times of war, though, the need for new and better ways to blow each other up tends to bring out the really nutty ideas and lower the barrier to revealing them publicly, or at least to military officials.
Of all the zany plans hatched on each side of World War II, few seem as out there as a plan to use birds to pilot bombs to their targets. And yet such a plan was not only actively developed, it came from the fertile mind of one of the 20th century’s most brilliant psychologists, and it very nearly resulted in a fieldable weapon that would let fly the birds of war.
If you were to create a short list of women who influenced software engineering, one of the first picks would be Margaret Hamilton. The Apollo 11 source code lists her title as “PROGRAMMING LEADER”. Today that title would probably be something along the lines of “lead software engineer”.
Margaret Hamilton was born in rural Indiana in 1936. Her father was a philosopher and poet who, along with her grandfather, encouraged her love of math and science. She studied mathematics with a minor in philosophy, earning her BA from Earlham College in 1956. While at Earlham, her plan to continue on to grad school was put on hold as she supported her husband while he worked on his own degree at Harvard. Margaret took a job at MIT, working under Professor Edward Norton Lorenz on a computer program to predict the weather, and cut her teeth on the desk-sized LGP-30 computer in Lorenz’s office.
Hamilton soon moved on to the SAGE program, writing software which would monitor radar data for incoming Russian bombers. Her work on SAGE put Margaret in the perfect position to jump to the new Apollo navigation software team.
The Apollo guidance computer was designed at MIT, with manufacturing done at Raytheon. To say this was a huge software project for the time would be an understatement. By 1968, over 350 engineers were working on the software, and some 1,400 man-years of software engineering were logged before Apollo 11 touched down on the lunar surface. The project was led by Margaret Hamilton.
On April 2nd, 2018, a Falcon 9 rocketed skywards towards the International Space Station. The launch itself went off without a hitch, and the Dragon spacecraft delivered its payload of supplies and spare parts. But alongside the usual deliveries, CRS-14 brought a particularly interesting experiment to the station.
Developed by the University of Surrey, RemoveDEBRIS is a demonstration mission that aims to test a number of techniques for tackling the increasingly serious problem of “space junk”. Earth orbit is filled with old spacecraft and bits of man-made hardware that have turned some regions of space into a veritable minefield. While plenty of ideas have been floated for handling this growing issue, RemoveDEBRIS will be testing some of these methods under real-world conditions.
The RemoveDEBRIS spacecraft will do this by launching two CubeSats as test targets, which it will then (hopefully) eliminate in a practical demonstration of what’s known as Active Debris Removal (ADR) technology. If successful, these techniques could eventually become standard operating procedure on future missions.
Pointers — you either love them, or you haven’t fully understood them yet. But before you storm off to the comment section: yes, pointers are a polarizing subject, and they are both C’s biggest strength and its major source of problems. With great power comes great responsibility. The internet and libraries are full of tutorials and books about pointers, and you can pick pretty much any one of them at random and you’ll be good to go. However, while the basic principles of pointers are rather simple in theory, it can be challenging to fully wrap your head around their purpose and exploit their true potential.
So if you’ve always been a little fuzzy on pointers, read on for some real-world scenarios of where and how they are used. This first part starts with regular pointers: their basics, common pitfalls, and some general and microcontroller-specific examples.
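To give a flavor of those scenarios, here is a minimal C sketch (not from the article itself) showing three of the ideas in miniature: passing an address so a function can modify its caller’s data, the classic dangling-pointer pitfall, and the microcontroller idiom of treating a fixed address as a hardware register. The register address below is made up for illustration; a real one would come from the chip’s datasheet.

```c
#include <stdio.h>
#include <stdint.h>

/* A function can modify its caller's data only if it receives an
   address rather than a copy; this is the classic use of pointers. */
static void swap(int *a, int *b)
{
    int tmp = *a;
    *a = *b;
    *b = tmp;
}

int main(void)
{
    int x = 1, y = 2;
    swap(&x, &y);                  /* pass addresses, not values */
    printf("x=%d y=%d\n", x, y);   /* prints: x=2 y=1 */

    /* Common pitfall: a dangling pointer. p outlives the block-scoped
       variable it points to; dereferencing it afterwards would be
       undefined behavior. */
    int *p = NULL;
    {
        int short_lived = 42;
        p = &short_lived;
    }
    /* printf("%d\n", *p); */      /* don't: undefined behavior */
    (void)p;

    /* Microcontroller idiom: treat a fixed address as a hardware
       register. 0x40020014 is a made-up address for illustration;
       a real one comes from the datasheet. Only the pointer is formed
       here, since dereferencing it on a PC would fault. */
    volatile uint32_t *gpio_out = (volatile uint32_t *)0x40020014u;
    (void)gpio_out;

    return 0;
}
```

The volatile qualifier in the last example matters: without it, the compiler is free to cache or optimize away accesses to what it assumes is ordinary memory.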
Self-driving cars have been in the news a lot in the past two weeks. Uber’s self-driving taxi hit and killed a pedestrian on March 18, and just a few days later a Tesla running in “Autopilot” mode slammed into a road barrier at full speed, killing the driver. In both cases, a human driver was supposed to be watching over the shoulder of the machine, but in the Uber case the driver appears to have been distracted, and in the Tesla case the driver’s hands were off the steering wheel for the six seconds before the crash. How safe are self-driving cars?
Trick question! Neither of these cars was “self-driving” in at least one sense: both had a person behind the wheel who was ultimately responsible for piloting the vehicle. And the Uber and Tesla driving systems aren’t even comparable. The Uber taxi does routing and planning, knows the speed limit, and should be able to see red traffic lights and stop at them (more on this below!). Tesla’s “Autopilot” system is really just the combination of adaptive cruise control and lane-holding subsystems, which isn’t even enough to get it classified as autonomous in the state of California. Indeed, it’s a failure of the people behind the wheels, and a failure to properly train those people, that makes the pilot-plus-self-driving-car combination more dangerous than a human driver alone would be.
[Image: a self-driving Uber Volvo XC90 in San Francisco.]
You could still imagine wanting to dig into the numbers behind self-driving cars’ safety records, even though the systems are heterogeneous and have humans playing the Mechanical Turk. If you did, you’d be sorely disappointed: none of the manufacturers publish data they don’t have to. Indeed, our glimpses into these companies’ autonomous vehicle data come from two sources: internal documents leaked to the press, and carefully selected statistics from the firms’ PR departments. The state of California, which requires the most rigorous documentation of autonomous vehicles anywhere, is a third source, but because Tesla’s car isn’t autonomous, and because Uber refused to admit to the California DMV that its car is, we get no extra insight into these two platforms.
Nonetheless, Tesla’s Autopilot now has three fatalities to its name, and all have one thing in common: all three drivers trusted the lane-holding feature enough not to take control of the wheel in the last few seconds of their lives. With Uber, there’s very little autonomous vehicle performance history to go on, but the leaked documents form a pattern that makes Uber look like a risk-taking scofflaw with sub-par technology and a vested interest in making it look better than it is. That these vehicles are being let loose on public roads, without extra oversight and with other traffic participants as safety guinea pigs, is giving both the self-driving car industry and the ideal itself a black eye.
While Tesla’s and Uber’s car technologies are very dissimilar, the companies have something in common: both are “disruptive” firms with mavericks at the helm who see their fates hinging on the widespread deployment of self-driving technology. But what differentiates Uber and Tesla most from Google and GM is, ironically, their use of essentially untrained test pilots in their vehicles: Tesla’s in the form of consumers, and Uber’s in the form of taxi drivers with very little specific autonomous-vehicle training. What caused the Tesla and Uber accidents may have a lot more to do with human factors than with self-driving technology per se.
You can see we’ve got a lot of ground to cover. Read on!
China’s first space station, Tiangong-1, is expected to make an uncontrolled re-entry on April 1st, plus or minus four days, though the error bars vary depending on the source. And no, it’s not the grandest of all April Fools’ jokes. Tiangong means “heavenly palace”, and this palace is just one step toward a larger, permanent installation.
But before detailing just who’ll have to duck when the time comes, as well as how to find it in the night sky while you still can, let’s catch up on China’s space station program and Tiangong-1 in particular.