There is more than one way to lead a successful life. Some people have all the opportunity in the world laid out before them, and it never does them any good. Others have little more than the determination and desire they’ve dredged up within themselves, and that grit turns out to be the abrasive that smooths the path ahead.
Ronald McNair succeeded despite poverty, racism, and an education system designed to keep Black people down. He became an accidental revolutionary at the age of nine, when he broke the color barrier in his small South Carolina town via the public library. This act of defiance in pursuit of education would set the course for his relatively short but full life, which culminated in his career as a Space Shuttle mission specialist.
Rule-Breaker with a Slide Rule
Ronald McNair was born on October 21, 1950 in Lake City, South Carolina, the second of three sons born to Pearl and Carl McNair. His mother was a teacher and encouraged his love of reading. Ronald’s father, Carl, was an auto mechanic who never finished high school and always regretted it. Though the family was poor, Ron grew up surrounded by books, music, and support.
3D printers have become a staple in most makerspaces these days, enabling hackers to rapidly produce simple mechanical prototypes without the need for a dedicated machine shop. We’ve seen many creative 3D designs here on Hackaday, and [jegatheesan.soundarapandian’s] Baby MIT Cheetah Robot is no exception. You’ve undoubtedly seen MIT’s cheetah robot; [jegatheesan’s] hack puts a personal spin on it, and the results are pretty cool.
The body of the robot is 3D printed, making it easy to customize the design and replace broken parts as you go. The legs are built as five-bar linkages, with two servo motors driving each of the four legs. An additional servo rotates an HC-SR04, a popular ultrasonic distance sensor, which handles obstacle avoidance in the autonomous mode. The robot can also be controlled over Bluetooth using an app [jegatheesan] developed in MIT App Inventor.
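We don’t know exactly what [jegatheesan]’s firmware looks like, but the obstacle-avoidance loop described above is simple enough to sketch. The MicroPython snippet below shows the general idea: ping the HC-SR04, sweep it with its dedicated servo, and steer away from the closest obstacle. All pin numbers, servo duty values, and thresholds here are made up for illustration.

```python
# Rough MicroPython sketch of the autonomous obstacle-avoidance idea.
# Pin numbers, servo duty values, and thresholds are illustrative guesses,
# not taken from [jegatheesan]'s firmware.
from machine import Pin, PWM, time_pulse_us
import time

trig = Pin(5, Pin.OUT)               # HC-SR04 trigger (hypothetical pin)
echo = Pin(18, Pin.IN)               # HC-SR04 echo (hypothetical pin)
scan_servo = PWM(Pin(19), freq=50)   # servo that sweeps the sensor

def distance_cm():
    """Fire a 10 us trigger pulse and convert the echo time to centimeters."""
    trig.value(0); time.sleep_us(2)
    trig.value(1); time.sleep_us(10)
    trig.value(0)
    pulse = time_pulse_us(echo, 1, 30000)     # time out after ~30 ms
    return pulse / 58 if pulse > 0 else None  # ~58 us per cm, round trip

def scan():
    """Point the sensor left, center, and right and take a reading at each."""
    readings = {}
    for name, duty in (("left", 100), ("center", 77), ("right", 55)):
        scan_servo.duty(duty)         # duty range depends on the servo used
        time.sleep_ms(300)
        readings[name] = distance_cm()
    return readings

while True:
    r = scan()
    if r["center"] is None or r["center"] > 25:
        action = "walk_forward"       # path is clear
    elif (r["left"] or 0) > (r["right"] or 0):
        action = "turn_left"          # more room on the left
    else:
        action = "turn_right"
    print(action)                     # the real robot would run its gait here
```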
Overall, the mechanics could use a bit of work — [jegatheesan’s] baby cheetah probably won’t outpace MIT’s robot any time soon — but it’s a cool hack and we’re looking forward to a version 3. Maybe the cheetah would make a cool companion bot?
The MIT Media Lab’s Open Agriculture Initiative (OpenAg) promised to revolutionize urban farming with their Food Computers: essentially miniature automated gardens that could be installed in racks to maximize growing space. Each unit would be provided with a “Recipe” that allowed it to maintain the ideal environmental conditions for the species it contained, which meant that even a novice gardener could produce a bumper crop, whether they lived in the Arctic Circle or the Sahara.
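We can’t vouch for the exact recipe format OpenAg used, but the basic concept is easy to picture: a set of environmental setpoints plus a control loop that switches actuators to chase them. Here’s a minimal Python sketch of that idea; the field names, values, and actuators are invented for illustration and are not OpenAg’s actual schema.

```python
# A rough sketch of the "Recipe" idea: target setpoints plus a naive
# bang-bang control step. Field names and values are made up for
# illustration; they are not OpenAg's actual recipe schema.
RECIPE = {
    "air_temp_c":   {"min": 22.0, "max": 26.0},
    "humidity_pct": {"min": 55.0, "max": 70.0},
    "ph":           {"min": 5.8,  "max": 6.4},
}

def control_step(sensors: dict) -> dict:
    """Return on/off commands for each actuator given current readings."""
    commands = {}
    t = sensors["air_temp_c"]
    commands["heater"] = t < RECIPE["air_temp_c"]["min"]
    commands["vent_fan"] = t > RECIPE["air_temp_c"]["max"]
    h = sensors["humidity_pct"]
    commands["humidifier"] = h < RECIPE["humidity_pct"]["min"]
    return commands

print(control_step({"air_temp_c": 21.2, "humidity_pct": 48.0, "ph": 6.1}))
# -> {'heater': True, 'vent_fan': False, 'humidifier': True}
```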
With such lofty goals, success certainly wasn’t assured. But we still didn’t expect to hear that the program had to be permanently closed after a string of startling accusations came to light. From engaging in scientific dishonesty to setting off a minor ecological disaster, the story just gets worse and worse. Who could have imagined that one day we’d have to report on an open source project having direct ties to Jeffrey Epstein?
Food Computer v3.0
According to reports, MIT Media Lab Director Joichi Ito and OpenAg principal researcher Caleb Harper attempted to secure $1.5 million in funding for the program during a 2017 meeting with the disgraced financier. Epstein apparently wasn’t impressed by what he saw, and no money ever changed hands. Given the information we now have about the project, this might actually be the least surprising part of the story.
It has since come to light that the Food Computers never worked consistently, and indeed never made it past the prototype stage. This despite the fact that Harper claimed, during presentations to potential investors, that functional units had already been deployed to refugee camps. A scientist working with the project has even come forward with claims that staff were instructed to place plants bought from local garden centers into the prototype Food Computers prior to tours of the lab, so visitors would think they had been grown in the devices.
A former researcher on the OpenAg program, Babak Babakinejad, also went public with his concerns over the environmental impact of dumping waste water from the Food Computers. The lab had a permit to pump nitrogen-infused water into an underground disposal well, but according to Babakinejad, internal testing showed the nitrogen levels in the water would occasionally top 20 times the permitted limit. After being ignored by Harper and other MIT staff, he eventually took the matter directly to the Massachusetts Department of Environmental Protection, which led to an investigation and ultimately a $25,000 fine.
We first covered the Open Agriculture Initiative back in 2016, and readers expressed doubts about the concept even then. While we certainly don’t relish making an update like this about a project we’ve featured, it’s an important reminder that honesty and integrity can’t take a backseat to technical achievement.
Though mostly known for its releases on countless 8-bit personal computers from the 1970s and 1980s, the game of Zork began its life on a PDP-10 mainframe. Recently, MIT released the original source code for this version of Zork. As we covered a while ago, the history of Zork is a long and illustrious one, a history rooted in this initial version, written in MDL.
To recap, MDL is a LISP-derived language that excels at natural language processing. It was developed and used at MIT’s AI and LCS (now CSAIL) labs for a number of projects, and of course for developing games. The use of MDL gave Zork, as a text-based adventure, a level of interaction that was far ahead of its time.
What MIT has made available is the source code from Zork as it existed around 1977, at a time when it was being distributed to universities around the US. For purely educational purposes, obviously. This means that it’s a version of Zork before it was commercialized (~1979), showing a rare glimpse of the game as it was still busily being expanded.
Running the game will take a bit of effort, however. These files were retrieved from an original MIT backup tape that was used with their PDP-10 machines. Ideally one would use a 1970s-era PDP-10 mainframe with an MDL compiler, but in a pinch one could run a PDP-10 emulator as well.
Let us know whether you got it to run. Screenshots (ASCII or not) are highly encouraged.
Ventilators are key to treating the most dire cases of coronavirus. The exponential growth of infections, and the number of patients in respiratory distress, have outpaced the number of available ventilators. In times of crisis, everyone looks for ways they can help, and one of the ways the hardware community has responded is by working toward a ventilator design that can be rapidly manufactured to meet the need.
The difficult truth is that the complexity of ventilator features needed to treat the sickest patients makes a bootstrapped design incredibly difficult, and I believe impossible to achieve in quantity on this timeline. Still, a well-engineered and clinically approved open source ventilator might deliver many benefits beyond the current crisis. Let’s take a look at some of the efforts we’ve been seeing recently and what it would take to pull together a complete design.
When you put a human driver behind the wheel, they rely primarily on their eyes to navigate, both to stay on the road and to use any navigation aids, such as maps and digital navigation assistants. For self-driving cars, tackling the latter is relatively easy, as the system can use the same information in a similar way: when to change lanes, and when to take a left or right. The former task is a lot harder, since situational awareness is a challenge even for human drivers.
In order to maintain this awareness, self-driving and driver-assistance systems use a combination of cameras, LIDAR, and other sensors. These track stationary and moving objects, as well as the lines and edges of the road. This allows the car to precisely follow the road and, at least in theory, avoid running into obstacles or other vehicles. But if the weather gets bad enough, such as when the road is covered with snow, these systems can have trouble coping.
Looking for ways to improve the performance of autonomous driving systems in poor visibility, engineers are currently experimenting with ground-penetrating radar. While it’s likely to be a while before we start to see this hardware on production vehicles, the concept already shows promise. It turns out that if you can’t see what’s on the road ahead of you, looking underneath it might be the next best thing.
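The underlying trick is localization rather than obstacle detection: the vehicle carries a previously recorded map of subsurface radar reflections and matches its live scan against that map to work out where it sits on the road, snow or no snow. The Python snippet below is a toy 1-D illustration of that matching step, not the algorithm any real system uses; the map, scan length, and noise level are all invented.

```python
# Toy illustration of ground-penetrating-radar localization: match a noisy
# live scan against a prior map of subsurface reflections to find position.
import numpy as np

rng = np.random.default_rng(0)

# "Map": subsurface reflection strength recorded every 10 cm along a road.
road_map = rng.normal(size=2000)

# "Live scan": a 50-sample stretch of that map, seen with sensor noise.
true_pos = 1234
scan = road_map[true_pos:true_pos + 50] + 0.3 * rng.normal(size=50)

def localize(scan: np.ndarray, road_map: np.ndarray) -> int:
    """Return the map index where the scan correlates best."""
    best_pos, best_score = 0, -np.inf
    for i in range(len(road_map) - len(scan)):
        window = road_map[i:i + len(scan)]
        score = np.dot(scan - scan.mean(), window - window.mean())
        if score > best_score:
            best_pos, best_score = i, score
    return best_pos

print(localize(scan, road_map))  # should print 1234, or very close to it
```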
Most often, humans and robots do not have to work directly together, instead handling different parts of a production pipeline or with the robot performing tasks in place of a human. In such cases any human-robot interaction (HRI) is superficial. But what if humans and robots have to work alongside each other? That is a question a group of students at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) has recently been exploring.
In their paper on human-robot collaborative tasks (PDF), they cover the three possible models for this kind of interaction: no communication at all (‘silent’), pre-programmed communication (a state machine), or, as in this case, a Markov model-based system. The framework they demonstrate, called CommPlan, uses observation data from human subjects to construct a Markov model that integrates sensor data in order to decide on the robot’s next action.
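The paper’s actual models and numbers are beyond what a snippet can capture, but the core trade-off CommPlan reasons about, whether the time cost of speaking up beats the expected cost of a silent misstep, is easy to caricature. The Python below is exactly that: a toy, with invented states, probabilities, and costs that are not taken from the CommPlan paper.

```python
# Toy illustration of the decision a robot co-worker faces: given a belief
# about what the human will do next, is interrupting them worth it?
# States, probabilities, and costs are invented; this is not CommPlan itself.
BELIEF = {
    "human_reaches_for_bowl": 0.7,
    "human_reaches_for_spoon": 0.1,
    "human_idle": 0.2,
}

COST_OF_ASKING = 1.0      # seconds lost to a verbal exchange
COST_OF_COLLISION = 6.0   # seconds lost if both reach for the same object

def expected_cost_silent(belief: dict, robot_target: str) -> float:
    """Expected time lost if the robot silently grabs robot_target."""
    clash_prob = belief.get("human_reaches_for_" + robot_target, 0.0)
    return clash_prob * COST_OF_COLLISION

def should_ask(belief: dict, robot_target: str) -> bool:
    """Speak only when asking is cheaper than the expected cost of silence."""
    return COST_OF_ASKING < expected_cost_silent(belief, robot_target)

print(should_ask(BELIEF, "bowl"))   # True: a clash is likely, better to ask
print(should_ask(BELIEF, "spoon"))  # False: stay silent and just grab it
```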
In the experiment they performed (the preparation of a meal; see the embedded video after the break), human subjects had to work alongside the robot. Of the three approaches, CommPlan was the fastest, using voice interaction only when it deemed it necessary. The subjects also expressed a preference for bidirectional communication, much as would occur between human workers.