How important is it to identify killer asteroids before they strike your planet? Ask any dinosaur. Oh, wait… Granted, you also need a way to redirect them, but interest in finding them has picked up lately, including a new privately funded program called the Asteroid Institute.
Using an open-source cloud platform known as ADAM (Asteroid Discovery Analysis and Mapping), the program, affiliated with the B612 Foundation along with others including the University of Washington, has already discovered 104 new asteroids and plotted their orbits.
What’s interesting is that the Institute doesn’t acquire any images itself. Instead, it uses new techniques to search through existing optical records to identify previously unnoticed asteroids and compute their trajectories.
You have to wonder how many other data sets are floating around that hold unknown discoveries waiting for the right algorithm and computing power. Of course, once you find the next extinction asteroid, you have to decide what to do about it. Laser? Bomb? A gentle push at a distance? Or hope for an alien obelisk to produce a deflector ray? How would you do it?
In CPU design, there is Amdahl's law. Simply put, it means that if some process contributes 10% of your execution time, optimizing it can't improve things by more than 10%. Common sense, really, but it illustrates the importance of knowing how fast or slow various parts of your system are. So how fast are Linux pipes? That's a good question and one that [Mazzo] sets out to answer.
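For the curious, here's the arithmetic behind that rule of thumb, written out as a quick sketch (the 10% figure is just the example above, not something from [Mazzo]'s write-up). If a fraction $p$ of the runtime is sped up by a factor $s$, the overall speedup is:

$$\text{speedup} = \frac{1}{(1 - p) + \frac{p}{s}}$$

Plug in $p = 0.10$ and let $s$ go to infinity, and the best you can do is $1/0.9 \approx 1.11\times$, which is the original 10% of runtime shaved off and not a bit more.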
The inspiration was a highly optimized fizzbuzz program that clocked in at over 36 GB/s on his laptop. Is that a common speed? Nope. A simple program using pipes on the same machine turned in not quite 4 GB/s. What accounts for the difference?
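If you want a ballpark number for your own machine before diving into the write-up, a minimal sketch like the one below will do it. This is not [Mazzo]'s benchmark; the 64 KiB chunk size and 1 GiB total are arbitrary choices, and a careful measurement would use larger totals and repeated runs. It simply forks a writer, streams data through a plain pipe, and times the reader:

```c
/* Minimal pipe-throughput sketch (not [Mazzo]'s benchmark).
 * The child writes 1 GiB into a pipe in 64 KiB chunks; the parent
 * reads it back and reports GB/s. Sizes are arbitrary choices. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>

#define CHUNK (64 * 1024)
#define TOTAL (1L << 30)

int main(void) {
    int fds[2];
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                        /* child: writer */
        close(fds[0]);
        static char buf[CHUNK];
        memset(buf, 'x', sizeof buf);
        for (long sent = 0; sent < TOTAL; sent += sizeof buf)
            if (write(fds[1], buf, sizeof buf) < 0) { perror("write"); _exit(1); }
        _exit(0);
    }

    close(fds[1]);                         /* parent: reader and timer */
    static char buf[CHUNK];
    long received = 0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (ssize_t n; (n = read(fds[0], buf, sizeof buf)) > 0; )
        received += n;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    waitpid(pid, NULL, 0);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.2f GB/s (%ld bytes in %.2f s)\n", received / secs / 1e9, received, secs);
    return 0;
}
```

A naive read/write loop like this is roughly the kind of "simple program" that turned in the not-quite-4 GB/s figure above, and the gap between that and the optimized version is exactly what the write-up digs into.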
When NASA astronauts aboard the International Space Station have to clamber around on the outside of the orbiting facility for maintenance or repairs, they don a spacesuit known as the Extravehicular Mobility Unit (EMU). Essentially a small self-contained spacecraft in its own right, the bulky garment was introduced in 1981 to allow Space Shuttle crews to exit the Orbiter and work in the craft’s cavernous cargo bay. While the suits did get a minor upgrade in the late 90s, they remain largely the product of 1970s technology.
Not only are the existing EMUs outdated, but they were only designed to be used in space, not on a planetary surface. With NASA's eyes on the Moon, and eventually Mars, it was no secret that the agency would need to outfit their astronauts with upgraded and modernized suits before moving beyond the ISS. As such, development of what would eventually become the Exploration Extravehicular Mobility Unit (xEMU) dates back to at least 2005, when it was part of the ultimately canceled Constellation program.
NASA’s own xEMU suit won’t be ready by 2025.
Unfortunately, after more than a decade and a reported $420 million in development costs, the xEMU still isn't ready. With a crewed landing on the Moon still tentatively scheduled for 2025, NASA has decided to let their commercial partners take a swing at the problem, and has recently awarded contracts to two companies for a spacesuit that can both work on the Moon and replace the aging EMU for orbital use on the ISS.
As part of the Exploration Extravehicular Activity Services (xEVAS) contract, both companies will be given the data collected during the development of the xEMU, though they are expected to create new designs rather than simply copy what NASA's already been working on. Inspired by the success of the Commercial Crew program that gave birth to SpaceX's Crew Dragon, the contract also stipulates that the companies will retain complete ownership and control over the spacesuits developed during the program. In fact, NASA is even encouraging the companies to seek out additional commercial customers for the finished suits in hopes that a competitive market will help drive down costs.
There’s no denying that NASA’s partnerships with commercial providers have paid off for cargo and crew, so it stands to reason that they’d go back to the well for their next-generation spacesuit needs. There’s also plenty of incentive for the companies to deliver a viable product, as the contract has a potential maximum value of $3.5 billion. But with 2025 quickly approaching, and the contract requiring an orbital shakedown test before the suits are sent to the Moon, the big question is whether or not there’s still enough time for either company to make it across the finish line.
Depending on who you ask, there are either two vulnerabilities at play in Follina, just one, or, according to Microsoft a week ago, no security problem whatsoever. On the 27th of last month, a .docx file was uploaded to VirusTotal, and most of the tools there thought it was perfectly normal. That didn’t seem right to [@nao_sec], who raised the alarm on Twitter. It seems this suspicious file originated somewhere in Belarus, and it uses a series of tricks to run a malicious PowerShell script.
There’s a danger in security research that we’ve discussed a few times before. If you discover a security vulnerability on a production system, and there’s no bug bounty, you’ve likely broken a handful of computer laws. Turn over the flaw you’ve found, and you’re most likely to get a “thank you”, but there’s a tiny chance that you’ll get charged with a computer crime instead. Security research in the US is just a little safer now, as the US Department of Justice has issued a new policy stating that “good-faith security research should not be charged.”
While this is a welcome infection of good sense, it would be even better for such a protection to be codified into law. The other caveat is that this policy only applies to federal cases in the US. Other nations, or even individual states, are free to bring charges. So while this is good news, continue to be careful. There are also some caveats about what counts as good faith: if a researcher uses a flaw discovery to extort, it’s not good faith.
Outside of very small applications, Nikola Tesla’s ideas about transmitting serious power without wires have not been very practical. Sure, we can draw microwatts from radio signals in the air, and if you’re willing to get your phone in just the right spot, you can charge it. But having power sent to your laptop anywhere in your home is still a pipe dream. Sending power from a generating station to a dozen homes without wires is even more fantastic. Or is it? [Paul Jaffe] of the Naval Research Laboratory thinks it isn’t fantastic at all, and he explains why in a post on IEEE Spectrum.
Historically, there have been attempts to move lots of power around wirelessly. In 1975, researchers sent power across a lab using microwaves at 50% efficiency. They were actually making the case for beaming energy down from solar power satellites. According to [Jaffe], the secret is to go beyond even microwaves. A 2019 demonstration by the Navy conveyed 400 watts over 300 meters using a laser. Using a tightly confined beam at a single coherent wavelength allows for very efficient photovoltaic cells that can far outstrip the kind we’re used to, which have to accept the broad mix of wavelengths in sunlight.
Wait. The Navy. High-powered laser beams. Uh oh, right? According to [Jaffe], it all comes down to how dense the energy in the beam is, along with the actual wavelengths involved. The 400-watt beam, for example, was in a virtual enclosure that could sense any object approaching the main beam and cut power.
Keep in mind that 400 watts isn’t enough to power a hair dryer. Besides, point-to-point transmission with a laser is fine for sending power to a far-flung community but not great for keeping your laptop charged no matter where you leave it.
Still, this sounds like exciting work. While it might not be Tesla’s exact vision, laser transmission might be closer than it seemed just a few years ago. We’ve seen similar systems that employ safety sensors, but they are all relatively low-power. We still want to know what’s going on in Milford, Texas, though.
It was a melancholy Monday this week in the Big Apple as the last public payphone was uprooted from midtown Manhattan near Times Square and hauled away like so much garbage. That oughta be in a museum, you’re thinking, if you’re anything like us. Don’t worry; that’s exactly where the pair is headed.
This all started in 2014 when Mayor de Blasio pledged to move the concept of street-level public utility into the future. Since then, NYC’s payphones have been systematically replaced with roughly 2,000 Link Wi-Fi kiosks that provide free domestic phone calls, device charging, and of course, Internet access. They also give weather, transit updates, and neighborhood news.
There are still a few private payphones around the city, so Superman still has places to change, and Bill and Ted can continue to come home. But if you need to make a phone call and have nowhere to turn, a Link kiosk is the way to go.