NASA Taps Webb To Help Study 2032 Asteroid Threat

In all likelihood, asteroid 2024 YR4 will slip silently past the Earth. Based on the data we have so far, there’s an estimated chance of only 2.1% to 2.3% that it will collide with the planet on December 22nd, 2032. Under normal circumstances, if somebody told you there was a roughly 98% chance of something not happening, you probably wouldn’t give it a second thought. There’s certainly a case to be made that you should feel that way about this particular event — frankly, it’s a lot more likely that some other terrible thing will happen to you in the next eight years than it is that an asteroid will ruin your Christmas party.

That being said, when you consider the scale of the cosmos, a 2+% chance of getting hit is enough to raise some eyebrows. After all, it’s the highest likelihood of an asteroid impact that we’re currently aware of. It’s also troubling that the number has only gone up as further observations of 2024 YR4’s orbit have been made; a few weeks ago, the impact probability was just 1%. Accordingly, NASA has recently announced they’ll be making time in the James Webb Space Telescope’s busy scientific schedule to observe the asteroid next month.

So keeping in mind that we’re still talking about an event that’s statistically unlikely to actually occur, let’s take a look at what we know about 2024 YR4, and how further study and analysis can give us a better idea of what kind of threat we’re dealing with.

Continue reading “NASA Taps Webb To Help Study 2032 Asteroid Threat”

Lagrange Points And Why You Want To Get Stuck At Them

Visualization of the Sun-Earth Lagrange points.

Orbital mechanics is a fun subject, as it involves a lot of seemingly empty space that’s nevertheless full of very real forces, all of which must be taken into account lest one’s spacecraft end up performing a sudden lithobraking maneuver into a planet or some other significant collection of matter in said mostly empty space. The primary concern is gravitational pull, and the way it affects one’s trajectory and velocity. With a single planet providing said gravitational pull this is quite straightforward to determine, but add in another body (like the Moon) and things get trickier. Add another big planetary body (or a star like our Sun), and you suddenly have the restricted three-body problem, which has vexed mathematicians and others for centuries.

The three-body problem concerns the initial positions and velocities of three point masses. As they orbit each other and one tries to calculate their trajectories using Newton’s laws of motion and law of universal gravitation (or their later equivalents), what emerges is a chaotic system with no closed-form solution. In the context of orbital mechanics involving the Earth, Moon, and Sun this is rather annoying, but in 1772 Joseph-Louis Lagrange found a family of solutions in which the three masses form an equilateral triangle at each instant. Together with earlier work by Leonhard Euler, this led to the discovery of what are today known as Lagrangian (or Lagrange) points.
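The article doesn’t walk through the math, but a rough feel for where two of these points sit is easy to get: the collinear points L1 and L2 lie approximately one Hill radius away from the smaller body. The following Python back-of-the-envelope uses that standard first-order approximation — the constants are rounded, and none of the numbers come from the piece itself:

```python
# Approximate distance from Earth to the Sun-Earth L1/L2 points.
# For a secondary of mass m orbiting a primary of mass M at distance a,
# the collinear Lagrange points L1 and L2 sit roughly one Hill radius
# from the secondary: r ~ a * (m / (3 * M)) ** (1/3).

M_SUN   = 1.989e30   # kg, rounded
M_EARTH = 5.972e24   # kg, rounded
AU_KM   = 1.496e8    # km, mean Sun-Earth distance

r_km = AU_KM * (M_EARTH / (3 * M_SUN)) ** (1 / 3)
print(f"Sun-Earth L1/L2 distance: {r_km:.3g} km")   # ~1.5e6 km
```

Run it and you get roughly 1.5 million kilometers, which is indeed where JWST sits, parked in a halo orbit around L2.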

Having a few spots in an N-body configuration where you can be reasonably certain that your spacecraft won’t suddenly bugger off in weird directions that necessitate position corrections using wasteful thruster activations is definitely a plus. This is why space-based observatories in particular, such as the James Webb Space Telescope, love to hang around in these spots.

Continue reading “Lagrange Points And Why You Want To Get Stuck At Them”

Timeline of the universe: a representation of its evolution over 13.77 billion years, from the earliest moment we can now probe, when a period of "inflation" produced a burst of exponential growth, through billions of years in which the matter in the universe gradually slowed the expansion via gravity, to the more recent acceleration driven by the repulsive effects of dark energy. The afterglow light seen by WMAP was emitted about 375,000 years after inflation and has traversed the universe largely unimpeded since then; the conditions of earlier times are imprinted on this light, and it also forms a backlight for later developments of the universe. (Credit: NASA)

ESA’s Euclid Space Telescope And The Quest For Dark Energy

Most of what humankind and the other mammalian species on Earth experience of the Universe is restricted to the part of the electromagnetic spectrum our optical organs can register. Despite these limitations, we have found ways over the centuries to perceive the rest of the EM spectrum, to see both what is incredibly far away and what is incredibly small, and to constantly get a little bit closer to understanding what makes the Universe into what we can observe today, and what it may look like in the future.

Essential to this effort are space telescopes, which gaze into the depths of the Universe free of the limitations imposed by the Earth’s atmosphere or human activity. Among the many uses of space telescopes, the investigation of the expansion of the Universe is perhaps the most fascinating, as it brings us ever closer to answering the most fundamental questions about not only the Universe’s shape, but also its future, which may include hitherto unknown types of matter and energy.

With the recently launched Euclid space telescope, another chapter is being opened in the saga of dark energy and dark matter: their nature, their effects on the Universe, and whether they exist at all. Yet how exactly do you use a space telescope to ferret out the potential effects of dark energy?

Continue reading “ESA’s Euclid Space Telescope And The Quest For Dark Energy”

3D Printing Blueprints And Other Wall Art

Today if you want to reproduce a big schematic or a mechanical drawing, you just print or plot it from the CAD model. But back in the day, you drew on big sheets at a drafting table. How do you make copies? Sure, there were a few large-format copiers, but they were expensive. A more common method was to use a heliographic copier which, often but not always, produced a blueprint — that is, a blue page with white lines, or vice versa. These days, you are more likely to see a blueprint as an artistic wall hanging, and since [Basement Creations] wanted some, he figured out how to make them with a 3D printer.

These prints aren’t really blueprints; they use the printer as a plotter to deposit white ink on a blue page. In the video below, he shows a number of ways to use a printer to create interesting wall art, even if you want it to be bigger than the print bed. Some of the pieces are assembled from multiple 3D printed parts, while others are drawn with the printer acting as a plotter.
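For those wondering what “using the printer as a plotter” actually involves, it mostly comes down to feeding the machine plain G-code travel moves with a pen or marker in place of the hotend. The video doesn’t share its toolpaths, so the Python sketch below is purely illustrative — the lift height, feed rate, and example coordinates are made-up placeholders, not values from the project:

```python
# Minimal sketch: turn a list of polylines (in mm) into pen-plotter
# style G-code for a 3D printer carrying a marker instead of a hotend.
# All numbers here are illustrative placeholders, not from the video.

PEN_UP_Z = 5.0      # lift height between strokes, mm (placeholder)
DRAW_Z   = 0.0      # height at which the pen touches the page, mm
FEED     = 1500     # drawing feed rate, mm/min (placeholder)

def polylines_to_gcode(polylines):
    lines = ["G21 ; millimetres", "G90 ; absolute positioning"]
    lines.append(f"G0 Z{PEN_UP_Z}")              # start with the pen up
    for path in polylines:
        x0, y0 = path[0]
        lines.append(f"G0 X{x0:.2f} Y{y0:.2f}")  # travel to stroke start
        lines.append(f"G1 Z{DRAW_Z} F{FEED}")    # pen down
        for x, y in path[1:]:
            lines.append(f"G1 X{x:.2f} Y{y:.2f} F{FEED}")  # draw
        lines.append(f"G0 Z{PEN_UP_Z}")          # pen up before travelling
    return "\n".join(lines)

# Example: draw a 20 mm square starting at (10, 10)
square = [[(10, 10), (30, 10), (30, 30), (10, 30), (10, 10)]]
print(polylines_to_gcode(square))
```

Pipe the output to a file and preview it in any G-code viewer and you have the skeleton of a plotter workflow; the real work is in turning the source artwork into those polylines.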

Continue reading “3D Printing Blueprints And Other Wall Art”

Hackaday Links: March 26, 2023

Sad news in the tech world this week as Intel co-founder Gordon Moore passed away in Hawaii at the age of 94. Along with Robert Noyce in 1968, Moore founded NM Electronics, the company that would later go on to become Intel Corporation and give the world the first commercially available microprocessor, the 4004, in 1971. The four-bit microprocessor would be joined a few years later by the 8008 and 8080, chips that paved the way for the PC revolution to come. Surprisingly, Moore was not an electrical engineer but a chemist, earning his Ph.D. from the California Institute of Technology in 1954 before his postdoctoral research at the prestigious Applied Physics Lab at Johns Hopkins. He briefly worked alongside Nobel laureate and transistor co-inventor William Shockley before jumping ship with Noyce and others to found Fairchild Semiconductor, which is where he made the observation that integrated circuit component density doubled roughly every two years. This observation would go on to be known as “Moore’s Law.”

Continue reading “Hackaday Links: March 26, 2023”

How Does The James Webb Telescope Phone Home?

When it comes to an engineering marvel like the James Webb Space Telescope, the technology involved is so specialized that there’s precious little the average person can truly relate to. We’re talking about an infrared observatory that cost $10 billion to build and operates at a temperature of 50 K (−223 °C; −370 °F), 1.5 million kilometers (930,000 mi) from Earth — you wouldn’t exactly expect it to share any parts with your run-of-the-mill laptop.

But it would be a lot easier for the public to understand if it did. So it’s really no surprise that this week we saw several tech sites running headlines about the “tiny solid state drive” inside the James Webb Space Telescope. They marveled at the observatory’s ability to deliver such incredible images with only 68 gigabytes of onboard storage, a figure below what you’d expect to see on a mid-tier smartphone these days. Focusing on the solid state drive (SSD) and its relatively meager capacity gave these articles a touchstone that was easy to grasp by a mainstream audience. Even if it was a flawed comparison, readers came away with a fun fact for the water cooler — “My computer’s got a bigger drive than the James Webb.”

Of course, we know that NASA didn’t hit up eBay for an outdated Samsung EVO SSD to slap into their next-generation space observatory. The reality is that the solid state drive, known officially as the Solid State Recorder (SSR), was custom built to meet the exact requirements of the JWST’s mission, just like every other component on the spacecraft. Likewise, its somewhat unusual 68 GB capacity isn’t just some arbitrary number; it was precisely calculated given the needs of the scientific instruments onboard.
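We’ll dig into the details below, but a quick sanity check shows why a number in that ballpark makes sense. JWST is commonly cited as generating somewhere around 57 GB of science data per day, which it offloads during scheduled Deep Space Network contacts; note that the daily figure is the widely quoted estimate, an assumption for this illustration rather than something stated in the original reporting:

```python
# Rough sanity check on why ~68 GB is "enough" for JWST's recorder.
# The ~57 GB/day science data figure is the commonly quoted estimate,
# not a number taken from this article -- treat it as an assumption.

ssr_capacity_gb    = 68   # usable Solid State Recorder capacity, GB
science_gb_per_day = 57   # assumed worst-case daily data volume, GB

margin_gb   = ssr_capacity_gb - science_gb_per_day
buffer_days = ssr_capacity_gb / science_gb_per_day

print(f"Headroom after a full day of observations: {margin_gb} GB")
print(f"Time to fill the recorder with no downlink: {buffer_days:.1f} days")
# => roughly a day's worth of margin between downlink contacts
```

In other words, the recorder only ever needs to hold about a day of observations plus some margin before the next downlink, so a multi-terabyte drive would just be dead weight.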

With so much buzz about the James Webb Space Telescope’s storage capacity, or lack thereof, in the news, it seemed like an excellent time to dive a bit deeper into this particular subsystem of the observatory. How is the SSR utilized, how did engineers land on that specific capacity, and how does its design compare to previous space telescopes such as the Hubble?

Continue reading “How Does The James Webb Telescope Phone Home?”

Hackaday Links: July 24, 2022

OK, maybe that won’t buff right out. NASA has released a more detailed analysis of the damage suffered by the James Webb Space Telescope in a run-in with a micrometeoroid, and has deemed the damage “uncorrectable”. Not that any damage to JWST is correctable, at least in the sense that the Hubble Space Telescope was able to be fitted with optics to fix its precisely-yet-inaccurately-ground main mirror. JWST is far too remote for a service call, so correctability in this case refers to a combination of what can be accomplished by tweaking the shape and position of the affected mirror segment, and what can be taken care of with image processing. The damage to segment C3, as well as the damage to the other segments from a total of six collisions in the half year Webb has been on station, is assessed via “wavefront sensing”, which looks at how out of phase the light coming from each mirror segment is. The damage sounds bad, and it certainly must hurt for the techs and engineers who so lovingly and painstakingly built the thing to see it dinged up already, but in the long run this damage shouldn’t hamper Webb’s long-term science goals.

In other space news, we hear that the Perseverance rover has taken its first chunk out of the ancient river delta in Jezero Crater. The rover has been poking around looking for something interesting to sample, but everything it tried out with its abrading tool was either too brittle, too hard to get at, or scientifically dull. Eventually the rover found a good spot to drill, and managed to bring up a 6.7-cm core sample. This is the tenth core sample collected overall, and the first from the delta area, which is thought to have the best chance of containing evidence of ancient Martian life.

Closer to home, we’ve all likely heard of robotic surgery, but the image that phrase conjures up doesn’t really comport with reality. Robot-assisted surgery is probably a better term, since surgical robots are generally just ultra-precise remote manipulators guided by a skilled surgeon. But if a study on surgical robot performance is any indication, the days of human surgeons might be numbered. The study compared the accuracy and speed of a human surgeon controlling a standard Da Vinci surgical robot against an autonomous version of the robot working alone, using a depth camera for sensing. On a standard surgical skills test, the autonomous system matched the human surgeons in terms of failures — thankfully, no “oopsies” for either — but bested the humans in speed and positional accuracy. It’ll probably be a while before fully autonomous surgeons are a thing, but we wouldn’t bet against it in the long run.

Most readers will no doubt have heard the exciting news that Supercon will be back this year as an in-person event! Make sure you set aside the first weekend in November to make the pilgrimage to Pasadena — it’ll be great seeing everyone again after the long absence. But if you just can’t wait till November for an IRL con, consider dropping by SCALE 19X, coming up this week in Los Angeles. The Southern California Linux Expo is being held July 28 through 31, and features a ton of speakers, including a keynote by Vint Cerf. Hackaday readers can save 50% on tickets with promo code HACK.

And finally, as lovers of Easter eggs of all kinds, but especially the hidden-message-in-software variety, we appreciated this ode to the Easter egg, the embedded artistry that has served as a creative outlet for programmers over the years. The article lists a few great examples of the art form, along with explaining why they’re actually important artifacts of the tech world and what they’re good for. We tried out a few of the ones listed in the article that we hadn’t heard of before; some hits, some misses, but they’re all appreciated. Well, most of them — the corporate rah-rah kind can bugger straight off as far as we’re concerned.