Hackaday retro edition: now optimized for embedded devices!
Have you ever wished we could peek at all these exoplanets that have been recently discovered? We aren’t likely to visit anytime soon, but it would be possible to build a truly giant telescope that could take a look at something like that. At least according to [SciShow Space] in a recent video you can see below.
The idea put forth in a recent scientific paper is to deliberately create the conditions that naturally form gravitational lenses. If you recall, scientists have used these naturally-occurring lenses to image the oldest star ever observed. These natural super-telescopes have paid off many times, but you can’t pick what you want to look at: it all depends on the distance to the star creating the lens and on where the line between it and us happens to point.
But what if you could create your own gravity lens? Granted, we probably aren’t going to do that in our garages. However, a recent paper talks about launching an optical detector that you could maneuver so that it was on a line that would pass through the object you want to see and our own sun. We clearly have the technology to do this. After all, we have several nice space telescopes, and several probes operating far away from the sun.
That is one of the biggest catches, though. This new telescope would need to be some 550 AU from the sun to get good results. For the record, the Earth is 1 AU (about 8 light-minutes) out. Pluto — maybe not a planet anymore, but still a signpost on the way out of the solar system — is a scant 39 AU out. Voyager I, which has been racing away from the sun since 1977, is only about 156 AU out.
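If you're curious where that 550 AU figure comes from, here's a back-of-the-envelope check (our arithmetic with textbook constants, not numbers from the paper): general relativity bends light grazing the sun by an angle α = 4GM/(c²b), so rays passing at impact parameter b converge at a distance of roughly F = b/α = b²c²/4GM.

```python
# Rough focal distance of the solar gravitational lens for rays
# grazing the sun's limb. Constants are standard textbook values.
G  = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M  = 1.989e30    # solar mass, kg
c  = 2.998e8     # speed of light, m/s
b  = 6.957e8     # solar radius, m (impact parameter for grazing rays)
AU = 1.496e11    # astronomical unit, m

F = b**2 * c**2 / (4 * G * M)
print(F / AU)    # roughly 548 AU for limb-grazing rays
```

Rays with larger impact parameters focus even farther out, which is why a practical mission targets 550 AU and beyond rather than a single focal point.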
Because the craft would be so far out, it would be practically a one-shot mission. It would also have to be reliable enough to survive the roughly 17-year trip that today’s technology would need to get it in place, and you need a way to get the data back over that distance. All doable, but non-trivial.
The paper simulates what the Earth would look like using this technique from a nearby star. The images are shockingly good, especially after a bit of post-processing. Meanwhile, we may have to settle for more modest images. You might not see detail, but it is possible to find exoplanets with reasonably modest equipment.
It looks like it’s soon to be lights out for the Mars InSight lander. In the two years that the lander has been studying the geophysics of Mars from its lonely post on Elysium Planitia, InSight’s twin solar arrays have been collecting dust, and now are so dirty that they’re only making about 500 watt-hours per sol, barely enough to run the science packages on the lander. And that’s likely to worsen as the Martian winter begins, which will put more dust in the sky and lower the angle of the Sun, reducing the sunlight that’s incident to the panels. Barring a “cleaning event” courtesy of a well-placed whirlwind, NASA plans to shut almost everything down on the lander other than the seismometer, which has already captured thousands of marsquakes, and the internal heaters needed to survive the cold Martian nights. They’re putting a brave face on it, emphasizing the continuing science and the mission’s accomplishments. But barely two years of science and a failed high-profile experiment aren’t quite what we’ve come to expect from NASA missions, especially one with an $800 million price tag.
Closer to home, it turns out there’s a reason sailing ships have always had human crews: to fix things that go wrong. That’s the lesson learned by the Mayflower Autonomous Ship as it attempted the Atlantic crossing from England to the States, when it had to divert for repairs recently. It’s not clear what the issue was, but it seems to have been a mechanical issue, as opposed to a problem with the AI piloting system. The project dashboard says that the issue has been repaired, and the AI vessel has shoved off from the Azores and is once more beating west. There’s a long stretch of ocean ahead of it now, and few options for putting in should something else go wrong. Still, it’s a cool project, and we wish them a fair journey.
Have you ever walked past a display of wall clocks at the store and wondered why someone went to the trouble of setting the time on all of them to 10:10? We’ve certainly noticed this, and always figured it had something to do with some obscure horological tradition, like using “IIII” to mark the four o’clock hour on clocks with Roman numerals rather than the more correct “IV”. But no, it turns out that 10:10 is more visually pleasing, at least on analog timepieces, because it evokes a smile on a human face. The study cited in the article had volunteers rate how pleasurable watches are when set to different times, and 10:10 won handily based on the perception that it was smiling at them. So it’s nice to know how easily manipulated we humans can be.
If there’s anything more pathetic than geriatric pop stars trying to relive their glory days to raise a little cash off a wave of nostalgia, we’re not sure what it could be. Still, plenty of acts try to do it, and many succeed, although seeing what time and the excesses of stardom have wrought can be a bit sobering. But Swedish megastars ABBA appear to have found a way to cash in on their fame gracefully, by sending digital avatars out to do their touring for them. The “ABBA-tars,” created by a 1,000-person team at Industrial Light and Magic, will appear alongside a live backing band for a residency at London’s Queen Elizabeth Olympic Park. The avatars represent Benny, Bjorn, Agnetha, and Anni-Frid as they appeared in the 1970s, and were animated thanks to motion capture suits donned while performing 40 songs. It remains to be seen how fans will buy into the concept, but we’ll say this — the Swedish septuagenarians look pretty darn good in skin-tight Spandex.
And finally, not that it has any hacking value at all, but there’s something shamefully hilarious about watching this poor little delivery bot getting absolutely wrecked by a train. It’s one of those food delivery bots that swarm over college campuses these days; how it wandered onto the railroad tracks is anyone’s guess. The bot bounced around a bit before slipping under the train’s wheels, with predictable results once the battery pack was smooshed.
[dropbear] recently found herself in a pickle: she needed to dump some data out of an Android app at a specific point for reverse-engineering purposes. While it worked great in the simulator, it was painfully slow on hardware via lldb. The solution was to write a patch and apply it to the ELF file.
Writing the AArch64 assembly to dump the buffer is relatively trivial, but adding it to the existing ELF and repackaging it into a new APK leads to strange errors: the relative offsets into .rodata are now all wrong. For those who don’t routinely interface with the format of ELF files, we have a fantastic resource to take you into the dark depths. The quick summary is that sections contain various resources, and parts of those resources are found by relative offsets. The program header table then describes which pieces of the file get mapped into memory, and how.
[dropbear] found a NOTE section that just contained some metadata. She created a new section at the end of the file for her custom assembly, then modified the program header to declare the NOTE entry as a LOAD entry pointing at her new section, which gets mapped into memory. All that was left was to tweak the original code to jump to her new dumping routine. The BSS section was extended by a few bytes so that her program could store its state there.
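The NOTE-to-LOAD trick is easy to sketch. The following is our own illustrative Python fragment, not [dropbear]’s actual tool: it parses the program header table of a 64-bit little-endian ELF and rewrites the first PT_NOTE entry as an executable PT_LOAD pointing at code appended to the end of the file. The field offsets come straight from the ELF64 layout.

```python
import struct

PT_LOAD, PT_NOTE = 1, 4   # segment types from the ELF spec
PF_R, PF_X = 4, 1         # segment permission flags: readable, executable

def note_to_load(elf: bytearray, new_off: int, new_vaddr: int, new_size: int) -> int:
    """Rewrite the first PT_NOTE program header of a 64-bit little-endian
    ELF as an executable PT_LOAD mapping `new_size` bytes at file offset
    `new_off` to virtual address `new_vaddr`. Returns the header's offset."""
    # ELF64 header: e_phoff at 0x20, e_phentsize/e_phnum at 0x36/0x38
    e_phoff, = struct.unpack_from("<Q", elf, 0x20)
    e_phentsize, e_phnum = struct.unpack_from("<HH", elf, 0x36)
    for i in range(e_phnum):
        off = e_phoff + i * e_phentsize
        p_type, = struct.unpack_from("<I", elf, off)
        if p_type == PT_NOTE:
            # Elf64_Phdr: p_type, p_flags, p_offset, p_vaddr, p_paddr,
            #             p_filesz, p_memsz, p_align
            struct.pack_into("<IIQQQQQQ", elf, off,
                             PT_LOAD, PF_R | PF_X,
                             new_off, new_vaddr, new_vaddr,
                             new_size, new_size, 0x1000)
            return off
    raise ValueError("no PT_NOTE segment found")
```

A real patcher also has to keep file offsets and virtual addresses congruent modulo the page size, which is part of why appending at the end of the file is the low-drama option.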
It’s an impressive technique, and her program for modifying the program header is on her website under a BSD-3 license.
After the zombie apocalypse or whatever is coming, folks like us will be in high demand as the people who know how to fix things, generate electricity, and scavenge parts. But keeping out marauding zombies and neighbors requires fencing. Can you make your own chain-link fence? If you watch [Diamleon]’s recent video, you might be able to. Admittedly, the bulk of the video is about fabricating the jig, and you should expect to do some welding and cutting.
However, you might be able to make a similar jig with a little less work. The jig is essentially a spool on a shaft with a crosswise cut to guide the wire. The whole thing is powered by an electric drill turning a sprocket, much like a bicycle’s.
One pass through the machine makes a nice twisty wire. Once you’ve run off a few lengths of twisty wire, it is relatively easy to interlace them into fencing panels. It is one of those things that is hard to visualize until you see it. We were impressed with the drill drive and immediately thought about modifying the design to wind large coils. There are probably many other uses for such a thing. So even if you don’t want to build a fence, you might want to check it out.
As for us, we’ll probably just make our fence out of wood. Or do something electric. Oddly enough, we saw a hand-crank version of this same type of machine last year.
Sending postcards to loved ones used to be standard procedure for travelers back when travel was glamorous and communications were slow. While some travelers still keep this tradition alive, many have replaced stamps and post offices with instant messaging and social media — faster and more convenient, but a lot less special than receiving a postcard with a handwritten message from a faraway land.
[Cameron] designed a postcard picture frame that aims to bring back a bit of that magic. It’s a wooden frame that holds an e-ink display, which shows pictures sent to it by your friends. All they need to do is open the unique link that you sent them beforehand and upload an interesting photo; the picture frame will cycle through the submissions based on an adjustable schedule. A web interface allows you to change settings and delete any inappropriate images.
The wooden frame is beautifully made, but the sleek black PCB inside is a true work of art. It holds a battery and a USB-C charging circuit, as well as an ESP32 that connects to WiFi, stores images and downscales them to the 800×480 monochrome format used by the display. [Cameron] has not accurately measured the current consumption, but estimates that it should work for about one year on a single charge thanks to the extremely low power requirements of e-ink displays.
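To give a feel for what “800×480 monochrome” means at the framebuffer level, here is a minimal sketch of our own (not [Cameron]’s firmware; the MSB-first bit order is an assumption, and real e-ink controllers vary): threshold a grayscale image and pack it eight pixels per byte, which is why a full frame fits in just 48,000 bytes.

```python
# Hypothetical 1-bit framebuffer packing for an 800x480 e-ink panel.
W, H = 800, 480

def pack_1bit(gray, threshold=128):
    """gray: H rows of W brightness values (0-255).
    Returns W*H/8 bytes, 8 pixels per byte, MSB first, row-major."""
    assert len(gray) == H and all(len(row) == W for row in gray)
    out = bytearray(W * H // 8)
    for y, row in enumerate(gray):
        for x, px in enumerate(row):
            if px >= threshold:          # bright pixel -> set its bit
                i = y * W + x
                out[i // 8] |= 0x80 >> (i % 8)
    return bytes(out)
```

In practice you would dither rather than hard-threshold so photographs keep their mid-tones, but the storage math is the same either way, and tiny frames like this are a big part of why the power budget stays so low.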
Having your friends decide on the images shown in your house is an interesting idea, if you can trust them to keep it decent. If you like to have more control over your e-ink display, have a look at this solar-powered model or this wall-mounted newspaper display.
A new video from [Make Anything] shows off a nice combo that has a real visual impact: ambiguous shapes that look different depending on what angle they are viewed at, combined with an unusual filament that enhances the effect greatly. As you can see in the image above that shows off just such an object in front of a mirror, the results are pretty striking.
Japanese mathematician and artist [Kokichi Sugihara] figured out the math behind such objects, of which his ambiguous cylinder illusion is probably the most well-known. That inspired [Make Anything] to create his own strange objects, which he showcases happily. He adds one more twist, however.
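The core of the math is simple enough to sketch. The following is our illustration of the general idea, not code from the video, and the 30° camera tilt is an assumption: if the rim of the object is a space curve with height y(x) and depth z(x), a camera tilted by ±t from vertical sees an apparent rim height of y ± z·tan(t). Solving those two equations lets one viewpoint see a target curve f while the opposite viewpoint sees a different curve g.

```python
import math

def ambiguous_rim(f, g, x, tilt=math.radians(30)):
    """For target silhouettes f(x) and g(x) seen from cameras tilted
    +tilt and -tilt from vertical, return the rim point (y, z) at x
    so each camera sees its own curve: y + z*tan(tilt) = f(x) and
    y - z*tan(tilt) = g(x)."""
    y = (f(x) + g(x)) / 2
    z = (f(x) - g(x)) / (2 * math.tan(tilt))
    return y, z
```

Sweep x across the footprint, extrude the resulting rim downward, and you have an object that reads as a circle from one side and, say, a diamond from the other.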
What is a natural complement to an object that looks different based on the direction from which it is viewed? A filament whose color depends on the direction from which it is viewed, of course! The filament in question is MatterHackers Quantum dual-color PLA. This unusual filament is split right down the middle into two different colors, resulting in a printed object whose exact color depends entirely on the viewing angle and the object geometry.
The resulting objects look especially striking when demonstrated with the help of a mirror, because as the object turns and changes, so does the color. You can watch it all in action in the video below (embedded after the page break), which showcases quite a few different takes on the concept, so check it out to see them all.
3D printing has certainly opened up new doors when it comes to brain-bending optical effects, like this hypnotic Moiré pattern, and perhaps dual-color filament can enhance those as well.
Google Glass didn’t take off as expected, but — be honest — do you really want to walk around with that hardware on your head? The BBC recently covered Mojo, a company developing smart contact lenses that not only correct vision but can show a display. You can see a video from CNET on the technology below.
The lenses have microLED displays, smart sensors, and solid-state batteries similar to those found in pacemakers. The company claims to have a “feature-complete prototype” and is going to start testing, according to the BBC article. We imagine you can’t get much of a battery crammed into a contact lens, but presumably, that’s one of the things that makes this sort of tech so difficult to develop.
The article mentions other smart contacts under development, too, including a University of Surrey lens that can monitor eye health using various sensors integrated into the lens. You have to wonder how this would be in real life. Presumably, the display turns off and you see nothing, but it is annoying enough having your phone beep constantly without getting messages across your field of vision all the time.
It seems like this is a technology that will come, of course. If not this time, then sometime in the future. While we usually think the hacker community should lead the way, we aren’t sure we want to hack on something that touches people’s eyeballs. Not everyone can say that, though. For us, we’ll stick with headsets.