Cars (including LEGO ones) will roll downhill. In theory, if the hill were a treadmill, the car could roll forever. In practice, plenty of things are waiting to go wrong and keep that from happening. If you’ve ever wondered what those problems would be and what a solution would look like, [Brick Technology] has a nine-minute video showing the whole journey.
The video showcases an iterative process of testing, surfacing a problem, redesigning to address that problem, and then back to testing. It starts off pretty innocently with increasing wheel friction and adding weight, but we’ll tell you right now it goes in some unexpected directions that show off [Brick Technology]’s skill and confidence when it comes to LEGO assemblies.
How it started / how it’s going
You can watch the whole thing unfold in the video, embedded below. It’s fun to see how the different builds perform, and we can’t help but think that the icing on the cake would be LEGO bricks with OLED screens and working instrumentation built into them.
One of the great things about 3D printers is their ability to make a single part all at once. Separating a part into multiple pieces is usually done to split up objects that are too big to fit on the 3D printer’s print bed. But [Peter] at Markforged (manufacturers of high-end 3D printers) has a video explaining another reason: multi-part prints can benefit from improved strength.
This part can easily be printed as a single piece, but it can be made nearly twice as strong by printing it as two pieces and combining them.
The idea is this: filament-based 3D printers generally create parts that are strongest in their X-Y plane (relative to the print orientation) and weakest along the Z axis. [Peter] proposes splitting a part into pieces with this in mind: not because the part is inconveniently large or has tricky geometry, but so the individual pieces can be printed in the orientations that give them the best mechanical strength.
This is demonstrated with the simple part shown here. The usual way to print it would be flat on the print bed, but by splitting it in two and printing each piece in its optimal orientation, the combined part withstands nearly twice as much force before failing.
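The split itself can be done in any CAD tool, but for a rough idea of the workflow, here’s a minimal Python sketch using the trimesh library. The file names, cut plane, and reorientation angle are hypothetical stand-ins for a real part:

```python
import numpy as np
import trimesh

# Load the part to be split (hypothetical file name).
part = trimesh.load("bracket.stl")

# Cut along a plane chosen so each half can lie flat on the bed
# with its layer lines perpendicular to the expected load.
normal = np.array([0.0, 0.0, 1.0])
origin = np.array([0.0, 0.0, 10.0])

top = trimesh.intersections.slice_mesh_plane(
    part, plane_normal=normal, plane_origin=origin, cap=True)
bottom = trimesh.intersections.slice_mesh_plane(
    part, plane_normal=-normal, plane_origin=origin, cap=True)

# Reorient one half so its strong X-Y plane faces the load,
# then export both halves for slicing.
top.apply_transform(
    trimesh.transformations.rotation_matrix(np.pi / 2, [1, 0, 0]))
top.export("bracket_top.stl")
bottom.export("bracket_bottom.stl")
```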
[Peter]’s examples use Markforged’s own filaments, but he gives advice on more common polymers as well, and the same principles apply. This idea is simple enough that it’s worth keeping in mind the next time you’re trying to optimize a part’s strength.
Music generation guided by machine learning can make great projects, but there’s not usually much apparent control over the results. The system makes what it makes, and it’s an achievement if the results are not obvious cacophony. But that’s all different with GETMusic, which allows for a much more involved approach because it understands, and is able to create, music track by track. Among other things, this means one can generate a basic rhythm and melody first, then layer additional elements on top while leaving the existing ones unchanged.
GETMusic can make music from scratch or guided by examples, and under the hood it uses a diffusion-based approach similar to the one behind AI image generators like Stable Diffusion. We’ve previously covered how Stable Diffusion works; here the same basic principles guide the model from random noise to useful tracks of music instead of images.
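To make the track-wise idea concrete, here’s a toy sketch (not GETMusic’s actual code or API) of how a denoising loop can regenerate some tracks of a piano-roll while clamping an existing track in place, so only the new material changes:

```python
import numpy as np

STEPS = 50
rng = np.random.default_rng(0)

def denoise_step(x, t):
    """Stand-in for a trained denoiser; a real model would predict
    and remove the noise at step t. This stub just shrinks it."""
    return x * (1.0 - 1.0 / (t + 2))

# Piano-roll score: (tracks, time, pitch). Track 0 holds a melody
# we already like; tracks 1-3 start as pure noise.
score = rng.normal(size=(4, 128, 88))
melody = rng.normal(size=(128, 88))  # placeholder for the existing track

for t in reversed(range(STEPS)):
    score = denoise_step(score, t)
    score[0] = melody  # clamp the frozen track back in each step

# After the loop, tracks 1-3 have been "generated" around the
# untouched melody track -- the essence of track-wise control.
```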
[PyottDesign] recently wrapped up a personal project to create himself a custom AR/VR headset: one that could function as an AR (augmented reality) platform, do everything he needed, and make it easier to develop new applications. He succeeded wonderfully, and published a video showcase of the finished project.
Getting a headset with the features he wanted wasn’t possible off the shelf, so he accomplished his goals with a skillful custom repackaging of a Quest 2 VR headset, integrating a Stereolabs Zed Mini stereo camera (aimed at mixed-reality applications) and an Ultraleap IR 170 hand-tracking module. These hardware modules have tons of software support and are not very big, but when sticking something onto a human face, every millimeter and gram counts.
Bark is a universal text-to-audio model that can not only create realistic speech but also incorporate music, background noises, and sound effects. It can even include non-speech vocalizations like laughter, sighs, and throat clearings. But despite the complexity of the results it can deliver, it’s important to understand some of its peculiarities.
The model takes a prompt and generates the resulting sound from scratch. Results might sometimes be unexpected.
Bark is not a conventional text-to-speech program; how it works has much more in common with large language model AI chatbots. This means results can deviate from expectations, and outputs aren’t necessarily going to be studio-quality speech. As the project’s README points out, “(generated outputs can) be anything from perfect speech to multiple people arguing at a baseball game recorded with bad microphones.” That being said, there is some support for voice presets as a way to help guide the model with some consistency.
Bark was designed by a company called Suno for research purposes and is available under the MIT License. It can be installed and run locally, and has some demos available as well as an online implementation.
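Based on the project’s README, running it locally looks roughly like this; the voice preset is one of the English speaker presets the project ships with, and the prompt text is our own:

```python
from bark import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav

preload_models()  # downloads and caches model weights on first run

# Bracketed cues like [laughs] hint at non-speech sounds, and
# history_prompt selects a voice preset for some consistency.
text = "Hello, my name is Suno. [laughs] And, uh, I like pizza."
audio = generate_audio(text, history_prompt="v2/en_speaker_6")

write_wav("bark_generation.wav", SAMPLE_RATE, audio)
```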
The ability to install and run Bark locally is promising territory for incorporating it into projects. And if you’re more interested in speech-to-text, don’t forget about this plain C/C++ implementation of AI-powered speech recognition.
Weather can have a significant impact on transport and operations of all kinds, especially those at sea or in the air. This makes it a deeply important field of study, particularly in wartime. If you’re at all curious about how this kind of information was gathered and handled in the days before satellites and computer models, this write-up on WWII meteorology is sure to pique your interest.
Weather conditions were valuable data, and weather forecasts even more so. Both depended on human operators to read instruments and transmit the readings.
The main method of learning weather conditions over the oceans was to persuade merchant ships to report their observations regularly. That is still true today, but now we also have the benefit of things like satellite technology. Back in the mid-1900s there was no such thing, and the outbreak of WWII (which saw weather data classified as secret information because of its value) meant that new solutions were needed.
The aircraft of the Royal Air Force (RAF) were particularly in need of accurate data, and there was little to no understanding of the upper atmosphere at the time. Eventually, aircraft flew regular 10-hour sorties, logging detailed readings that provided data on weather conditions across the Atlantic. Readings were logged, encoded with one-time pad (OTP) encryption, then radioed back to base, where charts would be created and updated every few hours.
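For the curious, a one-time pad for numeric traffic like this amounts to modular addition of message digits against an equally long, never-reused random key, which is what makes it information-theoretically unbreakable. The digit groups below are invented for illustration, not real RAF weather code:

```python
import secrets

def otp_encrypt(digits: str, key: str) -> str:
    """Add each message digit to the matching key digit mod 10.
    The key must be random, at least as long as the message,
    and never reused -- that is the 'one-time' part."""
    return "".join(str((int(m) + int(k)) % 10) for m, k in zip(digits, key))

def otp_decrypt(cipher: str, key: str) -> str:
    return "".join(str((int(c) - int(k)) % 10) for c, k in zip(cipher, key))

reading = "1013250072"  # hypothetical pressure/temperature/wind groups
key = "".join(str(secrets.randbelow(10)) for _ in reading)

cipher = otp_encrypt(reading, key)
assert otp_decrypt(cipher, key) == reading
```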
The value of accurate data and precise understanding of conditions and how they could change was grimly illustrated in a disaster called the Night of the Big Wind (March 24-25, 1944). Forecasts predicted winds no stronger than 45 mph, but Allied bombers sent to Berlin were torn apart when they encountered winds in excess of 120 mph, leading to the loss of 72 aircraft.
The types of data recorded to monitor and model weather are nearly identical to those in modern weather stations. The main difference is that instruments used to be read and monitored by human beings, whereas today we can rely more on electronic readings and transmission that need no human intervention.
When the PlayStation VR2 headset was released, people wondered whether it would be possible to get the headset to work as a PC VR headset. That would mean being able to plug it into a PC and have it work as a VR headset, instead of it only working on a PS5 as Sony intended.
Enthusiasts were initially skeptical and at times despondent about the prospects, but developer [iVRy]’s efforts recently yielded a breakthrough. A PC-compatible VR2 is looking more likely to happen.
So far, [iVRy] claims to have working 6 DoF SLAM (simultaneous localization and mapping), proximity sensor, and stereo camera data.
Most of the juicy bits are paywalled behind [iVRy]’s Patreon. We’re hoping the jailbreak process will eventually be open-sourced.