After receiving a vaccination shot, it’s likely that we’ll feel some side-effects. These can range from merely a sore arm to swollen lymph nodes and even a fever. Which side-effects to expect depends on the exact vaccine, with each type and variant coming with its own list of common side-effects. Each person’s immune system also reacts differently, which makes it hard to say exactly what to expect after any particular vaccination.
What we can do is look closer at the underlying mechanisms that cause these side-effects, to try and understand why they occur and how to best deal with them. Most relevant here for the initial response is the body’s innate immune system, with dendritic cells generally being among the first to come into contact with the vaccine and to present the antigen to the body’s adaptive immune system.
Key to the redness, swelling, and fever are substances produced by the body, including various cytokines as well as prostaglandins, which together produce the symptoms familiar from inflammation and injury.
How often does this happen to you? You find yourself describing something that happened in a game to someone, and they’re not sure they know what part of the map you’re talking about, or they’ve never gotten that far. Wouldn’t it be cool to make a bookmark in a video game so you can jump right to the beginning of the action and show your friend what you mean using the actual game?
That’s the idea behind [Joël Franusic] and [Adam Smith]’s fantastic Playable Quotes for Game Boy — clip-making that creates a 4-D nugget of gameplay that can either be viewed as a video, or played live within the bounds of the clip. The system is built on a modified version of the PyBoy emulator.
Left: the full game ROM. Right: a bookmarked slice of the game ROM with the rest set to zero.
Basically, a Playable Quote is made up of a save state and all that entails, plus a slice of the game’s ROM that includes just enough game data to recreate an interactive clip. Everything is zipped up and steganographically encoded into a PNG file. Here’s a Tetris quote you can play (or watch) right now — you might recognize it from the post thumbnail. You’ll find the others on the games site, which lets people create, share, and build on each other’s work.
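To make that concrete, here’s a minimal sketch in Python of how such a quote could be packaged: keep only the ROM banks the clip actually touches, bundle them with a save state, and tuck the archive onto the end of a PNG. The 16 KiB bank size is the Game Boy’s real ROM bank size, but the function names, file layout, and the append-after-the-image trick are our own illustrative assumptions; the real project uses PyBoy’s save states and its own steganographic encoding.

```python
import io
import zipfile

BANK_SIZE = 0x4000  # Game Boy ROM bank size (16 KiB)

def make_quote(rom: bytes, save_state: bytes, used_banks: set[int]) -> bytes:
    """Zero out every ROM bank the clip never touches, then bundle the
    sliced ROM and the emulator save state into a zip archive."""
    sliced = bytearray(len(rom))
    for bank in used_banks:
        start = bank * BANK_SIZE
        sliced[start:start + BANK_SIZE] = rom[start:start + BANK_SIZE]

    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("rom_slice.gb", bytes(sliced))
        zf.writestr("save_state.bin", save_state)
    return buf.getvalue()

def embed_in_png(png: bytes, payload: bytes) -> bytes:
    """Crude stand-in for steganography: append the archive after the PNG's
    final chunk, where most image viewers will simply ignore it."""
    return png + payload
```

The nice property of a scheme like this is that the PNG still displays as an ordinary screenshot, while anything that knows the format can pull the playable data back out of the same file.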
There’s so much more that can be done with this type of immersive and interactive tool outside the realm of games, and we’re excited to see where this leads and what people do with it.
While there are still plenty of folks out there tinkering with custom 3D printers, it’s safe to say that most people these days are using a commercially available machine. The prices are just so low now, even on the resin printers, that unless you have some application that requires exacting specifications, it just doesn’t make a whole lot of sense to fiddle around with a homebrew machine.
As it so happens, [Nicolas Tranchant] actually does have such an application. He needs ultra-high resolution 3D prints for his jewelry company, but even expensive printers designed for dental work weren’t giving him the results he was looking for. Rather than spend five figures on a machine that may or may not get the job done, he decided to check out what was available in kit form. That’s when he found the work of [Frédéric Lautré].
A look at the heavy-duty Z axis.
[Nicolas] purchased the unique “Top-Down” SLA kit from [Frédéric] back in 2017, and now, after four years of working with the machine, he’s decided to share his experiences with the rest of the class. The basic idea with this printer is that the light source is above the resin vat, rather than below. So instead of the print bed being pulled farther away from the resin on each new layer, it actually sinks deeper into it.
Compared to the “Bottom-Up” style of resin printers that are more common for hobbyists, this approach does away with the need for a non-stick layer of film at the bottom of the tank. Printing is therefore made faster and more reliable, as the part doesn’t need to be peeled off the film for each new layer.
[Nicolas] goes into quite a bit of detail about building and using the $700 USD kit, including the occasional modifications he made. It sounds like the kit later went through a few revisions, but the core concepts are largely the same. It’s worth noting that the kit did not come with the actual projector though, so in his case the total cost was closer to $1,400. We were also surprised to see that [Frédéric] apparently developed the software for this printer himself, so the tips on how to wrangle its unfamiliar interface for slicing and support generation may be particularly helpful.
Unfortunately, it sounds like [Frédéric] has dropped off the radar. The website for the kit is gone, and [Nicolas] has been unable to get in touch with him. Which is a shame, as this looks to be a fascinating project. Perhaps the Hackaday community can help track down this mysterious SLA maestro?
Despite the best efforts of scientists around the world, the current global pandemic continues onward. But even if you aren’t working on a new vaccine or trying to curb the virus with some other seemingly miraculous technology, there are a few other ways to help prevent the spread of the virus. By now we all know of ways to do that physically, but now thanks to [James Devine] and a team at CERN we can also model virus exposure directly on our own self-hosted Raspberry Pis.
The program, called the Covid-19 Airborne Risk Assessment (CARA), takes in a number of metrics about the size and shape of an area, the countermeasures already in place, and plenty of other information, and provides a computer-generated model of the predicted number of virus particles in the air as a function of time. It can run on a range of Pi hardware, although [James] recommends the Pi 4, as the model takes up a significant amount of computing resources. Of course, this only generates statistical likelihoods of virus transmission, but it does help build a more accurate understanding of specific situations.
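As a rough illustration of the kind of calculation such a tool performs, here’s a simple well-mixed-room sketch in Python. This is not CARA’s actual code; the parameter names and the example numbers are assumptions chosen only to show the shape of the model, in which emitted virions build up toward a steady state set by ventilation, viral decay, and deposition onto surfaces.

```python
import numpy as np

def virion_concentration(t_hours, emission_rate, room_volume_m3,
                         air_changes_per_hour, decay_per_hour=0.6,
                         deposition_per_hour=0.3):
    """Well-mixed room model: dC/dt = E/V - k*C, where k lumps together
    ventilation, viral decay, and deposition onto surfaces."""
    k = air_changes_per_hour + decay_per_hour + deposition_per_hour
    steady_state = emission_rate / (room_volume_m3 * k)
    # Analytic solution for a constant emitter starting from C(0) = 0
    return steady_state * (1.0 - np.exp(-k * np.asarray(t_hours)))

# Example: one infected speaker in a 60 m^3 office with 3 ACH of ventilation
hours = np.linspace(0, 4, 9)
conc = virion_concentration(hours, emission_rate=50.0, room_volume_m3=60.0,
                            air_changes_per_hour=3.0)
print(conc)  # virions per m^3 over time (illustrative numbers only)
```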
For more information on how all of this works, the group at CERN also released a paper about their model. One of the goals of this project is that it is freely available and runs on relatively inexpensive hardware, so hopefully plenty of people around the world are able to easily run it to further develop understanding of how the virus spreads. For other ways of using your own computing power to help fight Covid, don’t forget about Folding@Home for using up all those extra CPU and GPU cycles.
If you live in much of the world today, high-speed Internet is a solved problem. But there are still places where getting connected presents unique challenges. Alphabet, the company formed from Google, details their experience piping an optical network across the Congo. The project grew out of an earlier program — Project Loon — that used balloons to replace traditional infrastructure.
Laying cables along the twisting and turning river raises costs significantly, so a wireless approach makes sense. Connecting Brazzaville to Kinshasa using optical techniques isn’t perfect — fog, birds, and other obstructions don’t help. They still managed to pipe 700 terabytes of data in 20 days with over 99.9% reliability.
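As a quick sanity check on that figure, the average sustained rate works out to a few gigabits per second; this back-of-envelope Python assumes decimal terabytes.

```python
# 700 TB moved in 20 days: what average rate does that imply?
bits = 700e12 * 8                  # decimal terabytes to bits
seconds = 20 * 24 * 3600
print(f"{bits / seconds / 1e9:.1f} Gbit/s average")  # roughly 3.2 Gbit/s sustained
```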
[Jeff Geerling] routinely tinkers with the Raspberry Pi Compute Module 4, which, unlike the regular RPi 4, exposes a PCIe lane. With some luck, he was able to obtain an AMD Radeon RX 6700 XT GPU card and decided to try plugging it into the Compute Module 4.
While you likely wouldn’t be running games with such a setup, there are many kinds of unique and interesting compute-based workloads that can be offloaded onto a GPU. In a situation similar to putting a V8 on a lawnmower, the Raspberry Pi 4 pulls around 5-10 watts and the GPU can pull 230 watts. Unfortunately, the PCIe slot on the IO board wasn’t designed with a power-hungry chip in mind, so [Jeff] brought in a full-blown ATX power supply to power the GPU. To avoid problems with differing ground planes, an adapter was fashioned so the Raspberry Pi could be powered from the PSU as well. Plugging in the card yielded promising results initially. In particular, Linux detected the card and correctly mapped the BARs (Base Address Registers), which had been a problem for him in the past with other devices. A BAR allows a PCI device to map its memory into the CPU’s memory space and keep track of the base address of that mapped memory range.
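If you want to see how those mappings ended up on your own machine, the kernel exposes them through sysfs. Here’s a short Python sketch that reads a device’s `resource` file; the PCI address used is just an assumption for illustration, so check `lspci` for the real one on your setup.

```python
from pathlib import Path

def list_bars(pci_address="0000:01:00.0"):
    """Print the address ranges the kernel assigned to a PCI device's
    regions. Each line of the sysfs `resource` file is `start end flags`
    in hex; all-zero lines are unused regions."""
    resource = Path(f"/sys/bus/pci/devices/{pci_address}/resource")
    for i, line in enumerate(resource.read_text().splitlines()):
        start, end, _flags = (int(x, 16) for x in line.split())
        if start == 0 and end == 0:
            continue
        size_kib = (end - start + 1) // 1024
        print(f"region {i}: 0x{start:012x}-0x{end:012x} ({size_kib} KiB)")

# A GPU on the CM4's single PCIe lane typically enumerates as 0000:01:00.0,
# but that address is an assumption; confirm it with `lspci` first.
list_bars()
```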
AMD kindly provides Linux drivers for the kernel. [Jeff] walks through cross-compiling the kernel and has a nice Docker container that quickly reproduces the build environment. There was a bug that prevented compilation with the AMD drivers included, so he wasn’t able to get a fully built kernel. Since the video, he has been slowly wading through the issue in a fascinating thread on GitHub. Everything from running out of memory space on the Pi to PSP memory training for the GPU itself has come up.
The ever-expanding capabilities of the plucky little compute module are a wonderful thing to us here at Hackaday, as we saw it get NVMe boot earlier this year. We’re looking forward to the progress [Jeff] makes with GPUs. Video after the break.
Stop motion animation is notoriously difficult to pull off well, in large part because it’s a mind-numbingly slow process. Each frame in the final video is a separate photograph, and for each one of those, the characters and props need to be moved the appropriate amount so that the final result looks smooth. You don’t even want to know how long Ben Wyatt spent working on Requiem for a Tuesday, though to be fair, it might still get done before the next Avatar.
But [Nick Bild] thinks his latest project might be able to improve on the classic technique with a dash of artificial intelligence provided by a Jetson Xavier NX. Basically, the Jetson watches the live feed from the camera, and using a hand pose detection model, waits until there’s no human hand in the frame. Once the coast is clear, it takes a shot and then goes back to waiting for the next hands-free opportunity. With the photographs being taken automatically, you’re free to focus on getting your characters moving around in a convincing way.
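For a sense of how little code the core loop needs, here’s a minimal sketch of the same idea in Python, using OpenCV and MediaPipe’s hand detector as a stand-in for whatever pose model [Nick] actually runs on the Jetson; the settle time and file names are arbitrary choices for illustration.

```python
import time

import cv2
import mediapipe as mp

# Watch the camera feed and save a frame only once no hand has been seen
# for a short while, i.e. when the animator has pulled their hands away.
hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)
cam = cv2.VideoCapture(0)
frame_count = 0
last_hand_seen = time.monotonic()
SETTLE_SECONDS = 1.0        # wait this long after the last hand leaves the shot
captured_this_pose = False

while True:
    ok, frame = cam.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:            # a hand is (still) in the frame
        last_hand_seen = time.monotonic()
        captured_this_pose = False
    elif (not captured_this_pose
          and time.monotonic() - last_hand_seen > SETTLE_SECONDS):
        cv2.imwrite(f"frame_{frame_count:05d}.png", frame)
        frame_count += 1
        captured_this_pose = True              # one frame per hands-free pause
```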
If it’s still not clicking for you, check out the video below. [Nick] first shows the raw unedited video, which primarily consists of him moving three LEGO figures around, and then the final product produced by his system. All the images of him fiddling with the scene have been automatically trimmed, leaving behind a short animated clip of the characters moving on their own.
Now don’t be fooled, it’s still going to take a while. By our count, it took two solid minutes of moving Minifigs around to produce just a few seconds of animation. So while we can say it’s a quicker pace than traditional stop motion production, it certainly isn’t fast.