What Can We Do With These Patient Monitor Videos?

So we’ll admit from the start that we’re not entirely sure how the average Hackaday reader can put this content to use. Still, these simulated patient monitor videos on YouTube have gotta be useful for something. Right?

Uploaded by [themonitorsolution], each fourteen-minute 1080p video depicts what a patient monitor would look like in various situations, ranging from an adult in stable condition to individuals suffering from ailments such as COPD and sepsis. There’s even one for a dead patient, which makes for rather morbid watching.

Now we assume these are intended for educational purposes — throw them up on a display and have trainees attempt to diagnose what’s wrong with the virtual patient. But we’re sure clever folks like yourselves could figure out alternate uses for these realistic graphics. They could make for an impressive Halloween prop, or maybe they’re just what you need to finally get that low-budget medical drama off the ground.

Honestly, it seemed too cool of a resource not to point out. Besides, it’s exceedingly rare that we get to post a YouTube video that we can be confident none of our readers have seen before…at the time of this writing, the channel only has a single subscriber. Though with our luck, that person will end up being one of you lot.

Continue reading “What Can We Do With These Patient Monitor Videos?”

RoboPianist Is A Simulation For Advancing Robotic Control

Researchers at Google have posed themselves an interesting problem to solve: mastering the piano. However, they’re not trying to teach themselves, but a pair of simulated anthropomorphic robotic hands instead. Enter RoboPianist.

The hope is that the RoboPianist platform can help benchmark “high-dimensional control, targeted at testing high spatial and temporal precision, coordination, and planning, all with an underactuated system frequently making-and-breaking contacts.”

If that all sounds like a bit much to follow, the basic gist is that playing the piano takes a ton of coordination and control. Doing it in a musical way requires both high speed and perfect timing, further upping the challenge. The team hopes that by developing control strategies that can master the piano, they will more broadly learn about techniques useful for two-handed, multi-fingered control. To that end, RoboPianist models a pair of robot hands with 22 actuators each, or 44 in total. Much like human hands, the robot hands are underactuated by design, meaning they have fewer actuators than total degrees of freedom.
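
To picture what underactuation means in practice, here is a tiny numpy sketch of the idea (our own toy example, not the RoboPianist code) in which one actuator drives two coupled joints, so the joints cannot all be posed independently:

```python
import numpy as np

# Toy illustration of underactuation, not the actual RoboPianist model:
# three finger joints are driven by only two actuators.
n_joints, n_actuators = 3, 2

# Hypothetical coupling matrix: actuator 0 moves joint 0 directly, while
# actuator 1 drives joints 1 and 2 together (joint 2 follows at 70%).
coupling = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [0.0, 0.7],
])

actuator_commands = np.array([0.2, 0.5])   # radians, picked arbitrarily
joint_angles = coupling @ actuator_commands
print(joint_angles)  # joint 2 cannot be posed independently of joint 1
```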

Continue reading “RoboPianist Is A Simulation For Advancing Robotic Control”

Freq Out With LTSpice

We always enjoy [FesZ’s] videos, and his latest about the FREQ function in LTSpice is no exception. LTSpice doesn’t actually document it, but it is part of the underlying Spice system. So, of course, you can figure it out yourself, or just watch the video below. The FREQ keyword allows you to change component attributes in a frequency-dependent way.

Of course, capacitors and inductors are frequency dependent by design. But the FREQ technique allows you to adjust things like voltage sources or resistance in arbitrary ways. By default, you must specify the frequency response data in decibels, which isn’t always convenient. However, [FesZ] shows you how to express the data in other forms using modifiers to the command.
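
To give a rough idea of what such a table looks like, here is a small Python sketch that assembles a frequency-response line for a behavioral source from (frequency, gain in dB, phase in degrees) points, the default units. Treat the exact syntax and the available modifiers as things to confirm against the video and the LTSpice help rather than as gospel:

```python
# Sketch of building an LTSpice-style frequency-response table for a
# behavioral source. The precise FREQ syntax and its dB/magnitude and
# degree/radian modifiers are covered in the video and the LTSpice help;
# this output format is an approximation for illustration only.
def freq_table_line(name, nodes, expression, points):
    """points: list of (frequency_Hz, gain_dB, phase_deg) tuples."""
    table = " ".join(f"({f},{db},{deg})" for f, db, deg in points)
    return f"{name} {nodes} FREQ {{{expression}}}= {table}"

# A voltage-controlled source that rolls off like a crude low-pass response
print(freq_table_line("E1", "out 0", "V(in)",
                      [(10, 0, 0), (1000, -3, -45), (100000, -40, -90)]))
```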

Continue reading “Freq Out With LTSpice”

This Camera Does Not Exist

Blender is a professional-grade 3D-rendering platform and much more, but it sometimes suffers from the just-too-perfect images that rendering produces. You can tell, somehow. So just how do you make a perfectly rendered scene look a little more realistic? If you’re [sirrandalot], you take a photograph. But not by taking a picture of your monitor with a camera. Instead, he’s simulating a colour film camera in extraordinary detail within Blender itself.

The point of a rendering package is that it simulates light, so it shouldn’t be such a far-fetched idea that it could simulate the behaviour of light in a camera. Starting with a simple pinhole camera, he moves on to a meniscus lens, and then creates a compound lens to correct for its imperfections. The development of the camera mirrors the progress of real cameras over the 20th century, simulating the film with its three colour-sensitive layers and even the antihalation layer, right down to their differing placements in the focal plane. It’s an absurd level of detail, but it serves as a quick run-down of both how a film camera and its film work and how Blender simulates the behaviour of light.
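
If you want the back-of-the-envelope math for where a simple lens puts its image, the classic thin-lens relation 1/f = 1/d_o + 1/d_i is all you need. A quick sketch with made-up numbers (ours, not [sirrandalot]’s):

```python
# Thin-lens background math (our own illustration, not from the video):
# 1/f = 1/d_o + 1/d_i relates focal length, object distance, and the
# distance behind the lens where the image (the virtual "film") must sit.
def image_distance(focal_length_mm, object_distance_mm):
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

f = 80.0  # hypothetical medium-format-ish focal length, in mm
for d_o in (1_000.0, 3_000.0, 10_000.0):   # object distances, in mm
    d_i = image_distance(f, d_o)
    print(f"object at {d_o/1000:.0f} m -> image {d_i:.1f} mm behind the lens")
```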

Finally we see the camera itself, modeled to look like a chunky medium-format Instamatic, and some of its virtual photos. We can’t say all of them remove the feel of a rendered image, but they certainly do an extremely effective job of simulating a film photograph. We love this video; take a look at it below the break.

Continue reading “This Camera Does Not Exist”

Take A Ride In The Bathysphere

[Tom Scott] has traveled the world to see interesting things. So when he’s impressed by a DIY project, we sit up and listen. In this case, he’s visiting the Bathysphere, a project created by a couple of passionate hobbyists in Italy. The project is housed at Explorandia, which, based on Google Translate, sounds like a pretty epic hackerspace.

The Bathysphere project itself is a simulation of a submarine. Sounds simple, but this project is anything but. There are no VR goggles involved. Budding captains who are up for the challenge find themselves inside the cockpit of a mini-submarine. The sub itself is on a DIY motion platform. Strong electric motors move the system, causing riders to feel like they are truly underwater. Inside the cockpit, the detail is amazing. All sorts of switches, lights, and greebles make for a realistic experience. An electronic voice provides the ship status and lets the crew know of any emergencies. (Spoiler alert — there will be emergencies!)

The real gem is how this simulation operates. A Logitech webcam is mounted on an XY gantry. The camera is then dipped underwater in a small pond. Video from the camera is sent to a large monitor, which serves as the sub’s window. It’s all very 1960s simulator tech, but the effect works. The subtle movements of the simulator platform really make the users feel like they are 20,000 leagues under the sea.

Check out the video after the break for more info!

Continue reading “Take A Ride In The Bathysphere”

SMA Connector Footprint Design For Open Source RF Projects

When you first start out in the PCB layout game and know just enough to be dangerous, you simply plop down a connector, run a trace or two, and call it a hack. As you learn more about the finer points of inconveniencing electrons and dip your toes into the waters of higher performance, little details like via size, count, ground plane cutouts, and all that jazz start to matter, and it’s very easy to get yourself into quite a pickle trying to decide what is needed to just exceed the specifications (or worse, how to make it ‘the best’). Connector terminations are one of those things that get overlooked until the MHz become GHz. Luckily for us, [Rob Ruark] is on hand to give us a leg-up on how to get decent performance from edge-launch SMA connections for RF applications. These principles should also hold up for high-speed digital connections, so it’s not just an analog game.
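
Of course, the footprint only pays off if the trace feeding it is in the right ballpark too. As a sanity check, the well-known IPC-2141 microstrip approximation will get you close; here is a hedged Python sketch with illustrative stackup numbers (ours, not from [Rob Ruark]’s write-up):

```python
import math

# Ballpark microstrip impedance using the IPC-2141 approximation. The
# connector footprint details in the article sit on top of a controlled-
# impedance feed line like this; dimensions below are illustrative only.
def microstrip_z0(er, h_mm, w_mm, t_mm):
    """Approximate characteristic impedance (ohms) of a surface microstrip."""
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# Guess at a typical 4-layer stackup: FR-4, ground plane 0.2 mm down
print(f"{microstrip_z0(er=4.4, h_mm=0.2, w_mm=0.35, t_mm=0.035):.1f} ohms")
```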

Continue reading “SMA Connector Footprint Design For Open Source RF Projects”

Peering Down Into Talking Ant Hill

Watching an anthill brings an air of fascination. Thousands of ants are moving about and communicating with other ants as they work towards a goal as a collective whole. We humans project a complex inner world onto each of these tiny creatures to drive the narrative. But what if we could peer down into a miniature world and the ants spoke English? (PDF whitepaper)

Researchers at Stanford University and Google Research have released a paper about simulating human behavior using multiple Large Language Model (LLM) agents. The simulation has a few dozen agents that can move around a small town, run errands, and communicate with each other. Each agent has a short description to help provide context to the LLM. In addition, they have memories of objects, other agents, and observations that they can retrieve, which allows them to create a plan for their day. The memory is a time-stamped text stream that the agent reflects on, deciding what is important. The agent can also replan and figure out what it wants to do.
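
As a minimal sketch of that memory-stream idea (our own simplification; the paper scores memories on recency, importance, and relevance before handing them back to the LLM), something like this captures the flavor:

```python
import math
import time
from dataclasses import dataclass, field

# Minimal sketch of the time-stamped memory stream described above. The
# names, weights, and scoring details here are our own simplification,
# not the paper's implementation.
@dataclass
class Memory:
    text: str
    importance: float            # e.g. 1-10, as judged by the LLM
    timestamp: float = field(default_factory=time.time)

def retrieve(memories, now=None, decay_hours=24.0, top_k=3):
    """Return the top_k memories by a recency-decayed importance score."""
    now = now or time.time()
    def score(m):
        age_hours = (now - m.timestamp) / 3600.0
        recency = math.exp(-age_hours / decay_hours)
        return recency * m.importance
    return sorted(memories, key=score, reverse=True)[:top_k]

stream = [
    Memory("Isabella is planning a Valentine's Day party", importance=8),
    Memory("Bought coffee beans at the store", importance=2),
    Memory("Maria said she will help decorate", importance=6),
]
for m in retrieve(stream):
    print(m.text)  # the most salient memories go into the agent's prompt
```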

The question is, does the simulation seem life-like? In one fascinating example, the paper’s authors created an agent (Isabella) with the intention of throwing a Valentine’s Day party. No other information was included. Yet several agents arrived at her house later in the day to party: Isabella invited friends, and those agents passed the invitation along to others.

A web-accessible demo replays recorded data from an earlier run, so it doesn’t showcase the influence a user can exert on the world when running live, where thoughts and suggestions can be issued to an agent to steer its actions. You can, however, pause the simulation to view the conversations between agents. Overall, it is incredible how life-like the simulation can be, though the language of the conversations is quite formal and running the simulation burns significant amounts of computing power. Perhaps agents could have a subconscious, where certain behaviors or observations are coded into the agent instead of querying the LLM for every little thing (which sort of sounds like what people do).

There’s been an exciting trend of combining LLMs with a form of backing store, like combining Wolfram Alpha with ChatGPT. Thanks [Abe] for sending this one in!