Do You Trust This AI For Your Surgery?

If you are looking for the perfect instrument to start a biological horror show in our age of AI, you have come to the right place. Researchers at Johns Hopkins University have successfully used AI-guided robotics to perform surgical procedures. So perhaps it's a bit less dystopian than it sounds, but the possibilities are endless.

Pig parts stand in as surrogate human gallbladders to demonstrate cholecystectomies. The skilled surgeon is replaced with a da Vinci research kit, the same platform used in human-controlled surgeries.

The architecture feeds live imaging and human corrections into a high-level language model, which in turn drives a low-level control model. While there is the option to intervene with human input, the model is trained to self-correct and has demonstrated the ability to do so. This appears to work fairly well, with nothing but minor errors, as shown in an age-restricted YouTube video. (NOTE: SURGICAL IMAGERY. WATCH AT YOUR OWN RISK.)

Flowchart: live video feeds the high-level LLM, which feeds the low-level model that controls the robot.
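To make that division of labor concrete, here's a minimal sketch of how such a hierarchical control loop might be wired together. The class names, method signatures, and the hard-coded action below are placeholders of our own invention, not the Hopkins team's actual interfaces:

```python
# Minimal sketch of a hierarchical surgical-control loop. The class
# names, methods, and hard-coded action are hypothetical placeholders,
# not the Hopkins team's actual interfaces.

from dataclasses import dataclass

@dataclass
class Action:
    """A low-level motion command for the robot arm."""
    dx: float
    dy: float
    dz: float
    gripper: float  # 0.0 = open, 1.0 = closed

class HighLevelPlanner:
    """Stand-in for the language model: maps the current image (and
    an optional human correction) to a task-level instruction."""
    def plan(self, image, correction=None) -> str:
        return correction or "clip the cystic duct"

class LowLevelController:
    """Stand-in for the control policy: turns an instruction plus the
    current image into a concrete motion command."""
    def act(self, instruction: str, image) -> Action:
        return Action(dx=0.0, dy=0.0, dz=-0.5, gripper=1.0)

def control_loop(camera, robot, steps=100):
    """One pass per camera frame: plan at the task level, then let
    the low-level model decide the actual motion."""
    planner, controller = HighLevelPlanner(), LowLevelController()
    for _ in range(steps):
        image = camera.read()
        instruction = planner.plan(image)
        robot.execute(controller.act(instruction, image))
```

The appeal of the split is that the language model only has to reason about what to do next, while a separate policy handles the millimeter-scale question of how.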

It’s noted that the robot performed more slowly than a traditional surgeon, trading time for precision. As always when talking about anything medical, we’re not likely to see it at work on our own gallbladders anytime soon, but maybe within the next decade. If you want to read more on the specific advancements, check out the paper here.

Medical hacking isn’t always the most appealing topic for anyone with a weak stomach, but those of us with iron guts should make sure to check out this precision tendon tester!

AI Is Only Coming For Fun Jobs

In the past few years, what marketers and venture capital firms term “artificial intelligence”, but which is more often an advanced predictive text model of some sort, has started taking people’s jobs and threatening others. And not the tedious jobs that society might like to have automated away in the first place: these AI tools have generally been taking rewarding or enjoyable jobs like artist, author, filmmaker, programmer, and composer. This project from a research team might soon be able to add astronaut to that list.

The team was working within the confines of the Kerbal Space Program Differential Game Challenge, an open-source plugin from MIT that allows developers to test various algorithms and artificial intelligences in simulated spacecraft situations. Generally, purpose-built models are used here, with many rounds of refinement and testing, but since this process can be time-consuming and costly, the researchers on this team decided to hand control over to ChatGPT with only limited instructions. A translation layer built by the researchers converts the generated text into spacecraft controls.
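We can only guess at the details, but a translation layer of this sort might look something like the following sketch, where the model is prompted to answer in a structured format and the reply is parsed into clamped throttle values. The prompt wording, JSON fields, and apply_controls() interface are all our own invention, not the team's code:

```python
# Hypothetical sketch of an LLM-to-spacecraft translation layer. The
# prompt wording, JSON fields, and apply_controls() interface are
# invented for illustration; the researchers' actual layer differs.

import json

SYSTEM_PROMPT = (
    "You are piloting a spacecraft. Reply ONLY with JSON like "
    '{"forward": 0.5, "right": 0.0, "down": 0.0}, values in [-1, 1].'
)

def parse_action(reply: str) -> dict:
    """Turn the model's text reply into clamped throttle values,
    falling back to 'do nothing' on malformed output."""
    try:
        action = json.loads(reply)
        return {axis: max(-1.0, min(1.0, float(action.get(axis, 0.0))))
                for axis in ("forward", "right", "down")}
    except (json.JSONDecodeError, AttributeError, TypeError, ValueError):
        return {"forward": 0.0, "right": 0.0, "down": 0.0}

def step(llm, observation: dict, spacecraft) -> None:
    """One control step: describe the state, ask the model, act."""
    reply = llm(SYSTEM_PROMPT + "\nState: " + json.dumps(observation))
    spacecraft.apply_controls(**parse_action(reply))
```

The defensive parsing is the whole game here: a language model will occasionally return prose instead of JSON, and a spacecraft needs a sane default when it does.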

We’ll note that, at least as of right now, large language models haven’t taken the jobs of any actual astronauts. The game challenge is generally meant for uncrewed spacecraft like orbital satellites, which often need to make their own decisions to maintain orbits and avoid obstacles. This specific model was able to place second in a recent competition, although we’ll keep rooting for the humans in situations like these.


Hackaday Links: June 22, 2025

Hold onto your hats, everyone — there’s stunning news afoot. It’s hard to believe, but it looks like over-reliance on chatbots to do your homework can turn your brain into pudding. At least that seems to be the conclusion of a preprint paper out of the MIT Media Lab, which looked at 54 adults between the ages of 18 and 39, who were tasked with writing a series of essays. They divided participants into three groups — one that used ChatGPT to help write the essays, one that was limited to using only Google search, and one that had to do everything the old-fashioned way. They recorded the brain activity of writers using EEG, in order to get an idea of brain engagement with the task. The brain-only group had the greatest engagement, which stayed consistently high throughout the series, while the ChatGPT group had the least. More alarmingly, the engagement for the chatbot group went down even further with each essay written. The ChatGPT group produced essays that were very similar between writers and were judged “soulless” by two English teachers. Go figure.

Continue reading “Hackaday Links: June 22, 2025”

ChatGPT Patched A BIOS Binary, And It Worked

[devicemodder] wrote in to let us know they managed to install Linux Mint on their FRP-locked Panasonic Toughpad FZ-A2.

Android devices such as the FZ-A2 can be locked with Factory Reset Protection (FRP). FRP limits what you can do with a device by tying it to a user account. On the surface that’s a good thing for consumers, as it disincentivizes theft. Unfortunately, when combined with SecureBoot, it also means you can’t just install whatever software you want on your own hardware. [devicemodder] managed to get Linux Mint running on the FZ-A2, which is a notable achievement by itself, but even more remarkable is how it was done.

So how did [devicemodder] get around this limitation? The first step was to dump the BIOS using a CH341A-based programmer. From there, the image was uploaded to ChatGPT along with a request to disable SecureBoot. The resulting file was flashed back onto the FZ-A2, and all available fingers were crossed.

And… it worked! ChatGPT modified the BIOS enough that the Linux Mint installer could be booted from a flash drive. There are a bunch of bugs and issues to work through, but in principle we have just seen an AI capable of successfully patching a binary dump of BIOS code, which, for the record, is kind of hard to do. We’re not sure what all of this might portend.
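For a flavor of what patching a binary dump involves, here is a deliberately oversimplified sketch: search the dump for a marker and flip a flag byte near it. The marker, offset, and flag semantics are invented for illustration; real UEFI images keep Secure Boot state in NVRAM variables with checksums, and flashing a blind edit like this is a good way to brick a board:

```python
# Illustrative-only sketch of a byte-level firmware patch. The marker,
# offset, and flag value are invented; real UEFI layouts differ, and
# flashing a blind edit like this can easily brick a device.

from pathlib import Path

def patch_flag(dump: str, out: str,
               marker: bytes = b"SecureBoot", offset: int = 32) -> None:
    data = bytearray(Path(dump).read_bytes())
    idx = data.find(marker)            # locate the (hypothetical) variable
    if idx == -1 or idx + offset >= len(data):
        raise ValueError("marker not found where expected")
    pos = idx + offset                 # hypothetical flag location
    print(f"flipping byte at 0x{pos:X}: {data[pos]:#04x} -> 0x00")
    data[pos] = 0                      # 0 = disabled, in this sketch
    Path(out).write_bytes(bytes(data))

# Usage, on a dump you made yourself (e.g. with a CH341A programmer):
# patch_flag("fz-a2-dump.bin", "fz-a2-patched.bin")
```

The hard part, of course, is everything this sketch hand-waves away: knowing where the flag actually lives and keeping the image's checksums and structures consistent, which is what makes the ChatGPT result so surprising.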

So is uploading binaries to ChatGPT with requests for mods vibe coding? Or should we invent a new term for this type of hack?

ChatGPT & Me. ChatGPT Is Me!

For a while now part of my email signature has been a quote from a Hackaday commenter insinuating that an article I wrote was created by a “Dumb AI”. You have my sincerest promise that I am a humble meatbag scribe just like the rest of you, indeed one currently nursing a sore shoulder due to a sporting injury, so I found the comment funny in a way its writer probably didn’t intend. Like many in tech, I maintain a skepticism about the future role of large-language-model generative AI, and have resisted the urge to drink the Kool-Aid you will see liberally flowing at the moment.

Hackaday Is Part Of The Machine

As you’ll no doubt be aware, these large language models work by gathering a vast corpus of text, and doing their computational tricks to generate their output by inferring from that data. They can thus create an artwork in the style of a painter who receives no reward for the image, or a book in the voice of an author who may be struggling to make ends meet. From the viewpoint of content creators and intellectual property owners, it’s theft on a grand scale, and you’ll find plenty of legal battles seeking to establish the boundaries of the field.

Anyway, once an LLM has enough text from a particular source, it can do a pretty good job of writing in that style. ChatGPT for example has doubtless crawled the whole of Hackaday, and since I’ve written thousands of articles in my nearly a decade here, it’s got a significant corpus of my work. Could it write in my style? As it turns out, yes it can, but not exactly. I set out to test its forging skill. Continue reading “ChatGPT & Me. ChatGPT Is Me!”


Lancing College Shares Critical Design Review For UK CanSat Entry

A group of students from Lancing College in the UK have sent in their Critical Design Review (CDR) for their entry in the UK CanSat project.

Per the competition guidelines, the UK CanSat project challenges students aged 14 to 19 to build a satellite that can relay telemetry data about atmospheric conditions, the sort of data that could help with space exploration. The students’ primary mission is to collect temperature and pressure readings, and they picked the collection of GPS data as their secondary mission, for use on planets where GPS infrastructure is available, such as Earth. This CDR follows their Preliminary Design Review (PDR).

The six students in the group bring a range of relevant skills. Their satellite transmits six metrics every second: temperature, pressure, two altitude readings, latitude, and longitude. The main processor is an Arduino Nano Every; a BMP388 sensor provides the temperature, pressure, and first altitude reading, while a BE880 GPS module provides the second altitude reading plus latitude and longitude. An RFM69HCW module handles radio transmission and reception.
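To put that in perspective, six metrics at one hertz is a tiny payload. Here's a hypothetical sketch of packing and unpacking such a frame; the field order and little-endian float32 encoding are our assumptions, not the team's actual format:

```python
# Hypothetical telemetry frame for six metrics at 1 Hz. The field
# order and little-endian float32 encoding are illustrative
# assumptions, not the Lancing College team's actual format.

import struct

FIELDS = ("temperature_c", "pressure_hpa", "altitude1_m",
          "altitude2_m", "latitude_deg", "longitude_deg")
FRAME = struct.Struct("<6f")  # six little-endian floats = 24 bytes

def pack_frame(metrics: dict) -> bytes:
    return FRAME.pack(*(metrics[f] for f in FIELDS))

def unpack_frame(payload: bytes) -> dict:
    return dict(zip(FIELDS, FRAME.unpack(payload)))

# 24 bytes per second leaves enormous headroom on even a slow link.
frame = pack_frame({"temperature_c": 21.4, "pressure_hpa": 1013.2,
                    "altitude1_m": 120.5, "altitude2_m": 118.9,
                    "latitude_deg": 50.83, "longitude_deg": -0.32})
print(unpack_frame(frame))
```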

The students present their plan and progress in a Gantt chart, catalog their inventory of relevant skills, assess risks, prepare mechanical and electrical designs, breadboard the satellite circuitry and receiver wiring, design a PCB in KiCad, and develop flow charts for the software. The use of Blender for data visualization was a nice hack, as was using ChatGPT to generate an example data file for testing purposes. Mechanical details such as parachute design and composition are worked out along with a shiny finish for high visibility. The students conduct various tests to ensure the suitability of their design and then conduct an outreach program to advertise their achievements to their school community and the internet at large.

We here at Hackaday would like to wish these talented students every success with their submission, and we hope launch day on March 4th went well!

The radio link is the backbone of this project, and we’ve covered LoRa and other low-power radio links here at Hackaday many times before, such as in this rain gauge and these soil moisture sensors.

Schooling ChatGPT On Antenna Theory Misconceptions

We’re not very far into the AI revolution at this point, but we’re far enough to know not to trust AI implicitly. If you accept what ChatGPT or any of the other AI chatbots have to say at face value, you might just embarrass yourself. Or worse, you might make a mistake designing your next antenna.

We’ll explain. [Gregg Messenger (VE6WO)] asked a seemingly simple question about antenna theory: Does an impedance mismatch between the antenna and a coaxial feedline result in common-mode current on the coax shield? It’s an important practical matter, as any ham who has had the painful experience of “RF in the shack” can tell you. They also will likely tell you that common-mode current on the shield is caused by an unbalanced antenna system, not an impedance mismatch. But when [Gregg] asked Google Gemini and ChatGPT that question, the answer came back that impedance mismatch can cause current flow on the shield. So who’s right?

In the first video below, [Gregg] built a simulated ham shack using a 100-MHz signal generator and a length of coaxial feedline. Using a toroidal ferrite core with a couple of turns of magnet wire and a capacitor as a current probe for his oscilloscope, he was unable to find a trace of the signal on the shield even when the feedline was unterminated, which produces exactly the impedance mismatch the chatbots thought would spell doom. To bring the point home, [Gregg] created another test setup in the second video, this time using a pair of telescoping whip antennas to stand in for a dipole. With the coax connected directly to the dipole, which creates an unbalanced system, he measured a current on the feedline, and it got worse when he further unbalanced the system by removing one of the legs. Adding a balun between the feedline and the antenna, which drives the two legs of the antenna 180° out of phase, cured the problem.
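For reference, what a mismatch does cause is reflected power and standing waves inside the line, a differential-mode effect quantified by the reflection coefficient Γ = (Z_L − Z_0)/(Z_L + Z_0). A quick sketch with the standard transmission-line formulas (the load values here are generic examples, not [Gregg]'s measured setup):

```python
# What a mismatch actually causes: reflection and standing waves
# inside the line, not shield current. Standard transmission-line
# formulas, assuming 50-ohm coax; load values are generic examples.

def reflection_coefficient(z_load: complex, z0: float = 50.0) -> complex:
    return (z_load - z0) / (z_load + z0)

def vswr(gamma: complex) -> float:
    m = abs(gamma)
    return float("inf") if m >= 1.0 else (1 + m) / (1 - m)

for name, z_load in [("matched 50 ohm", 50.0),
                     ("73-ohm dipole", 73.0),
                     ("open (unterminated)", 1e12)]:
    g = reflection_coefficient(z_load)
    print(f"{name:20s} |Gamma| = {abs(g):.3f}   VSWR = {vswr(g):.3g}")
```

Note that none of these numbers say anything about the shield's outer surface: the mismatch lives between the center conductor and the inside of the shield, which is exactly why [Gregg]'s probe found nothing there.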

We found these demonstrations quite useful. It’s always good to see someone taking a chatbot to task over myths and common misconceptions. We look into baluns now and again. Or even ununs.

Continue reading “Schooling ChatGPT On Antenna Theory Misconceptions”