ChatGPT & Me. ChatGPT Is Me!

For a while now part of my email signature has been a quote from a Hackaday commenter insinuating that an article I wrote was created by a “Dumb AI”. You have my sincerest promise that I am a humble meatbag scribe just like the rest of you, indeed one currently nursing a sore shoulder due to a sporting injury, so I found the comment funny in a way its writer probably didn’t intend. Like many in tech, I maintain a skepticism about the future role of large-language-model generative AI, and have resisted the urge to drink the Kool-Aid you will see liberally flowing at the moment.

Hackaday Is Part Of The Machine

As you’ll no doubt be aware, these large language models work by gathering a vast corpus of text, and doing their computational tricks to generate their output by inferring from that data. They can thus create an artwork in the style of a painter who receives no reward for the image, or a book in the voice of an author who may be struggling to make ends meet. From the viewpoint of content creators and intellectual property owners, it’s theft on a grand scale, and you’ll find plenty of legal battles seeking to establish the boundaries of the field.

Anyway, once an LLM has enough text from a particular source, it can do a pretty good job of writing in that style. ChatGPT for example has doubtless crawled the whole of Hackaday, and since I’ve written thousands of articles in my nearly a decade here, it’s got a significant corpus of my work. Could it write in my style? As it turns out, yes it can, but not exactly. I set out to test its forging skill. Continue reading “ChatGPT & Me. ChatGPT Is Me!”


Lancing College Shares Critical Design Review For UK CanSat Entry

A group of students from Lancing College in the UK have sent in their Critical Design Review (CDR) for their entry in the UK CanSat project.

Per the competition guidelines, the UK CanSat project challenges students aged 14 to 19 to build a satellite that can relay telemetry data about atmospheric conditions of the kind that could help with space exploration. The students' primary mission is to collect temperature and pressure readings, and this team chose as their secondary mission the collection of GPS data, for use on planets where GPS infrastructure is available, such as Earth. This CDR follows their Preliminary Design Review (PDR).

The six students in the group bring a range of relevant skills. Their satellite transmits six metrics every second: temperature, pressure, altitude reading 1, altitude reading 2, latitude, and longitude. The main processor is an Arduino Nano Every, a BMP388 sensor provides the first three metrics, and a BE880 GPS module provides the following three metrics. The RFM69HCW module provides radio transmission and reception using LoRa.
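To give a sense of what the ground station has to handle, here is a minimal Python sketch for unpacking one once-per-second telemetry frame. The comma-separated format, field order, and units are our own assumptions for illustration, not the team's actual packet definition.

```python
from dataclasses import dataclass

@dataclass
class Telemetry:
    temperature_c: float    # from the BMP388
    pressure_hpa: float     # from the BMP388
    altitude_baro_m: float  # altitude reading 1 (barometric)
    altitude_gps_m: float   # altitude reading 2 (GPS)
    latitude: float         # from the BE880 GPS module
    longitude: float        # from the BE880 GPS module

def parse_frame(line: str) -> Telemetry:
    """Parse one telemetry frame, assumed to be six comma-separated values."""
    fields = [float(f) for f in line.strip().split(",")]
    if len(fields) != 6:
        raise ValueError(f"expected 6 fields, got {len(fields)}")
    return Telemetry(*fields)

# Example frame with made-up values, the sort of thing a generated test file might contain:
print(parse_frame("21.4,1002.6,132.0,135.2,50.832,-0.319"))
```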

The students present their plan and progress in a Gantt chart, catalog their inventory of relevant skills, assess risks, prepare mechanical and electrical designs, breadboard the satellite circuitry and receiver wiring, design a PCB in KiCad, and develop flow charts for the software. The use of Blender for data visualization was a nice hack, as was using ChatGPT to generate an example data file for testing purposes. Mechanical details such as parachute design and composition are worked out along with a shiny finish for high visibility. The students conduct various tests to ensure the suitability of their design and then conduct an outreach program to advertise their achievements to their school community and the internet at large.

We here at Hackaday would like to wish these talented students every success with their submission, and we hope launch day on March 4th went well!

The backbone of this project is LoRa, a technology we've covered here at Hackaday many times before, such as in this rain gauge and these soil moisture sensors.

Schooling ChatGPT On Antenna Theory Misconceptions

We’re not very far into the AI revolution at this point, but we’re far enough to know not to trust AI implicitly. If you accept what ChatGPT or any of the other AI chatbots have to say at face value, you might just embarrass yourself. Or worse, you might make a mistake designing your next antenna.

We’ll explain. [Gregg Messenger (VE6WO)] asked a seemingly simple question about antenna theory: Does an impedance mismatch between the antenna and a coaxial feedline result in common-mode current on the coax shield? It’s an important practical matter, as any ham who has had the painful experience of “RF in the shack” can tell you. They also will likely tell you that common-mode current on the shield is caused by an unbalanced antenna system, not an impedance mismatch. But when [Gregg] asked Google Gemini and ChatGPT that question, the answer came back that impedance mismatch can cause current flow on the shield. So who’s right?

In the first video below, [Gregg] built a simulated ham shack using a 100-MHz signal generator and a length of coaxial feedline. Using a toroidal ferrite core with a couple of turns of magnet wire and a capacitor as a current probe for his oscilloscope, he was unable to find a trace of the signal on the shield even when the feedline was left unterminated, which produces the impedance mismatch that the chatbots thought would spell doom. To bring the point home, [Gregg] created another test setup in the second video, this time using a pair of telescoping whip antennas to stand in for a dipole antenna. With the coax connected directly to the dipole, which creates an unbalanced system, he measured a current on the feedline shield, which got worse when he further unbalanced the system by removing one of the legs. Adding a balun between the feedline and the antenna, which keeps the currents on the two legs equal and 180° out of phase, cured the problem.

We found these demonstrations quite useful. It's always good to see someone taking a chatbot to task over myths and common misconceptions. We look into baluns now and again. Or even ununs.

Continue reading “Schooling ChatGPT On Antenna Theory Misconceptions”

Internet Connected TI-84 To Cut Your Academic Career Short

In an educational project with ethically questionable applications, [ChromaLock] has converted the ubiquitous TI-84 calculator into the ultimate cheating device.

The foundation of this hack lies in the TI-84’s link protocol, which has been a mainstay in calculator mods for years. [ChromaLock] uses this interface to connect to a tiny WiFi-enabled XIAO ESP32-C3 module hidden in the calculator. It’s mounted on a custom PCB with a simple MOSFET-based level shifting circuit, and slots neatly into a space on the calculator rear cover. The connecting wires are soldered directly to the pads of the 2.5 mm jack, and to the battery connections for power.

But what does this mod do? It connects your calculator to the internet and gives you a launcher with several applets. These allow you to view badly pixelated images on the TI-84's screen, text-chat with an accomplice, install more apps or notes, or hit up ChatGPT for some potentially hallucinated answers. Inputting long sections of text on the calculator's keypad is a time-consuming process, so [ChromaLock] teased a camera integration, which will probably make use of newer LLMs' image-input capabilities. The ESP32 doesn't handle all the heavy lifting on its own, and needs to connect to an external server for the more complex features.
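The project's actual server code isn't covered here, but the division of labour is easy to picture: the ESP32 just shuttles text back and forth, while a box on the internet does the talking to the chatbot. As a purely hypothetical sketch of such a relay (the endpoint name, model choice, and Flask framework are our assumptions, not [ChromaLock]'s implementation), it could be as simple as this:

```python
# Hypothetical relay: the ESP32 POSTs a question, the server queries the OpenAI API.
import os
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
OPENAI_KEY = os.environ["OPENAI_API_KEY"]

@app.route("/ask", methods=["POST"])
def ask():
    question = request.get_data(as_text=True)
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_KEY}"},
        json={
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": question}],
        },
        timeout=30,
    )
    answer = resp.json()["choices"][0]["message"]["content"]
    # Trim the reply so it stands a chance of fitting on the TI-84's tiny screen.
    return jsonify({"answer": answer[:500]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```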

To prevent pre-installed programs from being used for cheating on TI-84s, examiners will often wipe the memory or put the calculator into test mode. This mod can circumvent both: no pre-installed programs are required on the calculator to interface with the hardware module, and installing the launcher is done by sending two variables containing a password and a download command to the ESP32 module. The response from the module will also automatically break the calculator out of test mode.

We cannot help but admire [ChromaLock]’s ingenuity and polished implementation, and hopefully our readers are more interested in technical details than academic self-sabotage. For those who need even more capability in their calculator, we’d suggest checking out the NumWorks. Continue reading “Internet Connected TI-84 To Cut Your Academic Career Short”

Uncovering ChatGPT Usage In Academic Papers Through Excess Vocabulary

Frequencies of PubMed abstracts containing certain words. Black lines show counterfactual extrapolations from 2021–22 to 2023–24. The first six words are affected by ChatGPT; the last three relate to major events that influenced scientific writing and are shown for comparison. (Credit: Kobak et al., 2024)

That students these days love to use ChatGPT for assistance with reports and other writing tasks is hardly a secret, but it's becoming ever more prevalent in academia as well. This raises the question of whether ChatGPT-assisted academic writing can somehow be distinguished. According to [Dmitry Kobak] and colleagues, it can, with a strong sign of ChatGPT use being the presence of a lot of flowery excess vocabulary in the text. As detailed in their prepublication paper, the frequency of certain style words marks a remarkable shift in the vocabulary of the published works they examined.

For their study they looked at over 14 million biomedical abstracts from 2010 to 2024, obtained via PubMed. These abstracts were then analyzed for word usage and frequency, revealing both natural increases in word frequency (e.g. from the SARS-CoV-2 pandemic and the Ebola outbreak) and massive spikes in excess vocabulary that coincide with the public availability of ChatGPT and similar LLM-based tools.

In total 774 unique excess words were annotated. Here 'excess' means 'outside of the norm', following the pattern of 'excess mortality', where mortality during one period noticeably deviates from the pattern established during previous periods. In this regard the bump in words like respiratory is logical, but the surge in style words like intricate and notably would seem to be down to LLMs having a penchant for such flowery, overly dramatic language.
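The underlying arithmetic mirrors the excess-mortality analogy: extrapolate a word's pre-ChatGPT frequency trend forward, then subtract that counterfactual from what was actually observed. A toy Python version of the calculation, using made-up frequencies rather than the paper's PubMed data, might look like this:

```python
import numpy as np

# Fraction of abstracts per year containing the word "intricate" (made-up numbers).
years = np.array([2019, 2020, 2021, 2022, 2023, 2024])
freq  = np.array([0.010, 0.011, 0.011, 0.012, 0.028, 0.035])

# Counterfactual: extend the linear trend fitted on the pre-ChatGPT years 2021-22.
pre = (years >= 2021) & (years <= 2022)
slope, intercept = np.polyfit(years[pre], freq[pre], 1)
counterfactual = slope * years + intercept

# Excess frequency = observed minus expected; large positive values flag "excess words".
excess = freq - counterfactual
for y, e in zip(years, excess):
    print(f"{y}: excess frequency {e:+.3f}")
```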

The researchers have made the analysis code available for those interested in giving it a try on another corpus. The main author also addressed the question of whether ChatGPT might be influencing people to write more like an LLM. At this point it's still an open question whether people will be more inclined to adopt ChatGPT-like vocabulary or will actively seek to avoid sounding like an LLM.

Wrencher-2: A Bold New Direction For Hackaday

Over the last year it's fair to say that a chill wind has blown across the face of the media industry, as the prospect emerges that many content-creation tasks formerly performed by humans may instead be swallowed up by the inexorable rise of generative AI. In a few years, we're told, there may even be no more journalists, as the computers become capable of keeping your news desires sated with the help of their algorithms.

Here at Hackaday, we can see this might be the case for a gutter rag obsessed with celebrity love affairs and whichever vegetable is supposed to cure cancer this week, but we continue to believe that for quality coverage of the latest and greatest in the hardware hacking world, you can’t beat a writer made of good old-fashioned meat. Indeed, in a world saturated by low-quality content, the opinions of smart and engaged writers become even more valuable. So we’ve decided to go against the trend, by launching not a journalist powered by AI, but an AI powered by journalists.

Announcing Wrencher-2, a Hackaday chat assistant in your browser

Wrencher-2 is a new paradigm in online chat assistants, eschewing generative algorithms in favour of the collective expertise of the Hackaday team. Ask Wrencher-2 a question, and you won’t get a vague and made-up answer from a computer, instead you’ll get a pithy and on-the-nail answer from a Hackaday staffer. Go on – try it! Continue reading “Wrencher-2: A Bold New Direction For Hackaday”

Tech Support… Can AI Be Worse?

You can’t read the news today without another pundit excitedly reporting how AI is going to take every job you can imagine. Of course, AI will change the employment landscape. It will take some jobs and reduce the need for others. What about tech support? Is it possible that an AI might be able to help people with technical issues better than humans? My first answer was no way, but then I was painfully reminded of something. The question isn’t whether AI can help you better than any human can. The question is whether AI can help you better than the low-paid person on the other end of the phone you’re likely to talk to. Sadly, I think the answer to that question is almost certainly yes.

In all fairness, if you read Hackaday, you probably don’t encounter many technical support people who can solve a problem you can’t. By the time you call them, it is a lost cause. But this is more than just “Hackday folks are smarter than the tech support agents.” The overall quality of tech support at many companies is rock bottom no matter who you are. Continue reading “Tech Support… Can AI Be Worse?”