A Great Use For AI: Wasting Scammers’ Time!

We may have found the killer app for AI. Well, actually, British telecom provider O2 has. As The Guardian reports, they have an AI chatbot that acts like a 78-year-old grandmother and receives phone calls. Of course, since the grandmother—Daisy, by name—doesn’t get any real phone calls, anyone calling that number is probably a scammer. Daisy’s specialty? Keeping them tied up on the phone.

While this might just seem like a revenge prank, it is actually more than that. Scamming people is a numbers game: most people won’t bite, so to be successful, scammers have to make lots of calls. Every call Daisy answers ties one up for 40 minutes or more, time that can’t be spent dialing real victims.

Continue reading “A Great Use For AI: Wasting Scammers’ Time!”


Hackaday Links: December 22, 2024

Early Monday morning, while many of us will be putting the finishing touches — or just beginning, ahem — on our Christmas preparations, solar scientists will hold their collective breath as they wait for word from the Parker Solar Probe’s record-setting passage through the sun’s atmosphere. The probe, which has been in a highly elliptical solar orbit since its 2018 launch, has been getting occasional gravitational nudges from close encounters with Venus. This has moved the perihelion ever closer to the sun’s surface, and on Monday morning it will make its closest approach yet, a mere 6.1 million kilometers from the roiling photosphere. That will put it inside the corona, the sun’s extremely energetic atmosphere, which we normally see only during total eclipses. Traveling at almost 700,000 kilometers per hour, it won’t be there very long, and it’ll be doing everything it needs to do autonomously, since the high-energy plasma of the corona and the eight-light-minute distance make remote control impossible. It’ll be a few days before communications are re-established and the data downloaded, which will make a nice present for the solar science community to unwrap.

Continue reading “Hackaday Links: December 22, 2024”


Hackaday Links: June 16, 2024

Attention, slackers — if you do remote work for a financial institution, using a mouse jiggler might not be the best career move. That’s what a dozen people learned this week as they became former employees of Wells Fargo after allegedly being caught “simulating keyboard activity” while working remotely. Having now spent years working either hybrid or fully remote, we get it; sometimes, you’ve just got to step away from the keyboard for a bit. But we’ve never once felt the need to create the “impression of active work” during those absences. Perhaps that’s because we’ve never worked in a regulated environment like financial services.

For our part, we’re curious as to how the bank detected the use of a jiggler. The linked article mentions that regulators recently tightened rules that require employers to treat an employee’s home as a “non-branch location” subject to periodic inspection. More than enough reason to quit, in our opinion, but perhaps they sent someone snooping? More likely, the activity simulators were discovered by technical means. The article contains a helpful tip to avoid powering a jiggler from the computer’s USB, which implies detecting the device over the port. Our guess is that Wells tracks mouse and keyboard activity and compares it against a machine-learning model to look for signs of slacking.
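
To make that guess a bit more concrete, here’s a toy sketch of what timing-based detection might look like. To be clear, this is pure speculation on our part, not anything Wells Fargo is known to run, and real endpoint-monitoring products are surely more sophisticated than a variance threshold. The idea is simply that hardware jigglers fire events on a timer, while human input is bursty:

import statistics

def looks_synthetic(intervals_s: list[float]) -> bool:
    """Flag input whose event timing is too regular to be human."""
    if len(intervals_s) < 10:
        return False  # too little data to judge
    mean = statistics.mean(intervals_s)
    spread = statistics.stdev(intervals_s)
    # Coefficient of variation: human mouse activity is noisy,
    # while a hardware timer fires at near-constant intervals.
    return (spread / mean) < 0.05

# A metronomic two-second wiggle trips the check:
print(looks_synthetic([2.0, 2.01, 1.99, 2.0, 2.0, 2.02, 1.98, 2.0, 2.0, 2.01]))  # True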

Continue reading “Hackaday Links: June 16, 2024”

Australian Library Uses Chatbot To Imitate Veteran With Predictable Results

The educational sector is usually the first to decry large language models and AI, due to worries about cheating. The State Library of Queensland, however, has embraced the technology in controversial fashion. In the lead-up to Anzac Day, the Australian and New Zealand war remembrance holiday, the library released a chatbot named Charlie, intended to imitate a World War One veteran. It went as well as you’d expect.

[Image caption: The highlighted line was apparently added to the chatbot’s instructions later on to help shut down tomfoolery.]

Twitter users immediately chimed in with dismay at the very concept. Others showed how easy it was to “jailbreak” the AI, convincing Charlie he was actually supposed to teach Python, imitate Frasier Crane, or explain laws like Elle from Legally Blonde. One person figured out how to get Charlie to spit out his initial instructions; these were patched later in the day to try and stop some of the shenanigans.

From those instructions, it’s clear that this was supposed to be educational, rather than some sort of macabre experiment. However, Charlie didn’t do a great job here, either. As with any large language model, Charlie had no sense of objective truth. He routinely spat out incorrect facts regarding the war, and regularly contradicted himself.

Generally, any plan that includes the words “impersonate a veteran” is a foolhardy one at best. Throwing a machine-generated portrait and a largely uncontrolled AI into the mix didn’t help things. Regardless, the State Library has left the “Virtual Veterans” experience up at the time of writing.

The problem with AI is that it’s not a magic box that gets things right all the time. It never has been. As long as organizations keep putting AI to use in ways like this, the same story will keep playing out.

Dump A Code Repository As A Text File, For Easier Sharing With Chatbots

Some LLMs (Large Language Models) can act as useful programming assistants when provided with a project’s source code, but experimenting with this can get a little tricky if the chatbot has no way to download from the internet. In such cases, the code must be provided by either pasting it into the prompt or uploading a file manually. That’s acceptable for simple things, but for more complex projects, it gets awkward quickly.

To make this easier, [Eric Hartford] created github2file, a Python script that outputs a single text file containing the combined source code of a specified repository. This text file can be uploaded (or its contents pasted into the prompt) making it much easier to share code with chatbots.
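
The underlying idea is simple enough to sketch in a few lines of Python. To be clear, this isn’t [Eric Hartford]’s actual script, just an illustration of the core approach using a local checkout: walk the tree, skip anything that isn’t source code, and concatenate the rest with a header marking where each file begins:

import os
import sys

# Extensions worth keeping; binaries, images, and the like are skipped.
SOURCE_EXTENSIONS = {".py", ".c", ".h", ".cpp", ".js", ".ts", ".rs", ".go", ".md"}

def dump_repo(repo_dir, output_path):
    """Concatenate every source file under repo_dir into one text file,
    with a header line marking where each file begins."""
    with open(output_path, "w", encoding="utf-8") as out:
        for root, dirs, files in os.walk(repo_dir):
            dirs[:] = [d for d in dirs if d != ".git"]  # skip git internals
            for name in sorted(files):
                if os.path.splitext(name)[1] not in SOURCE_EXTENSIONS:
                    continue
                rel = os.path.relpath(os.path.join(root, name), repo_dir)
                out.write(f"\n# ===== File: {rel} =====\n")
                with open(os.path.join(root, name), encoding="utf-8",
                          errors="replace") as src:
                    out.write(src.read())

if __name__ == "__main__":
    dump_repo(sys.argv[1], sys.argv[2])  # usage: python dump_repo.py <repo_dir> <out.txt>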

Continue reading “Dump A Code Repository As A Text File, For Easier Sharing With Chatbots”

Air Canada’s Chatbot: Why RAG Is Better Than An LLM For Facts

Recently Air Canada was in the news regarding the outcome of Moffatt v. Air Canada, in which Air Canada was forced to pay restitution to Mr. Moffatt after he had been disadvantaged by advice from a chatbot on the Air Canada website regarding the airline’s bereavement fare policy. When Mr. Moffatt inquired whether he could apply for the bereavement fare after returning from the flight, the chatbot said that he could, even though the link it provided to the official bereavement policy page said otherwise.

This is by far the most interesting aspect of the case, as it raises many questions about the technical details of the chatbot which Air Canada had deployed on its website. Since the basic idea behind such a chatbot is that it draws on a curated source of (company) documentation and policies, the assumption made by many is that this particular chatbot instead used an LLM trained on more generic information, possibly sourced from many other public-facing policy pages.

Whatever the case may be, chatbots are increasingly used by companies, but instead of relying on a pure LLM they use what is called RAG: retrieval augmented generation. Rather than trusting whatever the language model absorbed during training, RAG first fetches the relevant passages from a vetted source of documentation and feeds them to the model along with the user’s question, grounding the answer in the actual policy text.
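
A toy sketch makes the pattern clear. None of this reflects Air Canada’s actual system: the document list, the crude word-overlap retriever, and the stubbed-out ask_llm() are all invented for illustration. The point is the shape of the pipeline, retrieve vetted text first, then make the model answer from it:

POLICY_DOCS = [  # stand-ins for a vetted documentation source
    "Bereavement fares must be requested and approved before travel begins.",
    "Refund requests must be submitted within 90 days of ticket purchase.",
    "Checked baggage allowance is two bags on international flights.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by crude word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def ask_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call; a deployment would send the
    # prompt to whatever model it uses and return the completion.
    return "(model response to: " + prompt.splitlines()[-1] + ")"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, POLICY_DOCS))
    prompt = ("Answer using ONLY the policy text below. "
              "If it does not cover the question, say so.\n"
              f"Policy: {context}\n"
              f"Question: {question}")
    return ask_llm(prompt)

print(answer("Can I apply for a bereavement fare after my flight?"))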

Continue reading “Air Canada’s Chatbot: Why RAG Is Better Than An LLM For Facts”


Hackaday Links: September 10, 2023

Most of us probably have a vision of how “The Robots” will eventually rise up and deal humanity out of the game. We’ve all seen that movie, of course, and know exactly what will happen when Skynet becomes self-aware. But for those of you thinking we’ll get off relatively easy with a quick nuclear armageddon, we’re sorry to bear the news that AI seems to have other plans for us, at least if this report of dodgy AI-generated mushroom foraging manuals is any indication. It seems that Amazon is filled with publications these days that do a pretty good job of looking like they’re written by human subject matter experts, but are actually written by ChatGPT or similar tools. That may not be such a big deal when the subject matter concerns stamp collecting or needlepoint, but when it concerns differentiating edible fungi from toxic ones, that’s a different matter. The classic example is the Death Cap mushroom (Amanita phalloides), which varies quite a bit in identifying characteristics like color and size, enough so that it’s often tough for expert mycologists to tell it apart from its edible cousins. Trouble is, half a Death Cap contains enough toxin to kill an adult human, a margin for error far too slim to trust to an AI-generated foraging manual. So maybe that’s AI’s grand plan for humanity — just give us all really bad advice and let Darwin take care of the rest.

Continue reading “Hackaday Links: September 10, 2023”