Peering Into The Black Box Of Large Language Models

Large Language Models (LLMs) can produce extremely human-like communication, but their inner workings remain something of a mystery. Not a mystery in the sense that we don’t know how an LLM works, but in the sense that the exact process by which a particular input becomes a particular output is a black box.

This “black box” trait is common to neural networks in general, and LLMs are very deep neural networks. It is not really possible to explain precisely why a specific input produces a particular output, and not something else.

Why? Because neural networks are neither databases nor lookup tables. In a neural network, the discrete activation of individual neurons cannot be meaningfully mapped to specific concepts or words. The connections are complex, numerous, and multidimensional to the point that trying to tease out their relationships in any straightforward way simply does not make sense.

Continue reading “Peering Into The Black Box Of Large Language Models”

A DIY Proximity Sensor, Using Just Scrap Parts And Software

[mircemk] shows how to create a simple non-contact proximity sensor using little more than an Arduino Nano board and a convenient software library intended to measure the value of capacitors.

The prototype has a threshold set via potentiometer for convenience.

The basic idea is that a capacitor’s capacitance can be measured with just two microcontroller pins and the right software. Make an open-style capacitor from a few scrap materials, monitor its value for changes, and you can detect when anything approaches closely enough to push the reading past a given threshold. That is all a proximity sensor really needs to do.

The sensor shown here is essentially two plates mounted side-by-side, read by an Arduino Nano running the Capacitor library, which needs just two pins: one digital and one analog.

As configured, [mircemk]’s sensor reads roughly thirty picofarads, and that value decreases when something approaches whose dielectric constant differs enough from that of the surrounding air. Wood and plastic are ignored, but an approaching hand is easily detected, and liquid water registers with similar ease, whether pooled or in a filled bottle.
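For a sense of how little code this takes, below is a minimal Arduino sketch in the spirit of [mircemk]’s build. It assumes the Capacitor library’s two-pin constructor and its Measure() call returning picofarads; the pin assignments, the potentiometer-to-threshold mapping, the LED indicator, and the baseline-averaging logic are our own illustration rather than [mircemk]’s exact firmware.

```cpp
// Illustrative capacitive proximity sketch (not [mircemk]'s exact firmware).
// Assumes the Arduino "Capacitor" library: the constructor takes a digital and
// an analog pin, and Measure() returns the capacitance in picofarads.
#include <Capacitor.h>

Capacitor sensePlates(7, A2);   // the open "capacitor": two plates on one digital and one analog pin
const int potPin = A0;          // potentiometer that sets the detection threshold (assumed wiring)
const int ledPin = 13;          // onboard LED as a simple "something is near" indicator

float baseline = 0.0;

void setup() {
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);

  // Average a handful of readings at power-up to establish the "nothing nearby" baseline.
  for (int i = 0; i < 16; i++) {
    baseline += sensePlates.Measure();
    delay(10);
  }
  baseline /= 16.0;
}

void loop() {
  float picofarads = sensePlates.Measure();

  // Map the pot reading (0-1023) onto a 0-20 pF allowed deviation from the baseline.
  float threshold = analogRead(potPin) * (20.0 / 1023.0);

  // Anything that shifts the reading past the threshold counts as "nearby".
  bool nearby = fabs(picofarads - baseline) > threshold;
  digitalWrite(ledPin, nearby ? HIGH : LOW);

  Serial.print(picofarads);
  Serial.println(" pF");
  delay(100);
}
```

Checking deviation from a baseline sidesteps having to know in advance whether a given object will pull the reading up or down; in practice one would tune the pot until wood and plastic stay below the threshold and a hand does not.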

We’ve also seen similar principles used to turn a spring into an elegant hidden touch sensor that works through an enclosure’s wall, so the next time a project calls for a proximity or touch sensor, reaching for the junk box might get you where you need to go. Watch [mircemk]’s sensor in action in the video, just below the page break.

Continue reading “A DIY Proximity Sensor, Using Just Scrap Parts And Software”

Bats Can No Longer Haunt Apple VR Headsets Via Web Exploit

Bug reporting doesn’t usually have a lot of visuals. Not so with the visionOS bug [Ryan Pickren] found, which fills the user’s surroundings with screeching bats after they visit a malicious website. Even better, closing the browser doesn’t get rid of them! Better still? It doesn’t need to be bats; it could be spiders. Fun!

The bug has been fixed, but here’s how it worked: the Safari browser build for visionOS allowed a malicious website to fill the user’s 3D space with animated objects without interaction or permission. The code to trigger this is remarkably succinct, and it’s actually a new twist on an old feature: Apple AR Quick Look, an HTML-based mechanism for rendering 3D augmented reality content in Safari.

How about spiders, instead?

Leveraging this old feature is what lets an untrusted website launch an arbitrary number of animated 3D objects — complete with sound — into a user’s virtual space without any interaction from the user whatsoever. The icing on the cake is that Quick Look is a separate process, so closing Safari doesn’t get rid of the pests.

Providing immersive 3D via a web browser is a valuable way to deliver interactive content on both desktops and VR headsets; a good example is the fantastic virtual BBC Micro, which uses WebXR. But on the Apple Vision Pro the user is always supposed to be involved, and there are privacy boundaries that corral such content. Content launching itself into a user’s space with no interaction whatsoever is certainly not intended behavior.

The final interesting bit about this bug (or loophole) is that, in a way, it defies easy classification and highlights a new sort of issue. While it seems obvious from a user experience and interface perspective that a random website spawning screeching crawlies into one’s personal space is not ideal, is this a denial-of-service issue? A privilege escalation that technically isn’t? It’s certainly unexpected behavior, but that doesn’t really capture the potential psychological impact such bugs can have. Perhaps the invasion of personal space and user boundaries will become a quantifiable aspect of bugs on these new platforms. What fun.

Torment Poor Milton With Your Best Pixel Art

One of the great things about new tech tools is just having fun with them, like embracing your inner trickster god to mess with ‘Milton’, an AI trapped in an empty room.

“Milton is trapped in a room” is a pixel-art game with a simple premise: use a basic paint interface to add objects to the room, then watch and listen to Milton respond to them. That’s it? That’s it. The code is available in the GitHub repository, but there’s also a link to play it live without any kind of signup. Give it a try if you have a few spare minutes.

Under the hood, the basic loop is to let the user add something to the room, send the picture of the room (with its new contents) off for image recognition, then get Milton’s reaction to it. Milton is equal parts annoyed and jumpy, and his speech and reactions reflect this.
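The actual game runs in the browser, but the loop itself is simple enough to sketch in a few lines. Here is a toy C++ rendition purely for illustration: describeImage(), reactTo(), and the persona string are made-up stand-ins for the real image-recognition and Open Souls calls, showing only the shape of the flow.

```cpp
// Toy sketch of the game loop described above. The helpers are placeholder stubs,
// not the real project's image-recognition or Open Souls API calls.
#include <iostream>
#include <string>

struct RoomImage { std::string pixels; };  // stand-in for the painted canvas

// Stand-in: the real game sends the updated room image off for recognition.
std::string describeImage(const RoomImage&) { return "a crudely drawn spider"; }

// Stand-in: the real game asks an LLM, primed with Milton's persona, to react in character.
std::string reactTo(const std::string& persona, const std::string& scene) {
  return persona + " recoils: \"Why would you put " + scene + " in here?\"";
}

int main() {
  const std::string persona = "Milton (equal parts annoyed and jumpy)";
  RoomImage room{"..."};                          // 1. the user paints something into the room
  std::string scene = describeImage(room);        // 2. image recognition on the new contents
  std::cout << reactTo(persona, scene) << "\n";   // 3. Milton's reaction, then wait for the next doodle
  return 0;
}
```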

The game is a bit of a concept demo for Open Souls, whose “thing” is providing AIs with far more personality and relatable behaviors than one typically expects from large language models. Maybe this is just what’s needed for AI opponents in things like the putting game of Connect Fore! to level up their trash talk.

One-handed PS-OHK Keyboard Doesn’t Need Chording Or Modifier Keys

Most one-handed keyboards rely on modifier keys or chording (pressing multiple keys in patterns) to stretch the functionality of a single hand’s worth of buttons. [Dylan Turner]’s PS-OHK takes an entirely different approach, instead putting 75 individual keys within reach of a single hand, with a layout designed to be practical as well as easy to get used to.

We can’t help but notice Backspace isn’t obvious in the prototype, but it’s also a work in progress.

The main use case of the PS-OHK is for one hand to rest at the keyboard while the other hand works a mouse, both in equal comfort. There is a full complement of special keys (Home, End, Insert, Delete, PgUp, PgDn) as well as function keys F1 through F12, which helps keep things familiar.

As for the rest of the layout, we like the way [Dylan] clearly aimed to preserve the spatial relationships of “landmark” keys such as ESC, which sits at the top-left corner of its group. Similarly, the arrow keys are grouped together in the expected pattern.

One-handed keyboards usually rely on modifier keys or multi-key chording, so it’s interesting to see work put into a different approach, one that doesn’t require memorizing strange layouts or input patterns.

Want to make your own? The GitHub repository has everything you need. Accommodating the 75 physical keys requires a large PCB, but it’s a fairly straightforward shape and doesn’t have any oddball manufacturing requirements, which means getting it made should be a snap.

Watch SLS 3D Printed Parts Become Printed Circuits

[Ben Krasnow] of the Applied Science channel recently released a video demonstrating his process for reliably embedding copper-plated traces into selective laser sintering (SLS) 3D printed nylon parts, and he shows off a variety of small test boards with functional circuit traces built directly into them.

Here’s how it works: the SLS 3D printer uses a laser to fuse powdered nylon together, layer by layer, to make a plastic part. But [Ben] has added a small amount of a specific catalyst, copper chromite, to the nylon powder, so every print contains it. Copper chromite is pretty much inert until it gets hit by a laser, although not the same kind of laser that sinters the nylon. The result is that after printing, the object is mostly nylon with a small amount of (inert) copper chromite mixed in. That sets the stage for what comes next.

Continue reading “Watch SLS 3D Printed Parts Become Printed Circuits”

A Closer Peek At The Frame AR Glasses

The Frame AR glasses by Brilliant Labs, which contain a small display, are an entirely different approach to hacker-accessible and affordable AR glasses. [Karl Guttag] has shared his thoughts and analysis of how the Frame glasses work and are constructed, as usual drawing on his long years of industry experience with consumer display devices.

It’s often said that in engineering, everything is a tradeoff. This is especially apparent in products like near-eye displays, and [Karl] discusses the Frame glasses’ tradeoffs while comparing and contrasting them with the choices other designs have made. He delves into the optical architecture, explaining its impact on the user experience and the challenges that different optical designs bring.

The Frame glasses are Brilliant Labs’ second product, their first being the Monocle, an unusual and inventive self-contained clip-on unit. Monocle’s hacker-accessible design and documentation really impressed us, and there’s a pretty clear lineage from Monocle to Frame as products. Frame is essentially a pair of glasses that incorporates a Monocle into one of the lenses, aiming to act as a set of AI-empowered prescription glasses that include a small display.

We recommend reading the entire article for a full roundup, but the short version is that many of Frame’s design choices appear to prioritize a functional, low-cost, lightweight device built from non-specialized and economical hardware. That brings some disadvantages, such as a visible “eye glow” from the front owing to the display architecture, a visible seam between optical elements, and limited display brightness from the optical setup. That being said, the glasses aim to be hacker-accessible and open source, and are reasonably priced at 349 USD. If Monocle intrigued you, Frame seems to have many of the same bones.