2View: The Self-Erasing VHS Tape With Paperclip Hack

The back of the 2View VHS box. The instructions are all in Dutch, the language of its sole launch market. (Credit: Techmoan, YouTube)

Over the decades the video and music industries have tried a wide range of ways to sell consumers ‘cheaper’ versions of films and albums with playback limited in some way. One of the most fascinating is the 2View, recently featured by [Matt] over at Techmoan on YouTube. This is a VHS tape that works in standard VHS players and offers all the goodness VHS has to give, like up to 576 lines of PAL video, hard-coded ads, and burned-in subtitles, but is restricted to just two plays. After the second playback, rewinding triggers the self-erase mechanism, leaving you with nothing but a blank VHS tape to use for your own recordings.

As a form of analog restrictions management (ARM) it’s pretty simple in how it works, with [Matt] taking the now thankfully erased Coyote Ugly tape apart to demonstrate the mechanism inside. It consists of effectively just two parts: a plastic, spring-loaded follower that rides against one of the tape spools to track how much tape, and thus how many minutes, have been watched, and a second arm carrying a permanent magnet. An inner track in the follower retains the magnet arm until, after the second playback, it is released and comes to rest against the second spool, erasing the tape as it rewinds, after which it catches in a neutral position. This leaves an erased tape that can safely be recorded on again.

Although cheaper than a comparable VHS tape without this limit, 2View was released in 2001, when DVDs were demolishing the VHS market in the Netherlands and elsewhere. Combined with the fact that a simple bent paperclip could be stuck inside to hold the erase arm in place and turn it into a regular VHS tape, this meant it was a desperate attempt that quickly vanished from the market.

Continue reading “2View: The Self-Erasing VHS Tape With Paperclip Hack”

Photo of Ceefax on a CRT television

Ceefax: The Original News On Demand

Long before we had internet newsfeeds or Twitter, Ceefax delivered up-to-the-minute news right to your television screen. Launched by the BBC in 1974, Ceefax was the world’s first teletext service, offering millions of viewers a mix of news, sports, weather, and entertainment on demand. Fast forward 50 years, and the iconic service is being honored with a special exhibition at the Centre for Computing History in Cambridge.

At its peak, Ceefax reached over 22 million users. [Ian Morton-Smith], one of Ceefax’s original journalists, remembers the thrill of breaking stories directly to viewers, bypassing scheduled TV bulletins. The teletext interface, with its limited 80-word entries, taught him to be concise, a skill crucial to news writing even today.

We’ve talked about Ceefax in the past, including in 2022 when we explored a project bringing Ceefax back to life using a Raspberry Pi. Prior to that, we delved into its broader influence on early text-based information systems in a 2021 article.

But Ceefax wasn’t just news; it was part of a broader move toward interactive media that preceded the internet age. Services like Viditel and the French Minitel carried forward the idea of interactive text and graphics on screen.

Dual-Port RAM For A Simple VGA Card

Making microcontrollers produce video has long been a staple of hardware hacking, but as the resolution goes up, it becomes a struggle for less capable silicon. To get higher resolution VGA from an Arduino, [Marcin Chwedczuk] has produced perhaps the most bulletproof solution: creating dual-port RAM with the help of a static RAM chip and a set of 74-series bus transceivers, and letting a hardware VGA interface take care of the display. Yes, it’s not a microcontroller doing VGA, but standalone VGA for microcontrollers.

Dual-port memory is a special type of memory with two interfaces that can independently be used to access the contents. It’s not cheap when bought in integrated form, so seeing someone make a substitute with off-the-shelf parts is certainly worth a second look. The bus transceivers act in effect as bus-width latches, each hanging on to its state while the RAM chip services them in turn. The video card part is relatively straightforward: a set of 74-series chips produce the timings and step through the addresses, and a shift register pushes out simple black-or-white pixel data as a rudimentary video stream. We remember these types of circuits being used back in the days of home-made video terminals, and here in 2024 they still work fine.
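To make the timing side concrete, here’s a quick C++ sketch of what those counter chains compute on every pixel clock. The constants are the standard 640×480@60 VGA timing; the actual values in [Marcin]’s design may differ.

```cpp
#include <cstdint>

// Standard 640x480@60 VGA timing (pixel clock ~25.175 MHz).
// Horizontal: 640 visible + 16 front porch + 96 sync + 48 back porch = 800
// Vertical:   480 visible + 10 front porch +  2 sync + 33 back porch = 525
constexpr int H_VISIBLE = 640, H_FP = 16, H_SYNC = 96, H_TOTAL = 800;
constexpr int V_VISIBLE = 480, V_FP = 10, V_SYNC = 2,  V_TOTAL = 525;

struct VgaState {
    int x = 0, y = 0;                 // the two counter chains
    bool hsync = false, vsync = false, visible = false;
    uint32_t address = 0;             // address driven onto the SRAM bus
};

// One tick of the pixel clock: what the 74-series counters and comparators
// do in hardware, expressed as software.
void tick(VgaState& s) {
    s.hsync   = (s.x >= H_VISIBLE + H_FP) && (s.x < H_VISIBLE + H_FP + H_SYNC);
    s.vsync   = (s.y >= V_VISIBLE + V_FP) && (s.y < V_VISIBLE + V_FP + V_SYNC);
    s.visible = (s.x < H_VISIBLE) && (s.y < V_VISIBLE);
    if (s.visible) ++s.address;       // step through video RAM pixel by pixel
    if (++s.x == H_TOTAL) {
        s.x = 0;
        if (++s.y == V_TOTAL) { s.y = 0; s.address = 0; }  // new frame
    }
}
```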

The display this thing produces isn’t the most impressive picture, but it is VGA, and it does work. We can see this circuit being of interest to plenty of other projects having less capable processing power, in fact we’d say the challenge should lie in how low you can go if all you need is the capacity to talk 74-series logic levels.

Interested in 74-series VGA cards? This isn’t the first we’ve seen.

If You Give A Dev A Tricked Out Xbox, They’ll Patch Halo 2

[Ryan Miceli] had spent a few years poring over and reverse-engineering Halo 2 when a friend asked for a favor. His friend created an improved Xbox with significant overclocks, RAM upgrades, BIOS hacks, and a processor swap. The goal was simple: patch the hardcoded maximum resolution from 480p to 720p and maybe even 1080p. With double the CPU clock speed but only a 15% overclock on the GPU, [Ryan] got to work.

Step one was to increase the size of the DirectX framebuffers. Increasing the output resolution introduced severe graphical glitches and rendering bugs. The game reuses the framebuffers multiple times as memory views, and each view encodes a header at the top with helpful information like width, height, and tiling. After patching that, [Ryan] had something more legible, but some models weren’t loading (particularly the water on the title screen). The answer was the texture accumulation layer. The Xbox has a hardware limit of four texture samples per shader pass, so sampling more than four requires a buffer the size of the render resolution to accumulate the results across passes. Trying to boot the game then resulted in an out-of-memory crash. The Xbox [Ryan] was working on had been upgraded with an additional 64MB of RAM, but the memory allocator in Halo 2 wasn’t taking advantage of it. Yet.
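The actual header layout is [Ryan]’s reverse-engineered detail; this hypothetical C++ struct just illustrates the idea of each memory view carrying its own dimensions and tiling mode, all of which have to agree after a resolution patch:

```cpp
#include <cstdint>

// Hypothetical sketch of the kind of header Halo 2 writes at the top of each
// framebuffer memory view. Field names and layout are illustrative only, not
// the actual reverse-engineered structure.
struct SurfaceHeader {
    uint16_t width;   // render width in pixels (the value patched 640 -> 1280)
    uint16_t height;  // render height in pixels (480 -> 720)
    uint16_t pitch;   // bytes per row; must grow along with the new width
    uint8_t  tiling;  // GPU tiling mode; a mismatch shows up as scrambled output
    uint8_t  format;  // pixel format
};

// Every view that aliases the same buffer has to be patched consistently, or
// you get the kind of garbled rendering [Ryan] saw at first.
void patch_header(SurfaceHeader& h, uint16_t w, uint16_t ht, uint16_t pitch) {
    h.width = w; h.height = ht; h.pitch = pitch;
}
```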

To see where the memory was going, [Ryan] wrote a new tool called XboxImageGrabber to show where memory was allocated and by whom. Most games make a few substantial initial allocations from the native allocator, then hand everything over to a custom allocator tuned for the game. However, the extra 64MB of RAM was the debug RAM fitted to dev consoles, which the GPU can’t properly access, and the Xbox kernel sits between the lower 64MB and the upper. It became an exercise in patching the allocator to work with two blobs of memory instead of one contiguous one, moving runtime data into the upper 64MB while keeping video allocations in the lower. Ultimately, [Ryan] found it easier to patch the kernel to allow GPU-usable allocations in the upper 64MB as well. Running the game at 720p resulted in an only semi-playable framerate, dropping to 10fps in some scenes.
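As a rough illustration of the two-blob approach (the addresses and the bump-allocation scheme here are invented for the sketch, not Halo 2’s real allocator):

```cpp
#include <cstddef>
#include <cstdint>

// Sketch: carve allocations from two separate regions instead of one
// contiguous pool, keeping GPU-visible allocations in the lower RAM.
struct Region {
    uintptr_t base, cursor, end;
    void* alloc(size_t size, size_t align) {
        uintptr_t p = (cursor + align - 1) & ~(uintptr_t)(align - 1);
        if (p + size > end) return nullptr;   // region exhausted
        cursor = p + size;
        return reinterpret_cast<void*>(p);
    }
};

// Illustrative addresses only; the kernel sits between the two blobs.
Region lower { 0x00100000, 0x00100000, 0x04000000 };  // GPU-visible 64MB
Region upper { 0x04100000, 0x04100000, 0x08000000 };  // extra debug 64MB

void* game_alloc(size_t size, size_t align, bool gpu_visible) {
    if (gpu_visible) return lower.alloc(size, align);  // video stays low
    if (void* p = upper.alloc(size, align)) return p;  // runtime data goes high
    return lower.alloc(size, align);                   // fall back when full
}
```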

After some initial tests, [Ryan] concluded that the bottleneck wasn’t the GPU or the CPU but the swap chain. Halo 2 turns VSync on by default, meaning it has to wait for a vertical blanking period before swapping between its two framebuffers. A simple tweak is to add a third framebuffer. The average FPS jumped 10%, and the GPU became the next bottleneck to tweak. With a light GPU overclock, the game got very close to 30fps. Luckily for [Ryan], no BIOS tweak was needed, as the GPU clock hardware can be mapped and adjusted as MMIO. After reverse-engineering a debugging feature that visualizes cache evictions, [Ryan] tuned the texture and geometry caches to minimize the pop-in the original game was infamous for.
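In stock Direct3D 8 terms (the Xbox ran a modified D3D8, so the real patch looks different), triple buffering is just a matter of requesting an extra back buffer, something like:

```cpp
#include <d3d8.h>

// Sketch only: with two back buffers plus the front buffer, the GPU can keep
// rendering into the spare buffer instead of stalling until the vertical
// blank releases the one just presented -- VSync stays on throughout.
void setup_present_params(D3DPRESENT_PARAMETERS& pp) {
    pp.BackBufferCount                 = 2;  // front + 2 back = triple buffering
    pp.SwapEffect                      = D3DSWAPEFFECT_DISCARD;
    pp.FullScreen_PresentationInterval = D3DPRESENT_INTERVAL_ONE;  // wait for VBlank
}
```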

Overall, it’s an incredible hack with months of hard work behind it. The code for the patch is on GitHub, and there’s a video after the break comparing the patched and unpatched games. If you still need more Halo in your life, why not make yourself a realistic battle rifle from the game?

Continue reading “If You Give A Dev A Tricked Out Xbox, They’ll Patch Halo 2”

Using OpenCV To Catch A Hungry Thief

Rory, the star of the show

[Scott] has a neat little closet in his carport that acts as a shelter and rest area for the family’s outdoor cat, Rory. She has a bed, food, and water, so when she’s outside on an adventure she has a place to eat, drink, and nap in case her humans aren’t available to let her back in. However, [Scott] recently noticed they seemed to be going through a lot of cat food and couldn’t figure out where it was going. Kitty wasn’t growing a potbelly, so something else had to be eating the food.

So [Scott] rolled up his sleeves and hacked together an OpenCV project with a FLIR Boson thermal camera to try and catch the thief. To cut down the amount of footage to go through, the system only captures video when it detects movement or a large change in the scene. It then takes snapshots, timestamps them, and optionally records a feed of the video. [Scott] originally wrote the system in Python, but it couldn’t keep up and dropped frames whenever motion was detected. Eventually he rewrote the prototype in C++, which of course resulted in much better performance!
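Here’s a minimal sketch of the motion-gated capture idea in C++ with OpenCV; this is our reconstruction of the technique, not [Scott]’s actual code, and the threshold values are placeholders to tune:

```cpp
#include <opencv2/opencv.hpp>
#include <ctime>

// Compare each frame against the previous one and only save a timestamped
// snapshot when enough pixels have changed.
int main() {
    cv::VideoCapture cap(0);             // the Boson enumerates as a UVC device
    cv::Mat frame, gray, prev, diff, mask;
    const double kMotionFraction = 0.01; // tune: fraction of pixels that must change

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);  // suppress sensor noise
        if (!prev.empty()) {
            cv::absdiff(gray, prev, diff);                // per-pixel change
            cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);
            if (cv::countNonZero(mask) > kMotionFraction * mask.total()) {
                // Enough motion: write a timestamped snapshot to disk.
                auto name = cv::format("snap_%ld.png", (long)std::time(nullptr));
                cv::imwrite(name, frame);
            }
        }
        prev = gray.clone();
    }
}
```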

Continue reading “Using OpenCV To Catch A Hungry Thief”

Giving People An Owl-like Visual Field Via VR Feels Surprisingly Natural

We love hearing about a good experiment, and here’s a pretty neat one: researchers used a VR headset, an off-the-shelf VR360 camera, and some custom software to glue them together. The result, Owl-Vision, squashes a full 360° of undistorted horizontal visual perception into 90° of neck travel to either side. One can see all around oneself without needing to physically turn one’s head any further than is natural.
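At its core that’s a yaw gain of two: turning your head 90° swings the virtual viewpoint 180°. A minimal sketch of that remapping (the gain and the wrapping are our assumptions about the approach, not the paper’s exact method):

```cpp
#include <cmath>

// Amplify the user's head yaw so that 90 degrees of neck travel pulls imagery
// from 180 degrees around: a yaw gain of 2 into the 360-degree camera feed.
constexpr float kYawGain = 2.0f;  // 360 deg of scene / 180 deg of head travel

// Returns the yaw (degrees, wrapped to [-180, 180)) at which to sample the
// equirectangular VR360 frame for a given physical head yaw.
float virtual_yaw(float head_yaw_deg) {
    float y = std::fmod(kYawGain * head_yaw_deg + 180.0f, 360.0f);
    if (y < 0) y += 360.0f;
    return y - 180.0f;
}
```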

It’s still a work in progress, and there’s currently no free way to access the paper, but the demonstration video at that link (also embedded below) gives a solid overview of what’s going on.

Continue reading “Giving People An Owl-like Visual Field Via VR Feels Surprisingly Natural”

EMO: Alibaba’s Diffusion Model-Based Talking Portrait Generator

Alibaba’s EMO (or Emote Portrait Alive) framework is a recent entry in a series of attempts to generate a talking head using existing audio (spoken word or vocals) and a reference portrait image as inputs. At its core it uses a diffusion model trained on 250 hours of video footage and over 150 million images. Unlike previous attempts, it adds what the researchers call a speed controller and a face region controller, which serve to stabilize the generated frames, along with an additional module that stops the diffusion model from outputting frames that stray too far from the reference image.

The related paper by [Linrui Tian] and colleagues shows a number of comparisons between EMO and other frameworks, claiming significant improvements over them. The researchers provide a number of examples of talking and singing heads generated with the framework, which gives some idea of what are probably the ‘best case’ outputs. In some examples, like [Leslie Cheung Kwok Wing] singing ‘Unconditional‘, big glitches are obvious and there’s a definite mismatch between the vocal track and the facial motions. Despite this, it’s quite impressive, especially the fairly realistic movement of the head, including the blinking of the eyes.

Meanwhile some seem extremely impressed, such as [Matthew Berman] in a recent video on EMO, in which he states that Alibaba releasing this framework to the public might be ‘too dangerous’. The level-headed folks over at PetaPixel, however, note the obvious visual imperfections that are a dead giveaway for this kind of generative technology. Much like other diffusion model-based generators, EMO would seem to be still very much stuck in the uncanny valley, with no clear path to becoming a real human yet.

Continue reading “EMO: Alibaba’s Diffusion Model-Based Talking Portrait Generator”