TRS-80 Gains Multiple Monitor Support, And High-Resolution Graphics

To call [Glen Kleinschmidt] a vintage computing enthusiast would be an understatement. Who else would add the ability to control and address multiple VGA monitors to a rack-mounted TRS-80 Model 1? Multiple 64-color 640×480 monitors might not be considered particularly amazing by today’s standards, but for 70s-era computing, it’s a different story.

Drawing this sin(x)/x ripple surface can be done in only 17 lines of BASIC.

How does a TRS-80 even manage to output anything useful to these monitors? [Glen] wrote his own low-level driver in machine code to handle that. The driver even has useful routines that are callable from within BASIC, meaning that programs written on the TRS-80 are granted powerful drawing abilities. Oh, and did we mention that the VGA graphics cards themselves were designed and made by [Glen]?
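
For a sense of what that kind of drawing routine computes, here's a rough modern sketch of the same hidden-line sin(x)/x ripple plot, written in Python with NumPy and Matplotlib rather than TRS-80 BASIC. The resolution, scale factors, and slice count are arbitrary choices of ours, not [Glen]'s.

```python
import numpy as np
import matplotlib.pyplot as plt

W = 320                      # horizontal resolution to emulate (arbitrary)
ROWS = 40                    # number of surface slices, drawn near to far
horizon = np.full(W, -1e9)   # highest screen point drawn so far, per column

xs = np.linspace(-12, 12, W)
for i in range(ROWS):
    y = -12 + 24 * i / (ROWS - 1)            # position of this slice along y
    r = np.hypot(xs, y) + 1e-9               # avoid division by zero at the origin
    z = np.sin(r) / r                        # the sin(r)/r ripple itself
    screen = 40 * z + 2.0 * i                # crude oblique projection: lift each slice
    visible = screen > horizon               # hidden-line removal against earlier slices
    horizon = np.maximum(horizon, screen)
    seg = np.where(visible, screen, np.nan)  # NaN breaks the line where it is hidden
    plt.plot(xs, seg, color="black", linewidth=0.8)

plt.axis("off")
plt.show()
```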

Interested in making your own? [Glen] provides all the resources you’ll need to re-create his work, including machine code drivers and demonstration BASIC programs as downloadable audio files, just as they would have been on original cassette tapes.

Watch things in action in the videos embedded below. The first draws a Land Rover, and the second plots a simple Moiré pattern star. Not bad for 70s-era hardware and 74xx logic!

Continue reading “TRS-80 Gains Multiple Monitor Support, And High-Resolution Graphics”

Truthsayer Uses Facial Recognition To See If You’re Telling The Truth

It’s hard to watch [Mark Zuckerberg]’s 2018 Congressional testimony and not come to the conclusion that he is, at a minimum, quite a bit different than the average person. Of course, having built a multibillion-dollar company that drastically changed everything about the way people communicate is pretty solid evidence of that, but the footage at least made a fun test case for this AI truth-detecting algorithm.

Now, we’re not saying that anyone in these videos was lying, and neither is [Fletcher Heisler]. His algorithm, which analyzes video of a person and uses machine vision to pick up cues that might be associated with the stress of untruthfulness, is far from perfect. But as the first video below shows, it is a lot of fun to see it at work. The idea is to capture data like pulse rate, gaze direction, blink rate, mouth posture, and even hand position and use them as a proxy for lying. The second video, from [Fletcher]’s recent DEFCON talk, has much more detail.

The key to all this is finding human faces in a video — a task that seemed to fail suspiciously often when [Zuck] was on camera — using OpenCV and MediaPipe’s Face Mesh. The subject’s pulse is detected by watching for subtle changes in the color of their cheeks as blood flows through them, a trick we’ve heard about plenty of times but never before seen presented so clearly and executed so simply. Gaze direction, blinking, and lip compression are fairly easy to detect too. [Fletcher] also threw in the FER library for facial expression recognition, to get an idea of the subject’s mood. Together, these cues form a rough estimate of the subject’s truthiness, which [Fletcher] is quick to point out is just for entertainment purposes and totally shouldn’t be used on your colleagues on the next Zoom call.
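
To give a flavor of the approach, here's a minimal Python sketch (ours, not [Fletcher]'s code) that uses OpenCV and MediaPipe Face Mesh to track a cheek patch and log its average green value, the raw signal a pulse estimate would be derived from. The landmark index and patch size are guesses for illustration, not values from Truthsayer.

```python
import cv2
import mediapipe as mp

CHEEK_LANDMARK = 205  # roughly on the cheek; the exact index is our assumption

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True)
cap = cv2.VideoCapture(0)
greens = []  # mean green value of the cheek patch, frame by frame

while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        lm = results.multi_face_landmarks[0].landmark[CHEEK_LANDMARK]
        h, w, _ = frame.shape
        cx, cy = int(lm.x * w), int(lm.y * h)
        patch = frame[cy - 10:cy + 10, cx - 10:cx + 10]
        if patch.size:
            greens.append(patch[:, :, 1].mean())  # green channel tracks blood flow best
    cv2.imshow("truthsayer-ish", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
# A real pulse estimate would band-pass filter this signal (~0.8-3 Hz) and take the
# dominant frequency, e.g. via an FFT; here we just report how much was collected.
print(len(greens), "samples collected")
```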

Does [Fletcher]’s facial mesh look familiar? It should, since we once watched him twitch his way through a coding interview.

Continue reading “Truthsayer Uses Facial Recognition To See If You’re Telling The Truth”

This Simple Media Player Will Inspire Beginners And Invite Experimentation

While it would have been considered science-fiction just a few decades ago, the ability to watch virtually any movie or TV show on a little slab that fits in your pocket is today no big deal. But for an electronics beginner, being able to put together a pocketable video player like this one would be quite exciting, and might even serve as a gateway into the larger world of electronics design.

For inspiration, [Alex] from Super Make Something on YouTube looked to the Rickrolling keychain media players we featured back in January. His player is quite a bit larger and more capable, with a PCB design that allows the player to be built in multiple configurations, from audio-only to full video and a LiPo battery. The guts of the player center around an ESP32 module, with an audio amp and speakers plus a 1.8″ LCD screen with SD card reader for storing media files. Add in a few controls and switches and a little code, and you’ll be playing back media files in a snap. Build info and demo in the video below.

It may be a simple design, but we feel like that’s the whole point. [Alex] has taken pains to make this as approachable a build as possible. All the parts are cheap and easily available, and the skills needed to put it together are minimal — with the possible exception of soldering down the ESP32 module, which lacks castellated edge terminals. For a beginner, getting a usable media player by mixing together just a few modules would be magical, and the fact that it’s still pretty hackable afterward is just icing on the cake.

Continue reading “This Simple Media Player Will Inspire Beginners And Invite Experimentation”

Recreating A Camera Shot

People rolling off shields and spears clashing against swords while the camera zooms wildly in and out are what make the action sequences in the movie 300 so iconic. Unfortunately, achieving this effect wasn’t particularly easy. Three cameras were rolling, each with a different lens (100mm, 50mm, and 21mm) to capture a different view of the same scene. Because the shots are synchronized, you can cut dramatically between the three cameras in post-production. The folks over at [Corridor Crew] wanted to recreate the effect, but rather than build a custom mount to hold three expensive cameras, they 3D printed a custom mount to hold three costly smartphones.

While most phones have three cameras on the back, most can’t shoot slo-mo from all of them simultaneously, so a rig to hold three separate phones was needed. The first design was simple: just brackets to hold the phones. It was nice and sturdy, but getting the phones in and out wasn’t easy, and getting to the record button was tricky. iPhones, however, have that handy little magnetic ring on the back, and after a few iterations on the design and some printer issues, they had a magnetic bracket that worked pretty well. Since each camera has optical image stabilization, it’s easy for the lenses to drift out of alignment, which can mar the shot, but they were able to largely cover up the effect in post. With a working prototype, the only thing left to do was slice a bunch of piñatas in slow motion to a thrumming soundtrack.

We love seeing exciting camera setups and the iteration it takes to find something that works. This dual-camera setup has a very different goal and tries to lean into the parallax effect rather than hide it. Video after the break.
Continue reading “Recreating A Camera Shot”

Side-by-side of upscaling in the AGI engine

Upscaling The Sierras

If you played many games from the mid-80s through the 90s, you might remember the iconic graphics of Sierra On-Line’s adventure games. They were brightly colored (16 colors) and dynamic, with some sense of depth. To pay homage, [eviltrout] worked to upscale the images. The scenes were rendered at 160×200 in 16 colors and then stretched, but storing every background as a bitmap, even at only 4 bits per pixel (16 kB per screen), would have taken all the storage available on the floppy disk. The engineers on the game decided instead to take a vector approach to a raster problem.

When [eviltrout] set out to upscale the backgrounds, he started by writing some code to extract the draw commands from the engine of the game, known as the Adventure Game Interpreter (AGI). Comparing the vector commands to equivalent PNG versions at the best compression, the AGI vector versions were around half the size. Not bad for a couple of game developers in the 80s. Since it’s all vector commands under the hood, it should be relatively simple to draw them at a much higher resolution. At least, that’s what he thought. The first issue was with flood fills: since the canvas is larger, gaps open up between lines, and the flood escapes. A few approaches were tried, such as using a low-resolution reference and marching squares, but neither was satisfactory. Eventually, [eviltrout] expanded flood fills and used thicker lines. He also rendered to a lower resolution first and connected neighboring lines of the same color. Finally, he used ImageMagick to denoise white specks in the output.
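
To see why flood fills escape, here's a minimal Python sketch (ours, not [eviltrout]'s code): a plain 4-connected fill that is held back by a solid one-pixel wall leaks straight through as soon as the wall picks up gaps, which is exactly what happens when thin vector lines are redrawn on a larger canvas.

```python
from collections import deque

def flood_fill(canvas, x, y, new_color):
    """Plain 4-connected flood fill, the basic operation AGI picture commands rely on."""
    old = canvas[y][x]
    if old == new_color:
        return
    todo = deque([(x, y)])
    while todo:
        cx, cy = todo.popleft()
        if 0 <= cy < len(canvas) and 0 <= cx < len(canvas[0]) and canvas[cy][cx] == old:
            canvas[cy][cx] = new_color
            todo.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])

def leaked(canvas):
    # Did the fill (color 2) get past the horizontal wall on row 3?
    return any(cell == 2 for row in canvas[4:] for cell in row)

# Solid 1-pixel wall: a fill started at (0, 0) stays on its own side.
sealed = [[0] * 8 for _ in range(8)]
for x in range(8):
    sealed[3][x] = 1
flood_fill(sealed, 0, 0, 2)
print(leaked(sealed))   # False

# Draw the wall with gaps (a crude stand-in for thin lines on a bigger canvas)
# and the same fill escapes right through it.
gappy = [[0] * 8 for _ in range(8)]
for x in range(0, 8, 2):
    gappy[3][x] = 1
flood_fill(gappy, 0, 0, 2)
print(leaked(gappy))    # True
```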

We find the effect charming, but some might say it distorts the art into something the artist never intended. Then again, as with all graphical enhancements, some artistic liberties are being taken without the original artist’s involvement. The code is available on GitHub under an MIT license. Video after the break.

Continue reading “Upscaling The Sierras”

See How To Effectively Use A Green Screen In A Limited Space

Virtual green screens are pretty neat, but for quality results, nothing beats the real thing. But what if you have limited space? [Fred Emmott] had about 30 inches behind his desk to work with, and he shares what it took to make a green screen work reliably in that limited space.

Even (and consistently deployable) lighting is even more important than the camera.

When it comes right down to it, the fundamentals of camera work (lighting, angles, and so on) are unchanged, but hanging a green screen only 30 inches behind one’s desk does make it a bit more challenging to dial in the right environment. In addition, [Fred] wanted a solution that could be deployed and packed away without much of a hassle, and without taking up too much storage space. He ended up using a collapsible green screen that can be pulled straight up and out from its container, similar to portable stand-up banners used at trade shows.

As for the camera end of things, [Fred] found that reliable, quality lighting was critically important, even more so than the camera used. For repeatable results, he suggests disabling any automatic features (low-light enhancement, auto white balance, and settings of that nature) and using LED lighting in the ‘daylight’ range for illumination and fill. The key to good green screen results is to light things evenly, and that’s a bit more challenging when working in such a tight space.

To deal with this, [Fred] suggests using lights that can be easily repositioned and putting them as far back from everything as you can. Get the lighting as even as possible, then adjust your software ([Fred] uses OBS Studio) to match. Once that’s dialed in, the whole rig can be set up and torn down with minimal fiddling.
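
As an illustration of why even lighting pays off, here's a generic OpenCV chroma-key sketch (not [Fred]'s OBS configuration): the key is ultimately just a color threshold, and the more uniform the green, the narrower the bounds you can use and the cleaner the mask. The HSV bounds and file names below are placeholders you'd tune to your own setup.

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png")                       # a captured frame in front of the screen
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

lower = np.array([45, 80, 80])                        # lower hue/sat/value bound for "green"
upper = np.array([85, 255, 255])                      # upper bound; evenly lit screens fit a narrow range
mask = cv2.inRange(hsv, lower, upper)                 # 255 where the screen is, 0 where you are
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # clean up speckle

background = cv2.imread("background.png")
background = cv2.resize(background, (frame.shape[1], frame.shape[0]))
composite = np.where(mask[..., None] == 255, background, frame)  # swap screen pixels for the backdrop
cv2.imwrite("composite.png", composite)
```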

Computers sure make all this much easier than it was back in the day, and if you’re curious, here’s a look at how green screens were done before the digital age.

Someone setting down an ArUco tag

Make Your Own Virtual Set

An old adage says that out of cheap, fast, and good, you can pick only two. So if, like [Philip Moss], you’re trying to make a comedy series quickly and on a limited budget, you’ll have to take some shortcuts for it to still be good. One shortcut [Philip] took was to do away with the set and make it all virtual.

If you’ve heard about the production of a certain western-style space cowboy show that uses a virtual set, you probably know what [Philip] did. But for those who haven’t been following along, the idea is to have a massive LED wall and to track where the camera is. By building a 3D set, you can render it to the LED wall so that the perspective is correct for the camera. While a giant LED wall was a little out of budget for [Philip], good old green screen fabric wasn’t. The idea was to set up a large green screen backdrop, put in some props, get some assets online, and film the different shots needed. The camera keeps track of where it is in the virtual room, so things like calculating perspective are easy. They also used large ArUco tags to help Unreal Engine know where objects are: a virtual wall can go right where the actors think there’s a wall, or a virtual table exactly where a real table covered in green cloth sits.
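
For the curious, here's roughly what reading those tags looks like in code. This is a generic OpenCV (4.7+) ArUco sketch with made-up camera intrinsics, marker size, and file names, not the pipeline [Philip] used to feed Unreal.

```python
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

camera_matrix = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])  # assumed intrinsics
dist_coeffs = np.zeros(5)
marker_len = 0.20  # marker edge length in metres (assumed)

frame = cv2.imread("set_frame.png")
corners, ids, _ = detector.detectMarkers(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))

if ids is not None:
    # Marker corners in the marker's own frame: top-left, top-right, bottom-right, bottom-left.
    obj = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                   dtype=np.float32) * marker_len / 2
    for c, marker_id in zip(corners, ids.flatten()):
        # solvePnP gives the marker's pose relative to the camera, which is what an
        # engine like Unreal needs to pin a virtual wall or table to the real prop.
        ok, rvec, tvec = cv2.solvePnP(obj, c.reshape(4, 2).astype(np.float32),
                                      camera_matrix, dist_coeffs)
        if ok:
            print(f"marker {marker_id}: position {tvec.ravel()} (metres, camera frame)")
```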

Initially, the camera was tracked using a Vive tracker and Live Link, though the tracking wasn’t smooth enough while moving to be used outside of static shots. This wasn’t a huge setback, as they could move the camera, start a new shot, and not have to change the set in Unreal or fiddle with compositing. Later on, they switched from the Vive to a RealSense camera and found it much smoother, though it did tend to drift.

The end result, called ‘Age of Outrage’, was pretty darn good. Sure, it’s not perfect, but it doesn’t jump out and scream “rendered set!” the way CGI TV shows did in the ’90s. Not too shabby considering the hardware and software used to create it!