Seiko Had A Smartwatch In 1984

You might think of the smartwatch era as beginning with Apple, relatively recently. Or you might think back to those fancy Timex Datalink models from the 1990s. Seiko can beat them all, though, with its UC-2000 smartwatch, which debuted all the way back in 1984.

The UC-2200 was the bigger of the two docking stations.

The UC-2000 very much looks cutting-edge for its era, and absolutely ancient today. It featured a 4-bit CPU, 2 kilobytes of RAM, and 6 kilobytes of ROM. Display was via a simple 10×4 character LCD in a rectangular form factor, with four buttons along the bottom. Branded as a “personal information processor,” it was intended for use with the UC-2100 dock, which added a full physical QWERTY keyboard that talked to the UC-2000 when the two were docked together. Alternatively, you could go for the UC-2200, which had not only a keyboard but also a thermal printer to boot. Oh, and there were ROM packs for Microsoft BASIC, games, or an English-to-Japanese translator.

What could you do on this thing? Well, it had basic watch functions, so it told the time and acted as a stopwatch and an alarm, of course. But you could also use it to store two memos of up to 1000 characters each, schedule appointments, and do basic calculations.

The one thing this smartwatch was missing? Connectivity. It couldn’t get on the Internet, nor could it snatch data from the ether via radio or any other method. By today’s standards, it wouldn’t qualify as much of a smartwatch at all; it was more of a personal organizer for the wrist. Still, for its day, this thing really was a whole computer that fit on your wrist.

Would you believe we’ve seen the UC-2000 before? In fact, we’ve even seen it hacked to play Tetris! Video of that wonderful feat after the break.
Continue reading “Seiko Had A Smartwatch In 1984”

Remembering Seymour Cray

If you think of supercomputers, it is hard not to think of Seymour Cray. He built giant computers at Control Data Corporation and went on to build the famous Cray supercomputers. While those computers aren’t especially impressive by today’s standards, for their time, they were modern marvels. [Asianometry] has a great history of Cray, starting with his work at ERA, which would, of course, eventually produce the computer known as the UNIVAC 1103.

ERA was bought up by Remington Rand, which eventually became Sperry Rand. Amid internal conflict, some of the ERA staff left to form Control Data Corporation, and Cray went with them. The new company decided to focus on computers for heavy simulation work, such as simulating nuclear tests.

Continue reading “Remembering Seymour Cray”

DME With A Twist Of LimeSDR

Navigating aircraft today isn’t like the old days. No more arrows painted on a barn roof or rotating airway beacons. Now, there are a host of radio navigation aids. GPS, of course, is available. But planes often use VOR to determine a bearing to a known point and DME — distance measuring equipment — to measure the distance to that point. DME operates around 1000 MHz and is little more than a repeater. An airplane sends a pair of pulses and times how long it takes for the DME to repeat them. [Daniel Estévez] has been monitoring these transmissions with a LimeSDR.
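The arithmetic behind that timing is pleasingly simple: subtract the fixed delay the ground transponder inserts before replying (50 µs on standard X-channel DME), halve what’s left for the one-way trip, and multiply by the speed of light. Here’s a minimal sketch of that calculation; it’s our own illustration, not code from [Daniel]’s project:

```cpp
#include <cstdio>

// Illustrative DME slant-range calculation (not from [Daniel]'s project).
// The ground transponder waits a fixed delay before repeating the pulse
// pair; 50 us is the standard X-channel figure.
constexpr double REPLY_DELAY_US = 50.0;
constexpr double C_KM_PER_US    = 0.299792458;  // speed of light, km per microsecond

double slant_range_km(double round_trip_us) {
    // Remove the transponder delay, then halve for the one-way trip.
    double one_way_us = (round_trip_us - REPLY_DELAY_US) / 2.0;
    return one_way_us * C_KM_PER_US;
}

int main() {
    // A 400 us round trip works out to roughly 52 km, about 28 nautical miles.
    printf("%.1f km\n", slant_range_km(400.0));
}
```

Note that this gives slant range, the straight-line distance to the station, not distance over the ground.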

Like most repeaters, the DME transponders listen on one frequency and transmit on another, in this case 63 MHz apart. That separation poses a challenge for SDRs whose instantaneous bandwidth is too narrow to cover both frequencies at once.

Continue reading “DME With A Twist Of LimeSDR”

Playing Rock, Paper, Scissors With A Time-Of-Flight Sensor

You can do all kinds of wonderful things with cameras and image recognition. However, sometimes spatial data is useful, too. As [madmcu] demonstrates, you can use depth data from a time-of-flight sensor for gesture recognition, as seen in this rock-paper-scissors demo.

If you’re unfamiliar with time-of-flight sensors, they’re easy enough to understand. They measure distance by determining the time it takes photons to travel from one place to another. For example, by shooting out light from the sensor and measuring how long it takes to bounce back, the sensor can determine how far away an object is. Take an array of time-of-flight measurements, and you can get simple spatial data for further analysis.
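Under the hood, the range math is just the speed of light and a factor of two for the round trip. A quick sketch of that core arithmetic (our own toy example, not code from the project):

```cpp
#include <cstdio>

// Toy time-of-flight range calculation. Light goes out and bounces back,
// so the object is half the round trip away.
constexpr double C_M_PER_NS = 0.299792458;  // speed of light, metres per nanosecond

double distance_m(double round_trip_ns) {
    return round_trip_ns * C_M_PER_NS / 2.0;
}

int main() {
    // A round trip of about 6.7 ns puts the target roughly a metre away.
    printf("%.2f m\n", distance_m(6.67));
}
```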

The build uses an Arduino Uno R4 Minima, paired with a demo board for the VL53L5CX time-of-flight sensor. The software is developed using NanoEdge AI Studio. In a basic sense, the system uses a machine learning model to classify the data captured by the time-of-flight sensor into gestures matching rock, paper, or scissors—or nothing, if no hand is present. If [madmcu]’s tutorial isn’t enough for you, you can take a look at the original version from STMicroelectronics, too.
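We can’t see inside the model NanoEdge AI Studio generates, but the shape of the problem is easy to sketch: 64 distance readings in (the VL53L5CX reports an 8×8 grid of zones) and one of four labels out. As a stand-in, here’s a toy nearest-centroid classifier; the structure, names, and placeholder data are our own illustration, not what the tool actually exports:

```cpp
#include <array>
#include <cfloat>
#include <cstdio>

// Toy stand-in for the trained gesture model: nearest-centroid matching
// over a flattened 8x8 depth frame. The centroid values in a real system
// would come from labelled training frames; here they are placeholders.
constexpr int ZONES = 64;  // the VL53L5CX reports an 8x8 grid of distances
using Frame = std::array<float, ZONES>;

struct Gesture {
    const char* name;
    Frame centroid;  // mean training frame for this class (placeholder data)
};

const char* classify(const Frame& frame, const Gesture* gestures, int count) {
    const char* best = "none";
    float best_dist = FLT_MAX;
    for (int g = 0; g < count; ++g) {
        float d = 0.0f;
        for (int i = 0; i < ZONES; ++i) {
            float diff = frame[i] - gestures[g].centroid[i];
            d += diff * diff;  // squared Euclidean distance to this centroid
        }
        if (d < best_dist) {
            best_dist = d;
            best = gestures[g].name;
        }
    }
    return best;
}

int main() {
    Frame flat{};           // pretend frame: all zones read zero
    Gesture demo[] = {
        {"rock", Frame{}},  // placeholder centroid; real ones come from training
    };
    printf("%s\n", classify(flat, demo, 1));
}
```

The real pipeline trains on labelled frames in NanoEdge AI Studio, which generates an embedded library to run on the Uno R4, but the inputs and outputs are the same idea.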

It takes some training, and it only works in the right lighting conditions, but this is a functional system that can recognize real hand signs and play the game. We’ve seen similar techniques help more advanced robots cheat at this game before, too! What a time to be alive.

CUDA, But Make It AMD

Compute Unified Device Architecture, or CUDA, is a software platform for doing big parallel calculation tasks on NVIDIA GPUs. It’s been a big part of the push to use GPUs for general-purpose computing, and in some ways, competitor AMD has thus been left out in the cold. However, with more demand for GPU computation than ever, there’s been a breakthrough. SCALE from [Spectral Compute] will let you compile CUDA applications for AMD GPUs.

SCALE allows CUDA programs to run as-is on AMD GPUs, without modification. The SCALE compiler is also intended as a drop-in replacement for nvcc, right down to the command line options. For maximum ease of use, it acts like you’ve installed the NVIDIA CUDA Toolkit, so you can build with CMake just like you would for a normal NVIDIA setup. Currently, Navi 21 and Navi 31 (RDNA 2 and RDNA 3) targets are supported, while a number of other GPUs are undergoing testing and development.
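To make the “as-is” claim concrete, here’s a garden-variety CUDA vector add with nothing vendor-specific in it. The same .cu source should build with NVIDIA’s nvcc for NVIDIA hardware or, going by the project’s documentation, with SCALE’s nvcc-compatible compiler for a supported AMD card; we haven’t verified the exact invocation, so treat the toolchain details as an assumption:

```cuda
#include <cstdio>
#include <vector>

// A stock CUDA vector add, nothing vendor-specific. The same .cu source
// should build with NVIDIA's nvcc, or with SCALE's nvcc-compatible
// compiler targeting a supported AMD GPU.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    // Allocate device buffers and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc(&da, n * sizeof(float));
    cudaMalloc(&db, n * sizeof(float));
    cudaMalloc(&dc, n * sizeof(float));
    cudaMemcpy(da, a.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(db, b.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch one thread per element, 256 threads per block.
    vector_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    cudaMemcpy(c.data(), dc, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", c[0]);  // expect 3.000000

    cudaFree(da); cudaFree(db); cudaFree(dc);
}
```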

The basic aim is to allow developers to use AMD hardware without having to maintain an entirely separate codebase. It’s still a work in progress, but it’s a promising tool that could help break NVIDIA’s stranglehold on parts of the GPGPU market.

Show Us Your Minimalist Games, And Win

Sometimes the tightest constraints inspire the highest creativity. The 2024 Tiny Games Challenge invites you to have the most fun with the most minimal setup. Whether that’s tiny size, tiny parts count, or tiny code, we want you to show us that big fun can come in small packages.

The Tiny Games Challenge starts now and runs through September 10th, with the top three entries receiving a $150 gift certificate courtesy of DigiKey.

Continue reading “Show Us Your Minimalist Games, And Win”

Giving People An Owl-like Visual Field Via VR Feels Surprisingly Natural

We love hearing about a good experiment, and here’s a pretty neat one: researchers used a VR headset, an off-the-shelf VR360 camera, and some custom software to glue them together. The result? Owl-Vision squashes a full 360° of undistorted horizontal visual perception into 90° of neck travel to either side. One can see all around oneself, without needing to physically turn one’s head any further than is natural.
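At its core, that mapping is a rotation gain of two: 90° of real head yaw to either side covers 180° of camera yaw to either side. A minimal sketch, assuming a simple linear gain (the real system presumably also handles pitch, smoothing, and the reprojection that keeps the image undistorted):

```cpp
#include <cstdio>

// Illustrative "owl vision" rotation gain (not the paper's actual code).
// +/-90 degrees of real head yaw maps to +/-180 degrees of camera yaw,
// so the whole 360-degree panorama is reachable with natural neck travel.
constexpr float GAIN = 180.0f / 90.0f;  // = 2.0

float camera_yaw_deg(float head_yaw_deg) {
    return head_yaw_deg * GAIN;
}

int main() {
    // Turning your head 45 degrees shows the view 90 degrees around.
    printf("%.0f degrees\n", camera_yaw_deg(45.0f));
}
```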

It’s still a work in progress, and there’s currently no free way to access the paper, but the demonstration video at that link (also embedded below) gives a solid overview of what’s going on.

Continue reading “Giving People An Owl-like Visual Field Via VR Feels Surprisingly Natural”