ASL Glove

Electronic Glove Detects Sign Language

A team of Cornell students recently built a prototype electronic glove that can detect sign language and speak the characters out loud. The glove is designed to work with a variety of hand sizes, but currently only fits on the right hand.

The glove uses several different sensors to detect hand motion and position. Perhaps the most obvious are the flex sensors that cover each finger. These detect how far each finger is bent by changing their resistance with the degree of the bend. The glove also contains an MPU-6050, which combines a 3-axis accelerometer and gyroscope, so it can track the hand’s orientation as well as its rotational movement.
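For a rough idea of how a flex sensor reading becomes a bend estimate, here’s a minimal sketch. The voltage-divider wiring, resistor value, and sensor resistance range are assumptions for illustration, not the team’s actual numbers.

```python
# Sketch only: estimating finger bend from a flex sensor wired as a voltage
# divider. All component values below are assumed, not taken from the project.

VCC = 5.0            # supply voltage (assumed)
R_FIXED = 47_000     # fixed divider resistor in ohms (assumed)
R_STRAIGHT = 25_000  # flex sensor resistance when flat (typical datasheet value)
R_BENT = 100_000     # resistance at a full 90-degree bend (typical datasheet value)

def adc_to_bend_degrees(adc_count: int, adc_max: int = 1023) -> float:
    """Convert a 10-bit ADC reading into a rough 0-90 degree bend estimate."""
    v_out = VCC * adc_count / adc_max
    if v_out <= 0:
        return 90.0
    # Divider equation: V_out = VCC * R_fixed / (R_fixed + R_flex)
    r_flex = R_FIXED * (VCC - v_out) / v_out
    # Assume resistance rises roughly linearly with bend angle
    fraction = (r_flex - R_STRAIGHT) / (R_BENT - R_STRAIGHT)
    return max(0.0, min(1.0, fraction)) * 90.0

if __name__ == "__main__":
    for count in (650, 500, 350):
        print(count, "->", round(adc_to_bend_degrees(count), 1), "degrees")
```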

While those sensors handle most characters, a few letters are similar enough to trick the system. Specifically, the team had trouble with the letters R, U, and V. To get around this, the students strategically placed copper tape in several locations on the fingers. When two pieces of tape come together, the contact closes a circuit and acts as a momentary switch.
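The idea is easy to sketch: when the sensor data lands on the confusable R/U/V cluster, the contact switches break the tie. The pad placement and the mapping below are illustrative assumptions, not the team’s actual wiring.

```python
# Hedged sketch of the copper-tape disambiguation idea; the switch names and
# letter mapping are hypothetical.

def disambiguate(predicted: str, index_middle_touching: bool,
                 fingers_crossed: bool) -> str:
    if predicted not in {"R", "U", "V"}:
        return predicted            # trust the flex/IMU classification as-is
    if fingers_crossed:
        return "R"                  # crossed fingers close the "crossed" contact
    if index_middle_touching:
        return "U"                  # fingers held together, side pads touching
    return "V"                      # fingers spread, no contacts closed
```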

The sensor data is collected by an ATmega1284P microcontroller and compiled into a packet, which is sent to a PC that does the heavy processing. The PC runs a machine learning algorithm that the user can train by gesturing each letter of the alphabet multiple times. The system collects all of this data into a data set that is then used for detection.
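The write-up doesn’t spell out which learning algorithm runs on the PC, so here’s a stand-in sketch of the train-then-detect flow using a simple nearest-neighbor classifier over the packet’s sensor values (flex readings, accelerometer/gyro values, and contact-switch bits).

```python
# Minimal sketch of the PC-side training/detection loop. Nearest-neighbor is
# an assumption standing in for whatever algorithm the team actually used.

import math
from collections import defaultdict

training_set = defaultdict(list)   # letter -> list of recorded feature vectors

def train(letter: str, features: list[float]) -> None:
    """Store one gesture sample for the given letter."""
    training_set[letter].append(features)

def classify(features: list[float]) -> str:
    """Return the letter whose stored sample is closest to this packet."""
    best_letter, best_dist = None, math.inf
    for letter, samples in training_set.items():
        for sample in samples:
            dist = math.dist(sample, features)
            if dist < best_dist:
                best_letter, best_dist = letter, dist
    return best_letter

# Usage: call train("A", packet) several times per letter, then classify(packet).
```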

This is a great project to take on. If you need more inspiration, there’s a lot to be found, including another Cornell project that speaks the letters you sign, as well as this one, which straps all of the needed parts to your forearm.

Touching Light With Haptic Feedback

Many of us have gone on a stationary romp through some virtual or augmented scape with one of the few headsets out in the wild today. While the experience of viewing a convincing figment of reality is an exciting sensation in itself, [Mark Lee] and [Kevin Wang] are figuring out how to tie other senses into the mix.

The duo from Cornell University have built a mechanical exoskeleton that responds to light with haptic feedback, meaning the wearer can touch the sphere of light around a source as if it were a solid object. Photoresistors are mounted like antennae on the tip of each finger, filed down around the edges so they pick up a more diffuse spread of light. When the wearer moves a hand toward a light source, the sensors trigger servo motors mounted on the back of the hand, which actuate and retract a series of 3D-printed tendons that arch upward and connect to the individual fingers. As each resistor receives a different amount of light, each finger reacts independently to simulate physical contours.
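Conceptually, each fingertip is its own little control loop: brighter light means more tendon pull. Here’s a rough sketch of that mapping; the thresholds, servo range, and sensor scaling are all assumptions rather than the builders’ firmware.

```python
# Hedged sketch of one fingertip's light-to-tendon mapping. Every constant
# below is a placeholder assumption.

LDR_DARK = 50       # ADC reading far from the light source (assumed)
LDR_BRIGHT = 900    # ADC reading right next to the bulb (assumed)
SERVO_MIN = 0       # tendon slack, finger free to move (degrees)
SERVO_MAX = 120     # tendon fully retracted, finger held back (degrees)

def servo_angle(ldr_reading: int) -> int:
    """Map one fingertip photoresistor reading to its servo angle."""
    fraction = (ldr_reading - LDR_DARK) / (LDR_BRIGHT - LDR_DARK)
    fraction = max(0.0, min(1.0, fraction))
    # Brighter light -> more tendon pull -> more resistance felt by the finger
    return int(SERVO_MIN + fraction * (SERVO_MAX - SERVO_MIN))
```

Running one of these mappings per finger is what lets the hand wrap around the “surface” of the light independently, finger by finger.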

One of the goals of the project was to produce a working proof of concept with no more than 100 dollars’ worth of materials, which [Mark] and [Kevin] achieved with some cash to spare. Their list of parts can be found on their blog along with some more details on the project.


SingLock

SingLock Protects Your Valuables From Shy People

Two Cornell students have designed their own multi-factor authentication system. This system uses a PIN combined with a form of voice recognition to authenticate a user. Their system is not as simple as speaking a passphrase, though. Instead, you have to sing the correct tones into the lock.

The system runs on an ATmega1284P. The chip is not sophisticated enough to easily identify actual human speech, so the team decided to focus their effort on detecting pitch instead. The result is a lock that requires you to sing the perfect sequence of pitches. We would be worried about an attacker eavesdropping and attempting to sing the key themselves, but the team has a few mechanisms in place to protect against this attack. First, the system also requires a valid PIN, which an attacker can’t deduce simply by listening from around the corner. Second, the system also keeps the user’s specific voice signature on file.
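To make the pitch-key idea concrete, here’s a hedged sketch: estimate the dominant pitch of each sung note with an FFT and compare the sequence against the stored key, gated by the PIN. The frame size, tolerance, and exact two-factor check are illustrative choices, not SingLock’s actual parameters or algorithm.

```python
# Sketch of pitch-sequence checking; not SingLock's implementation.

import numpy as np

SAMPLE_RATE = 8000      # Hz (assumed)
TOLERANCE = 0.06        # accept pitches within ~6% (roughly a semitone)

def dominant_pitch(frame: np.ndarray) -> float:
    """Return the strongest frequency component of one audio frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    return freqs[np.argmax(spectrum)]

def unlock(pin_entered: str, pin_stored: str,
           sung_frames: list[np.ndarray], key_pitches: list[float]) -> bool:
    if pin_entered != pin_stored:
        return False                      # first factor: the keypad PIN
    if len(sung_frames) != len(key_pitches):
        return False
    for frame, target in zip(sung_frames, key_pitches):
        if abs(dominant_pitch(frame) - target) > TOLERANCE * target:
            return False                  # second factor: the sung sequence
    return True
```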

The project page delves much more deeply into the mathematical theory behind how the system works. It’s worth a read if you are a math or audio geek. Check out the video below for a demonstration.

WirePrint

WirePrint Is A Physical ‘Print Preview’ For 3D Printers

3D printers may be old news to most of us, but that’s not stopping creative individuals from finding new ways to improve on the technology. Your average budget consumer 3D printer uses extrusion technology, whereby plastic is melted and extruded onto a platform. The printer draws a single two-dimensional slice of the print, then moves up and repeats, building the object layer by layer. It’s an effective and inexpensive method for turning a computer design into a physical object. Unfortunately, it’s also very slow.

That’s why the Hasso Plattner Institute and Cornell University teamed up to develop WirePrint. WirePrint slices your three-dimensional model into a wireframe version that can be printed on an extrusion printer. You won’t end up with a strong final product, but WirePrint will help you get a feel for the overall size and shape of your print. The best part is that it does so in a fraction of the time it would take to print the actual object.

This is a similar idea to reducing the amount of infill in your print, only WirePrint takes it a step further. The software tells your printer to extrude plastic in vertical lines, then pause just long enough for each line to cool and harden in that vertical position. The result is much cleaner than if the same wireframe model were printed layer by layer, and because it requires less overall movement of the print head, it’s also faster.
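In G-code terms, each strut of the frame is just a slow vertical extrusion followed by a dwell. This isn’t WirePrint’s output, just a sketch of the trick; feed rates, extrusion amounts, and the dwell time are placeholder values.

```python
# Illustrative G-code generation for one free-standing vertical wire.
# All numeric parameters are assumptions, not WirePrint's settings.

def vertical_strut(x: float, y: float, z_start: float, z_end: float,
                   e_start: float, dwell_ms: int = 1500) -> list[str]:
    """Emit G-code for one vertical strut of a wireframe print."""
    height = z_end - z_start
    e_end = e_start + 0.05 * height      # assumed filament per mm of strut
    return [
        f"G1 X{x:.2f} Y{y:.2f} Z{z_start:.2f} F1800",  # travel to the base
        f"G1 Z{z_end:.2f} E{e_end:.4f} F300",          # extrude straight up, slowly
        f"G4 P{dwell_ms}",                             # pause so the strut hardens
    ]

# A full wireframe is many of these struts plus the diagonals that join them.
```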

The best part about this project is that it’s a software hack, which means it can likely be used on any 3D printer that uses extrusion technology. Check out a video of the process below to see how it works.

Send Wireless TXT Between Two TI Calculators

 

TI calculators with wireless circuitry

One day while sitting in class at Cornell University, [Will] and [Michael] thought how cool it would be to send text messages to each other via their Texas Instruments calculators. Connecting the two serial ports with a cable was out of the question, so they decided to develop a wireless link that would work with both TI-83 and TI-84 calculators.

The system is powered by a pair of ATmega644s and two Radiotronix RF modules that create a wireless link between the two serial ports. The serial ports are 3-wire ports that can be used for several things, including acting as a TV-out port. [Will] and [Michael] reverse engineered the port’s protocol and did an excellent job of explaining it in full detail. Because they are dealing with the lowest level of the physical protocol, there is no need for them to worry about higher layers like checksums, header packets, etc.
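For flavor, here’s a simplified software model of the link-port handshake as it is commonly documented for TI’s 3-wire port: two data lines plus ground, both idle high, with the bit value choosing which line the sender drops and the receiver acknowledging on the other. The GPIO helpers are hypothetical stand-ins, and the timing and bit-order details are assumptions rather than the students’ reverse-engineered firmware.

```python
# Model of the documented TI link handshake; not the project's AVR code.

def send_bit(bit: int, pull_low, release, is_low) -> None:
    """Send one bit over the two data lines using the ack handshake."""
    mine, theirs = ("D1", "D0") if bit else ("D0", "D1")
    pull_low(mine)                 # announce the bit on one line
    while not is_low(theirs):      # wait for the receiver's acknowledge
        pass
    release(mine)
    while is_low(theirs):          # wait for the receiver to release its line
        pass

def send_byte(value: int, pull_low, release, is_low) -> None:
    for i in range(8):             # least significant bit first (assumed)
        send_bit((value >> i) & 1, pull_low, release, is_low)
```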

Be sure to stick around after the break to see a video of the project in action. It’s quite slow by today’s standards. If you have any ideas on how to speed it up, be sure to let everyone know in the comments.


Camera-based Touchscreen Input Via An FPGA

Piano Hero uses camera-based touch input

[Chonggang Li] wrote in to share a link to the final project he and [Ran Hu] built for their embedded systems class. It’s called Piano Hero and uses an FPGA to implement a camera-based touch screen system.

All of the hardware used in the project is shown above. The monitor acts as the keyboard, using an image generated by the FPGA board to mark the locations of each virtual key. Since it’s a regular VGA monitor, they needed some other way to detect touch input. The solution is a camera mounted above the screen at an obtuse angle; that is to say, the screen is tilted back just a bit, which lets the camera see the images on it. The FPGA board processes the incoming video, registering a key press when your finger passes between the monitor and the camera. This technique limits the input to just a single row of keys.
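The detection itself boils down to watching a strip of pixels just in front of the screen and noting which key’s columns a finger blocks. The real design does this in FPGA logic; the sketch below is a Python stand-in with assumed thresholds and geometry.

```python
# Hedged sketch of camera-based key detection; frame geometry, row strip, and
# brightness threshold are assumptions.

import numpy as np

NUM_KEYS = 8
STRIP_ROWS = slice(200, 220)    # rows where a finger crosses the screen plane
DARK_THRESHOLD = 60             # mean brightness below this means "finger here"

def pressed_keys(gray_frame: np.ndarray) -> list[int]:
    """Return indices of keys whose column band is blocked by a finger."""
    strip = gray_frame[STRIP_ROWS, :]
    key_width = strip.shape[1] // NUM_KEYS
    pressed = []
    for k in range(NUM_KEYS):
        band = strip[:, k * key_width:(k + 1) * key_width]
        if band.mean() < DARK_THRESHOLD:
            pressed.append(k)
    return pressed
```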

This should be much simpler than using a CCD scanner sensor, though that approach can track touch input in two dimensions.


FPGA Plays Mario Like A Champ

FPGA controls Mario Bros.

This isn’t an FPGA emulating Mario Bros., it’s an FPGA playing the game by analyzing the video and sending controller commands. It’s a final project for ECE5760, the Advanced FPGA course over at Cornell University that always provides entertainment for us when the final projects come due.

The project was developed by team members [Jeremy Blum], [Jason Wright], and [Sima Mitra], and the video parsing is a hack. To get things working, they converted the NES’s 240p video signal to VGA, which results in the rolling frame shown in the demo video. It also messes with the aspect ratio and causes a few other headaches, but the FPGA still manages to interpret the image correctly.

Look closely at the screen capture above and you’ll see some stuff that shouldn’t be there. The team developed a set of tests to determine which obstacles are in Mario’s way; the red lines signify blocks he will have to jump over. The same approach works for pits he needs to avoid, and a different set of tests detects moving enemies. Once it knows what to do, the FPGA emulates the necessary controller signals, pushing them to the vintage gaming console to see Mario safely to the end of the first level.
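The flavor of those tests is easy to convey, even though the real thing is Verilog rather than Python: sample pixels a fixed distance ahead of Mario and jump if a block edge or a pit shows up. The colors, offsets, and ground row below are placeholder assumptions, not the team’s actual thresholds.

```python
# Illustrative obstacle tests in the spirit of the project; all constants are
# assumptions, and the real logic runs in FPGA hardware.

import numpy as np

GROUND_ROW = 200          # scanline where the ground normally sits (assumed)
LOOKAHEAD = 24            # pixels ahead of Mario to test (assumed)
SKY_COLOR = np.array([92, 148, 252])   # approximate NES sky blue

def should_jump(frame: np.ndarray, mario_x: int) -> bool:
    """Return True if the column just ahead of Mario holds a block or a pit."""
    ahead = frame[:, mario_x + LOOKAHEAD]          # one column of RGB pixels
    # A block: something non-sky sitting just above the ground line
    block_ahead = not np.allclose(ahead[GROUND_ROW - 16], SKY_COLOR, atol=30)
    # A pit: sky color showing where the ground should be
    pit_ahead = np.allclose(ahead[GROUND_ROW + 8], SKY_COLOR, atol=30)
    return block_ahead or pit_ahead
```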

We think this is more hard-core than some other autonomous Mario-playing hacks simply because it patches into the original console hardware instead of using an emulator.
