MagnID – Sneaky New Way of Interacting With Tablets

New magnetic tech dubbed “MagnID” is being presented this weekend at the TEI conference, hosted this year at Stanford. It’s a clever hack that hijacks a tablet’s compass sensor and gets it to recognize multiple objects at once. Here’s a sneak peek at the possibilities of magnetic input for tablets.

Many tablets come with some sort of triaxial magnetic sensor, but as [Andrea] and [Ian]’s demo shows, it only reports the aggregate vector of every magnetic field acting on it. Put more than one magnetic object near the tablet and the sensor can’t tell you anything useful about any of them individually.

Their solution is a mix of hardware and software. Each object carries a magnet that spins at its own known speed, so each one contributes a sinusoidal field at a known frequency. Bandpass filters tuned to those frequencies mathematically isolate each object’s contribution from the aggregate reading, and the strength of each filtered signal gives the distance to that object. The team also added an Arduino with its own magnetometer for reasons left unexplained; perhaps the sensors built into tablets aren’t up to the task?
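To get a feel for the signal processing, here’s a minimal Python sketch of the idea (ours, not the researchers’ code; the tag names, rotation rates, and sample rate are all made up): fake a magnetometer trace containing two spinning magnets, then pull each one back out with a bandpass filter. The recovered amplitude shrinks as a tag moves away, which is roughly where the ranging comes from.

```python
# A minimal sketch of the MagnID idea (not the authors' code): every tag's
# magnet spins at a known rate, so a bandpass filter tuned to that rate
# isolates its contribution, and the recovered amplitude is a proxy for range.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0                                   # assumed magnetometer sample rate (Hz)
TAG_FREQS = {"tag_a": 4.0, "tag_b": 7.0}     # hypothetical rotation rates (Hz)

def bandpass(signal, center_hz, fs=FS, width_hz=2.0, order=3):
    """Butterworth bandpass centered on one tag's rotation frequency."""
    nyq = fs / 2.0
    b, a = butter(order, [(center_hz - width_hz / 2) / nyq,
                          (center_hz + width_hz / 2) / nyq], btype="band")
    return filtfilt(b, a, signal)

def tag_strengths(field):
    """RMS amplitude per tag; a stronger reading means the tag is closer."""
    return {name: np.sqrt(np.mean(bandpass(field, f) ** 2))
            for name, f in TAG_FREQS.items()}

# Fake a composite reading: a near tag, a far tag, and some sensor noise.
t = np.arange(0, 5, 1 / FS)
reading = 3.0 * np.sin(2 * np.pi * 4.0 * t) + 0.8 * np.sin(2 * np.pi * 7.0 * t)
reading += 0.2 * np.random.randn(t.size)
print(tag_strengths(reading))                # tag_a reads much stronger -> nearer
```

Mapping that amplitude to an actual distance would of course need a calibration curve for each magnet.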

The demo video below shows off what is under the hood and some new input mechanics for simple games, sketching, and a logo turtle. Their hope is that this opens the door to all manner of tangible devices.

Check out their demo at the 9th annual “Tangible, Embedded, Embodied Interaction” (TEI) conference at Stanford, January 15-19, 2015.

Build your own self-driving car

If you’ve ever wanted your own self-driving car, this is your chance. [Sebastian Thrun], co-lecturer (along with the great [Peter Norvig]) of the Stanford AI class, is opening up a new class that will teach everyone who enrolls how to program a self-driving car in seven weeks.

The robotic car class is being taught alongside a CS 101-style “Building a Search Engine” course. If you don’t know the difference between an interpreter and a compiler, that’s the class for you: you’ll learn how to build a search engine from scratch in seven weeks. The search engine class is taught by [Thrun] and [David Evans], a professor from the University of Virginia, while the driverless car course is taught solely by [Thrun], who helped win the 2005 DARPA Grand Challenge with his robot car.

In case you’re wondering if this is going to be another one-time deal like the online AI class, don’t worry. [Thrun] gave up his tenured professorship at Stanford to concentrate on teaching over the Internet; he’s staying on at Stanford as an associate professor, but his time is now going into his online university, Udacity. It looks like he might have his hands full with the new project; so far, classes on the theory of computation, operating systems, distributed systems, and computer security are all planned for 2012.

Want to learn Artificial Intelligence? Good.

In a little more than a month, tens of thousands of people around the world will attend a class on Artificial Intelligence at Stanford. Registration is still open for both class ‘tracks’. The “basic” track is simply watching lectures and answering quizzes; think of it as a slightly more advanced version of MIT OpenCourseWare or Khan Academy. The “advanced” track is the full class: it requires homework and exams and aims for Stanford-level difficulty.

With thousands of people taking this class, there are bound to be a few study groups popping up around the web. The largest ones we’ve seen are /r/aiclass on Reddit and the Stack Overflow-style aiqus. The most common answer to ‘what language should I learn for this class?’ is Python, although there’s an online code repo that has the textbook’s working code in Lisp, Java, C++, and C#.

If AI doesn’t float your boat, there are two more classes being taught out of Stanford this fall: machine learning and introduction to databases. Any way you look at it, you get to take a class from one of the preeminent instructors in the field for free. Do yourself a favor and sign up.

Thanks to everyone who sent this in. You can stop now.

Open source digital camera

Those brainy folks over at Stanford are working on an open source digital camera, an effort to advance what they call “computational photography”. Basically, they’re looking to build some of the functionality of Photoshop or GIMP right into the camera. One example they discuss is using an algorithm to even out the light levels from one side of the picture to the other. Another trick they’ve already pulled off in the lab is increasing the resolution of full-motion video: the camera grabs a full-resolution photo once every few frames and uses its onboard computing power to fold that detail into the low-resolution frames around it.
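The “even out the light levels” idea boils down to estimating the slow brightness trend across the frame and dividing it back out. Here’s a rough Python sketch of that one trick (our illustration, not the Stanford algorithm), assuming the falloff runs left to right:

```python
# A simplified illustration (not the Stanford code) of evening out light levels:
# estimate the smooth brightness trend across the frame and divide it out, so
# one side of the picture is no longer darker than the other.
import numpy as np

def equalize_light(gray):
    """gray: 2-D array of brightness in [0, 1]. Returns a corrected copy."""
    trend = np.clip(gray.mean(axis=0), 1e-3, None)   # large-scale left-to-right falloff
    flattened = gray / trend[np.newaxis, :]          # divide the trend back out
    return np.clip(flattened * trend.mean(), 0.0, 1.0)

# Synthetic frame that darkens toward the right-hand side.
h, w = 120, 160
scene = 0.25 + 0.5 * np.random.rand(h, w)
frame = scene * np.linspace(1.0, 0.4, w)[np.newaxis, :]
print("before:", frame.mean(axis=0)[[0, -1]])                 # left vs right brightness
print("after: ", equalize_light(frame).mean(axis=0)[[0, -1]]) # roughly equal now
```

A real camera would do this per color channel and with a smoother 2-D illumination estimate, but the principle is the same.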

We like the idea of being able to get at the firmware that runs on our digital cameras. Going with open source would certainly provide that access, but cost will be an issue. The Stanford team hopes to produce a model of what they now call Frankencamera that sells for “less than $1000”.

[via crave]

Autonomous helicopter learns autorotation


Stanford’s autonomous helicopter group has made some impressive advances in autonomous helicopter control, including inverted hovering and aerobatic stunts. The group uses reinforcement learning to teach its control system various maneuvers, and has been very successful in doing so. One of its latest achievements was teaching the bot the emergency landing technique known as autorotation. Autorotation is used when a helicopter’s engine fails or is disengaged: the collective pitch is lowered so that the airflow from the descent keeps the rotor blades turning. The group has more flight demonstrations on their YouTube channel.
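To see why dropping the collective keeps the rotor alive, here’s a deliberately crude toy model in Python (ours, not Stanford’s controller, and every constant is made up): with the engine out, the rotor is driven only by the upward airflow of the descent, and high blade pitch bleeds that energy away faster than the airflow can replace it.

```python
# A toy model of the autorotation idea (nothing like the Stanford controller;
# all constants are invented). Engine off, fixed descent rate: low collective
# pitch lets the descent airflow keep the rotor spinning, high pitch drags it
# down until there is no rotor energy left to land with.
DT = 0.01   # simulation timestep (s)

def rotor_speed_after_descent(collective, seconds=5.0, descent_rate=10.0):
    """Rotor speed (arbitrary units) after an engine-off descent."""
    rpm = 1200.0
    for _ in range(int(seconds / DT)):
        drive = descent_rate * 30.0 * (1.0 - collective)   # upward airflow spins the rotor
        drag = (0.3 * collective + 0.02) * rpm             # blade pitch plus friction slow it
        rpm += (drive - drag) * DT
    return rpm

print("low collective (autorotation):", round(rotor_speed_after_descent(0.1)))
print("high collective              :", round(rotor_speed_after_descent(0.8)))
```

In a real autorotation the pilot, or here the learned controller, also flares just before touchdown, cashing in that stored rotor energy for a final burst of lift to soften the landing.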

[via BotJunkie]