Turning That Old Hoverboard Into A Learning Platform

[Isabelle Simova] is building Hoverbot, a flexible robotics platform using Ikea plastic trays, JavaScript running on a Raspberry Pi and parts scavenged from commonly available hoverboards.

Self-balancing scooters, a.k.a. hoverboards, are a great source of parts for a project like this. Their high-torque, direct-drive brushless motors can move loads of 100 kg or more. In addition, you get a matching motor controller board, a rechargeable battery, and its charging circuit. Most hoverboard controllers use the STM32F103, so flashing them with your own firmware is easy with an ST-Link V2 programmer.

The next set of parts you need to build your robot is sensors. Some are cheap and easily available, such as microphones, contact switches, or LDRs, while others, such as ultrasonic distance sensors or LiDARs, can cost a lot more. One source of cheap sensors is car parking-assist transducers. An aftermarket parking sensor kit usually consists of four transducers, a control box, cables, and a display. Using a logic analyzer, [Isabelle] shows how you can poke around the output port of the control box to reverse engineer the data stream and decipher the sensor data. Once the data structure is decoded, you can use some SPI bit-banging and voltage translation to interface it with the Raspberry Pi. Using the Pi also makes it easy to add a cheap web camera, microphone, and speakers to the Hoverbot.
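For a flavour of what reading that kind of sensor bus looks like from Node.js, here is a minimal sketch using the onoff GPIO library; it is not the Hoverbot code. The BCM pin numbers, the 8-bit frame size, and sampling on a rising clock edge are assumptions for illustration, and as the write-up notes, a level shifter is still needed between the control box and the Pi's 3.3 V pins.

```javascript
// Minimal sketch (not the Hoverbot code): shift in bytes from the parking
// sensor control box, sampling the data line on each rising clock edge.
const { Gpio } = require('onoff');

const clk  = new Gpio(11, 'in', 'rising'); // clock from the control box (assumed BCM 11)
const data = new Gpio(9, 'in');            // data line (assumed BCM 9)

let value = 0;
let bits = 0;

clk.watch(err => {
  if (err) throw err;
  value = (value << 1) | data.readSync();  // MSB first is an assumption
  if (++bits === 8) {                      // assume 8-bit frames
    console.log('sensor byte: 0x' + value.toString(16));
    value = 0;
    bits = 0;
  }
});
```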

Ikea is a hacker favourite, offering a wide variety of hacker-friendly devices and supplies. Its catalog is full of fine, Swedish-engineered products that can serve as enclosures for building robots. [Isabelle] zeroed in on a deep, circular plastic tray from a storage table set, stiffened with some plywood reinforcement. The tray offers ample space to mount the two motors, two castor wheels, the battery, and the rest of the electronics. Most of the original hardware from the hoverboard comes in handy while putting it all together.

The software glue that holds all this together is JavaScript. The event-driven architecture of Node.js makes it a very suitable framework for Hoverbot. [Isabelle] has built a basic application allowing remote control of the robot. It includes a dashboard which shows live video and audio streams from the robot, buttons for movement control, an input box for text-to-speech, ultrasonic sensor visualization, LED lighting control, a message log, and a status display for the motors. This makes the dashboard a useful debugging tool and a starting point for building more interesting applications. Check the build log for all the juicy details. Which other products from the Ikea catalog can be used to build the Hoverbot? How about a robotic Chair?
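The build log has the full application; as a minimal sketch of the same event-driven pattern, here is how a Node.js server might take drive commands from such a dashboard over a WebSocket. The ws package, the JSON command format, and the setMotorSpeed() helper are assumptions for illustration, not [Isabelle]'s actual code.

```javascript
// Minimal sketch of an event-driven remote-control endpoint in Node.js.
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

// Hypothetical helper: the real robot would push these values to the
// hoverboard motor controller instead of just logging them.
function setMotorSpeed(left, right) {
  console.log(`motors -> left: ${left}, right: ${right}`);
}

wss.on('connection', socket => {
  socket.on('message', message => {
    const cmd = JSON.parse(message);        // e.g. {"type":"drive","left":0.5,"right":0.5}
    if (cmd.type === 'drive') {
      setMotorSpeed(cmd.left, cmd.right);
      socket.send(JSON.stringify({ type: 'status', ok: true }));
    }
  });
});
```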

Continue reading “Turning That Old Hoverboard Into A Learning Platform”

TensorFlow In Your Browser

If you want to explore machine learning, you can now write applications that train and deploy TensorFlow models in your browser using JavaScript. We know what you are thinking: that has to be slow. Surprisingly, it isn’t, since the libraries use Graphics Processing Unit (GPU) acceleration. Of course, that assumes your browser can use your GPU. There are several demos available, including one where you train a model on webcam gestures and use them to control a game of Pac-Man. If you try it and then disable accelerated graphics in your browser options, you’ll see just what a speed-up the GPU provides.
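To give a feel for how little code is involved, here is a minimal sketch of in-browser training with TensorFlow.js (loaded first via its script tag or an import), fitting a straight line to a few points. It is not one of the official demos, just the usual getting-started pattern; the gesture-controlled Pac-Man demo uses the same API with a much larger model and webcam images as input.

```javascript
// Minimal sketch: train a one-neuron model in the browser with TensorFlow.js.
// The heavy lifting runs on the GPU via WebGL when the browser allows it.
const model = tf.sequential();
model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
model.compile({ loss: 'meanSquaredError', optimizer: 'sgd' });

// Points lying on y = 2x - 1.
const xs = tf.tensor2d([-1, 0, 1, 2, 3, 4], [6, 1]);
const ys = tf.tensor2d([-3, -1, 1, 3, 5, 7], [6, 1]);

model.fit(xs, ys, { epochs: 250 }).then(() => {
  model.predict(tf.tensor2d([10], [1, 1])).print(); // should print roughly 19
});
```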

Continue reading “TensorFlow In Your Browser”

A Wrencher On Your Oscilloscope

We like oscilloscope art here at Hackaday, so it was natural to recently feature a JavaScript-based oscilloscope art generator on these pages, along with its companion clock. Open a web page, scribble on the screen, see it on the ’scope.

As part of our coverage we laid down the challenge: “If any of you would like to take this further and make a JavaScript oscilloscope Wrencher, we’d love to make it famous”. Which of course someone immediately did, and that someone was [Ted] with this JSFiddle. Hook up your soundcard’s left and right to X and Y respectively, press the “logo” button in the bottom right-hand pane, fiddle with your voltages and trigger levels for a bit, and you should see a Wrencher on the screen. We’re as good as our word, so here we are making the code famous. Thanks, [Ted]!

It’s not an entirely perfect Wrencher generator, as it has a lot of points to draw in the time available, resulting in a flickery Wrencher. (Update: take a look at the comments below, where he has posted an improved JSFiddle and advice on getting a better screen grab.) Thus the screenshot is an imperfect photograph rather than the usual grab to disk; for some reason the Rigol 1054z doesn’t allow the persistence to be turned up in X-Y mode, so each grab captured only a small part of the whole. But it draws a Wrencher on the screen, so we’re pretty impressed.

The piece that inspired this Wrencher can be found here. If you think you can draw one with a faster refresh rate, get coding and put it in the comments. We can’t promise individual coverage for each effort though, we’re Hackaday rather than Yet-another-scope-Wrencher-aday.

Oscilloscope Art From Your Browser

Oscilloscope art is a fascinating pursuit in which waveforms are generated for the X and Y channels of an oscilloscope to draw pictures on its screen. It’s somewhat distinct from the vector computer graphics you might see in older arcade machines or the Vectrex console, in that while it uses a similar approach to creating a display, it has a very different purpose. Sometimes these works can be breathtakingly beautiful animations, and other times maybe not so much.

If you’d like to explore the topic as a mild diversion, then maybe this JavaScript oscilloscope art generator from [Neil Fraser] might be of interest. In around a hundred lines of code he’s created an in-browser scratchpad upon which a waveform can be drawn, which is then played out as an audio signal on your computer’s soundcard. Hook up left and right to X and Y of your oscilloscope, and what you scribbled on the pad should pop up on the screen.

Draw it, see it on screen. Magic!

It’s an impressive piece of work that you can see in the video below or try for yourself, and your scribe’s Rigol was pressed into service to give it a go. After a bit of tweaking to find the right voltages and selecting slope triggering rather than edge triggering, we too were making squiggles appear on the screen.
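The trick underneath is simply the Web Audio API: fill a stereo buffer so the left channel holds the X coordinates and the right channel the Y coordinates, then loop it out of the soundcard. Here is a minimal sketch, not [Neil Fraser]'s code, that traces a circle; the generator does the same with the points you scribble on the pad. (Modern browsers may insist that the AudioContext is started from a click.)

```javascript
// Minimal sketch: draw a circle on an X-Y 'scope via the soundcard.
const ctx = new AudioContext();
const buffer = ctx.createBuffer(2, ctx.sampleRate, ctx.sampleRate); // one second, stereo
const x = buffer.getChannelData(0); // left channel  -> X input
const y = buffer.getChannelData(1); // right channel -> Y input

for (let i = 0; i < buffer.length; i++) {
  const phase = 2 * Math.PI * 100 * (i / ctx.sampleRate); // retrace the shape at 100 Hz
  x[i] = Math.cos(phase);
  y[i] = Math.sin(phase);
}

const source = ctx.createBufferSource();
source.buffer = buffer;
source.loop = true;
source.connect(ctx.destination);
source.start();
```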

Rather amusingly, he’s saved the best for last. As an afterthought, he also provides a link to another piece of his work, an oscilloscope clock in JavaScript. If any of you would like to take this further and make a JavaScript oscilloscope Wrencher, we’d love to make it famous.

Continue reading “Oscilloscope Art From Your Browser”

Running Programs On Paper

It’s a simple fact that most programs created for the personal computer involve the same methods of interaction, almost regardless of purpose. Word processors, graphics utilities, even games – the vast majority of interaction is performed through a keyboard and mouse. However, sometimes it can be fun to experiment with alternative technologies for users to interact with code – Paper Programs is an exciting way to do just that.

Paper Programs combines a variety of existing technologies into a way of interacting with code that is highly tangible. The setup consists of a projector and a webcam that can see the projected area, combined with JavaScript programs running in a browser. Programs can be edited in the browser, then printed out with special coloured dots around the page. When the page is placed in the projection area, these dots identify the unique program and are picked up by the webcam, and the server executes the relevant code, projecting the output back onto the page.
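As a rough sketch of the idea (this is hypothetical, not the actual Paper Programs API), the server-side loop boils down to: spot a page by its dot pattern, look up the JavaScript that was printed on it, run it, and hand it a way to draw back onto its own patch of the projection.

```javascript
// Hypothetical sketch of the "see page -> run its code -> project back" loop.
const programs = new Map(); // pageId -> JavaScript source edited in the browser

function onCameraFrame(detectedPages) {
  for (const page of detectedPages) {            // each page is identified by its dot pattern
    const source = programs.get(page.id);
    if (!source) continue;
    const run = new Function('page', 'draw', source);
    run(page, shape => projectOnto(page, shape)); // let the program draw on its own page
  }
}

function projectOnto(page, shape) {
  // Map the shape into the page's corner coordinates as seen by the webcam,
  // then render it on the projector's canvas (details omitted).
}
```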

It’s a system that creates a very tactile way of interacting with a program – by moving the page around or placing different pages next to each other, programs can interact in various ways. The system is set up for collaboration as well, allowing users to edit code directly in the browser.

The project reminds us of earlier works on DIY multitouch screens, but with a greater focus on direct engagement with the underlying code. What other unique ways exist to interact with code? Let us know in the comments.

Continue reading “Running Programs On Paper”

Tiny Programming Language In 25 Lines Of Code

There are certain kinds of programs that fascinate certain kinds of software hackers. Maybe you are into number crunching, chess programs, operating systems, or artificial intelligence. However, on any significant machine, most of the time those activities will require some sort of language. Sure, we all have some processor we can write hex code for in our head, but you really want at least an assembler if not something sturdier. Writing languages can be addictive, but jumping right into a big system like gcc and trying to make changes is daunting for anyone. If you want a gentle introduction, check out [mgechev’s] language that resides in 25 lines of JavaScript.

The GitHub page bills it as a tiny compiler, although that is a bit misleading and even the README says it is a transpiler. Actually, the code reads a simple language, uses recursive descent parsing to build a tree, and then uses a “compiler” to convert the tree to JavaScript (which can then be executed, of course). It can also just interpret the tree and produce a numerical answer.
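For a flavour of what such a tiny language looks like (this is a sketch of the general technique, not [mgechev]'s actual code), here is a prefix-notation calculator with a recursive descent parser, a tree-walking interpreter, and a "compiler" that emits the equivalent JavaScript expression.

```javascript
// Sketch: lexer, recursive descent parser, interpreter, and JS-emitting "compiler".
const lex = src => src.trim().split(/\s+/);

// Each call to parse() consumes exactly one expression from the token stream.
const parse = tokens => {
  const token = tokens.shift();
  if (!isNaN(token)) return { type: 'num', value: Number(token) };
  return { type: 'op', op: token, args: [parse(tokens), parse(tokens)] };
};

const fns = {
  add: (a, b) => a + b, sub: (a, b) => a - b,
  mul: (a, b) => a * b, div: (a, b) => a / b,
};
const symbols = { add: '+', sub: '-', mul: '*', div: '/' };

// Interpreter: walk the tree and produce a number.
const evaluate = node =>
  node.type === 'num' ? node.value
                      : fns[node.op](evaluate(node.args[0]), evaluate(node.args[1]));

// "Compiler": emit an equivalent JavaScript expression instead.
const compile = node =>
  node.type === 'num' ? String(node.value)
                      : `(${compile(node.args[0])} ${symbols[node.op]} ${compile(node.args[1])})`;

const tree = parse(lex('add 1 mul 2 3'));
console.log(evaluate(tree)); // 7
console.log(compile(tree));  // (1 + (2 * 3))
```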

Continue reading “Tiny Programming Language In 25 Lines Of Code”

Lowering JavaScript Timer Resolution Thwarts Meltdown And Spectre

The computer security vulnerabilities Meltdown and Spectre can infer protected information based on subtle differences in hardware behavior. It takes less time to access data that has been cached versus data that needs to be retrieved from memory, and precisely measuring this time difference is a critical part of these attacks.

Our web browsers present a huge potential attack surface, as JavaScript is ubiquitous on the modern web. Executing JavaScript code will definitely involve the processor cache, and a high-resolution timer is accessible via the browser’s performance API.

Web browsers can’t change processor cache behavior, but they can take away malicious code’s ability to exploit it. Browser makers are intentionally degrading time-measurement capability in the API to make attacks more difficult. These changes are being rolled out for Google Chrome, Mozilla Firefox, Microsoft Edge, and Internet Explorer. Apple has announced Safari updates in the near future that are likely to follow suit.

After these changes, the time stamp returned by performance.now will be less precise due to lower resolution. Some browsers are going a step further and degrading the accuracy by adding random jitter. There will also be degradation or outright disabling of other features that can be used to infer data, such as SharedArrayBuffer.
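For a sense of what is being blunted, here is a small illustrative snippet (not attack code) that uses performance.now to time a run of memory accesses. performance.now returns milliseconds; after these changes the value is coarser and may be jittered, so back-to-back reads can return the same timestamp.

```javascript
// Illustrative only: the kind of fine-grained timing the mitigations degrade.
const buffer = new Uint8Array(1024 * 1024);

const t0 = performance.now();
let sum = 0;
for (let i = 0; i < buffer.length; i += 64) {
  sum += buffer[i]; // touch one byte per (typical) 64-byte cache line
}
const t1 = performance.now();

console.log(`elapsed: ${(t1 - t0).toFixed(3)} ms (resolution now limited by the browser)`);
```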

These changes will have no impact on the vast majority of users. The performance API is used by developers to debug sluggish code; the actual run speed is unaffected. Other features like SharedArrayBuffer are relatively new, and their absence will go largely unnoticed. Unfortunately, web developers will have a harder time tracking down slow code under these changes.

Browser makers are calling these temporary measures for now, but we won’t be surprised if they become permanent. It is a relatively simple change that blunts the immediate impact of Meltdown/Spectre, and it would also mitigate yet-to-be-discovered timing attacks. If browser makers offer a “debug mode” to restore high-precision timers, developers could activate it just for their performance-tuning work, and everyone should be happy.

This is just one part of the shock wave Meltdown/Spectre has sent through the computer industry. We have broader coverage of the issue here.