If that sounds like a mouthful, just call it SCUTTLE – the open-source mobile robot designed at Texas A&M University. SCUTTLE is a low-cost (under $350) robot built for teaching Aggies in the Multidisciplinary Engineering Technology (MXET) program, where it is used for in-lab lessons and semester projects in the MXET 300 – Mobile Robotics undergraduate course. Since it is designed for academic purposes, the robot is very well documented, making it easy to replicate if you follow the instructions. In fact, the team is looking for others to build SCUTTLEs and provide feedback to help improve the design.
Available on the SCUTTLE website is a large collection of videos that walk you through fabrication, electronics setup, robot assembly, programming, and robot operation. They are designed to help students build and operate the mobile robot within one semester. Most of the mechanical and electronic parts are off-the-shelf and easy to procure, and the remaining custom parts can be easily 3D printed. Its modular design gives you the freedom to try different options, features, and upgrades. SCUTTLE is powerful enough to carry a payload of up to 9 kg (20 pounds), allowing additional hardware to be added. To keep cost low and construction easy, the robot uses a simple two-wheel drive system built around a pair of geared motors. This forces the robot to literally scuttle in a non-holonomic fashion, moving from origin to destination through a sequence of left/right turns and straight segments, which makes motion planning interestingly tricky.
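For the curious, the core of that non-holonomic dance boils down to “rotate to face the goal, then drive”. Here is a minimal Python sketch of that decomposition; the function name and the interface it implies are our own illustration, not code from the SCUTTLE repository:

```python
import math

# Decompose a goal-seeking move for a differential-drive robot into a
# rotate-then-translate pair. Purely illustrative; SCUTTLE's actual
# codebase handles this with its own motor and encoder interfaces.

def plan_move(x, y, theta, goal_x, goal_y):
    """Return (turn angle, drive distance) to reach the goal point."""
    dx, dy = goal_x - x, goal_y - y
    heading_to_goal = math.atan2(dy, dx)
    # Wrap the turn into [-pi, pi] so the robot takes the short way round
    turn = math.atan2(math.sin(heading_to_goal - theta),
                      math.cos(heading_to_goal - theta))
    return turn, math.hypot(dx, dy)

turn, dist = plan_move(0.0, 0.0, 0.0, goal_x=1.0, goal_y=1.0)
print(f"rotate {math.degrees(turn):.1f} degrees, then drive {dist:.2f} m")
```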
The SCUTTLE robot is programmed in Python 3 running under Linux, and has been tested on both the BeagleBone Blue and the Raspberry Pi. The SCUTTLE software guide is a good place to get acquainted with the system architecture.
The standard configuration uses ultrasonic sensors for collision avoidance, a standard USB camera for vision, and encoders coupled to the wheel drive pulleys for determining position relative to the starting origin. An optional USB LiDAR can be added for area mapping, and the extra payload capacity leaves room for additional sensors, actuators, or battery packs.
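Those wheel encoders enable classic dead reckoning: integrate tick counts into a pose estimate. Here is a quick sketch of the math, with made-up encoder and geometry constants rather than SCUTTLE’s published specs:

```python
import math

# Dead-reckoning sketch for a differential-drive robot: integrate wheel
# encoder ticks into an (x, y, heading) pose. The tick count, wheel
# radius, and wheel base below are assumptions, not SCUTTLE's published
# parameters.

TICKS_PER_REV = 2048   # encoder resolution (assumed)
WHEEL_RADIUS = 0.042   # meters (assumed)
WHEEL_BASE = 0.40      # meters between wheel centers (assumed)

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Advance the pose estimate by one encoder sample."""
    left = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    right = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    forward = (left + right) / 2           # distance traveled by the center
    dtheta = (right - left) / WHEEL_BASE   # change in heading
    x += forward * math.cos(theta + dtheta / 2)
    y += forward * math.sin(theta + dtheta / 2)
    return x, y, theta + dtheta

pose = (0.0, 0.0, 0.0)
pose = update_pose(*pose, left_ticks=120, right_ticks=140)
print("pose:", pose)
```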
To complement the information on the website, additional resources are posted on GitHub, GrabCAD, and YouTube. Building a SCUTTLE robot ought to be a great group project for makerspaces wanting to get hackers started with robotics. We have covered many educational robot projects in the past, but the SCUTTLE really shines with its ability to carry a pretty decent payload at a low cost.
Continue reading “Sensing, Connected, Utility Transport Taxi For Level Environments”
Automakers continue to promise that fully autonomous cars are just around the corner, but we’re still not quite there yet. However, a broad range of driver-assist technologies has come to market in recent years, lane-keeping assist among them. [raja_961] decided to implement this technology on an RC car, using a Raspberry Pi.
A regular off-the-shelf RC car is used as the base of the platform, outfitted with two drive motors and a third motor for the steering. Unfortunately, the car can only turn full-left or full-right, limiting the finesse of the steering. Despite this, the work continued. A Raspberry Pi 3 was fitted out with a motor controller and camera, and hooked up to the chassis. With everything laced up, a Python script is used along with OpenCV to run the lane-keeping algorithm.
[raja_961] does a great job of explaining the lane-keeping methodology. Rather than simply invoking a library and calling it good, the Instructable breaks down each stage of how the algorithm works. Incoming images are converted to the HSL color space, then a series of operations picks out the apparent slope of the lane lines. This is fed to a PID algorithm that guides the steering of the car.
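To make those stages concrete, here is a heavily condensed version of such a pipeline in Python and OpenCV. The threshold values are placeholders to tune for your tape color (OpenCV spells the color space HLS), and we have simplified the slope computation to a centroid offset; see the Instructable for the real thing:

```python
import cv2
import numpy as np

# Condensed lane-keeping pipeline: color threshold, edge detection,
# Hough line transform, then reduce the detected segments to a single
# steering error for a PID loop. Threshold values are assumptions.

def steering_error(frame):
    hls = cv2.cvtColor(frame, cv2.COLOR_BGR2HLS)
    # Isolate blue-ish lane tape (placeholder bounds; tune for your tape)
    mask = cv2.inRange(hls, np.array([90, 40, 40]), np.array([130, 255, 255]))
    edges = cv2.Canny(mask, 50, 150)
    h, w = edges.shape
    edges[: h // 2, :] = 0  # keep only the lower half (the road)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 20,
                            minLineLength=20, maxLineGap=10)
    if lines is None:
        return 0.0
    # Offset of the lane segments' midpoints from frame center,
    # normalized to [-1, 1]; this becomes the PID input.
    mid_x = np.mean([(x1 + x2) / 2 for x1, y1, x2, y2 in lines[:, 0]])
    return (mid_x - w / 2) / (w / 2)

cap = cv2.VideoCapture(0)  # assumes the camera is on index 0
ok, frame = cap.read()
if ok:
    print("steering error:", steering_error(frame))
cap.release()
```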
It’s a comprehensive explanation of a basic lane-keeping algorithm, and a great place to start if you’re interested in learning about the technology. There’s plenty going on in the world of self-driving RC cars; you just need to know where to look! Video after the break.
Continue reading “Lane Keeping RC Car Uses OpenCV”
If you’ve purchased a piece of consumer electronics in the last few years, there’s an excellent chance that you were forced to use some proprietary application (likely on a mobile device) to unlock its full functionality. It’s a depressing reality of modern technology, and unless you’re willing to roll your own hardware, it can be difficult to avoid. But [krishnan793] decided to take another route, and reverse engineered his DDPAI dash camera so he could get a live video stream from it without using the companion smartphone application.
Like many modern gadgets, the DDPAI camera creates its own WiFi access point that you need to connect to for configuration. By putting his computer’s wireless card into monitor mode and running Wireshark, [krishnan793] was able to see that the smartphone was communicating with the camera using some type of REST API. After watching the clear-text exchanges for a while, he not only discovered a few default usernames and passwords, but also the commands necessary to configure the camera and start the video stream.
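To give a flavor of what that kind of probing looks like, here is a hypothetical reconstruction in Python. The address, endpoint, and credentials are all illustrative placeholders, not the actual DDPAI API:

```python
import requests

# Hypothetical reconstruction of the kind of clear-text exchange the
# write-up describes. Host, path, and credentials are placeholders,
# not the real DDPAI endpoints.

CAMERA = "http://192.168.0.1"  # the camera's own AP address (assumed)

session = requests.Session()
resp = session.post(f"{CAMERA}/api/login",  # placeholder endpoint
                    json={"user": "admin", "password": "admin"},
                    timeout=5)
print(resp.status_code, resp.text)
```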
After hitting it with the proper REST messages, an nmap scan confirmed that several new services had started up on the device. Unfortunately, he didn’t get any video when he pointed VLC at the likely port numbers. At this point [krishnan793] checked the datasheet for the camera’s Hi3516E SoC and saw that it supported H.264 encoding. By manually specifying that as the video codec when invoking VLC, he was able to play a video stream from port 6200. A little later, he discovered that port 6100 was serving up the live audio.
Technically that’s all he wanted to do in the first place, as he was looking to feed the video into OpenCV for other projects. But while he was in the area, [krishnan793] also decided to find the download URL for the camera’s firmware and ran it through binwalk to see what he could find out. Not surprisingly, the security turned out to be fairly lax throughout the entire device, so he was able to glean some information that could be useful for future projects.
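Incidentally, once such a stream is up, pulling it into OpenCV for those other projects is nearly a one-liner via the FFmpeg backend. Treat this as a sketch, since whether the raw H.264 really arrives over TCP at that address depends on the camera and firmware:

```python
import cv2

# Pull a raw H.264 network stream into OpenCV using the FFmpeg backend.
# The address and port match the write-up's findings, but may vary by
# device and firmware.

cap = cv2.VideoCapture("tcp://192.168.0.1:6200", cv2.CAP_FFMPEG)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("dashcam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```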
Of course, if you’d rather go with the first option and build your own custom dash camera so you don’t have to jump through so many hoops just to get a usable video stream, we’ve got some good news for you.
This multi-touch panel built by [thiagesh D] might look like it came from the retro-futuristic worlds of Blade Runner or Alien, but thanks to a detailed build video and a fairly short list of required parts, it could be your next weekend project.
The build starts with a sheet of acrylic, which has a grid pattern etched into it using nothing more exotic than a knife and a ruler. Though if you do have access to some kind of CNC router, this would be a perfect time to break it out. Bare wires are then laid inside the grooves, secured with a healthy application of CA glue, and soldered together to make one large conductive array. This is attached to a capacitive sensor module so it’ll fire off whenever somebody puts a finger on the plastic.
With RGB LED strips added to the edges, you could actually stop here and have yourself a very cool looking illuminated touch-sensitive panel. But ultimately, it would just be a glorified button. There are plenty of interesting applications for such a gadget, but it’s not going to be terribly useful attached to your computer.
To turn this into a viable input device, [thiagesh D] is using a Raspberry Pi and its camera module to track the number and position of fingertips from the other side of the acrylic with Python and OpenCV. His code will even pick up on specific gestures, like the three-finger drag that changes the colors of the LEDs in the video below. The camera’s field of view unfortunately means the box the panel gets mounted to has to be fairly deep, but if recessed into the surface of a desk, we think it could look incredible.
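The detection itself can be surprisingly compact. As a rough sketch (not [thiagesh D]’s actual code), pressed fingertips seen from behind the panel appear as compact bright blobs, so a threshold-and-contour pass recovers their count and positions; the threshold and area limits here are assumptions to tune:

```python
import cv2

# Fingertip-tracking sketch: from behind the panel, pressed fingertips
# show up as compact bright blobs, so a threshold plus contour pass
# yields their count and positions. Values below need tuning.

cap = cv2.VideoCapture(0)  # Pi camera via V4L2 (assumed index)
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (11, 11), 0)
    _, mask = cv2.threshold(blur, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    tips = [cv2.boundingRect(c) for c in contours
            if 100 < cv2.contourArea(c) < 5000]  # filter noise and palms
    print(f"{len(tips)} fingertip(s):",
          [(x + w // 2, y + h // 2) for x, y, w, h in tips])
```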
Custom multi-touch panels have been a favorite project of hackers for years now, and we’ve got examples going all the way back to the old black and white days. But larger and more modern incarnations like this one have the potential to change how we interface with technology on a daily basis.
Continue reading “Building A Cyberpunk Multi-Touch Input Device”
If you’re looking for a simple project to start exploring the intersection of OpenCV and robotics, then the RPi Tank created by [Vishal Varghese] might be a good place to start. A Raspberry Pi and a few bits of ancillary hardware, literally taped to the top of a toy M1 Abrams tank, become a low-cost platform for testing out concepts such as network remote control and visual line following. Sure, you don’t need to base it around an Abrams tank, but if you’re going to do it, you might as well do it with style.
As this is more of a tech demonstrator, the hardware details are pretty minimal. [Vishal] says you just need a relatively recent version of the Raspberry Pi, a MotoZero motor controller, and a camera module. To provide juice for the electronics you don’t need anything more exotic than a USB power bank, which in his case has been conveniently attached to the top of the turret. He doesn’t provide exact details on how the MotoZero gets wired into the Abrams’ motors, but we imagine it’s straightforward enough that the average Hackaday reader probably doesn’t need it spelled out for them.
Ultimately, the software is the heart of this project, and that’s where [Vishal] really delivers. He’s provided sample Python scripts ordered by their level of complexity, from establishing a network connection on the Raspberry Pi to following a line of tape on the ground. Whether used together or examined individually, these scripts provide a great framework to get your first project rolling. Literally.
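In the same spirit as those samples, a bare-bones network-control script only takes a socket and a couple of gpiozero Motor objects. The GPIO pins below are placeholders, so check how your MotoZero is actually wired before borrowing them:

```python
import socket
from gpiozero import Motor

# Minimal network-control sketch: listen on a TCP socket and map
# single-character commands to motor calls. Pin numbers are placeholder
# assumptions, not a verified MotoZero wiring.

left = Motor(forward=24, backward=27)   # assumed pins
right = Motor(forward=6, backward=22)   # assumed pins

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 5000))
server.listen(1)
conn, _ = server.accept()

while True:
    cmd = conn.recv(1).decode()
    if cmd == "w":                # forward
        left.forward(); right.forward()
    elif cmd == "a":              # pivot left
        left.backward(); right.forward()
    elif cmd == "d":              # pivot right
        left.forward(); right.backward()
    else:                         # "s" or closed connection: stop
        left.stop(); right.stop()
        if not cmd:
            break
conn.close()
server.close()
```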
Line following robots, in their many forms, have been a favorite hacker project for years. Whether they home in with an analog circuit or replace the lines with hidden wires, they’re a great way to get started with semi-autonomous robotics.
There is a treasure trove of history locked away in closets and attics, where old shoeboxes hold reels of movie film shot by amateur cinematographers. They captured children’s first steps, family vacations, and parties where [Uncle Bill] was getting up to his usual antics. Little of what was captured on thousands of miles of 8-mm and Super 8 film is consequential, but giving a family the means to see long lost loved ones again can be a powerful thing indeed.
That was the goal of [Anton Gutscher]’s automated 8-mm film scanner. Yes, commercial services exist that will digitize movies, slides, and snapshots, but where’s the challenge in that? And a challenge is what it ended up being. Aside from designing and printing something like 27 custom parts, [Anton] also had a custom PCB fabricated for the control electronics. Film handling is done with a stepper motor that advances the film into the scanner one frame at a time for scanning and cropping. An LCD display lets the archivist position the cropping window manually, and the individual images are strung together with ffmpeg running on the embedded Raspberry Pi. There’s a brief clip of film from a 1976 trip to Singapore in the video below; we find the quality of the digitized film remarkably good.
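That final stitching step is refreshingly simple: once the cropped frames are saved as numbered images, a single ffmpeg call turns them into a movie. The paths and codec settings here are our assumptions, not [Anton]’s exact invocation:

```python
import subprocess

# Stitch numbered frame images into a movie with one ffmpeg call.
# Paths and codec settings are illustrative assumptions.

subprocess.run([
    "ffmpeg",
    "-framerate", "18",              # Super 8 was commonly shot at 18 fps
    "-i", "frames/frame_%05d.png",   # numbered frames from the scanner
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",           # broad player compatibility
    "movie.mp4",
], check=True)
```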
Hats off to [Anton] for stepping up as the family historian with this build. We’ve seen ad hoc 8-mm digitizers before, but few this polished looking. We’ve also featured other archival attempts before, like this high-speed slide scanner.
Continue reading “3D-Printed Film Scanner Brings Family Memories Back To Life”
People take their tabletop games very, very seriously. [Andrew Lauritzen], though, has gone far above and beyond in pursuit of a fair game. The game in question is Star Wars: X-Wing, a strategy wargame where miniature pieces are moved according to rolls of the dice. [Andrew] suspected that commercially available dice were skewing the game, and the automated machine-vision dice tester shown in the video after the break was the result.
The rig is a very clever design that maximizes the data set with as little motion as possible. The test chamber is a box with clear ends that can be flipped end-for-end by a motor; walls separate the chamber into four channels to test multiple dice on each throw, and baffles within the channels ensure randomization. A webcam positioned below the chamber takes a snapshot of each “throw”, which is then analyzed in OpenCV. This scheme has the unfortunate effect of looking at the dice from the table’s perspective, but [Andrew] dealt with that in true hacker fashion: he ignored it, since it didn’t impact the statistics he was interested in.
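As a rough sketch of the per-throw analysis (not [Andrew]’s actual code), each snapshot can be cut into the four channel regions and each die face isolated with a threshold-and-contour pass. X-Wing dice carry symbols rather than pips, so a real classifier would go on to match the symbol:

```python
import cv2

# Per-throw analysis sketch: grab a frame, split it into the four
# channel regions, and isolate the die face in each as a blob. A real
# classifier would then identify the face symbol.

cap = cv2.VideoCapture(0)  # webcam under the chamber (assumed index)
ok, frame = cap.read()
cap.release()

if ok:
    h, w = frame.shape[:2]
    for i in range(4):  # four channels, assumed side by side
        roi = frame[:, i * w // 4:(i + 1) * w // 4]
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        print(f"channel {i}: {len(contours)} blob(s) found")
```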
And speaking of statistics, he generated a LOT of them. The 62-page report of results from his study is an impressive piece of work, which basically concludes that the dice aren’t fair due to manufacturing variability, and that players could use this fact to cheat. He recommends pooled sets of dice to eliminate advantages during competitive play.
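The statistical workhorse for this kind of question is the chi-squared goodness-of-fit test: compare observed face counts against a uniform expectation. Here is a quick example with SciPy, using made-up tallies rather than data from the report:

```python
from scipy.stats import chisquare

# Fairness check: chi-squared test of observed face counts against a
# uniform expectation. These tallies are invented for illustration,
# not data from the study.

observed = [171, 145, 168, 139, 152, 225]  # hypothetical six-sided die
stat, p = chisquare(observed)              # expects uniform by default
print(f"chi2 = {stat:.1f}, p = {p:.4f}")
if p < 0.05:
    print("Reject fairness at the 5% level: this die looks biased.")
```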
This isn’t the first automated dice roller we’ve seen around these parts. There was the tweeting dice-bot, the Dice-O-Matic, and all manner of electronic dice throwers. This one goes the extra mile to keep things fair, and we appreciate that.
Continue reading “Automated Dice Tester Uses Machine Vision To Ensure A Fair Game”