[Stephen] has just shared with us the current progress of his night vision vehicle system, and it’s looking quite promising!
The idea of the project is to provide the driver with a high-contrast image of the road, pedestrians, and any other obstacles that may not be immediately visible by headlight alone. It's actually becoming a feature on many luxury cars, including models from BMW, Audi, GM, and Honda. This is what inspired [Stephen] to try making his own.
The current system consists of an infrared camera, two powerful IR spotlights, and a dashboard LCD screen for viewing. It may be considered "not a hack" by some of our more exuberant readers, but [Stephen] does such a great job explaining his future plans for it, which include object recognition using OpenCV, that we felt it was more than worth sharing, even at this early stage.
You see, the idea of vehicle night vision is not to have you constantly watching a little screen instead of the road; it's designed to be there when you need it. To let you know when that is, [Stephen] is planning on adding a Raspberry Pi to the mix, running OpenCV to detect any anomalies on the road that could be of concern. We shudder at the amount of training a system like that might need, depending on the complexity of the image recognition.
Anyway, stick around after the break to hear [Stephen] explain it himself — it is a long video, but if you want to skip to the action there are clips of it on the road at 1:53 and 26:52.
Continue reading “The Beginning of a DIY Vehicle Night Vision System”
[Shane Ormonde] recently learned how to measure distance using just a webcam, a laser, and everyone’s favorite math — trigonometry. Since then he’s thrown the device onto a stepper motor, and now has a clever 2D room mapping machine.
He learned how to create the webcam laser rangefinder from [Todd Danko], a project we featured 7 years ago! It's a pretty simple concept. The camera and laser are placed parallel to each other at a known axis-to-axis distance. On the computer, a Python script (using the OpenCV library) searches the image for the brightest point (the laser dot). The closer the brightest point is to the center of the image, the farther away the object. Counting the pixels from the image center to the laser dot gives you an angle, and from that angle and the known camera-laser separation, basic trigonometry gives the distance to the object; of course, this needs to be calibrated to be at all accurate. [Shane] does a great job explaining all of this in one of his past posts, building the webcam laser rangefinder.
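For the curious, here's a minimal sketch of that math in Python with OpenCV. The separation, gain, and offset constants are placeholders; the real values come from calibrating against targets at known distances, as [Shane] describes.

```python
import math
import cv2

# Geometry and calibration constants: hypothetical values, you would
# measure these for your own camera/laser pair against known distances.
H_CM = 5.0              # camera-to-laser separation, axis to axis (cm)
RAD_PER_PIXEL = 0.0024  # calibrated gain (radians of angle per pixel)
RAD_OFFSET = -0.03      # calibrated offset (radians)

def distance_cm(frame):
    """Find the brightest pixel (the laser dot) and estimate range."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, _, _, (_, y) = cv2.minMaxLoc(gray)          # brightest point
    pixels_from_center = abs(y - gray.shape[0] / 2.0)
    theta = RAD_PER_PIXEL * pixels_from_center + RAD_OFFSET
    return H_CM / math.tan(theta)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print("Estimated distance: %.1f cm" % distance_cm(frame))
cap.release()
```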
From there it was just a matter of slapping the rangefinder onto a stepper motor, driving it with a small PIC, and running the calculations on the fly! His results are fairly impressive.
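Once each distance reading is paired with the stepper's angle, turning a sweep into a floor plan is just a polar-to-Cartesian conversion. A rough sketch (the angles and distances here are made up):

```python
import math

def sweep_to_points(readings):
    """Convert (angle_deg, distance_cm) pairs from the rotating
    rangefinder into x/y coordinates for a 2D map of the room."""
    return [(d * math.cos(math.radians(a)),
             d * math.sin(math.radians(a)))
            for a, d in readings]

# e.g. a reading at every 1.8-degree full step of the motor (made-up data)
print(sweep_to_points([(0.0, 210.0), (1.8, 208.5), (3.6, 204.1)]))
```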
Continue reading “2D Room Mapping With a Laser and a Webcam”
Last term’s project at Chico State University hopes to reduce driver distraction by alerting you when it notices you aren’t paying attention (to the road!).
The team designed SAM using OpenCV to track your face and recognize when you aren't watching the road. It alerts you through a variety of audible beeps and LED lights, and is programmed to alert you only after a set amount of time has passed; i.e. it's not going to go off when you're checking your blind spot, unless you've been checking it for over a certain length of time. It also has a silence button you can press in situations like looking around while you're parked.
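We don't have the team's source in front of us, but the core loop is easy to picture: detect a face every frame, and only sound the alarm once it has been missing longer than a threshold. A hedged sketch using OpenCV's stock Haar cascade, with made-up timing values:

```python
import time
import cv2

# The 2-second threshold is our guess; SAM's actual timing values differ.
LOOK_AWAY_LIMIT_S = 2.0
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
last_seen = time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) > 0:
        last_seen = time.time()                  # driver is facing forward
    elif time.time() - last_seen > LOOK_AWAY_LIMIT_S:
        print("BEEP: eyes on the road!")         # stand-in for buzzer/LEDs
cap.release()
```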
The proof-of-concept device was built using a Raspberry Pi, the PiCam, and a breadboard to accommodate some manual controls, the buzzer, and LEDs. It also continuously records video of you on a 30-second loop, and in the event of an accident it saves all the footage, perhaps proving who was at fault. Can you imagine if all cars had this installed? On the plus side, you wouldn't have to argue with insurance companies; but if it really was your fault, well, then you're straight out of luck.
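The 30-second loop is classic ring-buffer recording: keep only the newest N frames, and dump them to disk when something happens. Something along these lines (the crash trigger here is entirely hypothetical):

```python
import collections
import cv2

FPS, LOOP_SECONDS = 30, 30
buffer = collections.deque(maxlen=FPS * LOOP_SECONDS)  # last 30 s of frames

def crash_detected():
    # Entirely hypothetical trigger; an accelerometer could live here.
    return False

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    buffer.append(frame)
    if crash_detected():
        h, w = frame.shape[:2]
        out = cv2.VideoWriter("incident.avi",
                              cv2.VideoWriter_fourcc(*"XVID"), FPS, (w, h))
        for f in buffer:                         # dump the last 30 seconds
            out.write(f)
        out.release()
        break
cap.release()
```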
To some of us, hacking an RC car to simply follow a black line or avoid obstacles is too easy, and we're sure [Shazin] would agree, since he created an RC car that follows your face!
The first step of this project was to take control of the RC car, but instead of hijacking the transmitter, [Shazin] decided to control the car directly. This isn't any high-end RC car though, so forget about PWM control. Instead, a single IC (the RX-2) was found to handle both the RF receiver and the H-bridges. After a bit of probing, the four control lines (forward/back and left/right) were identified and connected to an Arduino.
[Shazin] paired the Arduino with a USB Host Shield and connected it to his Android phone through the ADB (Android Debug Bridge). He then made some modifications to the OpenCV Android face detection app to send commands to the Arduino based on where the face is detected: if the face is in the right half of the screen, turn right; if not, turn left and go forward.
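The steering rule itself is charmingly simple; it boils down to something like this sketch, where the command strings are our own invention, standing in for whatever bytes [Shazin] actually sends over ADB:

```python
def steer_from_face(face_x, frame_width):
    """If the face sits in the right half of the frame, turn right;
    otherwise turn left, always driving forward. The command strings
    are hypothetical stand-ins for the real ADB messages."""
    if face_x > frame_width / 2:
        return "FORWARD_RIGHT"
    return "FORWARD_LEFT"

print(steer_from_face(500, 640))   # -> FORWARD_RIGHT
```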
This is a really interesting project with a lot of potential; we’re just hoping [Shazin] doesn’t have any evil plans for this device like strapping it to a Tank Drone that locks on to targets!
Continue reading “Android+Arduino – Face Following RC Car”
A team of mechanical and electrical engineering students at Olin College came up with a very fun semester project: a pneumatically powered marshmallow cannon that can track faces and aim for the mouth!
The device, dubbed the Confectionery Cannon, is an impressive mechanical build which required many of Olin College's manufacturing resources, such as the laser cutter, the mill, and the lathe. The majority of the device was made out of acrylic, chosen for its affordability and ease of laser cutting. Specific aluminum pieces provide strength where it's needed and were machined mostly from scrap found in the shop.
Four servos, a webcam, a solenoid, and an Arduino Uno make up the electrical system, which uses Python and OpenCV to track faces (GitHub). A PVC tank serves as the pneumatic reservoir, charged to 30 PSI and protected by a safety release valve. To fire the cannon, a sprinkler valve is opened by a beefy solenoid. The magazine currently only holds four large marshmallows, but the team is planning to upgrade it soon.
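The aiming step reduces to mapping where the mouth sits in the webcam frame onto pan/tilt corrections for the servos. A back-of-the-envelope sketch, with placeholder field-of-view numbers rather than anything from the team's GitHub:

```python
def aim_correction(face_center, frame_size, fov_deg=(60.0, 45.0)):
    """Turn the face's position in the frame into pan/tilt corrections
    (degrees) for the aiming servos. The field-of-view numbers are
    placeholders; you'd measure your webcam's actual FOV."""
    (cx, cy), (w, h) = face_center, frame_size
    pan = (cx / w - 0.5) * fov_deg[0]    # positive = swing right
    tilt = (0.5 - cy / h) * fov_deg[1]   # positive = tip up
    return pan, tilt

print(aim_correction((480, 200), (640, 480)))   # face right of center
```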
They have put together a great website with tons of information on the project, and following the break is a fun promo video they made for the project — they even got the VP of the college to try it!
Continue reading “The Face-Tracking Confectionery Cannon!”
Way back in April we looked at an impressive pick and place machine project which wasn't actually up and running yet. Well, it looks like [Brian Dorey] has really put the pedal to the metal this fall, posting nine project updates since September.
The previous system was working just fine but required quite a bit of user intervention to do the actual placing. So the first modifications toward the new goal centered around motorizing the gantry. There’s a lot of information on this, as well as the vacuum tweezer heads that were designed for the system. But for us it was exciting to read about the vibrating chip feeder. This uses the vibrating motor from an Xbox controller to jiggle the ICs from their tube packaging to a staging jig off the side of the build table. You can see a video of this after the break along with a demo of the entire machine at work.
[Brian] seems to favor Xbox parts, as he also used an Xbox Live Vision camera along with OpenCV to detect the parts and ensure they are lined up correctly. For the best results possible, the parts need to be illuminated properly, which is why he also built a rather interesting light ring using 144 red LEDs.
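We don't know exactly how [Brian]'s vision code works, but a common OpenCV approach to checking a picked part's position is template matching against a reference image of the chip, something like this sketch:

```python
import cv2

def locate_part(frame_gray, template_gray):
    """Template-match a reference image of the chip against the camera
    frame; returns a match score and the part's center so any offset
    can be corrected before placing. (A sketch; [Brian]'s actual
    pipeline may well differ.)"""
    result = cv2.matchTemplate(frame_gray, template_gray,
                               cv2.TM_CCOEFF_NORMED)
    _, score, _, (x, y) = cv2.minMaxLoc(result)
    th, tw = template_gray.shape
    return score, (x + tw // 2, y + th // 2)
```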
Continue reading “Update: Semi-automatic Pick and Place Goes Fully-Automatic”
It’s a lot of fun to see a self-balancing robot project. Rarely do they go much further than being able to keep themselves upright while being piloted remotely and annoyingly shoved by their creator as proof of their ability to remain standing on two wheels. This little anthropomorphic guy is the exception to the rule. It’s the product of [Samuel Matos] who says he didn’t have a specific purpose in mind, but just kept adding features as they came to him.
Starting with a couple of carbon fiber plates, [Samuel] cut the design by hand, using stand-offs to mount the NEMA 17 stepper motors and to connect the two halves of the chassis. It looks like he used some leftover material to make a handy little stand, which comes in useful when coding at his desk, as seen above. There's also a carbon fiber mask which makes up the face atop an articulated neck. It has two ultrasonic range-finding sensors as eyes, and the Raspberry Pi camera module as the nose. The RPi board is powerful enough to run OpenCV, which has kept [Samuel] busy. He set up a course in his living room containing tags directing where the little guy should go. It can also follow a tennis ball as it rolls around the room. What we found most impressive in the clip after the break is its ability to locate the next tag after making a turn.
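Tennis-ball following is a classic OpenCV starter: threshold the ball's yellow-green in HSV, then steer toward the largest blob. A rough sketch (the HSV bounds are ballpark values, not [Samuel]'s):

```python
import cv2
import numpy as np

def find_tennis_ball(frame):
    """Isolate yellow-green in HSV (OpenCV 4), then return the center
    of the largest blob, or None if nothing matches."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([25, 70, 70]),
                            np.array([45, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```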
Continue reading “Self-Balancing Robot Keeps Getting More Features”