[Teaching Tech] sprung about $80 for a kit to add dual extrusion to his 3D printer, plus another $20 for an accessory kit. He did get it to work well, but it wasn’t without problems, which he covers in the video below.
The design of the head uses a servo to swing two hot ends to — in theory — the same point. Each hot end has an ooze shield, so you don’t need to deal with that in your G-code by building a priming tower. However, there are some requirements for your printer.
A common sight in automobile-congested cities such as New York are parking meters lining the curbs next to parking spots. They’re an autonomous way for the city to charge for the space taken by cars parked along the sidewalk near high-traffic commercial areas, incentivizing people to wrap up their business and move their vehicle out of a costly or time-limited parking space.
The parking meter is such a mundane device most people wouldn’t look at them twice, but on the inside it’s fascinating to see how they’re engineered, how that’s changed through the years, and how a software bug handicapped thousands of digital meters at the start of 2020.
The Origin Of The Parking Meter
Parking meters were originally commissioned in the 1930s by the government of Oklahoma City, due to the rapidly increasing number of automobiles, and therefore demand for parking space. Up until then, the city used patrolling policemen to regulate parking space, but they couldn’t keep up with the pace of the increased traffic and the lack of available parking space made business drop around downtown shops.
The first widely-adopted parking meter was dubbed “Black Maria”, a machine patented in 1935 by Carl C. Magee and Gerald Hale and first installed in the city in July of that year. This was a completely automated mechanical device made to solve the problem of regulating the time a driver can park their car in a given spot. It would take a nickel as payment, inserted into the mechanism by rotating a handle which also served to wind a clock spring. This clock would then tick down the remaining time the user could remain parked there, which could range from 15 minutes to an hour depending on the location.
Within days store owners noticed a positive effect in their profits thanks to the increase in customers with the regulated parking. What’s more, the coins collected from the meters also generated revenue for the city, and so, parking meters started spreading throughout the city. And as decades went, the mechanics were improved upon. A window was added into which a patrolling officer could easily look to check if the right amount of money (or money at all) was inserted. Separate panels for the coins to be easily collected without risking damage to the rest of the internal clockwork were also added.
The evolution of parking meters eventually passed through meters that could take care of parking spaces on either side, halving the number of poles needed per sidewalk. Electronic models started appearing in the 1990s, and eventually connectivity was added. With meters all hooked up to the same network, the symbiotic connection between the parking meter and your spot was severed. It didn’t matter where your car was parked anymore; you could simply take your printed ticket and put it on your dashboard to be legally parked. Further advancements led to numbered spots that can be paid for from any kiosk in the city, or through a smartphone app. But those digital advancements don’t always translate into reliability…
His exceptionally comprehensive write-up takes us through the entire process, from creating a rather useful set of 3D-printed brackets for a Pi and camera through deciding the combination of artificial intelligence software components required, to making the eventual decision to offload part of the processing to a cloud service through a 4G mobile phone link. In this he used Cortex, a system designed for easy deployment of machine learning models, which he is very impressed with.
The result is a camera in his car that identifies and reads the plates on the vehicles around it. In one way this has something of Big Brother about it, but in another it points to a future of ever more accessible, self-contained AI applications that don’t depend on a cloud service and aren’t quite so sinister. It’s an inevitable progression whose privacy questions may go beyond a Hackaday piece, but it’s also a fascinating area of our remit that should be accessible at our level.
Watching a child learn to run is a joyous, but sometimes painful experience. It seems the same is true for [James Bruton]’s impressive Sonic the Self-Balancing robot, even with bendable knees and force sensitive legs.
We covered the mechanical side of the project recently, and now [James] has added the electronics to turn it into a truly impressive working robot (videos after the break). Getting it to this point was not without challenges, but fortunately he is sharing the experience with us, wipe-outs and all. The knees of this robot are actuated using a pair of motors with ball screws, which are not back-drivable. This means that external sensors are needed to allow the motors to actively respond to inputs, which in this case are load cells in the legs and an MPU6050 IMU for balancing. The main control board is a Teensy 3.6, with an NRF24 module providing remote control.
[James] wanted the robot to be able to lean into turns and handle uneven surfaces (small ramps) without tipping or falling over. The leaning part was fairly simple (for him), but the sensor integration for uneven surfaces turned out to be a real challenge, and required multiple iterations to get working. The first approach was to move the robot in the direction of the tipping motion to absorb it, and then return to level. However, this could cause it to tip over slightly larger ramps. When trying to keep the robot level while going over a ramp with one leg, it would go into wild side-to-side oscillations as it dropped back to level ground. This was corrected by using the load cells to dampen the motion.
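To get a feel for the sensor fusion a balancing robot like this relies on, a complementary filter is a common way to turn MPU6050 gyro and accelerometer readings into a stable tilt estimate. The filter gain and sample period below are illustrative assumptions, not values from [James]’s build:

```cpp
// Complementary filter: fuse a gyro rate (deg/s) with an
// accelerometer-derived angle (deg) into one tilt estimate.
// An alpha near 1.0 trusts the fast but drifty gyro short-term,
// while the noisy but drift-free accelerometer slowly corrects it.
// The gain here is an assumed example value.
struct TiltFilter {
    double angle = 0.0;   // current tilt estimate, degrees
    double alpha = 0.98;  // gyro weight per update (assumed)

    double update(double gyroRate, double accelAngle, double dt) {
        angle = alpha * (angle + gyroRate * dt)
              + (1.0 - alpha) * accelAngle;
        return angle;
    }
};
```

With the filtered angle in hand, a control loop can drive the wheels toward zero tilt; damping terms (here, from the load cells) keep that loop from oscillating.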
When we last saw [isaac879]’s levitating RGB time fountain, it was made of wood which meant that it would absorb water and didn’t really show off the effect very well. His new version solves this problem with an acrylic case, new PCB and an updated circuit.
Like the original, this project drops water past strobing RGB LEDs, creating an illusion of levitating, undulating colored water droplets. The pump at the top creates the droplets, but the timing has a tendency to drift over time. He thus implemented a PID controller to manage the pump’s drip rate, which was done by having the droplets pass by an infrared diode connected to an ATtiny85. The ’85 used the diode and PWM to control the pump motor speed and communicated with the Arduino over I2C.
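The core of a drip-rate PID loop like that can be sketched in a few lines. This is a minimal illustration, assuming the error is the difference between a target interval between drop detections and the interval the IR diode actually measures; the gains and setpoint are invented for the example, not taken from [isaac879]’s firmware:

```cpp
// Minimal PID controller for a drip rate: the error is the gap
// between a target drop interval and the measured one, and the
// output is an adjustment to the pump's PWM duty. Gains are
// illustrative example values only.
struct DripPid {
    double kp, ki, kd;
    double integral = 0.0;
    double prevError = 0.0;

    // targetMs/measuredMs: milliseconds between drops.
    // dt: time since the last update, in seconds.
    double update(double targetMs, double measuredMs, double dt) {
        double error = targetMs - measuredMs;
        integral += error * dt;
        double derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};
```

If drops arrive too slowly, the output rises and speeds the pump up; too fast, and it backs off, holding the strobe-to-drop phase steady.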
The video shown below shows the whole process of designing and building the new time fountain. Everything from circuit and PCB design to 3D printing to assembly is shown along with narration describing what’s going on in case you want to build one yourself. If you do, all the files and components required are listed in the info section of the video.
There’s more that [isaac879] wants to do to improve the time fountain, but V2 looks great. It’s sleeker and smaller than the original and solves some of the design issues of the first. For more inspiration, check out some of the other levitating water fountain projects that have been posted over the years.
If you want to take beautiful night sky pictures with your DSLR and you live between 15 degrees and 55 degrees north latitude you might want to check out OpenAstroTracker. If you have a 3D printer it will probably take about 60 hours of printing, but you’ll wind up with a pretty impressive setup for your camera. There’s an Arduino managing the tracking and also providing a “go to” capability.
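To see what the Arduino is up to, consider that a tracking mount’s right-ascension axis has to turn exactly once per sidereal day (about 86164 seconds). A quick back-of-the-envelope step-rate calculation, using made-up motor and gear figures rather than OpenAstroTracker’s actual specs:

```cpp
// Steps per second needed to track the sky: one full revolution
// of the RA axis per sidereal day (~86164.1 s). The motor
// resolution and gear ratio passed in are example values only.
double siderealStepRate(double stepsPerMotorRev, double gearRatio) {
    const double siderealDaySeconds = 86164.1;
    double stepsPerAxisRev = stepsPerMotorRev * gearRatio;
    return stepsPerAxisRev / siderealDaySeconds;  // steps per second
}
```

For instance, a microstepped motor at 3200 steps per revolution through a 100:1 reduction would need only a few steps per second, which is well within an Arduino’s reach even while it handles “go to” slews.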
Would you like to know the great thing about this community we have here? All the spitballing that goes on every day in the comments, the IO chat rooms, and in the discussion threads of thousands of projects. One of our favorite things about the Hackaday universe is that we help each other out, and because of that, our collective curiosity pushes so many designs forward.
We gasped when we saw the new mechanism — a total of 15 rack and pinion linear actuators that make the kalimba look like a tiny mechanical pipe organ. Now the servos float, fixed into a three-part frame that straddles the sound box. [Gurpreet] melted servo horns down to their hubs rather than trying to print something that fits the servos’ sockets.
Thumb your way past the break to check out the build video. [Gurpreet] doesn’t shy away from showing what went wrong and how he fixed it, or from sharing the 3D printering sanity checks along the way that kept him going.
Plucking kalimba tines is a difficult problem to solve because they’re stiff, yet their timbre is sensitive to fine gradations of pressure. A slightly easier alternative? Make a toy player piano.