When teaching Industrial Automation to students, you need to give them access to the things they will encounter in industry. Most subjects can be taught using computer programs or simulators — for example topics covering PLC, DCS, SCADA or HMI. But to teach many other concepts, you need to have the actual hardware on hand to be able to understand the basics. For example, machine vision, conveyor belts, motor speed control, safety and interlock systems, sensors and peripherals all interface with the mentioned control systems and can be better understood by having hardware to play with. The team at [Absolutelyautomation] have published several projects that aim to help with this. One of these is the DIY conveyor belt with a motor speed control and display.
This is more of an initial, proof-of-concept project, and there is a lot of room for improvement. The build itself is straightforward. All the parts are standard, off-the-shelf items — stuff you can find in any store selling 3D printer parts. A few simple tools are all that's required to put it together. The only tricky part of the build is likely to be the conveyor belt itself. [Absolutelyautomation] offers a few suggestions, mentioning old car or truck tyres and the elastic resistance bands used for therapy or exercise as options.
If you plan to replicate this, a few changes would be recommended. The 8 mm rollers could do with larger “drums” over them — about an inch or two in diameter — which helps prevent belt slippage and makes belt tension easier to adjust. It ought to be easy to 3D print the add-on drums. The belt might also need support plates between the rollers to prevent sag. The speed display ought to read in linear units — feet per minute or metres per minute — rather than motor rpm. And while the electronics includes an RS-485 interface, it would help to add RS-232, RS-422 and Ethernet to the mix.
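Converting motor rpm into a linear belt speed only takes the drive drum's circumference and any gearing into account. Here's a minimal sketch of the conversion the display would perform; the drum diameter and gear ratio are illustrative values, not figures from the build:

```python
import math

def belt_speed_m_per_min(motor_rpm: float, drum_diameter_mm: float,
                         gear_ratio: float = 1.0) -> float:
    """Linear belt speed in metres per minute.

    gear_ratio is motor revolutions per drum revolution (1.0 = direct drive).
    """
    drum_rpm = motor_rpm / gear_ratio
    circumference_m = math.pi * drum_diameter_mm / 1000.0
    return drum_rpm * circumference_m

# Example: a 60 rpm motor directly driving a 50 mm drum
print(round(belt_speed_m_per_min(60, 50), 2))  # 9.42 m/min
```

The same relation, run in reverse, converts an operator's feet-per-minute setpoint back into a motor rpm target.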
While this is a simple build, it can form the basis for a series of add-ons and extensions to help students learn more about automation and control systems. Or maybe you want a conveyor belt in your basement, for some reason.
Remember that feeling when you first looked through a microscope? Now you can re-live it, but in a slightly different way. [Venkes] came up with a way to make a Laser Scanning Microscope (LSM) with mostly off-the-shelf components that you probably have sitting around, collecting dust, in your garage. He did it using some modified DVD pick-ups, an Arduino Uno, a laser and an LDR.
To be honest, there’s some more stuff involved in making the LSM, but [Venkes] wrote a detailed Instructable explaining how everything fits together. You will need a fair dose of patience: it’s not easy to get the focus right, and it’s quite slow — an image takes about half an hour to complete — but it can manage 1300× magnification at 65k pixels (256×256). From reading the instructions, it seems you will also need a steady hand to assemble it; some steps look tricky. On the software side, the LSM uses an Arduino and Processing. The Arduino is responsible for steering the lens and taking the LDR readings; this information is then sent to Processing, which takes care of interpreting the data and translating it into an image.
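The host side of that pipeline boils down to arranging a stream of LDR readings into a bitmap. The real project does this in Processing; the Python stand-in below just illustrates the data flow, and the function name and 10-bit ADC scaling are assumptions rather than details from [Venkes]'s code:

```python
def assemble_image(samples, width=256, height=256):
    """Arrange a flat stream of ADC readings (0-1023) into rows of
    8-bit grey values, one value per scanned pixel."""
    samples = list(samples)
    if len(samples) != width * height:
        raise ValueError("expected one sample per pixel")
    grey = [v * 255 // 1023 for v in samples]  # 10-bit ADC -> 8-bit grey
    return [grey[y * width:(y + 1) * width] for y in range(height)]

# A uniform half-scale field maps to mid-grey
img = assemble_image([512] * (256 * 256))
print(img[0][0])  # 127
```

With a scan this slow, the image can also be drawn row by row as samples arrive, which is exactly why a half-hour exposure is still watchable in progress.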
Most of us who have dabbled a little in electronics will have made our own printed circuit boards at some point. We’ll have rubbed on sticky transfers, laser-printed onto acetate, covered our clothing with ferric chloride stains, and applied ourselves to the many complex and tricky processes involved. And after all that, there’s a chance we’ll have ended up with boards that were over- or under-etched, and had faults. For many of us, the arrival of affordable online small-run professional PCB production from those mostly-overseas suppliers has been a step change in our electronic construction abilities.
[Fran Blanche] used to make her own boards for her Frantone effects pedals, but as she admits it was a process that could at times be tedious. With increased production she had to move to using a board house, and for her that means a very high-quality local operation rather than one on the other side of the world. In the video below the break she takes us through each step of the PCB production process as it’s done by the professionals with a human input rather than by robots or ferric-stained dilettantes.
Though it’s twenty minutes or so long, it’s an extremely interesting watch; while we’re all used to casually specifying the parameters of the different layers and holes in our CAD packages, we may not have seen how they translate to the real-world processes that deliver our finished boards. Some operations are very different from those you’d do at home. For example, the holes are drilled as a first step rather than at the end, because as you might imagine the through-plating process needs a hole to plate. The etching is a negative process rather than a positive one: the exposure serves to reveal the tracks for the plating step, and the plating then becomes the etch resist.
If you’re used to packages from far afield containing your prototype PCBs landing on your doorstep as if by magic, take a look. It’s as well to know a little more detail about how they were made.
TOBE is a toolkit that enables the user to create Tangible Out-of-Body Experiences, created by [Renaud Gervais] and others and presented at the TEI ’16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction. The goal is to expose the inner states of users using physiological signals such as heart rate or brain activity. The toolkit is a proposal that covers the creation of a 3D printed avatar where visual representations of physiological sensors (ECG, EDA, EEG, EOG and breathing monitor) are displayed, the creation and use of these sensors based on open hardware platforms such as Bitalino or OpenBCI, and signal processing software using OpenViBE.
In their research paper, the team identified the relevant signals and mental states, which they organized into three different types:
States perceived by self and others, e.g. eye blinks. Even if these signals may sometimes seem redundant — one can simply look at the person to see them — they are crucial for associating feedback with a user.
States perceived only by self, e.g. heart rate or breathing. Mirroring these signals gives the feedback a sense of presence.
States hidden from both self and others, e.g. mental states such as cognitive workload. This type of metric holds the most promising applications, since they are mostly unexplored.
By visualising their own inner states, and with the ability to share them, users can develop a better understanding of themselves as well as of others. Analysing their avatar in different contexts allows users to see how they react in different scenarios, such as stress, working, or playing. When several users join, they can see how each of them responds to the same stimuli, for example.
A University of Utah team has a working prototype of a new twist on fluid-filled lenses for the correction of vision problems: automatic adjustment and refocus depending on what you’re looking at. Technically, the glasses have a distance sensor embedded in the front of the frame and continually adjust the focus of the lenses. An 8 gram, 110 mAh battery powers the prototype for roughly 6 hours.
Eyeglasses that can adapt on the fly to different focal needs are important because many people with degraded vision suffer from more than one condition at the same time, which makes addressing their vision problems more complex than a single corrective lens can manage. For example, many people who are nearsighted or farsighted (where far objects and near objects, respectively, are seen out of focus) also suffer from a general, age-related loss of the eye’s ability to change focus. As a result, people require multiple sets of eyeglasses for different conditions. Bifocal, trifocal, or progressive lenses are really just multiple sets of lenses squashed into a smaller form factor, and they greatly reduce the wearer’s field of view, which is itself a significant vision impairment. A full field of view could be restored if eyeglass lenses were able to adapt to different needs based on object distance, and that is what this project achieves.
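The optics behind that adaptation is simple: to bring an object at distance d into sharp focus, the lens needs to add roughly 1/d diopters of power, and that is the figure a distance-sensing controller can feed to a fluid lens. Here's a quick sketch of the relation; this is the textbook thin-lens approximation, not the team's actual control law:

```python
def added_power_diopters(object_distance_m: float) -> float:
    """Optical power (in diopters) a lens must add so that an object
    at the given distance is imaged as if it sat at infinity: P = 1/d."""
    if object_distance_m <= 0:
        raise ValueError("distance must be positive")
    return 1.0 / object_distance_m

# Reading distance (40 cm) versus a screen at 1 m
print(added_power_diopters(0.4))  # 2.5
print(added_power_diopters(1.0))  # 1.0
```

The steep growth of 1/d at close range is why reading is the hard case, and why the prototype puts its distance sensor front and centre.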
When we learn about the internals of a microprocessor, we are shown a diagram that resembles the 8-bit devices of the 1970s. There will be an ALU, a program counter, a set of registers, and address and data line decoders. Most of us never go significantly further into the nuances of more modern processors because there is no need. All a processor needs to be is a black box, unless it has particularly sparked your interest or you are working in bare-metal assembly language.
We imagine our simple microprocessor as built from logic gates, and indeed there have been many projects on these pages that create working processors from piles of 74 series chips. But just occasionally a project comes along that reminds us there is more than one way to build a computer, and our subject today is just such a moment. [Olivier Bailleux] has created his “Gray-1”, a processor whose only active components are memory chips, both ROM and RAM.
The clever part comes with the descriptions of how the ROMs are used to recreate the different functions of the processor through careful programming. Some functions, such as registers, use feedback loops in which some of the address lines are driven from the data lines to hold the ROM at a set location. The name of the computer comes from its program counter, which counts in Gray code.
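Gray code's appeal for a counter built from lookup tables is that consecutive values differ in exactly one bit, so only a single address line ever flips between steps. A short Python illustration of the encoding itself (not taken from the Gray-1's ROM contents):

```python
def to_gray(n: int) -> int:
    """Binary -> reflected Gray code."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Gray code -> binary, by cumulative XOR of the shifted bits."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Successive Gray codes differ in exactly one bit
seq = [to_gray(i) for i in range(8)]
print([format(g, '03b') for g in seq])
# ['000', '001', '011', '010', '110', '111', '101', '100']
```

That single-bit-change property sidesteps the glitches a plain binary counter can produce when several address lines change at once.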
The full processor implements a RISC architecture, and there is a simulator to allow code development without a physical unit. The write-up is both comprehensive and accessible, and makes a fascinating read.
With interest and accessibility to both wearable tech and virtual reality approaching an all-time high, three students from Cornell University — [Daryl Sew, Emma Wang, and Zachary Zimmerman] — seek to turn your body into the perfect controller.
That is the end goal, at least. Their prototype consists of three Kionix tri-axis accelerometer, gyroscope, and magnetometer sensors (at the hand, elbow, and shoulder) to trace the arm’s movement. Relying on a PC to do most of the computational heavy lifting, a PIC32 in a t-shirt canister — hey, it’s a prototype! — receives data from the three joint positions, transmitting it to said PC via serial, which renders a usable 3D model in a virtual environment. After a brief calibration, the setup tracks the arm movement with only a little drift in the readings over a few minutes.
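Once the sensor data has been fused into joint angles, turning them into elbow and hand positions for the renderer is plain forward kinematics. The planar sketch below shows the idea; the segment lengths and angle conventions are invented for illustration, and the real project works in three dimensions from the full IMU orientations:

```python
import math

def forward_kinematics(shoulder_angle, elbow_angle,
                       upper_len=0.30, fore_len=0.25):
    """Planar arm model: shoulder angle is measured from the x-axis,
    elbow angle relative to the upper arm (both in radians).
    Returns (elbow_xy, hand_xy); segment lengths are made-up metres."""
    ex = upper_len * math.cos(shoulder_angle)
    ey = upper_len * math.sin(shoulder_angle)
    hx = ex + fore_len * math.cos(shoulder_angle + elbow_angle)
    hy = ey + fore_len * math.sin(shoulder_angle + elbow_angle)
    return (ex, ey), (hx, hy)

# Arm held straight out along x: elbow at (0.30, 0), hand at (0.55, 0)
print(forward_kinematics(0.0, 0.0))
```

Any gyro bias in the fused angles propagates straight through this chain, which is why the slow drift the students observed shows up as the virtual hand wandering away from the real one.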