Beyond The Basics: Exploring Exotic Scope Trigger Modes

Will Rogers once said that veterinarians are the best doctors because their patients can’t tell them where it hurts. I’ve often thought that electronics people have a similar problem. In many cases, what’s wrong with our circuits isn’t visible. Sure, you can visually identify a backward diode, a bad solder joint, or a blown fuse. But you can’t look at a battery and see that it is dead, or see that a clock signal isn’t reaching some voltage. There are lots of ways to look at what’s really going on, but there is no substitute for a scope. It used to be hard for the average person to own a scope, but these days, it doesn’t require much. If you aren’t shopping for the best tech or you are willing to use it with a PC, oscilloscopes are quite affordable. If you spend even a little, you can now get scopes that are surprisingly capable with features undreamed of in years past. For example, many modern scopes have a dizzying array of triggering options. Do you need them? What do they do? Let’s find out.

I’ll be using a relatively new Rigol DHO924S, but none of the triggering modes are unique to that instrument. Sometimes, they have different names, and, of course, their setup might look different than my pictures, but you should be able to figure it out.

What is Triggering?

In simple terms, an oscilloscope plots time across the X-axis and voltage vertically on the Y-axis. So you can look at two peaks, for example, and measure the distance between them to understand how far apart they are in time. If the signal you are measuring happens repeatedly — like a square or sine wave, for example — it hardly matters which set of peaks you look at. After all, they are all the same for practical purposes.

Pretty square waves all in a row. Channel 2 is 180 degrees out of phase (inverted). But is that all there is?

The problem occurs when you want to see something relative to a particular event. Basic scopes often have level triggering. They “start” when the input voltage goes above or below a certain value. Suppose you are looking at a square wave that goes from 0 V to 5 V. You could trigger at about 2.5 V, and the scope will never start in the middle of a cycle.

Digital scopes tend to capture data before and after the trigger, so the center of the screen will be right on an edge, and you’ll be able to see the square waves on either side. The picture shows two square waves on the screen with the trigger point marked with a T in the top center of the display. You can see the level in the top bar and also marked with a T on the right side of the screen.

What happens if there are no pulses on the trigger source channel? That depends. If you are in auto mode, the scope will eventually get impatient and trigger at random. This lets you see what’s going on, but there’s no reference. If you are in normal mode, though, the scope will either show nothing or show the last thing it displayed. Either way, the green text near the top left corner will read WAIT until the trigger event occurs. Then it will say T’D.
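The level-trigger-plus-pre/post-capture behavior described above is easy to see in software. Here is a minimal, purely illustrative Python sketch (not tied to any real scope’s firmware or API): it samples a 0 V to 5 V square wave, “triggers” on the first rising edge through 2.5 V, and returns a window of samples centered on that point, the way a digital scope keeps data from both before and after the trigger.

```python
# Illustrative sketch of digital-scope rising-edge level triggering.
# A real scope does this in hardware on a continuous sample stream.

def rising_edge_trigger(samples, level):
    """Return the index of the first sample that crosses `level` upward."""
    for i in range(1, len(samples)):
        if samples[i - 1] < level <= samples[i]:
            return i
    return None  # no trigger event: in normal mode the scope would just WAIT

def capture(samples, level, half_window):
    """Return a window of samples centered on the trigger point."""
    t = rising_edge_trigger(samples, level)
    if t is None:
        return None
    return samples[max(0, t - half_window):t + half_window]

# A crude 0/5 V square wave, 8 samples per half-cycle
wave = ([0.0] * 8 + [5.0] * 8) * 4
window = capture(wave, 2.5, 4)
print(window)  # pre-trigger lows on the left, post-trigger highs on the right
```

Because the trigger always lands on the same edge of the waveform, every capture lines up the same way on screen, which is exactly why a steady square wave looks frozen rather than rolling.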


Up Close And Personal With A MEMS Microphone

If you’ve ever wondered what lies beneath the barely visible hole in the can of a MEMS microphone, you’re in luck, because [Zach Tong] has a $10 pair of earbuds to sacrifice for the cause and an electron microscope.

For the uninitiated, MEMS stands for microelectromechanical systems, the tiny silicon machines that power some of the more miraculous functions of smartphones and other modern electronics. The most familiar MEMS device might be the accelerometer that gives your phone a sense of where it is in space; [Zach] has a deep dive into MEMS accelerometers that we covered a while back.

MEMS microphones seem a little bit easier to understand mechanically, since all they have to do is change vibrations in air into an electrical signal. The microphone that [Zach] tore down for this video is ridiculously small; the SMD device is only about 3 mm long, with the MEMS chip under the can a fraction of a millimeter on a side. After some overall views with the optical microscope, [Zach] opened the can and put the guts under his scanning electron microscope. The SEM shots are pretty amazing, revealing a dimpled silicon diaphragm over a second layer with holes etched right through it. The dimples on the diaphragm nest into the holes, forming an air-dielectric capacitor whose capacitance varies as sound waves vibrate the diaphragm.
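The variable air-dielectric capacitor idea lends itself to a back-of-the-envelope check. The numbers below are illustrative guesses, not measurements from [Zach]’s video, but they show why a sub-millimeter diaphragm over a micron-scale air gap yields a capacitance in the picofarad range that wiggles as sound moves the diaphragm:

```python
# Parallel-plate approximation of a MEMS microphone: C = eps0 * A / d.
# All dimensions are assumed for illustration, not taken from the teardown.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m):
    """Capacitance of an air-gap parallel-plate capacitor."""
    return EPS0 * area_m2 / gap_m

area = (0.5e-3) ** 2   # assume a 0.5 mm x 0.5 mm diaphragm
gap = 2e-6             # assume a 2 um air gap at rest

c_rest = plate_capacitance(area, gap)
c_deflected = plate_capacitance(area, gap - 0.1e-6)  # sound pushes it 0.1 um closer

print(f"rest: {c_rest * 1e12:.2f} pF, deflected: {c_deflected * 1e12:.2f} pF")
```

The tiny capacitance change rides on a DC bias inside the package, where an on-chip ASIC converts it into the electrical output signal.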

The most visually interesting feature, though, might be the deep cavity lying behind the two upper surfaces. The cavity, which [Zach] says bears evidence of having been etched by the deep reactive ion etching method, has cool-looking corrugations in its walls. The sheer size of the cavity relative to the thin layers covering it suggests it’s a resonating cavity for the sound waves.

Thanks to [Zach] for this in-depth look at a device that’s amazingly complex yet remarkably simple.


Explore Neural Radiance Fields In Real-time, Even On A Phone

Neural Radiance Fields (NeRF) is a method of reconstructing complex 3D scenes from sparse 2D inputs, and the field has been growing by leaps and bounds. Viewing a reconstructed scene is still nontrivial, but there’s a new innovation on the block: SMERF is a browser-based method of enabling full 3D navigation of even large scenes, efficient enough to render in real time on phones and laptops.

Don’t miss the gallery of demos which will run on anything from powerful desktops to smartphones. Notable is the distinct lack of blurry, cloudy, or distorted areas which tend to appear in under-observed areas of a NeRF scene (such as indoor corners and ceilings). The technical paper explains SMERF’s approach in more detail.

NeRFs as a concept first hit the scene in 2020, and the rate of advancement has been simply astounding, especially compared to demos from just last year. Watch the short video summarizing SMERF below, and marvel at how it compares to other methods, some of which are themselves only months old.


A hand holds a LEGO replica of a Polaroid camera. The back of the "camera" has been removed to show the series of Technic pieces inside that allow the camera shutter to work.

How A LEGO Set Is Born

For many makers, LEGO bricks were the first window into turning an idea in your head into something real. The Verge dug into how a LEGO set itself goes from idea to the shelves.

While most sets come from the minds of LEGO designers, since 2008, fans can submit their own sets to LEGO Ideas for the chance to become a real product. In this case, we follow the journey of [Marc Corfmat]’s Polaroid OneStep Camera from his initial attempts at LEGO stardom with his brother [Nick] to the current set that took off.

While the initial idea and build are the seed for a new set, once the project is in the hands of LEGO, designers meticulously make revision after revision to ensure the set is enjoyable to build and any moving parts continue to function for thousands of cycles. This is all weighed against the total cost of the BOM as well as any licensing required for intellectual property. One particularly interesting part of the article is how designers at LEGO are afforded a certain number of “frames” for custom bricks, which leads to some interesting hacks and collaboration, as all good constraints do.

For more LEGO hacks, check out LEGO’s long lost cousin, testing LEGO-compatible axle materials, or these giant LEGO-like pieces.

NASA’s Tech Demo Streams First Video From Deep Space Via Laser

Everyone knows that the most important part of a tech demo is to make the right impression, and the team over at NASA’s Jet Propulsion Laboratory (JPL) definitely had this part nailed down when they showed off streaming a cat video from deep space using laser technology as part of NASA’s Deep Space Optical Communication (DSOC) program. This system consists of a ground-based laser transmitter and receiver along with a space-based laser transceiver, which for this experiment rode aboard the Psyche spacecraft at a distance of 31 million kilometers – 80 times the distance between the Moon and Earth.

After a range of tests with the system to shake out potential issues, the team found that they could establish a 267 Mbps link with a one-way latency of a mere 101 seconds, allowing Psyche’s transceiver to transmit the preinstalled 15-second high-definition video effectively in real time and making the cat Taters instantly world-famous. Although the potential for space-based cat videos cannot be overstated, the main purpose of DSOC is to allow spacecraft to send back much larger data sets than they could before.
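The reported figures hang together on a quick sanity check. This is a rough estimate, not NASA’s own calculation: at 31 million kilometers, one-way light time works out to roughly 100 seconds, in line with the quoted 101-second latency, and the distance is indeed about 80 times the Earth–Moon separation:

```python
# Rough sanity check of the DSOC numbers quoted in the article.

C_KM_PER_S = 299_792.458     # speed of light in km/s
distance_km = 31_000_000     # Psyche's distance at the time of the demo
moon_km = 384_400            # mean Earth-Moon distance

one_way_s = distance_km / C_KM_PER_S
ratio = distance_km / moon_km

print(f"one-way light time: {one_way_s:.0f} s")   # ~103 s, near the 101 s quoted
print(f"{ratio:.0f}x the Earth-Moon distance")    # ~80x, matching the article
```

The small gap between ~103 s and the quoted 101 s is just rounding in the 31-million-kilometer figure; the spacecraft’s exact range at transmission time was slightly less.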

For robotic and potential future manned missions, DSOC would mean high-bandwidth video and data links, enabling more science, better communication, and possibly the occasional cat video during interplanetary travel.


Reverse-Engineering The Stadia Controller Bluetooth Switching Procedure

Ever since the demise of Google’s Stadia game streaming service, the associated Stadia controllers have found themselves in limbo, with the only way to switch them from the proprietary WiFi mode to Bluetooth being to connect to a special Google website. Yet as [Gary] found out, all this website does is use WebUSB and WebHID to flash a generic Bluetooth controller firmware image over the original Stadia firmware. This is the reason why it’s a one-way process, but this wasn’t to [Gary]’s liking, so he figured out how to flash the controller himself, with the option to flash the original Stadia firmware or something else onto it later, too.

[Gary]’s stadiatool follows the same procedure as the Google Stadia website, just implemented in Python and outside the control of Google. Google has recently announced that it will keep the Bluetooth switching website online one year longer – until December 31st, 2024 – but at some point this service will go away, and only projects like [Gary]’s, together with squirreled-away firmware images, will be able to save any stray Stadia controllers that will inevitably be discovered in the back of a warehouse in the future.

Although we reported on the demise of Stadia when it happened in January of 2023, as Ars Technica notes it was common in 2022 to buy into Stadia and get a controller manufactured in the 2019 launch year, suggesting massive overproduction.

Robotic Rose Of Enchantment Drops Petals On Command

In Disney’s 1991 film Beauty and the Beast, an enchantress curses the young (10 or 11-year-old) prince to beast-hood for spurning her based solely on her appearance. She gives him a special rose that she says will bloom until his 21st birthday, at which time he’ll be turned back into a prince, provided that he learned to love by then. If not, he’ll be a beast for eternity. As the years go by, the rose drops the occasional petal and begins to wilt under the bell jar where he keeps it.

[Gord Payne] was tasked with building such a rose of enchantment for a high school production and knocked it out of the park. With no budget provided, [Gord] used what he had lying about the house, like nylon trimmer line. In fact, that’s probably the most important part of this build. A piece of trimmer line runs up through the stem made of tubing and out the silk rose head, which connects with a custom 3-D printed part.

Each loose petal hangs from the tubing using a short length of wire. Down at the base, the trimmer line is attached to a servo horn, which is connected to an Adafruit Circuit Playground. When the button is pressed on the remote, the servo retracts the trimmer line a little bit, dropping a petal. Be sure to check out the demo after the break.
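The control logic behind the mechanism is simple enough to sketch. The following is a hypothetical reconstruction in plain Python, not [Gord]’s actual Circuit Playground code, and the petal count and servo step size are made-up values: each button press winds the trimmer line in by one fixed servo increment, releasing one petal per press until none remain.

```python
# Hypothetical sketch of the petal-drop logic (not the actual firmware).

class PetalDropper:
    def __init__(self, petals, step_deg=10, start_deg=0):
        self.petals_left = petals    # petals still hanging on the line
        self.angle = start_deg       # current servo horn angle, degrees
        self.step = step_deg         # retraction per button press (assumed)

    def button_pressed(self):
        """Retract the line one step; return True if a petal dropped."""
        if self.petals_left == 0:
            return False             # line fully wound in, nothing left to drop
        self.angle += self.step
        self.petals_left -= 1
        return True

rose = PetalDropper(petals=5)
drops = [rose.button_pressed() for _ in range(7)]
print(drops, rose.angle)  # five presses drop petals; the rest do nothing
```

On real hardware, updating `self.angle` would become a call into the servo library to move the horn, but the bookkeeping is the same: one step of line retraction per cue, one petal per step.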

Dropping petals is an interesting problem to solve. Most of the flower hacks we see around here involve blooming, which presents its own set of troubles.
