The VLF Transformation

People have long been interested in very low frequency (VLF) radio signals, but it used to be that you pretty much had to build your own receiver, which, luckily, wasn’t as hard as building your own VHF or UHF gear. There is a problem, though: these low frequencies have very long wavelengths and thus need very large antennas to get any reception. [Electronics Unmessed] says he has an answer.

These days, if you want to explore almost any part of the radio spectrum, you can probably do it easily with a software-defined radio (SDR). The antenna, though, is the key part you are probably lacking, and a small antenna will not work well at all. The video covers a fairly common idea, the loop antenna, but his approach is a bit different: he feeds the loop through a matching transformer, and he backs up his thinking with modeling and practical results.

Of course, transformers also introduce loss, but — as always — everything is a trade-off. Running hundreds of feet of wire in your yard or even in a loop is not always a possibility. This antenna looks like it provides good performance and it would be simple to duplicate.
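The matching idea is easy to sketch numerically. An ideal transformer reflects the load impedance back to the primary multiplied by the square of the turns ratio, so you can estimate the winding ratio needed to match a low-impedance loop to a 50-ohm receiver input. The numbers below are purely illustrative, not taken from the video:

```python
import math

def turns_ratio(z_primary: float, z_secondary: float) -> float:
    """Ideal-transformer turns ratio (Np/Ns) for an impedance match.

    An ideal transformer reflects the secondary-side impedance back
    to the primary multiplied by (Np/Ns)**2, so matching requires
    (Np/Ns)**2 = z_primary / z_secondary.
    """
    return math.sqrt(z_primary / z_secondary)

# Illustrative example: matching a 50-ohm SDR input to a small loop
# with roughly 0.5 ohms of radiation-plus-loss resistance at VLF.
ratio = turns_ratio(50.0, 0.5)
print(f"turns ratio Np:Ns = {ratio:.1f}:1")  # 10.0:1
```

In practice, a real transformer adds winding resistance and core loss, which is exactly the trade-off the video weighs against the impractical alternative of a huge wire antenna.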

Early radio was VLF. Turns out, VLF may provide an unexpected public service in space.

Continue reading “The VLF Transformation”

Silent Speak And Spell Gets Its Voice Back

While talking computers are old hat today, in 1978, a talking toy like the Speak and Spell was the height of novel tech. [Kevin] found a vintage one, but it didn’t work. It looked like someone had plugged in the wrong power adapter, leading, undoubtedly, to one or more unhappy children. There was some damage that suggested someone had already tried to repair it, without success.

In addition to effecting the repair, [Kevin] took lots of pictures, so if you ever wanted to peek inside one of these, this is your chance. The case had no screws, just clips, although apparently some of the newer models did have some screws.

Continue reading “Silent Speak And Spell Gets Its Voice Back”

I, 3D Printer

Like many of us, [Ben] has too many 3D printers. What do you do with the old ones? In his case, he converted it into a robotic camera rig. See the results, including footage from the robot, in the video below. In addition to taking smooth video, the robot can spin around to take photos for photogrammetry.

In fact, the whole thing started with an idea of building a photogrammetry rig. That project didn’t go as well as planned, but it did lead to this interesting project.

Continue reading “I, 3D Printer”

Recto: In Case Programming Isn’t Hard Enough

There’s long been a push to stop writing code as a sequence of lines and go to something graphical, which has been very successful in some areas and less so in others. But even when you use something graphical like Scratch, isn’t it really just standing in for lines of code? Many graphical environments are really just interface builders, and you still write traditional code underneath. [Masato Hagiwara] asks: can you write code that is actually a 2D graphic, where the graphical layout isn’t a cover for the code but is the code itself? His answer is Recto.

Whereas a C program, for example, has a syntactical structure of lines, a Recto program has rectangles. Rectangles can contain data, and their structure naturally mimics the kinds of structures we usually use: columns, rows, matrices, and so on. Rectangles can also contain… wait for it… other rectangles. Special rectangles act as dictionaries or sets.
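We haven’t written any Recto ourselves, but the nesting idea maps naturally onto ordinary nested containers. Here is a rough Python analogy of our own devising (this is not Recto syntax, just an illustration of rectangles holding values or other rectangles):

```python
# A rough analogy for Recto's nesting (not actual Recto syntax):
# a "rectangle" is a container that may hold values or other
# rectangles, so rows, columns, and matrices fall out naturally.
row = [1, 2, 3]                        # a 1x3 rectangle of data
matrix = [[1, 2], [3, 4]]              # a rectangle of rectangles
record = {"name": "recto", "dims": 2}  # a "special" rectangle acting as a dictionary

def total(rect):
    """Sum every number in an arbitrarily nested rectangle."""
    if isinstance(rect, (int, float)):
        return rect
    return sum(total(inner) for inner in rect)

print(total(matrix))  # 10
```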

This reminded us a bit of Lisp, and, in fact, [Hagiwara] makes that connection clear later in the post. The real problem is how do you… write? draw?… this kind of code? At first, he laid programs out in a spreadsheet before compiling them. Now he’s built an editor for it, and you can try it in your browser. There’s also a limited-feature compiler that can handle simple programs.

[Hagiwara] goes on to show how this representation would work for natural human languages, too. Honestly, we have enough trouble with English and the few other human languages we struggle with, but it is interesting to contemplate.

If you like strange languages, there’s Piet. Not that either of these is the weirdest we’ve ever seen.

The Nibbler Was Quite A Scamp

The late 1970s were an interesting time for microcomputers. The rousing success of chips like the 8080, the Z80, the 6800, and the 6502 made everyone want a piece of the action. National Semiconductor produced its SC/MP, which technically stood for Simple Cost-effective Micro Processor but was commonly known as Scamp. There were several low-cost development boards built around this processor, and [Hello World] is looking at Digikey’s “Nibbler”, which was a fairly nice computer for only $150. Check it out in the video below.

The SC/MP was made to be cheap. It had a strange bank-switching scheme reminiscent of the Microchip PIC16F family. It also had, like a lot of old discrete computers, a serial ALU, which made it slower than many of its contemporaries. It did have good points, though: it was cheap, and the second and subsequent versions needed very few extra parts and ran from a single 5 V supply. In addition, it had pins made for connecting more than one CPU, which was quite a feat in those days.
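A serial ALU is slower because it processes operands one bit per clock through a single full adder, rather than a whole word at once. The sketch below is our own illustration of the general bit-serial technique, not a model of the SC/MP’s internals:

```python
def serial_add(a: int, b: int, width: int = 8) -> int:
    """Add two words one bit at a time, like a bit-serial ALU.

    Each "clock", one bit from each operand passes through a single
    full adder, and the carry is latched for the next bit. A word
    therefore takes `width` clocks instead of one parallel pass.
    """
    carry = 0
    result = 0
    for i in range(width):
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        s = bit_a ^ bit_b ^ carry                       # full-adder sum bit
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))  # latched carry
        result |= s << i
    return result  # carry out of the top bit is simply lost

print(serial_add(100, 55))  # 155
```

The hardware win is obvious: one full adder and a carry flip-flop instead of a full ripple-carry chain, traded for a clock-per-bit speed penalty.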

Continue reading “The Nibbler Was Quite A Scamp”

Suggested Schematic Standards

We often think that if a piece of software had the level of documentation you usually see for hardware, nobody would think much of it. Sure, there are exceptions: some hardware is beautifully documented, and poorly documented software is everywhere. [Graham Sutherland] has been reviewing schematics and put together some notes on what makes a clean schematic.

Like coding standards, some of these are a bit subjective, but we thought it was all good advice. Of course, we’ve also violated some of them when we are in a hurry to get to a simulation.

Continue reading “Suggested Schematic Standards”

Teletext Around The World, Still

When you mention Teletext or Videotex, you probably think of the 1970s British system, the well-known system in France, or the short-lived US attempt to launch the service. Before the Internet, there were all kinds of crazy ways to deliver customized information into people’s homes. Old-fashioned? Turns out Teletext is alive and well in many parts of the world, and [text-mode] has the story of both the past and the present with a global perspective.

The whole thing grew out of the desire to send closed caption text. In 1971, Philips developed a way to do that by using the vertical blanking interval that isn’t visible on a TV. Of course, there needed to be a standard, and since standards are such a good thing, the UK developed three different ones.

The TVs of the time weren’t exactly the high-resolution devices we think of these days, so the 1976 Level 1 standard allowed for regular (but Latin-only) characters and an alternate set of blocky graphics you could show on an expansive 40×24 character grid in glorious color, as long as you think seven colors is glorious. Level 1.5 added characters the rest of the world might want, and this so-called “World System Teletext” is still the basis of many systems today. It was better, but still couldn’t handle the 134 characters in Vietnamese.
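Those blocky graphics worked by treating each character cell as a two-wide by three-tall mosaic, with one bit per sub-block, so a 6-bit code selects one of 64 patterns. A toy renderer shows the idea (the bit-to-block assignment here is illustrative; real Teletext packs the six bits into the character code slightly differently):

```python
def render_mosaic(code: int) -> str:
    """Render a 6-bit mosaic character as a 2x3 block of text.

    Illustrative bit layout (one bit per sub-block):
      bit 0: top-left     bit 1: top-right
      bit 2: middle-left  bit 3: middle-right
      bit 4: bottom-left  bit 5: bottom-right
    """
    rows = []
    for row in range(3):
        left = (code >> (2 * row)) & 1
        right = (code >> (2 * row + 1)) & 1
        rows.append(("#" if left else ".") + ("#" if right else "."))
    return "\n".join(rows)

print(render_mosaic(0b011011))
# ##
# .#
# #.
```

Stack 40 columns and 24 rows of these cells and you get the chunky weather maps and menu screens Teletext is remembered for.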

Meanwhile, the French also wanted in on the action and developed Antiope, which had more capabilities. The United States would, at least partially, adopt this standard as well. In fact, the US fragmented between both systems, along with a third system out of Canada, until they converged on AT&T’s PLP system, renamed the North American Presentation Level Protocol Syntax, or NAPLPS. The post makes the case that NAPLPS was built on both the Canadian and French systems.

That was in 1986, and the Internet was getting ready to turn all of these developments, like the $200 million Canadian system, into a roaring dumpster fire. The French eventually abandoned their homegrown system in favor of World System Teletext. The post says that, as of 2024, at least 15 countries still maintain teletext services.

Continue reading “Teletext Around The World, Still”