How TTY Opened Up The Phones For The Hard Of Hearing

The telephone was an invention that revolutionized human communication. No more did you have to physically courier a letter from one place to another, or send a telegram, or have a runner carry the message for you. Instead, you could have a direct conversation with another person a great distance away. All well and good if you can speak and hear, of course, but rather useless if you happen to be deaf.

Those hard of hearing were not left entirely out of the communication revolution, however. Well before IP switched networks and the Internet became a thing, there was already a way for the deaf to communicate over the plain old telephone network—thanks to the teletypewriter!

Over The Wires

The teletypewriter (TTY) has been around for a long time. The first TTY for the deaf came into being in 1964, developed by James C. Marsters and Robert Weitbrecht, both deaf. Their idea was to create a method for deaf individuals to communicate over the phone network in a textual manner. To this end, the pair sourced teleprinters formerly used by the US Department of Defense, and hooked them up with acoustic couplers that would allow them to mate with the then-ubiquitous AT&T Model 500 telephone. Thus, the TTY was born. A user could dial another TTY machine and key in a message, which would print out at the other end. The receiving user could then respond in turn in the same manner.

Continue reading “How TTY Opened Up The Phones For The Hard Of Hearing”

Transcribing The Source Of The First DOS For The IBM PC

Doing software archaeology can be a harrowing task, as rarely do you find complete snapshots of particular versions of software. Case in point: the development of MS-DOS – also known as IBM PC DOS – from 86-DOS, which recently got a lucky break in the form of printed source listings. These printouts come courtesy of [Tim Paterson], the creator of 86-DOS and of MS-DOS during his time working for Microsoft.

These code listings contain the sources of the 86-DOS 1.00 kernel, multiple development snapshots, and also listings for utilities like CHKDSK. These printed listings additionally contain many handwritten notes, making transcribing them into working source code somewhat of a chore. The results can be found on the GitHub project page, with the original scans available on Archive.org.

Of the ten bundles of continuous-feed paper prints, all but two have been transcribed so far, with the various DOS kernels and the Seattle Computer Products (SCP) assembler source already ready for compilation. This includes 86-DOS 1.00, MS-DOS 1.25, and PC-DOS 1.00-dev, each of which requires the same SCP assembler to create a binary.

The project page README also links a number of blog posts that add even more technical detail. Anyone who wants to pitch in with transcribing and/or testing recovered source code is welcome to do so.

Building An X86 Gaming PC Without Intel, NVIDIA Or AMD Parts

This is an interesting challenge from the “why not?” files — [GPUSpecs] over on YouTube built a gaming PC without using a single component from NVIDIA, Intel, or AMD. That immediately makes us think of the high-power ARM workstations, or perhaps even the new “AI workstations” becoming available with RISC-V architecture, but the challenge here was specifically “gaming PC,” not workstation. A gaming PC, without a GPU by one of those three? To make it even more interesting, the x86 CPU isn’t Intel or AMD either.

If you’re of a certain vintage, you may remember Cyrix. Cyrix reverse-engineered the x86 ISA and made their own compatible chips in the 90s, before being bought out by National Semiconductor, and then VIA Technologies. VIA partnered with the Government of Shanghai to found Zhaoxin, and it is from Zhaoxin that the KaiXian KX-7000 CPU hails — an x86-64 device that isn’t Intel or AMD. We’ve actually covered the company before. This particular chip benchmarks like an old i5, so not spectacular, but usable.

Continue reading “Building An X86 Gaming PC Without Intel, NVIDIA Or AMD Parts”

Network Scanner Finds Every Raspberry Pi

DHCP is great for getting machines on the network with a minimum of fuss. However, it can also make remote administration a pain because you never know which IP you’re supposed to be SSHing into. [Philipp] ran into this problem quite often, so decided to whip up an app to make things easier. 

At its heart, the app is a simple network scanner—of which many already exist. However, [Philipp] had found that many options on Android were peppered with ads that made them highly undesirable to use. Thus, he whipped up his own, with a particular eye to working with the Raspberry Pi. It’s not uncommon for a hacker to have a few scattered around the home network, and it can be a real chore keeping track of where they all end up in IP land. The scanner can specifically single out the Raspberry Pi boards on the network via MAC-OUI and mDNS detection. Plus, just in case you need it, [Philipp] threw in some GPIO pinouts and electronics calculators to make the app more useful.
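The MAC-OUI trick is simple enough to sketch in a few lines: the first three octets of a MAC address identify the vendor, and Raspberry Pi boards ship with a handful of known prefixes. The OUI set and the hosts dictionary below are illustrative (only a partial list of Pi prefixes), not lifted from [Philipp]'s app:

```python
# Partial, illustrative set of OUIs registered to the Raspberry Pi
# Foundation / Raspberry Pi Trading; see the IEEE OUI registry for more.
RASPBERRY_PI_OUIS = {
    "B8:27:EB",  # original Raspberry Pi Foundation assignment
    "DC:A6:32",  # Raspberry Pi Trading
    "E4:5F:01",  # Raspberry Pi Trading
}

def is_raspberry_pi(mac: str) -> bool:
    """Return True if the MAC's first three octets match a known Pi OUI."""
    oui = mac.upper().replace("-", ":")[:8]
    return oui in RASPBERRY_PI_OUIS

# In practice you'd feed this MACs harvested from the ARP cache after a
# ping sweep (e.g. by parsing `ip neigh` output on Linux). Example data:
hosts = {
    "192.168.1.17": "b8:27:eb:12:34:56",
    "192.168.1.42": "a4:5e:60:aa:bb:cc",
}
pis = [ip for ip, mac in hosts.items() if is_raspberry_pi(mac)]
print(pis)
```

Pairing this with mDNS (looking for `raspberrypi.local`-style hostnames) catches boards whose MAC has been randomized or bridged.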

If you’ve been looking for an open-source network scanner without all the ugly junk, this project might just be for you. You can also check out the source over on GitHub if that’s relevant to your interests. We’ve seen some interesting custom network scanners before, too. If you’re whipping up some fun packet-flinging software of your own, don’t hesitate to drop it on the tips line!

Why Model Collapse In LLMs Is Inevitable With Self-Learning

There is a persistent belief in the ‘AI’ community that large language models (LLMs) have the ability to learn and self-improve by tweaking the weights in their vector space. Although there’s scant evidence that tweaking a probability vector space is anything like the learning process in biological brains, we nevertheless get sold the idea that artificial general intelligence (AGI) is just around the corner if we do just enough tweaking.

Instead of emerging super intelligence, the most likely outcome is what is called model collapse, with a recent paper by [Hector Zenil] going over the details on why self-training/learning in LLMs and similar systems is a fool’s errand. For those who just want the brief summary with all the memes, [Metin] wrote a blog post covering the basics.

In the end, an LLM, like a diffusion model (DM), is a statistical model of its input data, from which a statistically likely output can be generated (inferred) in response to an input query. It follows intuitively that by feeding said output back in to adjust the model, the model will over time converge on a kind of statistical singularity rather than some ‘AI singularity’ event. This is also why these models need to be constantly trained with external, human-generated data in order to prevent such a collapse.
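The dynamic is easy to see in miniature. Below is a toy demonstration (not from the paper, just a one-dimensional stand-in): fit a Gaussian "model" to some data, then repeatedly retrain it on its own generated samples. With no fresh external data, estimation noise compounds generation after generation and the model's spread drifts toward collapse:

```python
import random
import statistics

random.seed(0)

n = 100            # samples per "training set"
generations = 500

# The original, human-generated distribution.
mu, sigma = 0.0, 1.0
data = [random.gauss(mu, sigma) for _ in range(n)]

sigmas = []
for _ in range(generations):
    # "Train": estimate model parameters from the current data.
    mu_hat = statistics.fmean(data)
    sigma_hat = statistics.pstdev(data)
    sigmas.append(sigma_hat)
    # "Generate": the next training set comes purely from the model itself.
    data = [random.gauss(mu_hat, sigma_hat) for _ in range(n)]

print(f"estimated spread, generation 1:   {sigmas[0]:.3f}")
print(f"estimated spread, generation {generations}: {sigmas[-1]:.3f}")
```

Each retraining step loses a little of the tails, and with nothing external to restore them, the losses are irreversible: the estimated spread shrinks toward zero, the statistical analogue of the degenerative dynamics the paper formalizes.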

In the paper, [Hector] constructs a mathematical model to demonstrate that an LLM, DM, or similar statistical model undergoes degenerative dynamics whenever said external input is reduced. Although the paper suggests a mechanism to counter the entropy decay within the model, the ultimate point is that a statistical model cannot improve itself without continuous external anchoring.

The idea of LLMs being at all intelligent in any sense has been a contentious one, with the concept of language models being equated with ‘AI’ dating back to the 20th century, including as fun home computer projects. Much of the problem probably lies in humans projecting intelligent behavior onto these statistical models, turning LLMs into ‘counterfeit humans’, not helped by how closely generated text can resemble something written by a human, even if completely confabulated.

Thanks to [deshipu] for the tip.

How To Kill Humidity Sensors With Humidity

An often-overlooked section in the datasheets for popular humidity sensors like the BME280 and DHT22 is the ‘non-condensing humidity’ bit, which puts an important constraint on which environments you can use these sensors in. This was the painful lesson that [Mellow Labs] recently had to learn when several such sensors kicked the bucket after being used in a nicely steamed-up bathroom. Fortunately, it introduced him to sensors that are rated for use in condensing humidity environments, such as the SHT40 that’s demonstrated in the video.

This particular sensor is made by Sensirion, and as we can see in the datasheet, it features a built-in heater that allows it to keep working even in a condensing environment. This heater has three heating levels which are controlled via the I2C interface, though each activation is limited to one second in order to prevent overheating the sensor.

Of note is that you cannot take measurements while the heater is operating, and its use obviously increases power draw significantly. This mostly leaves the question of when to turn on the heater as an exercise for the engineer, with [Mellow Labs] conservatively opting to start the heater once relative humidity hits 70%.
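The scheduling logic boils down to a simple rule per measurement cycle. Here is a minimal sketch of that rule in Python; the SHT40-specific I2C commands are omitted, the 70% trigger comes from the video, and the cooldown value is an illustrative assumption rather than a datasheet figure:

```python
HEATER_ON_RH = 70.0    # [Mellow Labs]' conservative trigger point, in %RH
HEATER_PULSE_S = 1.0   # the sensor caps each heater activation at one second
COOLDOWN_S = 10.0      # assumed settling time before the next valid reading

def plan_cycle(rh_percent: float) -> list[str]:
    """Return the ordered actions for one measurement cycle."""
    actions = ["measure"]
    if rh_percent >= HEATER_ON_RH:
        # Fire a bounded heater pulse to drive off condensation, then
        # wait for the die to cool -- readings taken while (or right
        # after) heating would be skewed by the heater itself.
        actions += [f"heat {HEATER_PULSE_S:.0f}s", f"wait {COOLDOWN_S:.0f}s"]
    return actions

print(plan_cycle(45.0))   # dry room: just measure
print(plan_cycle(85.0))   # steamy bathroom: measure, heat, cool down
```

The same shape works regardless of which of the three heater power levels you pick; the key invariant is that no reading is logged between the heater pulse and the end of the cooldown.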

In the comments to the video, other suitable sensors were pitched, including the Bosch BME690, which is similarly rated for condensing environments. All of which condenses down to the importance of reading the datasheet for any part that you intend to use in possibly demanding environments.

Continue reading “How To Kill Humidity Sensors With Humidity”

Using A VT-100 Today

You may not know what an ADM-3, a TV910, or a H1420 are, but you probably have at least heard of a VT-100. They are all terminals from around the same time, but the DEC VT-100 is the terminal that practically everything today at least somewhat emulates. Even though a real VT-100 is rare, since it defined what have become ANSI escape sequences, most computers you’ve used in the last few decades speak some variation of the VT-100’s language.  [Nikhil] wanted to see if you could use a VT-100 for real work today.
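Those escape sequences are still trivially easy to play with: an ESC byte followed by `[` opens a Control Sequence Introducer, and a handful of VT-100-era sequences cover clearing the screen, positioning the cursor, and setting attributes. A quick illustrative sketch:

```python
ESC = "\x1b"  # the escape byte that starts every sequence

def clear_screen() -> str:
    return ESC + "[2J" + ESC + "[H"   # ED: erase display, then CUP: cursor home

def move_to(row: int, col: int) -> str:
    return f"{ESC}[{row};{col}H"      # CUP: cursor position (1-based coordinates)

def bold(text: str) -> str:
    return f"{ESC}[1m{text}{ESC}[0m"  # SGR: bold on, then reset all attributes

# Paint a greeting at row 5, column 10 -- this works on a real VT-100
# over a serial line just as well as in a modern terminal emulator.
print(clear_screen() + move_to(5, 10) + bold("Hello, VT-100!"))
```

That cross-generation compatibility is exactly why hooking a four-decade-old terminal to a modern Linux box is plausible at all.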

While the VT-100 wasn’t a general-purpose computer, it did have an 8080 inside. It only had about 3K of RAM, which was enough to act as a serial terminal. A USB serial port and a terminal with modern Linux, how hard could it be?

Continue reading “Using A VT-100 Today”