Memory Mapping Methods in the Super Nintendo

Not only is the Super Nintendo an all-around great platform, both during its prime in the 90s and now during the nostalgia craze, but its relative simplicity compared to modern systems makes it a lot more accessible from a computer science point of view. That means we can dig into an in-depth discussion of how the Super Nintendo actually does what it does and understand most of it, like this video from [Retro Game Mechanics Explained], which goes into an incredible amount of detail on the mechanics of the SNES’s memory system.

Two of the interesting memory systems the SNES uses are called DMA and HDMA. DMA stands for direct memory access, a way for the Super Nintendo to access memory independently of the CPU; the advantage is that it’s incredibly fast compared to more typical methods of accessing memory. That isn’t particularly unique on its own, but the HDMA system is. It allows the SNES to pull off all kinds of tricks with its video output, like color gradients and masking effects.
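The clever part is that HDMA performs its small transfers during each scanline’s horizontal blanking period, so a table sitting in memory can rewrite a PPU register every single line without the CPU doing any work. As a rough illustration only (this is a made-up C model, not real SNES code, and the names are ours), here is how a table of line counts and values could drive a register once per scanline:

```c
#include <stdint.h>
#include <stdio.h>

/* One entry of a hypothetical HDMA-style table: hold a value on the
 * target register for `lines` scanlines, then move on to the next entry. */
struct hdma_entry {
    uint8_t lines;   /* how many scanlines this value stays active */
    uint8_t value;   /* value written to the target register */
};

int main(void) {
    /* A simple three-step gradient: imagine the "register" is a color
     * component that gets brighter as the raster moves down the frame. */
    const struct hdma_entry table[] = {
        { 80, 0x08 },
        { 80, 0x10 },
        { 64, 0x1f },
        {  0, 0x00 },   /* terminator: a zero line count ends the table */
    };

    uint8_t reg = 0;                    /* stands in for a PPU register */
    const struct hdma_entry *e = table;
    uint8_t remaining = e->lines;

    for (int scanline = 0; scanline < 224; scanline++) {
        reg = e->value;                 /* the per-line "HDMA" write */
        if (scanline % 40 == 0)
            printf("scanline %3d: register = 0x%02x\n", scanline, reg);
        if (--remaining == 0) {
            e++;                        /* advance to the next table entry */
            if (e->lines == 0)
                break;                  /* table exhausted */
            remaining = e->lines;
        }
    }
    return 0;
}
```

Point a table like that at a color register and you get a smooth vertical gradient; point it at a scroll or window register and you get the sort of masking and warping effects the video walks through.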

If you’re interested in the inner workings of classic consoles like the SNES, this video gets way down in the weeds of the system itself. It’s interesting to see how programmers were able to squeeze more capability from these limited (by modern standards) systems by manipulating memory the way the DMA and HDMA systems do. [Retro Game Mechanics Explained] is a great resource for exploring in-depth aspects of lots of classic games, like how speedrunners can execute arbitrary code in old Mario games.

Continue reading “Memory Mapping Methods in the Super Nintendo”

Scanning Tunneling Microscope Packs the Bits

We don’t usually think of a microscope as an active instrument, but researchers in Canada have used a scanning tunneling microscope to remove or replace single hydrogen atoms from the surface of a hydrogen-passivated silicon wafer. If the scientific paper is too much to wade through, there’s an IEEE Spectrum article and a video that might run on the 6 o’clock news below.

As usual with these research projects, there is good news and there is bad news. The good news is that — in theory — a memory device made using hydrogen lithography could store 138 terabytes per square inch. That’s enough, apparently, to store the entire iTunes catalog on a quarter. The bad news? Well, right now this takes exotic lab equipment at very low temperatures and pressures.

Continue reading “Scanning Tunneling Microscope Packs the Bits”

Raytheon’s Analog Read-Only Memory is Tube-Based

There are many ways of storing data in a computer’s memory, and not all of them allow the computer to write to it. For older equipment, this was often a physical limitation to the hardware itself. It’s easier and cheaper for some memory to be read-only, but if you go back really far you reach a time before even ROMs were widespread. One fascinating memory scheme is this example using a vacuum tube that stores the characters needed for a display.

[eric] over at TubeTime recently came across a Raytheon monoscope from days of yore and started figuring out how it works. The device is essentially a character display in an oscilloscope-like CRT package, but the way it displays those characters is an interesting walk through history. The monoscope has two circuits: one selects the character, and the other determines its position on the screen. Each circuit is fed a delightfully analog sine wave, which lets the device trace what is essentially a scanning pattern on the screen to refresh the display.

[eric] goes into a lot of detail on how this c.1967 device works, and it’s interesting to see how engineers were able to get working memory with their relatively limited toolset. One of the nice things about working in the analog world, though, is that it’s relatively easy to figure out how things work and start using them for all kinds of other purposes, like old analog UHF TV tuners.

Flash Memory: Caveat Emptor

We all love new tech. Some of us love getting the bleeding edge, barely-on-the-market devices and some enjoy getting tech thirty years after the fact to revel in nostalgia. The similarity is that we assume we know what we’re buying and only the latter category expects used parts. But what if the former category is getting used parts in a new case? The University of Alabama in Huntsville has a tool for protecting us from unscrupulous manufacturers installing old flash memory.

Flash memory usually lasts longer than the devices where it is installed, so there is a market for used chips which are still “good enough” to pass for new. Of course, this is highly unethical. You would not expect to find a used transmission in your brand new car, so why should your brand new tablet contain someone’s discarded memory?

The principles of flash memory are well explained by comparing it to an ordinary transistor, something we are happy to educate you about. Wear and tear on flash memory starts right away, and the erase time gets longer and longer with use. By measuring how long an erase takes, it is possible to accurately estimate the age of the chip in question.
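To get a feel for the idea (and only the idea; the constants below are invented, and the model in the actual research is surely more involved), here’s a back-of-the-envelope C sketch that guesses at a chip’s wear from how long an erase takes:

```c
#include <stdio.h>

/* Illustrative numbers only: the real relationship between erase latency
 * and wear is device specific and would have to be characterized. */
#define ERASE_US_NEW        2000.0  /* assumed erase time of a fresh block, in us */
#define ERASE_US_PER_KCYC    150.0  /* assumed added latency per 1000 P/E cycles */

/* Estimate how many program/erase cycles a block has seen from its erase
 * time, assuming latency grows roughly linearly with wear. */
static double estimate_pe_cycles(double measured_erase_us) {
    double extra = measured_erase_us - ERASE_US_NEW;
    if (extra < 0.0)
        extra = 0.0;
    return extra / ERASE_US_PER_KCYC * 1000.0;
}

int main(void) {
    double samples_us[] = { 2050.0, 2600.0, 3400.0 };
    for (int i = 0; i < 3; i++)
        printf("erase took %.0f us -> roughly %.0f P/E cycles of wear\n",
               samples_us[i], estimate_pe_cycles(samples_us[i]));
    return 0;
}
```

Presumably the real tool profiles many blocks and compares them against data from known-new parts, but the principle is the same: the slower the erase, the more miles on the odometer.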

Pushing the limits of flash memory’s lifespan can tell you a lot about how to avoid operational disruption, or you can build a flash drive from parts you know are used.

The Ultimate iPhone Upgrade

While Apple products have their upsides, the major downside with them is their closed environment. Most of the products are difficult to upgrade, to say the least, and this is especially true with the iPhone. While some Android devices still have removable storage and replaceable batteries, this has never been an option for any of Apple’s phones. But that doesn’t mean that upgrading the memory inside the phone is completely impossible.

[Scotty] from [Strange Parts] is no stranger to the iPhone, and he had heard that there are shops that can remove the storage chip in the iPhone and replace it with a larger one, so he set out on a journey to try this himself. The first step was to program the new chip, since it must have software on it before it’s put in the phone. Ironically, the chip programmer doesn’t support the Mac, so [Scotty] had to go to the store and buy a Windows computer before he could get the programmer working right.

After that hurdle, [Scotty] found a bunch of old logic boards from iPhones to perfect his desoldering and resoldering skills. Since this isn’t through-hole technology, a lot of practice was needed to desolder the chip from the logic board without damaging any of the other components, re-ball the solder pads, and then solder the new, larger storage chip in place. After some hiccups and a lot of time practicing, [Scotty] finally had an iPhone that he upgraded from 16 GB to 128 GB.

[Scotty] knows his way around the iPhone and has some other videos about modifications he’s made to his personal phone. His videos are very informative, in-depth, and professionally done, so they’re worth a watch even if you don’t plan on trying this upgrade yourself. Not all upgrades to Apple products are difficult and expensive, though. There is one that costs only a dollar.

We sat down with him after his talk at the Hackaday Superconference last November, and we have to say that he made us think more than twice about tackling the tiny computer that lies hidden inside a cell phone. Check out his talk if you haven’t yet.

Continue reading “The Ultimate iPhone Upgrade”

Reading out an EPROM – with DIP switches

We’re all too spoiled nowadays with our comfortable ways to erase and write data to persistent memory, whether it’s our microcontroller’s internal flash or some external EEPROM. Admittedly, those memory technologies aren’t exactly new, but they stem from a time when their predecessors had to bathe under ultraviolet light in order to make space for something new. [Taylor Schweizer] recently came across some of these quartz-window decorated chips, and was curious to find out what was stored in them. Inspired by the BIOS reverse engineering scene in Halt and Catch Fire, he ended up building his own simple reader to display the EPROM’s content.

The 2732 he uses is a standard EPROM with 32 kbit of memory. Two pins, Chip Enable and Output Enable, serve as the main control interface, while twelve address pins select a byte from the chip’s internal 4K × 8 arrangement to output on the eight data pins. You could of course hook the EPROM up to a microcontroller and send whatever you read out over a serial line, but [Taylor] opted for a more hands-on approach that lets him read out the data manually. He simply uses a bank of DIP switches to set the address and control pins, and added a row of LEDs as a display.

As you can see from the short demonstration in the video after the break, reading out the entire EPROM this way would be a rather tedious task. If you do have more serious intentions of reading out the contents, you could have a look at one of those microcontroller-based solutions that send the data over a serial line after all.
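If you do go the microcontroller route, the firmware side is little more than a loop over all 4096 addresses. The sketch below is a stand-in rather than [Taylor]’s code: on real hardware, read_eprom_byte() would drive the twelve address lines, assert Chip Enable and Output Enable, wait out the access time, and sample the data bus, but here it reads from a dummy array so the readout and hex-dump logic runs anywhere:

```c
#include <stdint.h>
#include <stdio.h>

#define EPROM_SIZE 4096   /* 2732: 4K x 8, addressed by 12 pins */

/* Stand-in for the hardware access. On a real microcontroller this would
 * set the address lines, pull Chip Enable and Output Enable low, wait a
 * few hundred nanoseconds, and read the eight data pins. */
static uint8_t dummy_rom[EPROM_SIZE];

static uint8_t read_eprom_byte(unsigned address) {
    return dummy_rom[address & (EPROM_SIZE - 1)];
}

int main(void) {
    /* Dump the whole device as a classic 16-bytes-per-line hex listing,
     * the sort of output you would push out over a serial link. */
    for (unsigned addr = 0; addr < EPROM_SIZE; addr++) {
        if (addr % 16 == 0)
            printf("%04x:", addr);
        printf(" %02x", (unsigned)read_eprom_byte(addr));
        if (addr % 16 == 15)
            printf("\n");
    }
    return 0;
}
```

On an actual microcontroller you would retarget printf (or swap in your own UART routines) and capture the listing on the PC side.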

Continue reading “Reading out an EPROM – with DIP switches”

Spectre and Meltdown: How Cache Works

The year so far has been filled with news of Spectre and Meltdown. These exploits take advantage of features like speculative execution and memory access timing. What they have in common is that all modern processors use a cache to access memory faster. We’ve all heard of cache, but what exactly is it, and how does it allow our computers to run faster?

In the simplest terms, cache is a fast memory. Computers have two storage systems: primary storage (RAM) and secondary storage (Hard Disk, SSD). From the processor’s point of view, loading data or instructions from RAM is slow — the CPU has to wait and do nothing for 100 cycles or more while the data is loaded. Loading from disk is even slower; millions of cycles are wasted. Cache is a small amount of very fast memory which is used to hold commonly accessed data and instructions. This means the processor only has to wait for the cache to be loaded once. After that, the data is accessible with no waiting.

A common (though aging) analogy for cache uses books to represent data: If you needed a specific book to look up an important piece of information, you would first check the books on your desk (cache memory). If your book isn’t there, you’d then go to the books on your shelves (RAM). If that search turned up empty, you’d head over to the local library (Hard Drive) and check out the book. Once back home, you would keep the book on your desk for quick reference — not immediately return it to the library shelves. This is how cache reading works.
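To make the analogy concrete, here’s a toy direct-mapped cache model in C, far smaller and simpler than anything in a real CPU: on a hit the data is already “on the desk,” and on a miss we pretend to fetch the line from RAM and keep it resident for next time.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* A toy direct-mapped cache: 8 lines of 16 bytes each. Real caches are far
 * larger and usually set associative, but the lookup works the same way. */
#define LINE_SIZE 16
#define NUM_LINES  8

struct cache_line {
    bool     valid;
    uint32_t tag;
};

static struct cache_line cache[NUM_LINES];
static unsigned hits, misses;

/* Returns true on a hit (the "book" is already on the desk), false on a
 * miss (go to RAM, then keep the line around for next time). */
static bool cache_access(uint32_t address) {
    uint32_t index = (address / LINE_SIZE) % NUM_LINES;
    uint32_t tag   = address / (LINE_SIZE * NUM_LINES);

    if (cache[index].valid && cache[index].tag == tag) {
        hits++;
        return true;
    }
    misses++;                   /* the slow trip to "the shelves" (RAM)... */
    cache[index].valid = true;  /* ...after which the line stays cached */
    cache[index].tag   = tag;
    return false;
}

int main(void) {
    /* Walk a small buffer twice: the first pass misses once per line, the
     * second pass hits every time because the lines are still resident. */
    for (int pass = 0; pass < 2; pass++)
        for (uint32_t addr = 0; addr < 128; addr += 4)
            cache_access(addr);

    printf("hits: %u  misses: %u\n", hits, misses);
    return 0;
}
```

That difference between the first pass and the second is exactly the kind of timing signal Spectre and Meltdown lean on: how long an access takes reveals whether a given line was touched recently.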

Continue reading “Spectre and Meltdown: How Cache Works”