The 512 Gigabyte Floppy Disk

There are times when a technology vanishes almost overnight as if in a puff of smoke, and others when it fades away so gradually that its passing is barely noticed. So it is with removable media: while we still see the occasional USB flash drive or SD card, they come nowhere near the floppies, Zip disks, and CD-ROMs of the past in numbers or ubiquity. If the floppy disk is just a save icon to you, there’s still a chance to experience its retro charm, courtesy of [Franklinstein]. He’s made a 3.5″ floppy disk that eschews 720 k, 1.44 M, or even 2.88 Mb, and goes all the way to a claimed 512 Gb capacity. We’re sure we can’t remember those from back in the day!

Of course, as we can see in the video below, he’s achieved neither an astounding feat of data compression nor a bleeding-edge method of storing bits in individual iron oxide molecules. Instead, the floppy hinges open to reveal a holder for micro SD cards where the disk itself would be. It’s a bit of fun, and we have to agree with him that it makes a very handy holder for micro SDs which, together, can carry that much data. It sets us wondering, though, whether it would be possible to multiplex 14 micro SDs to a microcontroller on a PCB that could fit in a floppy shell. Perhaps an ESP32 could act as a slow file server through a web interface?
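
How might that look? Purely as a hedged sketch of the idea (not anything from the video): the cards could share one SPI bus with a chip-select line each, and the ESP32’s Arduino core could serve whichever card is mounted over HTTP. The pin numbers, Wi-Fi credentials, and four-card CARD_CS[] table below are all illustrative assumptions.

```cpp
// Sketch: several microSD cards on one shared SPI bus, one CS pin per card,
// served over HTTP by an ESP32. Only one card is mounted at a time.
#include <SD.h>
#include <SPI.h>
#include <WiFi.h>
#include <WebServer.h>

const uint8_t CARD_CS[] = {5, 16, 17, 21};   // hypothetical chip-select pins
WebServer server(80);

bool selectCard(uint8_t index) {
  SD.end();                                  // unmount the previous card
  for (uint8_t pin : CARD_CS) {              // deselect everything on the bus
    pinMode(pin, OUTPUT);
    digitalWrite(pin, HIGH);
  }
  return SD.begin(CARD_CS[index]);           // mount the requested card
}

void handleRead() {                          // e.g. /read?card=0&path=/notes.txt
  uint8_t card = server.arg("card").toInt();
  if (card >= sizeof(CARD_CS) || !selectCard(card)) {
    server.send(503, "text/plain", "card not available");
    return;
  }
  File f = SD.open(server.arg("path"));
  if (!f) {
    server.send(404, "text/plain", "no such file");
    return;
  }
  server.streamFile(f, "application/octet-stream");
  f.close();
}

void setup() {
  WiFi.begin("ssid", "password");            // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(100);
  server.on("/read", handleRead);
  server.begin();
}

void loop() { server.handleClient(); }
```

Slow, certainly, but “slow file server” was the brief.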

He makes the point that 512 Gb of floppies would comfortably exceed the height of the tallest buildings were they stacked together, so at the very least this represents a space saving. If you’re looking for something slightly more functional and don’t mind modifying the drive, there’s always this classic approach to marrying a floppy with an SD card.

79 thoughts on “The 512 Gigabyte Floppy Disk”

  1. Your mix of k, M, Mb and Gb for size units is giving me palpitations. And let’s not get started on how the 3.5” disk is 1440 KiB and not 1.44 M for *any* sensible value of M…
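
The arithmetic behind that gripe, for anyone who wants to check it (standard HD geometry of 2 sides × 80 tracks × 18 sectors × 512 bytes assumed):

```cpp
// A "1.44 MB" HD floppy holds 2 x 80 x 18 x 512 = 1,474,560 bytes.
// That's exactly 1440 KiB, but it is not 1.44 of any sensible megabyte.
#include <cstdio>

int main() {
  long bytes = 2L * 80 * 18 * 512;
  std::printf("bytes:      %ld\n", bytes);              // 1474560
  std::printf("KiB:        %.0f\n", bytes / 1024.0);    // 1440
  std::printf("decimal MB: %.5f\n", bytes / 1e6);       // 1.47456
  std::printf("MiB:        %.5f\n", bytes / 1048576.0); // 1.40625
}
```

The marketing “1.44 MB” only works if your megabyte is 1000 × 1024 bytes, which pleases nobody.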

        1. 64 Kb (kilobits) is about the largest RAM chip ever specified in kilobits, because beyond that nobody made one-bit-wide (or anything else narrower than a byte) RAM chips any more. There is a gray area from 2 to 16 kilobytes where chips (particularly their part number designations) were sometimes specified in bits and sometimes in bytes. Once they got to 64-kilobyte chips, nobody used kilobits any more, because the chips were byte-wide or wider and the bit figure would have been a stupidly large number. And RAM chips have always been rated in binary powers of two, because they are addressed in binary.

          1. Except that some of the most popular DRAMs ever manufactured were single-bit and >64 Kb.
            Check out the 41256 and 411000 DRAMs one of these days; they’re 256K×1 bit and 1M×1 bit respectively.

      1. You mean before computer software guys started messing with everyone. The SI prefixes were invented in the 1790s, and for 200 years a kilo was 1000, before some lazy software guy changed it to 1024 because it’s faster to count by sectors or some such nonsense.

          1. Yes, but obviously, 1024 ≠ 1000. Bait-and-switches like that are exactly how the French Revolution got started and why the metric system was created in the first place. Stuff like that would get you into legal hot water in any other context. The obvious solution would’ve been to invent new names, but they didn’t do that, so they were indeed messing with people, and they were lazy. The IEC standard fixes that very thing, but the fact that some people still insist on the “kilobyte means 1024 bytes” nonsense to this day is ridiculous.

      2. The hard drive manufacturers weren’t the ones messing with people; the programmers were. I don’t know where this persistent myth comes from. Everybody had known what kilo- and mega- meant since the late 1700s, and the network/communication people were using them properly before the programmers messed it up and unilaterally redefined them as ×1024.

        1. Actually it’s a mixture of both, especially when dealing with CHS addressing… The 1024 convention makes it way simpler. Every block on most media is 512 bytes, so to store 1000 bytes you use two blocks and leave 24 bytes of the last one unaddressed. Anybody who codes knows what a headache it is to build a filesystem that accounts for the unused bytes per block. Oh, and they changed the standard just for folks like you:
          so KB, the kilobyte, is 1000 bytes, and the binary unit is the KiB, the kibibyte, at 1024.
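
The slack-space arithmetic in that comment, spelled out (512-byte blocks assumed):

```cpp
// Rounding a payload up to whole 512-byte blocks, and the slack left over.
#include <cstdio>

int main() {
  const long kBlock = 512;
  long size   = 1000;                          // file payload in bytes
  long blocks = (size + kBlock - 1) / kBlock;  // ceiling division -> 2 blocks
  long slack  = blocks * kBlock - size;        // 24 bytes unused in the last block
  std::printf("%ld blocks, %ld bytes of slack\n", blocks, slack);
}
```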

          1. The point, however, is that file contents themselves can be an odd number of bytes, even though the storage medium the files live on may have base-2-sized blocks. And it wasn’t a standard; it was a deviation from the standard (which is 1000-based). They just stopped confusing everyone for the sake of a few, and came up with a new standard that addresses the original problem in a sane fashion.

            And it’s kB for kilobyte, not “KB” (kelvin-byte) :)

        2. “The Programmers” being lazy isn’t compelling to me, but I would love to read about the first misuse of the terms. I (an electrical engineer) once suggested to a brilliant younger mechanical engineer that our department should move toward metric units, and was met with a very fast NO that I didn’t explore, but I imagine the mental labor of switching outweighed what seems so obvious.
          The first use might have been in an academic paper or published article, and what programmer trying to get a job done is going to fix what they might not even know was a hijacked word? The scientists and engineers dealing with physical properties are fluent in milli, micro, kilo, mega… decade prefixes whose names correspond exactly to actual quantities, and I imagine some mathematician turned software pioneer committed the sin of this hijacking and most others didn’t know it was happening.
          My peeves… Quantum Leap is a BIG leap? Bi-weekly and semi-weekly? And the newly confusing THEY, which is sometimes a singular person of a different gender identity, yet forever multiple beings or objects? (Can’t this one be fixed immediately?)
          This plays out before our eyes, and my accidental word gender errors can be seen as disrespect… what a mess!

        3. There are seven SI base units.

          Meters, seconds, moles, amperes, candelas, kelvins, and kilograms. There are 22 derived units. None of them are “bytes,” nor “bits.”

          An SI prefix like “mega” doesn’t make sense in front of “byte” (i.e. 8 bits), whether you use 1000 or 1024.

    1. And yet somehow you knew precisely what she meant.
      If this sort of thing gives you palpitations, you need to look up a good cardiologist in your area rather than sitting here griping about slight and human mismatches of abbreviations.

    2. Pedantic much?

      1.2 MB floppies were the 5 1/4″ varieties (double sided, double density?). Similarly the 3.5″ were 720 kB or 1.44 MB (can’t remember if the 720 kB was single sided or single density). Did you ever buy a 1440 kB disk, or was it marketed as 1.44 MB?

      That’s all ignoring the fact that they’re soft sectored so with a few adjustments you could get 1.8 MB, or should that be 1800 kB, or something else more precise but inaccurate?
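
Capacity really is just geometry arithmetic, which is why soft-sectored disks were so stretchable. A quick sketch (18 sectors per track is the stock 1.44M layout; Microsoft’s DMF distribution format really used 21, and 22 lands near the 1.8 MB figure above):

```cpp
// Floppy capacity = sides x tracks x sectors/track x bytes/sector.
#include <cstdio>

int main() {
  for (int sectors : {18, 21, 22}) {
    long bytes = 2L * 80 * sectors * 512;   // 2 sides, 80 tracks, 512 B sectors
    std::printf("%2d sectors/track: %7ld bytes (%4ld KiB)\n",
                sectors, bytes, bytes / 1024);
  }
}
```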

      1. It started with single density, 35 tracks, single sided, at least for 5.25″. But the progression wasn’t linear: double sided, double density, 40 tracks… so each computer had its own mix. Spin it faster and you get 1.2M.

        By the time of 3.5″ floppies, I doubt anyone was using single density, so double density disks were 720K (800K for Macs).

        Nobody was confused about the disks they bought.

          1. Those were likely 8-inch floppies; larger physical sizes were not made, AFAIK. They predated the 5.25″ disks by about five years.

            I too have booted up Unix on an LSI-11 (a PDP-11/03 crammed inside a VT-103) from a pair of 8″ drives. I seem to recall that they held 500 KB each, for a whopping total of 1 MB of system storage.

      2. 720K was double density, 1.44 was high density, and 2.88 was extended density.

        Windows installation media and some DOS software took advantage of the flexible geometry to eke extra bytes out of diskettes.

        The 720K diskettes were actually advertised as “1M,” the 1.44s as “2M” and the 2.88s as “3M” in some cases.
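
Those “1M”/“2M” figures were unformatted capacities, and the arithmetic behind them is straightforward (assuming the usual 300 rpm spindle speed, so 0.2 s per revolution, and the standard 250/500 kbit/s data rates):

```cpp
// Unformatted capacity = data rate x time per revolution x number of tracks.
#include <cstdio>

int main() {
  struct { const char* label; long bitsPerSec; } media[] = {
    {"DD (720K formatted)",  250000},
    {"HD (1.44M formatted)", 500000},
  };
  for (auto& m : media) {
    long bytesPerTrack = m.bitsPerSec / 8 / 5;  // 0.2 s per revolution at 300 rpm
    long total = bytesPerTrack * 80 * 2;        // 80 tracks, 2 sides
    std::printf("%s: %ld unformatted bytes\n", m.label, total);
  }
}
```

Sector headers, gaps, and sync fields eat the difference between those round numbers and the formatted capacity.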

      3. 360K 5.25″ floppies were double density. So were 720K 3.5″ and 5.25″ floppies. Those two used exactly the same number of tracks and sectors, so an image of one could be written to the other.

        1.2M 5.25″ and 1.44M 3.5″ were high density. 2.88M 3.5″ were extra-high density.

        Capacities below 360K on 5.25″ could be single or double density, depending on the controller’s capabilities. Density didn’t matter to the 5.25″ drive. A common format was single density, double sided: exactly half of 360K, or the same as a single sided double density disk.

        There were some early drives only capable of 35 tracks; attempting to access tracks 36–40 could cause damage.

        3.5″ HD and ED drives can read and write lower density disks without problems. A 5.25″ 720k or 1.2M drive should absolutely never be used to write to a disk formatted and written in a 40 track drive: the 80 track drive’s heads can’t write the full width of the track laid down by a 40 track drive.

        If an 80 track drive writes to a 40 track disk that was bulk erased and freshly formatted, the 40 track drive may be able to read it, though the data signal will be mixed with the ‘raw’ format along the edges of the track. Overwriting existing data on a 40 track disk with an 80 track drive will leave ‘side bands’ of the old data, which can cause read errors in a 40 track drive.

        80 track drives generally don’t have problems reading disks formatted and written in 40 track drives, because the tracks are extra wide and so the signals are very good.

        Bulk erasing then formatting a disk at 40 tracks in an 80 track drive is another recipe for problems, because a 40 track drive will then write data outside the narrow formatted track. What’s read back is the data combined with the formatting, so with stray signals along the track edges it’s ripe for read errors.

        TL;DR: Cross-reading double density 40 track disks between 40 and 80 track 5.25″ drives usually works. Cross-writing is likely to be error-prone and data-destroying.

      1. Actually, I would trust an SD card to last longer than a DVD-R or DVD-RW. The organic dye layer deteriorates with time, especially if left exposed to light, and 10 year old DVDs are probably no longer fully readable. Pressed DVDs will not suffer the same fate, because the data pits are mechanically fixed at the moment the disc is pressed, but writable and rewritable media definitely suffer from data rot.

  2. I still have this floppy disk thing that takes 2 coin cells and a full size SD card. It was for a Sony camera that was pretty good at the time, I still have that too.

    1. Used to have one as well. It was slow when transferring from a 128MB stick. I eventually threw it away some years later, when USB 2.0 came along and floppy disk drives were hard to find locally.

    2. Ah, FlashPath! Cool technology. Sadly it didn’t catch on in as big a way as it could have. They’re very useful these days for transferring data to computers which are incapable of communicating with USB or network devices. Slow, but it holds more than the average diskette!

    1. I was thinking it should be possible to fit a slim card reader in there, then use a hinged USB connector (as seen on USB business cards) that lies flat under the sliding cover. Have it so you have to slide the cover open to access the connector.

      1. I’ve contemplated doing this, but for the few floppies I still use I CBA – I can now pull a floppy into a bit-for-bit image, and I’ve successfully used a USB flash drive to emulate a floppy, bypassing anti-piracy checks that read the floppy for serial numbers, hidden files, etc.

  3. Bit of a silly article; I only watched enough of the video to count the number of uSD cards that fit (14) in this uneconomical circular holder.

    A quick check shows you can buy uSD cards of up to 1TB these days.

    1. SmartMedia-to-floppy “FlashPath” adapters did something like this, but with one coil and no encoder.

      Since the cards were often much larger than floppy capacity, they didn’t map the flash 1:1 to cylinders and sectors. Instead, a driver was required on the OS side that turned one floppy head into a half-duplex serial channel: https://twitter.com/whitequark/status/1177012361816883200

      Aside from losing pre-boot and old(er) OS compatibility, this had the advantage of higher throughput, as there’s no head seek or sector rotation to wait for.

      Fortunately, the first wave of consumer flash media came out around the same time as USB, so these were only needed for PCs that didn’t have, or couldn’t be retrofitted with, USB.

  4. Well, that’s 43 seconds of my life wasted that I’ll never get back. I stopped the video in disgust when I saw that all he had created was a holder for 14 microSD cards. He didn’t even fill it up, and who uses 32GB cards any more?

    If you were going to post this at all, you should at least have held it for April 1st.

    1. I’d say it’s properly useful. MicroSD cards are so small that they need a case when they’re not in a device, and if you need to carry multiple cards, something relatively tactile but still slim like a floppy disk is perfect.

      1. There’s a misunderstanding, I think.

        3.5″ floppies had protective sleeves, just like 8″ and 5.25″ floppies did.
        And they were useful. 3.5″ floppies weren’t designed to rely entirely on their sturdy enclosure and protective slider. Dust could still enter through the hole at the bottom, for example.

        It’s just that a whole generation didn’t care.
        Professional users who worked with the IBM PC platform or with mainframes did care.
        Their livelihoods depended on their work, after all.

        The “kids” with their Amigas and Atari STs, who used 3.5″ floppies exclusively, mistreated their floppies daily.
        They had drawers full of pirated games, software, etc., all lying around in filthy drawers or stacked on the table without any sleeves, of course.

        Personally, I think I blame the Amiga people the most. 😉
        The Atari ST at least had a few professional users, too:
        musicians, people doing word processing, programming in GFA BASIC…

          1. This was pre-WWW, so of course information about them is hard to find by now.
            If this were the ’80s or ’90s, a paper catalogue from a computer store, a stationer’s shop, or a Radio Shack type of store would have them. In colour.
            *sigh* 😒

            Here are a few links, though.
            Search for “micro floppy” plus protection, sleeve, hull, etc.

            Plastic sleeves:
            https://www.ebay.com/itm/164914562221

            3.5″ diskettes w/ sleeve:
            https://www.mdr.de/wissen/podcast/challenge/retrocomputing-speichermedien-datentraeger-100.html

            I can’t find the paper version, though.
            The web is contaminated by all these hipster articles and nostalgia merchandise.

            My father and I have original vintage 3.5″ floppies boxed up with their sleeves in the attic, though. Maybe someday I’ll take some pictures of them.

            Just imagine that the paper version was similar to the 5.25″ version. It protected the slider part and the backside where the metal plate is.

      1. If you want to do yourself a favour, please have a look at the FlashFloppy or HxC firmware first.

        The Gotek hardware itself is fine, but the original firmware is very restrictive, as far as I know. It only supports one format (the 720 model does 720KB, the 144 model does 1440KB), and the USB pen drive must be partitioned into dozens of slices.

        With a free firmware, all these restrictions go away; you can use image files and add features to the hardware: OLED display, rotary encoder, floppy sounds… all configurable through a text file on your USB pen drive. They will also support the low-level floppy controller tricks that vintage software may use.

  5. I have a floppy that reports it is 4 GB, due to some corruption in the file system. You can (or could, anyway, last I tried it a decade or so ago) store files on it, but it uses a “lossy” storage format. REALLY lossy.

  6. Put a microSD card in one of the internal voids at the corners of the floppy disk, pass its pins through to contacts on the underside of the disk, and modify a USB floppy drive to house a small USB hub and a microSD-to-USB adapter. A stealth microSD “floppy” that still works as a regular floppy disk in any normal floppy drive.

  7. What would be neat is hacking a FlashPath adapter to support SDHC and SDXC, and writing drivers for it for Windows XP and newer.

    Olympus and Fuji were the major purveyors of FlashPath, since they used SmartMedia (before killing SmartMedia right as 256 meg SM cards were due for introduction) until switching to the xD Picture Card.

    Sony’s later floppy disk Mavica cameras could use a FlashPath with Memory Stick.

    Supposedly there was an SD card version of the FlashPath, likely limited to a maximum of 1 gig or 2 gig cards. Though if it used the 1-bit serial interface of SD cards, it could be forward compatible with any later size, like an old GPX MW3836 MP3 player. It doesn’t matter that the GPX has a measly 1 gig of internal storage and IIRC only officially supports up to 4 gig cards; I’ve used cards up to 64 gig in it. It makes a *very slow* card reader, but apparently as long as the filesystem is FAT32 it can read it, via a serial interface that’s slow yet still fast enough to stream MP3 through its decoder.

    Wouldn’t it be a hoot if the SD card FlashPath works that way and the only thing required to put a terabyte of storage into an ordinary floppy drive is a driver? Never mind that it’d take a month or so just to transfer all that data to or from it.
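
For the curious, that “1-bit serial interface” is SD’s SPI mode, and the reason capacity barely matters to it shows up in the init handshake: the host just keeps asking until the card reports ready, then reads one register bit to learn whether addressing is by byte or by block. A hedged sketch follows; spiTransfer() is a hypothetical one-byte-exchange primitive and chip-select handling is omitted, but the command sequence follows the SD specification.

```cpp
// Sketch of SD SPI-mode initialisation (CMD0 / CMD8 / ACMD41 / CMD58).
#include <cstdint>

uint8_t spiTransfer(uint8_t out);  // assumed: clocks one byte out, one byte in

uint8_t sdCommand(uint8_t cmd, uint32_t arg, uint8_t crc) {
  spiTransfer(0x40 | cmd);                        // command token
  for (int s = 24; s >= 0; s -= 8) spiTransfer(uint8_t(arg >> s));
  spiTransfer(crc);                               // only CMD0/CMD8 check the CRC
  uint8_t r;
  do { r = spiTransfer(0xFF); } while (r & 0x80); // poll for the R1 response
  return r;
}

bool sdInit(bool& highCapacity) {
  for (int i = 0; i < 10; i++) spiTransfer(0xFF);   // 80 dummy clocks to wake up
  if (sdCommand(0, 0, 0x95) != 0x01) return false;  // CMD0: enter idle state
  sdCommand(8, 0x000001AA, 0x87);                   // CMD8: voltage check
  do {
    sdCommand(55, 0, 0xFF);                         // CMD55: "app command" prefix
  } while (sdCommand(41, 1UL << 30, 0xFF) != 0x00); // ACMD41 with HCS bit set
  if (sdCommand(58, 0, 0xFF) != 0x00) return false; // CMD58: read the OCR
  uint32_t ocr = 0;
  for (int i = 0; i < 4; i++) ocr = (ocr << 8) | spiTransfer(0xFF);
  highCapacity = ocr & (1UL << 30);  // CCS bit: block addressing (SDHC/SDXC)
  return true;
}
```

A driver that tunnels these exchanges over a floppy head, as FlashPath did for SmartMedia, would inherit that same size-agnosticism.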

  8. I wonder if it would be possible to place a coil behind the slider, put an encoder on the part that turns, and cram a microcontroller plus an SD card inside. Then, with a whole lot of clever coding, you’d have something that allows the SD card to be read in a real floppy drive.

    Of course, to simulate something larger than a standard size floppy would also take some low-level code on the computer side, to pretend that there are somehow a ridiculous number of sectors on that disk. And it would still be slow. But it wouldn’t require breaking any compatibility.
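
Much of that clever coding would be generating valid flux transitions on the fly. The bit-level encoding itself (MFM, as used by PC floppy controllers) is simple; here is a sketch of just that piece, with the analog coil-driving left entirely aside:

```cpp
// MFM encoding: each data bit becomes a (clock, data) bit-cell pair, and the
// clock bit is 1 only when both neighbouring data bits are 0. A fake floppy
// would stream these cells through the head coil at the drive's data rate
// (500 kbit/s for HD).
#include <cstdint>
#include <vector>

std::vector<uint8_t> mfmEncode(const std::vector<uint8_t>& data) {
  std::vector<uint8_t> out;
  int prevBit = 0;
  for (uint8_t byte : data) {
    uint16_t cells = 0;                 // 16 bit-cells per input byte
    for (int i = 7; i >= 0; i--) {
      int bit   = (byte >> i) & 1;
      int clock = (prevBit == 0 && bit == 0) ? 1 : 0;
      cells = (cells << 2) | (clock << 1) | bit;
      prevBit = bit;
    }
    out.push_back(cells >> 8);          // high byte of the bit-cells
    out.push_back(cells & 0xFF);
  }
  return out;
}
```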
