USB-C Is Taking Over… When, Exactly?

USB is one of the most beloved computer interfaces of all time. Developed in the mid-1990s, it undertook a slow but steady march to the top. Offering good speeds and a compact connector, it became the standard for hooking up input devices and storage, and even became the de-facto way to talk old-school serial, too.

In late 2014, the USB Implementers Forum finalised the standard for the USB-C plug. Its first major application was on smartphones like the Nexus 5X, and it has come to dominate the smartphone market, at least if you leave aside the iPhone. However, it’s yet to truly send USB-A packing, especially on the desktop. What gives?

Peripherals Are Where It’s At

USB-C peripherals are hard to find, but they are out there. They’re primarily aimed at the laptop market, as desktops lag in implementing USB-C.

Fundamentally, it all comes down to peripherals. Even in 2020, the average computer comes with a bunch of classic USB-A ports, sometimes 10 or more on a well-provisioned desktop. Meanwhile, it’s still possible to buy laptops that come without a single USB-C port. If the average user were to pick up a new keyboard off-the-shelf, and got it home to find a USB-C connector, they’d be completely out of luck – and likely quite furious. Manufacturers simply haven’t adapted their product lines to the future of USB-C yet. Thus, for the meantime, commodity peripherals – keyboards, mice, and the like – will all continue to ship with classic USB-A connectors.

There’s also the problem of compatibility. For example, the Intel® NUC NUC8i7HVK is a compact computing system that packs a full 11 USB ports. There are five USB 3 ports (Type-A), two USB 3.1 Gen 2 ports (Type-A), two USB 3.1 Gen 2 ports (Type-C), and two more USB 3.1 Gen 2 ports that also support Thunderbolt 3 (Type-C).

Flash drives are actually a solved problem. SanDisk have been shipping drives with both connectors for several years now, and as a bonus, they can plug straight into your smartphone for added storage.

This leads to a situation where a user can plug devices into ports that fit, but don’t support the hardware attached. For example, a Thunderbolt-to-HDMI adapter will fit in any of the Type-C ports, but only work in the two that support Thunderbolt. It’s an absolute headache for even experienced users, most of whom don’t have the time to memorize a multitude of arcane specifications and which ports support which interfaces. Colour coding and labels help, but fundamentally, it’s a backwards step from the old world where you plugged in to a USB port, and things just worked.


Even in the smartphone world, where USB-C made its beachhead, things remain uncertain. The new standard allows for higher current and higher voltages, allowing charging to happen faster than ever. However, not all USB-C cables are up to the job, with many omitting several lines or components necessary to enable this operation. Having a single connector used for both data and charging is handy, but it has fragmented the market into “data-only” and “data and charging” USB-C cables. What’s more, laptops can use the Power Delivery standard too, again creating an even higher grade of USB-C cable that can handle up to 100 W.
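The spread of power levels behind that fragmentation can be roughed out with a little arithmetic. The profile list below is an illustrative sketch of common fixed PD levels, not a complete reading of the Power Delivery spec:

```python
# Illustrative (voltage, current) pairs for common fixed USB-PD levels.
PD_PROFILES = [
    (5.0, 3.0),    # 15 W (basic USB-C current)
    (9.0, 3.0),    # 27 W (typical phone fast charging)
    (15.0, 3.0),   # 45 W
    (20.0, 5.0),   # 100 W (needs an e-marked 5 A cable)
]

def max_power_watts(profiles):
    """Highest wattage a charger advertising these profiles can supply."""
    return max(v * a for v, a in profiles)

print(max_power_watts(PD_PROFILES))  # 100.0
```

A cable missing the components for the higher profiles simply caps you at the bottom of that list, which is exactly how “data and charging” cables end up quietly behaving like basic ones.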

To the uninitiated, these all look the same. It takes a solid understanding of hardware and electronics to be able to tease out the different capabilities of each. The standards are so confusing, even the Raspberry Pi foundation got things wrong at their first attempt. 

Regardless, the hardware community continues to adapt. Hackers fly to a good supply of power like moths to a flame, and we’ve seen many mods taking advantage of the USB-PD standard. USB soldering irons are now common, and others have put it to the job of recharging batteries. We’re also beginning to see community staples take up the cause, with Arduino boards starting to sprout the new connector. It’s clear that the community is ready for the new standard, even if the industry is yet to catch up.

One Real Solution – Give Us The Ports!

An ASUS gaming motherboard launched in late 2019 – featuring just one USB-C port.

Realistically, peripheral manufacturers aren’t going to start making keyboards and mice with USB-C connectors just yet. With laptops having one or two ports at best – one of which is often needed for charging – it’s simply unworkable. The desktop scene is worse, with even high-end motherboards often featuring just one USB-C connector. With a normal setup usually involving a keyboard, mouse, webcam, and often a headset, too, one cable is simply nowhere near enough.

Hubs, dongles and adapters are worst-case workarounds, not a way of life. Instead, moving forward is going to require commitment on the part of hardware companies. Laptops and desktops need to start shipping with three or more USB-C ports, and slowly reduce the number of USB-A ports, if we are to see a transition to a singular connector ecosystem. Once there’s an installed base, it won’t take long for factories to switch over to shipping hardware with a USB-C connector on the end instead. Legacy computers will then be able to get by with adapters from USB-A to newer USB-C hardware, where it’s much more acceptable to make such compromises. Devices like existing printers won’t even need an upgrade – a simple USB-C to USB-B cable will allow them to work seamlessly with newer computers.

Additionally, the USB-IF, in conjunction with Intel, should do whatever is possible to bring about a stable capability set for the port. With USB 4 on the horizon, the timing couldn’t be better. Obviously, with a single cable handling high-power charging, high-bandwidth video, and general interface duties, there will always be confusion. The less technologically inclined will look to the skies and wail when their pocket-sized USB power bank won’t run their MacBook Pro – and rightfully so, I might add. The die has been cast, however, and there is room to at least ease the process going forward. Make as many USB-C ports available as possible, and make as many of them act the same as each other so that users know what to expect. Then, and only then, will we know peace – and the rest of the world will join the party!


172 thoughts on “USB-C Is Taking Over… When, Exactly?”

  1. Most beloved? Or most successful/ubiquitous?

    Not dissing the article – I enjoyed it quite a bit! – I’ve just never heard anyone expressing their love for USB.

    1. Those who do not love USB don’t remember the dark times before it. The snail-paced parallel ports, and the pain of getting serial port drivers working.

      And Windows seeing a certain pattern of data in your serial port and deciding that meant it was actually a serial mouse…

      No complaints about PS/2 keyboards, though.

      1. Disagree. I was there, as an electrical engineer just out of school when USB was introduced. USB replaced the PC-AT keyboard port, the PS/2 keyboard and mouse ports, and serial and parallel ports, but it did this in a very bad way, replacing admittedly poorly-documented “standards” with CLOSED standards that required a high initiation fee to get access to, and access was not legally available at ANY price to individual developers. USB was an attempt to drive a large number of small players out of the PC peripherals business, and for a time, did just that. I believe that USB was responsible for shutting a generation of would-be hackers out of their own computers.

        Sure, serial ports and IEEE 1284 parallel ports were slow, but USB 1.0 wasn’t exactly a bolt of lightning either. In fact, it fell far short of the IEEE 1394 (Firewire) standard that preceded it, and not even USB 2.0 was able to reliably transfer live video, something which Firewire could do from day one.

        And “the pain of getting serial port drivers working”???? Few people even TRIED to write USB drivers, because of their closed nature. Yeah, the patchwork that was PC peripherals in the port + IRQ setting days needed an upgrade, but USB was not a good one. Even today, with open-source backward-engineered USB stacks freely available, non-corporate USB developers have to either subvert one of the Human Interface Device standards or use a jacked vendor ID in order to make their own USB device. FIE on you, USB!

        Now let’s move on to USB-C. The committee that came up with THAT was clear enough on the limitations of USB 3.0, that they specified conductors to be used for SOME OTHER SERIAL STANDARD, such as Thunderbolt, DisplayPort, SATA, or PCI-e, to be included, just so the USB-C connector could, at least in theory, be used for anything needing to connect to a computer.

        1. Um, I lived through the same time period and developed USB devices, as an individual, using assembly language. I don’t recall it being a closed standard or requiring any reverse engineering. In fact, what I remember most is downloading and printing a two-inch-thick exhaustive full standard directly from the USB-IF website. If there was a time when USB was a closed standard please correct me, but it surely wasn’t closed when I was using it around 2000.

          Now, the VID/PID assignment and licensing fees are a huge sore spot for me, but the standard itself never seemed to be an obstacle.

          1. You are right – I misspoke. What USB was conspicuously missing, though, was a reference design. What I really meant when I said “reverse engineered”, was that USB was designed to be difficult to implement, with not even the most basic designs shared openly. So to a developer outside the “in group”, as you are probably too well aware, it was almost as challenging as reverse engineering.

          2. I remember the sheer wonder at all of the crazy input devices that the folks on the committee thought they needed to document as “standard” HID devices. There’s a baseball bat in there!

            Ask Hackaday: What’s the Most Off-the-Wall HID device?

        2. When USB 2.0 came along, right around when Apple was releasing OS X (well before it was really ready), Apple decided to NOT support it on Mac OS 9.

          No 3rd party was interested in producing USB 2.0 drivers for Mac OS 9, despite the fact that whoever did it could have made a pretty good chunk of change while Apple was putting a polish on OS X. There would’ve been a market for not-outrageously-priced USB 2.0 OS 9 drivers until Apple took away “Classic” support.

          But nope. Nobody doing Macintosh software and/or hardware would touch it, because Apple didn’t do it themselves, and of course Apple Is God to Apple’s core customer base. Sonnet and Orange PC would’ve been the companies most capable of doing USB 2.0 drivers, since they sold USB cards and many other pieces of hardware for Macintosh.

          The computer industry has the same lack of vision in refusing to produce a USB C ExpressCard, despite producing a wide array of USB C cards for PCIe x1 slots in desktops, and one company makes a USB C card for Mini PCIe. ExpressCard is also PCIe x1 so there’s no technical reason to not make a USB C ExpressCard. There are *millions* of laptops with ExpressCard slots.

      2. I still love Centronics and V. 24 ports. 😍
        They are reliable, have screws and can be used without requiring a magnifier. ;)

        Most importantly, they allow for logic circuits without any microprocessors.
        I still use them, along with game port, to design little electronics tools for personal use.
        Unlike other ports, they can be accessed from within Quick Basic 4.5 or PowerBASIC running on a real-time OS like DOS. 💚💙❤️

      3. Why do people seem to have such fondness for PS/2 keyboards?

        While the USB keyboard protocol does have several dumb design decisions, the low speed of PS/2 means that it’s going to have more latency than USB does at 1000Hz polling.

        The bit of real world testing I’ve done with slapping a key on a high speed camera, the PS/2 boards I tested were all slower than the USB ones.

        1. The tiniest of delays either way – as you kind of proved by having to use a high-speed camera to find out. It’s not like they are actually slow – each keypress registers fast enough that you would never know it wasn’t immediate by human senses alone.
          The reason I suspect most folks who prefer PS/2 keyboards prefer them is that bad PS/2 keyboards were not at all common until the very end of the connector being in common use (about the same time all the really crappy USB ones arrived too). We all either remember or have discovered the wonderful build quality and great user experience of the well-built keyboards of yesteryear.

        2. They don’t know better.

          It is true that PS/2 works on IRQs while USB uses a polling mechanism. That however doesn’t mean that PS/2 has lower latency, as the data transmission speed on PS/2 is very slow – 10 kbps max slow. PS/2 key events usually require multiple bytes, so if you actually do the analysis, you’ll find that USB takes a hit on the initial delay but can transmit an entire packet much faster. So in the end, a key event gets sent faster than over PS/2.
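The timing argument in the comment above can be roughed out numerically. The clock rate, report size, and polling interval below are typical assumed values, not measurements:

```python
# Back-of-envelope latency comparison: PS/2 serial clocking vs USB polling.
# Assumed values: PS/2 clocks roughly 10-16.7 kHz (12 kHz used here);
# a full-speed USB keyboard polled at 1 kHz with an 8-byte HID report.

PS2_CLOCK_HZ = 12_000        # assumed typical PS/2 clock
BITS_PER_PS2_BYTE = 11       # start + 8 data + parity + stop
USB_FS_BITRATE = 12_000_000  # full-speed USB, bits per second

def ps2_event_ms(num_bytes=3):
    # e.g. an extended-key break code is three bytes on the wire
    return num_bytes * BITS_PER_PS2_BYTE / PS2_CLOCK_HZ * 1000

def usb_event_ms(report_bytes=8, poll_interval_ms=1.0):
    # worst case: wait out a full polling interval, then send the report
    transfer = report_bytes * 8 / USB_FS_BITRATE * 1000
    return poll_interval_ms + transfer

print(f"PS/2 3-byte event: {ps2_event_ms():.2f} ms")  # ~2.75 ms
print(f"USB worst case:    {usb_event_ms():.2f} ms")  # ~1.01 ms
```

Under these assumptions the slow PS/2 wire clock dominates, which matches the high-speed-camera results mentioned a few comments up.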

    2. Oh, I’m a big fan of classic USB, particularly on microcontrollers. I remember the dark days of homebrewed parallel programmers that never worked right, and I don’t ever want to go back.

      Glad you liked the piece! Always nice to get feedback from the community. :)

      1. Might want to remove those rose-colored glasses. In those dark days, THERE WERE NO USB microcontroller programmers! It took YEARS for these to show up due to USB’s closed nature, and even then, these were basically USB-to-parallel-port or -serial-port adapters coupled to the old programmers. You might want to think again about who the forces of darkness were.

        1. what are you babbling about? There were cheap Chinese USB mice and serial port adapters available from the earliest days of USB. I also have an ancient no-name USB SCSI adapter. Certainly no license fees were paid for these.

          1. Which were made the same way they are today – by directly copying existing higher-priced products, even down to using their vendor and product ID numbers, in some cases (look up FTDI USB-to-serial adapter, for example). USB did nothing to reduce the import of direct copies; it only shut down innovation by discouraging (explicitly) the development of USB products by individuals or small companies.

            I say “explicitly”, because the USB-IF made it clear on their website, that hobbyists and experimenters were not welcome, even if they were willing to pay the exorbitant fee. This was part of their design, as the Vendor ID value was 16 bits, meaning that there could only – EVER – be 65535 USB vendors in the world, by the original specification. Nor could hobbyists band together to share Vendor IDs – when some companies tried doing this, USB-IF threatened revoking their Vendor IDs and taking other legal action, claiming that they were violating USB-IF’s license agreement. This meant that, for example, you could buy a microcontroller from Atmel, and use the circuit and firmware described in Atmel’s application notes for creating a USB device with it, BUT you could not use Atmel’s Vendor ID or Product ID, unless you made the product identical to Atmel’s. Which meant that you could not produce a modified version of that circuit unless you had a Vendor ID of your own.

            These WERE the dark ages.
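For the curious, the 16-bit ID fields argued over above are concrete things on the wire: a host reads idVendor and idProduct as little-endian words out of the 18-byte device descriptor. The IDs below are made up purely for illustration:

```python
import struct

# Hypothetical IDs for illustration only - not assigned to any real vendor.
VID = 0x1234   # idVendor, offset 8 in the 18-byte device descriptor
PID = 0x5678   # idProduct, offset 10

# USB multi-byte fields are little-endian, so the low byte goes first.
id_fields = struct.pack("<HH", VID, PID)
print(id_fields.hex())  # '34127856'

# The entire vendor space is a single 16-bit word:
print(2 ** 16)  # 65536
```

That single 16-bit word is the whole scarcity being complained about: every vendor on Earth has to fit in it.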

          2. we have different definitions of “dark ages” because computer technology flourished massively during your “dark ages” and even if USB were cheap and easy you’d still be bitching about how it’s impossible for individuals to get their gear tested for FCC and UL compliance and you’d still be talking about “dark ages”

        2. Can’t reply to your below comment (that starts with “Which were made the…”), but I reply here to that.

          I wasn’t doing USB related things in the good old bad old days, but the limitation of VID/PID IDs (as daft as that seems now) can now mostly be worked around. I don’t know how legal it is, but for years now both Microchip and FTDI let you register a PID on their VID, provided you are using their controllers. I’ve personally done that. Presuming that those two fairly large corporations let you do this tells me it is legal now, though I honestly haven’t checked yet.

          USB-IF seems like a group who formulates ideas, puts them all in a hat, then grabs a handful out of the hat and calls it a standard. I mean the constant retroactive renaming of USB 3 ‘versions’ is madness.

          1. You can “work around” the VID/PID, but it does mean that small scale manufacturers (Commercial hobbyists, essentially) are often forced to supply you with sketchy instructions to install their non-compliant drivers via Zadig, etc, if you want to use them in windows, or they hijack another VID/PID, which sometimes backfires when windows downloads a driver update for a different device and applies it.

      2. I like good old USB 3.0 because it can run all these
        power-hungry USB devices so nicely.
        Especially the older NEC/Renesas USB PCIe cards were working quite well.
        I used them since ~2010. On Windows XP, even! 😎

        Also, USB 3.0 finally caught up with fast
        CF cards. USB 2.x card readers never satisfied me.
        I keep them, though, since they can handle classics, like SmartMedia Cards. 😁

        1. I was having problems with a PCIe USB 3 card with a Renesas chip. It refused to hook up some USB 3.0 devices to its USB 3.0 root hub, but if I plugged them into the ports connected to the hub chip on the card they’d work at their best USB 3.0 speed.

          I tried various drivers from Microsoft and Renesas, found out how to update the Renesas firmware, went round and round between either having the ports directly connected to the controller chip OR the ports connected to the on card hub chip operating properly.

          I finally dug in, did A LOT of research on various cards and found that Inatek makes a 5 external, 2 internal port PCIe x1 USB 3.0 card that uses Fresco Logic chips. All ports correctly recognize everything USB 3.0 that I’ve plugged in.

          Renesas quit doing their own USB 3.0 drivers years ago, before the release of Windows 10, they provide information to Microsoft, and you’re stuck with whatever Microsoft produces.

          Fresco Logic’s latest USB 3.0 drivers are much newer and were released for Windows 10.

        1. Problem is not so much the clone chip, I think, but rather the drivers made by the “original” manufacturer, who takes innocent users hostage (FTDI-gate).
          If we had good-natured, unofficial drivers made by product pirates, things would be less troublesome.. ;)

    3. USB is seriously one of the most horribly expensive, insecure, high-latency, mechanically unstable, fire-hazard interfaces in history.
      Back in the day I’d have accepted anything – even SpaceWire over (Micro-D) DE-9/Mini-DIN-9 – but not USB.
      … just burn it before it burns you.

  2. An ASUS gaming motherboard launched in late 2019 – featuring just one USB-C port.

    and 2 headers for cases but ya know. USB C on the computer is just a mess, on my desktop what do I use this for? nothing I own plugs into it, including my usb C enabled phone.

    On my laptop I can plug my dock into it, which sucks because there’s this thicc cable hanging out of this dainty little port with nubbins for restraint. Breathe on it and you lose a screen, then wait 3 min to regain USB and network – super awesome. Maybe we can shove more shit in a smaller, more fragile connector and call it progress.

    1. I haven’t noticed USB-C being particularly fragile, my charging cables have remained snug, better than the old micro-USB did. The one that is a real threat in terms of equipment lifespan is still HDMI, where the jack is designed to be less durable than the plug. I’ve lost at least one piece of video equipment (a DirecTV HD DVR) to rough handling of the HDMI cable. The idea that certain jacks and certain cables have different levels of functionality is pretty absurd — there needs to be a color-code, preferably not visible when plugged in, like the old yellow-for-powered and blue-for-USB3 you’d see on type A jacks.

      1. How is the HDMI jack less durable than the plug?
        We’re talking about the male jack (device) and female plug (cable)?
        It was my understanding that the contacts in the plug on the cable are springs while the contacts in the jack are static. Springs will wear out while the static HDMI jack remains.

    2. The only real beneficial use I can see for desktop users (unless stuff changes) is a USB-C cable coming out
      of the server closet, heading to a dock and splitting out for everything, for a super clean setup.

    3. I could see a thunderbolt usb-c cable coming out the home server room to a dock on your desk for a clean desktop
      setup. But then you would still have the usual cable mess on the desk. Maybe a bluetooth dock and all bluetooth
      stuff? But then you get the battery changing/charging nightmare thing of bluetooth.

    4. “Why USB3 Type-C Isn’t on More Cases | How Cables Are Made Factory Tour” See link below.
      The first line of the video pretty much says USB-C front-panel cables are between 4 and 10 times more expensive than USB 2.0.

      Then you add the hardware overheads of what the USB-C port should support – each of those ‘features’ requires some additional resources, be it extra PCIe links (very limited on PCs), power electronics for PD (extra parts), or the logistics of routing video signals (unless it is the iGPU or on the GPU card) to the USB-C ports.

      1. It is for now, but eventually economies of scale should close that gap, no?

        But in any case, this is just how standards work lol. There is never really a standard. Right around the moment USB-C starts becoming ubiquitous, there will be a new master standard and people will be scratching their heads wondering why it’s being picked up in such an uneven fashion.

        1. You haven’t bothered watching the video before making the comment.

          >economies of scale
          It is ironic because it is too labor intensive that it cannot be scaled up. They could exploit cheaper slave labors in other 3rd world countries if that’s what you are getting at.

          They’ll need to improve manufacturability of these cable assemblies with more automation before scaling up.

  3. Is the USB Implementers Forum being jerks about licensing?
    I mean, USB A, B, mini, and micro patents are probably expired, so it would cost mfgrs very little to continue using those connectors.
    Anyway, I’m upset (mildly) that 9-pin RS-232 ports are not on new PCs.
    Get off my lawn, ya whippersnapper!

    1. I have one on the thin client I just found on eBay :) 1.65GHz, up to 16GB of RAM, got an RS-232 on the back, has DisplayPort but also a DVI connector (with the analog cross on the right side for experimenting with sync pulses and such). AMD CPU and GPU. Twenty bucks. Honestly I dunno why people buy SBCs sometimes when thin clients are sold for nothing by the pallet-full on auction sites.

      I’m gonna turn it into my truck computer. Hook it up to an old Motorola MDT-9100, get real weird with it.

  4. >Instead, to move forward is going to require commitment on the part of hardware companies.

    “Hey, let’s stake our next fiscal year’s survival on devices that are so poorly differentiated (the USB-C/Thunderbolt and charge/data farragos mentioned in the article) that it will only disappoint the end user.”

    What’s gotten lost is that these devices (and the standards behind them) are at the appliance level. Nobody cares how they work, so long as they do so, simply and reliably.

    Anything that gets in the way of this simplicity will sell some adapters but the disappointment in the device requiring adaptation will remain.

      1. That would be nice. But “USB 3.1 Gen1” as a seemingly pointless retcon of USB 3.0 and “USB4™” as opposed to USB 4.0 are IMO very bad signs about the future. My brain says “officially you can run PCI-E over some differential pairs, isn’t that good and proper?” while my gut says someone lost the plot a while ago.

    1. ” Laptops and desktops need to start shipping with three or more USB-C ports, and slowly reduce the number of USB-A ports”

      “Hello yes we would also like to remove the ports that our users still use, what the VGA port god no something way newer we mean the USB-A port. Also can we start adding in these ports that all look identical but have slight differences, no not DVI that at least worked when you used the wrong cable we can’t have that!”

      Yeaaahhhh, USB-C may be a huge improvement in a lot of ways, but companies don’t care about tech, they care about keeping their customers happy, and this shit makes no one happy.

  5. “Classic” usb had many of the same problems: cables that were power-only, cables that only supported slow data rates, and of course the mini-A disaster. Their naming has been absurd too, with high speed, super speed, 3.1 being the same as 3.0 (I think).

    The USB-IF missed an opportunity to require some sort of labeling (overmold shape, symbol, color, *something*) and has continued their inscrutable naming scheme. Type-C is great, but it has been hobbled by idiots.

      1. There is (apparently) no plain 3.1– 3.1 Gen 2 is the successor to 3.0 which they renamed to 3.1 Gen 1 because “reasons”. But there is also 3.2 Gen NxN which doesn’t only follow but also *encompasses* all of 3.1 Gen N (and by extension 3.0), and things just get better and better…

      2. Hey, sequential numbers are hard. I mean who could say what comes after 3 without looking it up?! I think it might be 2 again, but there’s just no way to know for sure.

  6. When mice and keyboards transitioned from PS/2 they shipped with an adapter dongle, like this: https://images-na.ssl-images-amazon.com/images/I/51l4bK6A44L._AC_UL160_.jpg
    USB A to C is arguably simpler, because they are electrically compatible, and don’t need alternate firmware implementations speaking both PS/2 and USB protocols. I’d think that manufacturers could ship their keyboards or mice with a simple adapter as well, solving the chicken and egg problem.
    https://m.media-amazon.com/images/I/61hh93h2ZbL._AC_UY436_FMwebp_QL65_.jpg

  7. I love the concept of USB-C, but I hate the implementation.
    The connector is a royal pain: after a year or so of using it on my phone, the connector just falls out while charging, and now it has stopped charging at all.
    Mini USB has a much more robust shell, and it is hard to get wrong when inserting. Couldn’t they have used that as a base for USB-C?

      1. elmesito said MINI, not micro. i have yet to find a broken mini cable or port on a device i own and have had to replace a micro cable about every two weeks and a connector in a phone about once every half year…

        but that is just the mechanical part. the electrical part chaos is something we have to learn to live with untill the “one connector to rule them all” era is over and finally you can see by the shape of the connector what it’s for…

        1. Heck I’d settle for just going back to the Serial/Parallel where you knew which damn way was up all the time, while being electronically simple to implement! Not like its hard to step up the speeds now – heck we could use HDMI cables for everything if we wanted – by their nature they are capable of fast data transfer and usually shielded no reason they couldn’t be used for every single purpose (assuming you don’t mind either the complexity of something like EDID/USB ID’s so every device is smart enough to work ‘right’ or a careless user being able to release the magic smoke putting power on a data bus). Not saying its the best for the job but a cable that only plugs in one way simplifies the electronics, no smarts needed in the cable and every line can only go to exactly one place (assuming correctly made cables).

          I’ll also second the crap durability of the smallest USB connectors. Mini is alright but still pushing it a little for me. I don’t see what was wrong with normal USB A and B type connectors, other than the annoyance of finding it’s the wrong way round half the time – never seen ’em fail without really serious abuse…

        2. OK, I’ll be that guy – I broke a MINI cable! Nikon D2/D200 used it (to be pedantic, “mine still does”) and I put a _lot_ of cycles on mine. Bad part is I needed a replacement before 5:00pm, and wound up driving 45 minutes to get one – went to a Radio Shack Dealer (franchised)! Micro and full sized everywhere, USB-C even in gas stations, but Minis are a bit of an oddity now. :-)

          The guy who owned the Shack claimed it was the last one in the state. He had some inventory in there that looked contemporaneous with the Reagan administration. The cable, BTW, was generic, and definitely not RS packaged or branded. Got the pictures off, uploaded, and done with several minutes to spare. Now that I think about it, I could have just bought a CompactFlash adapter. I guess I was a little panicky.

        3. Been using Mini and micro for the last ~10 years now. Ran through approx. 15 MINI cables on the way. Tons of connection issues and broken ports. I can’t tell why exactly, but with the same treatment I broke one MICRO cable in that time frame, and even the daily plugging on the 6-year-old phone did not really wear it out. I have to say, cleaning a MICRO – from being clogged into malfunction with dust – is a big pain. I have a special piece of hair-thin sheet metal around for that task.

          My theory is that charging is always intended with a MICRO cable and the manufacturers just use slightly more resilient cables.

          Also, MICRO is either in or out for me. With MINI I always had to wiggle, replug, wiggle, and sometimes had to hold it in a skewed position with tape to even get it to work.

          I even reworked some Dev boards to MICRO to ease the pain.

      2. We went from a port that only charged in one direction using a bevy of non-standard mechanisms to set the voltage to a slightly newer one that works in _two_ directions with some standard mechanisms for setting the voltage and slightly fewer non-standard mechanisms…

        All USB-D needs is to add infinity more directions to plug it in before it catches up to old-school barrel jacks as a charging port.

    1. Exact opposite experience here. USB C ports on devices have been rock solid reliable, cables have worn out with time and been replaced. Mini USB on the other hand killed three £100 microphones because they were mounted on movable arms and the connection wiggled with time causing the socket to wear out outside of warranty. Amusingly the £5 cables were fine afterwards. My current mic has both mini USB and XLR for when mini USB fails.

      If you’re having connection problems, try a new cable.

    2. I’ve had usb C ports get loose over time. I found out it’s because (in my case) they had lint trapped in them. Get a small wooden toothpick or something and see if you can clean the port up a bit. It made a huge difference for me!

  8. The first thing most phone users tend to buy is a USB-C to USB-A adapter.

    USB-C will probably become the new Betamax format, as a 24 pin micro-contact plug with support chips violates the original use-case for USB-A: Simple and inexpensive hot-pluggable robust connectors that auto-configured in the host OS.

    Also, faster bit rates are meaningless given most video hardware connections are already handled over HDMI or Displayport, and these are less complex stacks to run in the peripheral hardware. I would wager the lack of popularity is due to USB-C devices having increased warranty failure rates for some vendors, and 3rd party adapter sales are just not going to make up the profit loss difference. The only reason general consumers even gave the new ports any attention was likely due to the good-will decades of USB-A devices have earned with people already.

    Mandatory XKCD ;-)
    https://xkcd.com/927/

    1. “Also, faster bit rates are meaningless..” … given the crappiness of most cables, just driven home to me as I was dumping files to an external HDD painnnffffulllllyyyy slllooowwwwllllyyy and I grasped the cable and the rate shot up *sigh*

          1. You can buy really cheap SATA-to-eSATA back-panel mounts for your other full-sized machines if that’s more comfortable. Usually you’d expect an eSATA redriver chip in the chain, but the connection works without one too – just not over long distances.

          2. What I really meant is that, in general, eSATA would be a whole lot more useful if it appeared more on cramped (compact) and portable systems rather than on higher-end boards that have boatloads of storage options.

            The kind of machines that would want a panel mount probably have both SATA ports spoken for.

    2. This consumer is VERY VERY happy with their new MacBook Air and its two USB-C connectors. I got a USB-C “docking station” for very cheap for when I want to plug lots of stuff into my featherweight computer. The I/O bandwidth is breathtaking! Simultaneous full blast gigabit ethernet and full blast disk I/O. Very sweet. Everything just works every time and it is really glorious to not fumble with USB plugs, just plug them in.

      Blah blah blah your words are just words but USB-C is REAL and it works great.

      1. So you are VERY VERY happy that you can plug an extra device into your MacBook Air to get the connectors you actually have a use for. You want to know what my happiest experience with a MacBook was? When on my 2012 MacBook, I had three USB 3 jacks, an Ethernet jack, an HDMI port, and a Thunderbolt port. Isn’t it great that eight years later, you can now buy a docking station to get all of this?

        1. The purpose of a docking station is to let you use your laptop as a desktop but still easily revert it to laptop form. Personally, I use a docking station and love it: one cable for keyboard, mouse, and monitor; simply unplug one cable and go. I don’t believe this should be used as an excuse to remove jacks from the actual device, however. As a supplement, docking stations are great for devices you don’t want to move with your laptop, like video or Ethernet.

          1. Agreed – never going to argue against a docking station – but the point of a portable computer is that you take it places. You don’t tend to take the dongle rat’s nest or a full docking station with you! So portable machines need enough I/O options to be usable on the move!

            How am I supposed to take my laptop to debug a CNC machine/3D printer/normal printer, configure and verify a LAN, or test some audio-visual gear if I have to go home and dig out the right dongle, or carry a huge bag and upend it to untangle the dongle pile, just to get the connectors I actually want right now? In my computer call-out bag I have just known-good cables of all the common types, the computer (if it’s not in use), and its charger – I don’t need yet more crap in there to let me use those cables with the computer!

      2. Meanwhile… on my laptop, there’s:

        – RS-232
        – VGA
        – Cardbus/PCMCIA
        – ExpressCard
        – 1GbE Ethernet
        – Dial-up modem (Conexant WinModem, so not terribly exciting)
        – 2 USB 2.0 ports (type A)
        – 2 USB 3.0 ports (type A)
        – HDMI
        – a dedicated power connector (barrel jack)
        – 3.5mm microphone/headphone jacks

        The machine is getting on for 7 years old now and still running just fine. About the only thing that has gone wrong is that the rubber feet started going mushy, so I dug those out and glued on some feet that came with a rack-mount switch.

        For a while I had a PCMCIA network card installed, handy for configuring routers. (Windows 7 didn’t have drivers but under Linux it JustWorked.) I don’t use VGA much, although it is there if I need it (the projector we have here is VGA).

        The only ports I really don’t have a use for would be the modem and the ExpressCard. Everything else sees regular use. The serial port is brilliant, low-latency and with control lines that work perfectly, unlike USB.

        Whilst the laptop is heavier than most, it’s nice to just grab “the laptop”, and everything is there. No adaptors needed.

  9. Keyboards and mice are hardly the killer app it’s looking for – they don’t even strain USB 1.0.

    However, the new demand for homeworking IT equipment might drive adoption a bit, when people find they have to use a USB C dock to get all the lovelies, like more than one extra screen.

    I guess I wouldn’t be too upset if a new laptop only came with two USB-C ports, provided it also had two USB-A. It would annoy me, though, if it only came with one and that needed to be the charge port.

  10. “The less technologically inclined will look to the skies and wail when their pocket-sized USB power bank won’t run their Macbook Pro” … this actually works fine.

    Sure, you won’t get much input power (I see 1A @ 5V according to iStat Menus), but yes, the MacBook Pro will charge with whatever you can give it. It’ll charge slowly when asleep and discharge slightly less slowly when running, just as a naive user who knows nothing about volts and amps would expect.

          1. No, he’s saying the MacBook will still go dead, it’ll just take a little longer. It’s like using a 45-ish watt Dell charger on a Dell Precision workstation laptop (a few years ago, again an emergency kludge). It still went dead, but it took longer. The BIOS warns about that at boot time if it detects too small a power supply.

      1. Run as in be completely powered by? It doesn’t. Instead, you can plug an insufficient power supply in, and the MacBook will drain its battery a little whenever it exceeds that power supply.

        That’s pretty neat though: getting 5W or 10W constantly is enough to limit the battery drain to the point where you could work for 20 hours before your laptop needs to shut down. If you need to charge your laptop in an emergency, you can slow-charge it overnight. Great for when you forget your charger, because USB-A ports are everywhere, as are A-to-C cables. I wish more laptops implemented this.
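        A back-of-the-envelope sketch of that claim, with all numbers assumed purely for illustration (a 50 Wh battery, a 12.5 W system draw, and a 10 W charger):

```python
# Rough runtime estimate when a laptop draws more than an undersized
# USB charger can supply. All figures here are illustrative assumptions:
# a 50 Wh battery, 12.5 W total draw, 10 W of input power.
def runtime_hours(battery_wh, draw_w, input_w):
    """Hours until the battery is empty; None if the charger keeps up."""
    net_drain_w = draw_w - input_w
    if net_drain_w <= 0:
        return None  # input covers the load, so the battery never drains
    return battery_wh / net_drain_w

print(runtime_hours(50, 12.5, 10))  # 50 Wh / 2.5 W = 20.0 hours
```

        Under those assumed numbers, a 10 W trickle turns a 4-hour battery into a 20-hour one, which matches the figure above.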

      2. It doesn’t have to run on 5W. If you can supply 5W, then the battery needs to supply 5W less than it would without the cable whenever more than 5W is needed. If asleep, then the battery will charge… just very slowly.

    1. It should not work with just any charger; it has to be one that can supply the required voltage once negotiated over USB-PD. It will not work with, say, a dumb 5V charger.

      I am not contesting the 5W; I measured quite a few PCs at less than 4W once idle, with a full battery and low brightness.

  11. I definitely wouldn’t design something like USB-C if I had the choice, but it seems to be the best we’ve got.

    The whole combination with HDMI thing seems a bit awful, but understandable.

    The CC pins are just a straight-up horrible waste that could have been something like Dallas 1-Wire… or nothing. I’m quite sure there is no need for a separate configuration pin when Ethernet auto-negotiates perfectly well, while also supplying power, on only 8 pins.

    I’m not entirely sure why separate pins are needed for the USB 2.0 legacy mode, and why they can’t just make dual-function transceivers. Do ASICs not exist in this universe? Is the 2.0 protocol impossible to extend? Do they not expect to make a large enough quantity to do this stuff cheaply?

    It seems like the pin count for the same features could be reduced by at least 2, maybe 4 if they dropped the alternate-modes thing, and they could have used the legacy USB pins for I2C and finally given us something truly ubiquitous to easily connect to even the cheapest microcontrollers, plus lots of cheap EEPROMs to identify cables with. It seems to have worked fine for HDMI.

      1. Technically there’s the trick land-line phones had where they could do full duplex on two wires, or the similar single antenna wireless duplex, and signalling levels can be auto-negotiated (doesn’t HDMI already do variable drive levels?).

        They also could just stick with half duplex, since it’s already Pretty Darn Fast.

        They still might need an extra pair if they wanted to go with separate full duplex, but it’s better than 2 extra pairs. I guess they had their reasons though.

  12. My main gripe about “USB-C” is how fragmented the connector is.
    It’s used for everything from USB to Thunderbolt to DisplayPort.

    When one sees a USB-C port, it might not even support USB itself… and that is frankly a bit annoying.
    (Plug in a thumb drive and nothing happens; that there, my friend, was a DisplayPort disguised as a USB-C connector. And nope, that one to the left is Thunderbolt, and no, the one on the right is the charger port and nothing else on this laptop model…)

    So that is my main guess to why USB-C isn’t getting picked up in the desktop world.

    Not to mention that a whole slew of people are waiting on case makers to put more USB-C ports on their front panels, while others like sticking with the same case for a few of their builds. Though the majority of desktop users likely don’t really need more than 1 USB-C port regardless, and mainly use it for connecting to things like phones and storage devices. (Some do use USB-C for Thunderbolt expansion, but that is a bit exotic as is…)

    And then we have the issues of the power delivery standard.
    It would have been lovely if it sent the power-state information as a DC-coupled “constant current” control signal over one of the differential data lines. Like 1mA per volt (+5V as a start, so 0mA = 5V, 1mA = 6V). (This would require that our high-speed digital data signals be AC-coupled, but that is just some capacitors on either side; we could use a fancy transformer, but that is overkill…)

    Since then we could use PD for driving brainless loads and make USB-C usable for practically anything that fits within its 20 volt, 5 amp capabilities. (And if one implements it “correctly”, then we can have bidirectional charging, ensuring that both sides can state their desires while the power sources also get the ability to state their abilities – and we only need two differential pairs to do all of that, and can do most of the power controlling with fairly trivial components.) In short, we could use a single resistor to inform the supply that we currently want about 18 volts, or any other voltage for that matter. (A sufficiently low-valued resistor would practically tell the supply to give you as many volts as it can… And yes, if you also want to send data, then including some inductors to ensure you aren’t shorting out your data line is recommended.)

    It would have been nice if Power Delivery were a simple standard that didn’t actually require a microprocessor in the load…
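    A sketch of that hypothetical scheme – nothing from any real spec. It assumes (my assumption, not stated above) that the supply holds the control line at a fixed 3.3 V reference and infers the requested voltage from the current a resistor to ground sinks (0 mA = 5 V, 1 mA per extra volt, as described):

```python
# Hypothetical "1 mA per volt" voltage-request scheme from the comment
# above. Assumed (not part of the comment): the supply drives the control
# line at a fixed 3.3 V and measures the current the load sinks to ground.
VREF_V = 3.3        # assumed control-line reference voltage
BASE_V = 5.0        # 0 mA = 5 V, per the comment
VOLTS_PER_MA = 1.0  # each extra milliamp requests one more volt

def resistor_for(target_v):
    """Resistor (ohms) from the control line to ground requesting target_v."""
    needed_ma = (target_v - BASE_V) / VOLTS_PER_MA
    if needed_ma <= 0:
        return float("inf")  # no resistor fitted: stay at the 5 V default
    return VREF_V / (needed_ma / 1000.0)

def requested_voltage(resistor_ohms):
    """Voltage the supply would infer from the measured current."""
    ma = VREF_V / resistor_ohms * 1000.0
    return BASE_V + ma * VOLTS_PER_MA

print(round(resistor_for(18.0), 1))        # ~253.8 ohms asks for ~18 V
print(round(requested_voltage(253.8), 2))  # ~18.0
```

    The appeal is that a brainless load needs only one resistor; a catch worth noting is that a dirty or high-resistance contact would silently change the requested voltage, which digital negotiation avoids.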

    Lastly, considering that we have USB-C cables that end with the same connector on both sides, that brings up the question: if I were to plug my computer into another computer, could I then send files between them? I know the answer is currently “NO”, but it would be nice if one could access a “shared” folder on the other computer and drag and drop files over that way. USB-C can currently reach 20 Gb/s (USB 3.2 Gen 2×2 or something… otherwise 3.2 with its 10 Gb/s is also fast), and it’s CHEAP compared to 10 Gb/s Ethernet… Not to mention that network sharing is its own can of worms if one wants to quickly send files between two devices where one might not be on the network. (And USB sharing is likely safer than inviting a random device onto your network for a quick file transfer. (Though rubber duckies, and the whole issue of DMA on USB and PCI accessing anything in the first 4 GB of RAM, kinda make USB file sharing even more sketchy, to be honest…))

    1. “It would have been lovely if it were sending over the power state information as a DC coupled “constant current” control signal over one of the differential data lines.”

      I’m afraid you’re a couple of decades late to the party. Back in the “dark ages” before USB, it was possible to create computer peripherals to plug into a serial or parallel port using nothing but logic gates or other non-microcomputer chips such as UARTs. (Even then it was starting to get difficult – when I wanted to plug a printer into a serial port, I found that the only UART available that did not require a microcomputer bus to set it up was the Intersil 6402. This was in the mid-1980s.) But with USB, your peripherals had to be computers just to communicate with the USB host. Now the same thing is happening with power management – up until USB-C, it was possible to make devices that could use a USB port as a power supply. It still IS possible, as long as you’re staying within the legacy voltage and current limits, but anything more requires negotiating with the USB controller. Tough, but that’s the way the USB-IF feels about it – any non-trivial USB device will have at least a microcontroller in it anyway, and once that is a given, you do away with hacks like the cell phone charging schemes that depend on seeing a certain resistance across the pins before stepping up to the high-current mode. So no, you can’t have “single passive component” access to features like a variable power source.

      1. I know that I am indeed exceptionally late to the party of keeping things nice and simple.

        Honestly, someone should have taken a stab at Intel’s development team back in 1978 to make the 16+16 memory scheme they used address 32 bits instead of 20. Would have made x86 memory management so much more trivial today…

        Or make their boot sequence a bit more manageable… Getting a homebrew OS up and running is a pain, and the way the address space is butchered up with various memory-mapped objects just makes it all the more fun… https://wiki.osdev.org/Memory_Map_(x86) It would be nice if it were cleaned up and organized in a more standardized fashion. But obviously that would mean such a system wouldn’t be backwards compatible at all, so that is a downside.

        Or talk to Microsoft about implementing screen color correction correctly, instead of the current mess that leads to an accumulation of incorrect colors if one uses it just slightly incorrectly… (Color correction sort of applies at the application layer. Color correction is meant to correct the output image shown on the screen so that the screen’s incorrectness is accounted for and it shows colors correctly. I.e., applications shouldn’t even know color correction happens, since in software all color values are perfect by definition. (Changing from one color space to another is a different story altogether!) But currently, Microsoft’s solution leads to the screen’s incorrectness smearing its faults onto the values in software… (Print screen, for example, captures the color-corrected version – i.e., the image that we intentionally distorted so it would be shown correctly on our imperfect screen.))

        Or maybe make Ethernet a bit more agnostic about the order of the twisted pairs, and their polarity for that matter. (It would mean that making Ethernet cables would suddenly be trivial. The same thing could be applied to USB-C, HDMI, DisplayPort, etc… It would decrease manufacturing costs by a noticeable amount…)

        Or how BIOSes handle different CPUs? (Including a small ROM in the CPU itself for holding CPU-specific information/code would be a nice method of expanding the number of CPUs that a BIOS could handle. If AMD, for example, used this approach, they could support all AM4 CPUs on all AM4 motherboards for as long as they wish to have the socket on the market… Maybe they’ll read this and do that for the AM5 socket, which hopefully also has 4 memory channels as standard…)

        It would also be wonderful if PCIe, SATA, etc. had a standardized way of reporting device temperature to the BIOS, so that we could go into the BIOS and set fan curves based on the temperature of those devices, or groups of devices.

        Also, DisplayPort should turn its 3.3 volt power pin into a 5 volt one instead, so that one has more power to drive an inline range extender with. Also specify that there should be some reverse-power protection on said pin! (Honestly, I would give it one more pin for power, but make it a 12 volt one that only starts supplying 12 volts if there is current draw on the 5 volt one. If the 12 volt pin could supply 1 amp (or more), then we could run a moderately sized LED display with only 1 cable.)

        Then there is also the dreaded family of DDR memory buses… Don’t get me started, but there is another way that provides more bandwidth. (And it’s surprisingly simple as well, and gets around a whole bunch of nasty problems with the DDR memory bus implementation. (And no, it’s not the “double data rate” that is the issue.) We lost IDE/Parallel ATA for similar reasons, but DDR memory buses are going strong, oddly enough…)

        There is also IPv4, which could have been 64 bits from the get-go… (With 4 billion times more address space than it currently has, we wouldn’t start running out of IPv4 addresses for at least another couple of decades.) And the additional 32 bits wouldn’t majorly increase the burden on systems, even back in the mid-80s. (Well, at least on systems that were “internet” connected back then. Though that is the main reason 32 bits seemed gigantic at the time – surely it would never run out.)

        Talking about “address ranges”: the new version of the GPS week counter is stupid! Instead of adding X bits that continue on top of the old value, they made a new counter that increments at the same rate… i.e., it rolls over 1024 times faster than it could have. (They are still keeping the old value, btw, for the foreseeable future.) Even more stupid is that certain systems rely on the GPS week counter for showing their time correctly… I can understand a GPS location device, or a bedside clock. But a whole train? Or the steering system on a ship? Among other things… WHY?! “Sorry, can’t open the doors on the train, the GPS week counter rolled over and our system crashed due to thinking it’s in the past.” (BTW, ALWAYS test whether your system “can live in the past” if you ever handle time in any way, shape, or form…)

        And the list goes on for a very, very long time.

        Keeping things simple isn’t really what the electronics industry is about.
        Simplicity frankly doesn’t sell; it isn’t impressive if it’s simple.
        You can’t market simplicity, unless it’s a user interface.
        Processors, buses, and other stuff “need” to be as unnecessarily complicated as possible. And preferably one should pour a ton of unneeded “features” into it while one is at it. Like AI-based power management in laptops…

          1. Auto-MDI-X is a very very tiny bit closer indeed.
            But it still doesn’t actually allow you to have the 4 twisted pairs switched about carelessly.

            All it does is automatically detect, “oh, we need a cross over cable. I’ll fix that!”

            So if a cable is half crossover and half straight, then Auto-MDI-X will not fix that for you.
            Not to mention that it isn’t agnostic about polarity either…

            In short, each side of the cable would have a connector with pin pairs for the twisted pairs.
            What pair on connector A is connected to what other pair on connector B isn’t important, if the polarity is flipped, no worries, we will fix that with encoding. (PoE isn’t a problem thanks to 8 cheap diodes… Send power down two random pairs, and have it return on the other two.)

            There are 384 different ways to pin that cable, but currently only 2 of those ways are supported by Auto-MDI-X. My idea was to have all of those 384 different variations actually work.

            (And no, we aren’t actually fully free to place a wire wherever we please; a twisted pair must still be connected to two pins that are a pair. If we were fully free to pin it however we wished, there would be 40,320 different ways to pin it.)

            If an Ethernet cable had 8 differential pairs, there would be 10,321,920 different ways to have it organized…
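            Those counts are easy to check: n interchangeable pairs can be permuted n! ways, and each pair’s polarity can be flipped independently, giving n! × 2^n configurations:

```python
# Counting the cable configurations discussed above: the pairs may be
# permuted freely (n!) and each pair's polarity flipped independently (2^n).
from math import factorial

def pair_configurations(n_pairs):
    """Ways to wire n twisted pairs when pair order and polarity are free."""
    return factorial(n_pairs) * 2 ** n_pairs

print(pair_configurations(4))  # 4! * 2^4 = 384
print(factorial(8))            # wiring 8 individual wires freely: 40320
print(pair_configurations(8))  # 8! * 2^8 = 10321920
```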

          2. Alexander Wikstrom: Stupid, stupid, stupid. What utter idiocy.

            Auto-MDI-X exists because of a specific case, where two ethernet terminal devices are connected together without a hub. This is a real case. Because of Ethernet’s choice of connectors, this is necessary. It’s a little like a null modem in RS-232 connections, made necessary for the same reason. But the fact that a “null modem” cable (known generally as a “crossover cable”) CAN be made, guarantees that some ARE made. Which then brings up the possibility that a crossover cable will be used when connecting a terminal device with a hub. Auto-MDI-X just eliminates this problem, removing the troubleshooting necessary to deal with this case.

            So please name one specific case where being able to wire multi-twisted-pair cables willy-nilly would be useful. Just one. And I’ll stop you before you even mention the obvious one, because if you want cables to work without even thinking about what wire goes where, I need to point out that ELECTRICALLY, pairs need to be preserved. You can’t just wire 8 wires randomly, because the pairs act as transmission lines, and simply WON’T WORK if you do that wrong.

            Now look at how RJ-45 connectors are pinned:

            1a 1b 2a 3a 3b 2b 4a 4b
            Never mind the actual naming, of which this is just one convention. The point is, the pair I’ve identified as 2a and 2b MUST be on pins 3 and 6, or it cannot work.

            Because of this, it is not possible to just do the most obvious miswiring,
            1a 1b 2a 2b 3a 3b 4a 4b
            because this does not preserve the integrity of the pairs.

            In short, you CAN’T just wire your cables randomly, because PHYSICS. Your proposal fixes a few hundred possible miswirings out of the tens of thousands that can occur due to random miswiring, making it useless.

            So please, give just one example where your proposal would be of any use to anyone.

          3. BrightBlueJim

            I didn’t state that you should be able to wire the 8 wires however you desire.
            The pairs must be preserved; this I outlined twice…

            The main reason for being able to switch differential pairs is to lower manufacturing costs,
            which is why I also stated it could just as well be implemented on other connectors using multiple differential pairs, like HDMI, DisplayPort, and USB 3.x (and USB-C for that matter).

            Now, I am not saying that “WE SHOULD DO THIS!” but rather, “If I could go back and enforce this to have been the standard from the get go, it would have been nice.”

            Now for next time, read…
            “What pair on connector A is connected to what other pair on connector B isn’t important” talks about pairs, not pins…. You can swap pairs, but not mix pairs.

            And yes, my proposal only handles 384 different ways of configuring a cable, since the other 39,936 ways of pinning the connector lead to issues with patch cables, keystone jacks, and other extensions. The individual twisted pairs must be preserved along the whole link, otherwise differential signaling won’t really work. But a differential pair won’t care if it leaves connector 1 on pin pair 1, goes through pin pair 2 in an extension, and arrives at the other side on pair 4.

            But that issue doesn’t exist if we only switch pairs.

            In the end, the main goal of having the ability to use any of the 384 configurations is to lower manufacturing costs and reduce troubleshooting for people terminating their own cables.

            Identifying the individual pairs adds extra time and room for mistakes; if a pair can be considered to be just another pair, then this removes the need to keep track of what pair goes where. So let me repeat once more: the pins in the connector that are supposed to be a pair should always remain a pair. If you put one wire of a twisted pair onto 1a, then the other wire of that twisted pair should be on 1b, but which twisted pair that is doesn’t matter – just that it’s a twisted pair, end of story.

          4. “Identifying the individual pairs adds extra time and room for mistakes,”
            Again, nonsense. Anybody who is making cables really must be capable of putting each specific wire in one specific location in a connector, and to test for this. Why on Earth should the network hardware try to tolerate errors in cable fabrication? In the specific case of crossover cables, this was justified, but I’m not hearing a reasonable case here.

        1. “Or how BIOSes handle different CPUs? (Including a small ROM in the CPU itself for holding CPU-specific information/code would be a nice method of expanding the number of CPUs that a BIOS could handle. If AMD, for example, used this approach, they could support all AM4 CPUs on all AM4 motherboards for as long as they wish to have the socket on the market… Maybe they’ll read this and do that for the AM5 socket, which hopefully also has 4 memory channels as standard…)”

          Back in the K6-2 days we were nailing ’em into any old pentium board that had between 2.4 and 2.6 Vcore, we’d snarf the microcode from a newer BIOS and stick it in the older board. Out you go Mr P100, this is now a K6-2-400 no wait, *jumpers clock to 75* FOUR FIDDY, boo yah.

          For giggles I made a Socket 5 board support a Mendocino Celeron; I never did get round to wiring up a socket adapter to see if it would actually boot, though. It probably would have been easier to put a K6-2 on a slotket and stick it in a PII board.

          1. It’ll cost a lot more as an on-die or multi-die package than a $1 (qty 1) BOM part on the motherboard. A chip that’s tuned for logic speed is horrible for implementing memory cells. Same reason why FPGAs for the longest time did not have on-die config PROM.

            Everything the OP said has increased complexity/cost associated with it.

          2. Tekkieneet.

            Adding a ROM chip onto a CPU and having its I2C or SPI bus go straight down to the socket won’t really add much complexity to the overall system, especially considering that it is the chipset that reads the BIOS ROM as is. All the chipset needs is a couple of new pins for also reading the ROM on the CPU.

            This doesn’t greatly impact motherboard complexity or cost.
            The new pins on the socket cost a couple of cents to the end user. (Yes, sockets are fairly cheap; they cost a few dollars in bulk and each has 1000+ pins.)
            And the ROM chip won’t need to be very large – a few tens of KB at most, and such can be had for about 1 cent (especially if one is Intel or AMD and buys literal hundreds of millions of them; we are talking thousands of reels per month). A few tens of KB is sufficient since it only needs to store data for that specific CPU, and it doesn’t need to store all of the BIOS either; after all, we still have the BIOS ROM.

            Not to mention that CPUs from AMD have used a multi-chip approach for a couple of years now,
            and Intel is doing the same…

            Intel, AMD, NVidia, etc. already sprinkle a smattering of passive components onto their chip carriers as is, so adding a small SOT-packaged ROM chip won’t really increase costs.

            (And programming it can be done as part of production testing. (Yes, they do socket the CPU and run it before putting it in a box. It is though more of a functional test, so smearing thermal paste onto it isn’t really needed.))

            And if one doesn’t want to waste pins on the socket, there’s no reason it can’t talk to the I/O/processor die instead, which simply relays its data to the chipset.

            But it’s easy to just look at the problems and blow them out of proportion, instead of looking at the actual impact and the benefits a solution brings. Yes, it does add a tiny bit of cost, and it adds a bit of complexity. But it ensures that a CPU vendor can take radical steps towards making a new product without having it collide with preexisting products on the intended platform. And releasing a new platform is not always a good option either. The introduction of a ROM on the CPU gives a lot more flexibility, and it’s frankly worth the cost of a few more cents in production.

          3. It does add to the cost of assembling the bare dies on a silicon interposer, and it changes the yield. You have no idea how they make these chips. The new 3000-series chips are already multiple dies, with core and I/O separate.

            Look up what a silicon interposer is, and also look at the space on the 3900 packaging. There isn’t much left for the extra core die needed for the 3900/3950, bearing in mind some space is needed for routing and manufacturability. You made a lot of claims without knowing how the part is made or considering the consequences.

          4. Tekkieneet.

            There is a difference between a chip carrier and a silicon interposer.
            One is more akin to a regular PCB; the other is more similar to a chip, but with TSVs.

            Secondly, we don’t need to put the ROM chip on the silicon interposer; I recommend putting it on the chip carrier instead.

            And it can even be placed on the underside of the CPU among the cluster of passive components already there. Or just along one of the edges of the chip carrier under the IHS, together with the typical smattering of caps you find there as well.

            A 10+ KB ROM chip isn’t huge, after all. I am not talking about the 16 MB ROM chips you typically see on motherboards. (But even those are actually smaller than the package would have you believe.)

            And as stated, yes, it adds a few cents to the cost per CPU, and a bit of complexity.
            But it gives a CPU vendor a lot more flexibility when developing new products.
            As a solution, it is still fairly simple and straightforward.

            And in terms of routing, we are talking about a ROM chip. Not another CPU die, or HBM memory, or an IO controller, but a ROM chip that needs 4-8 pins.

            Not to mention that the ROM chip can be placed pretty much wherever there is room for it; it uses a low-speed serial bus, so it won’t really need much in terms of signal integrity either.

    2. Your “plug two computers together to share files” idea is already possible using the method you describe, but this would not (and really should not) be part of the USB spec. A long time ago, there were USB-A to USB-A adapters that did something like this: from either end, you were accessing a shared buffer, and this allowed an app to be used to transfer data between the computers. And of course, you can use a USB Ethernet adapter at each end, with a piece of Cat 5 between them. But then we’re still talking about an app to do the actual work.

      But this has always been the case. Sure, you could connect two RS-232c ports together (don’t forget the null-modem!), but they would just sit there doing nothing until you ran something like Xmodem on both machines.

      1. Yes, it shouldn’t really be part of the USB “specs”.
        Though USB is a protocol with a host device.

        To my knowledge, USB 2.0 and earlier can only have one host, and plugging two hosts together would not work. (I might be wrong about that, though I can’t find any evidence that one can plug two hosts together and actually have communication.)

        And yes, the actual file transfer part should be handled by software, though a standard for how to do it would be nice, since then all OSes could follow it and it would “just work”.

        1. The USB-A to USB-A “cables” are actually active devices, and look like a peripheral to both hosts. But this was no longer necessary with USB OTG, which allows a port to be configured as a host or peripheral.

        1. You’re right, but that’s because USB C is a physical spec and nothing else. No PD, no USB 3.X, no thunderbolt, nada. That’s why I specifically said it’s a feature of thunderbolt 3 and not universal.

          That said, it’s not limited to Apple; it’s a Thunderbolt 3 feature, so it works on Windows laptops too.

    1. your loss indeed, my USB-C docking station was super cheap and it has excellent bandwidth. I’ve dropped my USB-C phone with the charger cable plugged in about a thousand times, and it still clicks in and out like new. I can unplug one cable and I’m on the go. I only have to carry one charger that works with my phone, my tablet and my laptop.

      You spout meaningless words when the reality is the opposite.

      1. Apple user sitting with 3 flat devices and one charger “It’s amazing how they had the forethought to deparallelise your digital dependence and reconnect with each platform serially in a way that enhances user experience…”

  13. That the USB-IF did not mandate some form of distinguishing mark on USB-C cables to help differentiate those with/without high power handling, alternate modes, etc. strikes me as something extremely short-sighted (to avoid use of a more disparaging term as I contain my rage) for a standards committee.

      1. Just to be clear, I was referring to cables with a USB-C connector on both ends. They all look alike but can differ in functionality. In my book, that’s a bad design.

  14. USB-C is a common, standardized plug that always looks the same but only works if you plug the right thing into the right other thing using specific, often unmarked ports with the right cable. So in the end, it causes more problems and confusion than it solves. At least in the times before, you knew which one the power lead was, which one was for data and which one for video.

  15. there must be some sort of alternative-reality thing going on here, because my wife and I have no problems with USB-C. We are both clumsy and lazy, and these new cables are excellent: we don’t have to find the “right” charger, we don’t have to fumble in the dark to plug in the phone, and I can take my MacBook Air away from its docking station with one small cable, as opposed to the expensive, flaky monstrosity that is my Dell with its docking station.

    1. If all the USB-C cables that you have support the same baseline features required by all the devices you use, they are, unsurprisingly, interchangeable. This is not the case for all cables and devices. I would say you have been lucky, more than anything.

      1. Unless you know the specs of the cable, all you can do is plug it in and see if it works. Back in the good old days, cables (e.g. Ethernet) had writing on them that specified the cable’s rating. Now, if you’re lucky, you can tell the USB-C capabilities from the marketing description of the cable.

        Choice isn’t necessarily bad, though. Why mandate a 100W cable to charge a phone that needs 20W? The fact that a lesser-rated cable still tends to work, albeit at the lesser standard of the cable, is better than not working at all. The latest MacBooks with USB-C charging have majorly cheaped out on the charger – there’s no longer a mains lead included (1.5m long, I think), just the rabbit connector, and the supplied USB-C cable is barely 1m, compared to the MagSafe leads of about 1.75m. So you’ve gone from over 3m from wall socket to laptop to under 1m.

        Again, the choice of cable types is a benefit if you take advantage of it. I’ve bought a 3m 100W charging cable which only has USB 2 data, but it’s permanently attached to my charger. Getting a 3m 100W USB 3 cable would have doubled the price.

        I also bought a USB-C to USB-C coupler, which works quite well. Interestingly, if you pair a 100W cable with a lesser cable, it’s possible for the device to see just a 100W cable if you plug it all together in a certain way (flipping the USB-C orientation does make a difference in this case).

  16. USB-C is great in my experience with my MacBook Pro. I can charge it, run multiple monitors off it (they are Thunderbolt-enabled), and connect other USB-A peripherals. What’s great is that my work laptop is an HP, and I can bring it home and charge it off the same USB-C charger. I never thought I would see the day when an Apple laptop and an HP laptop would use the same charger.

    Honestly, we just need standardized marks on jacks and cables to indicate which functions are supported by that jack/cable. The 3 main functions are power delivery, data, and video. I will admit that the Thunderbolt logo, for video, is kinda dumb and misleading.

    If I were in charge, USB cables/plugs and ports would have these markings:
    -A lightning bolt would indicate high-current capability (on a laptop jack this would mean it needs a high-current cable, so your thin cheap phone-charging cable won’t do).
    -A rectangle “monitor” icon would indicate that video can be fed out of that port (Thunderbolt port).
    -If the cable/jack supports both, then it has a lightning bolt inside the rectangle.

    The trade-off to having multiple protocols possible with one type of connector/cable is that users will be confused if the protocol they want doesn’t magically happen even though the cable fits (“This 2A charger charges my phone, why won’t it charge my 15″ laptop while I’m gaming?”, or “When I plug my USB-C monitor into my laptop it works as an external monitor, why doesn’t it work as an external monitor when I plug it into my phone?”). The only way around this is to communicate functionality/capabilities on the ports/cables themselves.

    1. Just like to point out: Mini DisplayPort and Thunderbolt are not the same thing. Thunderbolt has the ability to be used as a Mini DisplayPort, but not vice-versa.

  17. The original USB plug never seemed to go right in the socket even if it was in the correct orientation. I don’t think they ever tested the difficulty of actually getting this damn plug in at the back of a PC. I’ve never liked the physical USB plug and socket – a masterclass in how not to bend thin tinplate. Was it designed by a committee?

    1. You think that’s bad? You should try the old mini-din. You can’t plug it in unless you line up both the axis and the angle and you have to do that at the back of a PC. USB-A was a major improvement – you only need 3 or 4 tries. :P.

  18. I think one thing hindering adoption is the mess of compatibility with anything more advanced than what you can get over a USB 3 cable anyway. There’s a mess where for various different features you need a compatible port on the host, a compatible device, and a compatible *cable*, and it’s often not clear which is which. The whole mess of different things like the various power delivery standards, “USB 3.1 Gen2”, “USB 3.2 Gen2x2” (whyyyyy), the DisplayPort alternate mode, the *HDMI* alternate mode and Thunderbolt (which I do not actually want given that it allows direct access to system memory, and the relevant protection features appear to not work very well, but that’s a separate issue) just creates a landscape of confusion for anyone using it.

  19. The problem I see with USB-C is that all the alternative functionality (power in, HDMI, DP, Thunderbolt, etc.) requires extra hardware per port. Making all the ports fully functional is expensive and space-consuming, which is why most manufacturers don’t do it.

    The only sane solution I see right now is to have both C and A ports. If you make it a C port, put all the functionality there, otherwise make it a normal A port. Don’t confuse the user with “this port looks the same but it has different functionality compared with the other one.” It’s been too many years since we have been doing the “if it fits in this hole it should work in this hole” logic with ports.

    1. And then there’s also a lot of new laptops that have a USB-C port, but you can’t use it for charging, DisplayPort, Thunderbolt or anything like that – it’s just a plain USB3 port, except it doesn’t work with your usual USB3 devices. No, thank you.

  20. All this mess with USB doesn’t make any sense at all.
    One reason they make those small connectors is that mobile phones and similar devices keep getting
    thinner and thinner, and the old connectors no longer fit.
    Another reason is planned obsolescence, so you can no longer use old equipment (without hacking it).
    It’s like they keep making faster and faster computers (CPU, graphics, etc.) while at the same time ‘slowing down’ external stuff.

    I have a business laptop beside my big studio computer, and at the time it came out it was ‘slow’ compared to other
    brands, and it didn’t have anything like a webcam.
    It only has 128 MB of ATI dedicated graphics (separate, with its OWN memory).
    The laptop has just 4 GB of RAM.
    It just looks like a boring dark grey finish.
    It can’t really do anything in games.
    But……
    Here it also stops.
    It has 3 separate Wi-Fi adapters, an old modem, a parallel port, a serial port, lots of USB 2, everything SATA inside,
    and a PCMCIA slot (I have added a PCMCIA card with 3 extra FireWire 800 ports).
    It also has an IR port at the front (infrared port) – that one is very handy; read later in this text why.

    It has a chip reader to install a SIM card directly for internet access, and it has a
    smart-card security reader/writer, a fingerprint reader, etc.
    But most impressive is the hardware virtualization feature: you can enable it in the BIOS, and unlike a normal
    standard ‘gaming PC’, it does virtualization directly in hardware – the CPU natively supports it.
    What it totally lacks in game performance it DOES have when it comes to running scientific programs, virtualization and communication.
    It’s a really powerful laptop if used correctly.
    I use it in my music studio as an effects machine.
    On its FireWire 800 port I connect my FireWire soundcard, and on USB 2 another
    soundcard – they run at the SAME time doing different tasks.
    When I plug my guitar into the FireWire soundcard, I run REALTIME synthesizers on my guitar
    (you can hear how it sounds here

    – go to ALL MUSIC and play tracks
    no. 12 Stronz – Vivaldi 03 Winter Allegro and 01 Stronz – Purple Galaxy.
    Those tracks are where you can hear it best (where it’s most clearly a guitar); this synthesizer is MY GUITAR).
    By using a REALTIME synth I play this on my guitar.
    I have made the whole of Vivaldi’s Four Seasons that way.

    All of it plays in realtime with only 2 ms of latency (as good as zero).
    Take a top modern gaming PC and you will NEVER be able to get latency that low.

    Why?
    When you use USB (1, 2, 3, etc.), everything runs SERIALLY – a device has to wait for the host to ask whether it is
    ready and/or has any data to send.
    This increases latency.

    Why is a PS/2 mouse better than a USB one?
    Because PS/2 is based on interrupts: the PS/2 port/device tells the CPU
    when any data is present – it INTERRUPTS the CPU (that’s where the name comes from).
    A USB device doesn’t do that.
    The USB host keeps asking each device “Do you have anything?” all the time.
    It’s a waste of cycles, and it gets worse the more devices are connected at the same time.

    With FireWire you don’t have this problem, because it’s based on an interrupt system
    and FireWire is connected more directly to the CPU than any USB port. That means FireWire
    actually runs much faster and more stably, especially with latency-sensitive devices.
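
    A rough back-of-the-envelope (my numbers, not the commenter’s) for where the polling delay comes from: a USB host services interrupt endpoints on a fixed schedule, so an event can wait in the device for up to one polling period before the host asks for it. The figures below are the scheduling granularities from the USB 2.0 spec.

```python
# Worst-case extra latency from USB interrupt-endpoint polling,
# by bus speed (USB 2.0 scheduling granularity).

POLL_PERIOD_MS = {
    "low-speed": 10.0,    # minimum interrupt polling interval
    "full-speed": 1.0,    # one 1 ms frame
    "high-speed": 0.125,  # one 125 us microframe
}

for speed, period in POLL_PERIOD_MS.items():
    print(f"{speed}: up to ~{period} ms added latency")
```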

    Remember, it all comes down to WHAT THE COMPUTER WAS DESIGNED FOR.
    Sure, you can get thousands of laptops “faster” than mine. YES.

    But what is “faster”, and how is it measured?
    Cinebench? Benchmarking?
    My laptop sucks when you try to use those, yes.
    It wouldn’t finish them in days – slow, slow, slow.

    But realtime guitar synthesizers and realtime effects…. YES, YES, they run perfectly. I love it.
    I never had ANY laptop before this one that could do this, even ones that were really good at gaming.
    But it’s just a boring HP business laptop…
    Yes, specially made for OTHER purposes than gaming.

    It has all the same types of connections as my big studio computer, and that gives me the flexibility to use whatever
    devices I want.

    It’s all about money. If they make USB type C the one and only connector (I hope that fails), then
    you will realize that your computers and devices are more locked down than ever.
    You won’t be able to use whatever stuff you want without also being able to do hardware hacking, and
    not everyone is able to do that.

    It’s all about limiting you so you have to buy new stuff to get functionality.
    I still use the serial port on my computer, where I have an IR port connected (infrared port).

    I use the IR ports on my laptop if I need to send a quick command to my external soundcard or the sound system
    in my big computer. It’s faster than reaching for the remote (if it’s at the other end of the room),
    and instead of using 4 different remote controls, it’s much easier to just send the command from the laptop,
    whether it’s for the computer screen or the soundcards, etc.

    The companies that want to remove all those different connectors ONLY want to streamline their
    products, so the end user gets an empty Mac-style look with nearly no way to connect external devices.

    Bear in mind that serial and parallel ports are still in use in many scientific labs.
    And last but also very important: as a HACKER community, it must be in the interest of people here
    to keep all those different connectors instead of just defending this new USB-C standard.

    (I know that most who call themselves ‘hackers’ just use those USB ‘clip-together’ boards
    connected to, maybe, an Arduino with a few USB things attached.)
    Sure, this ‘LEGO’ kind of creating makes things easier, but there are still people out there who DON’T
    just use a one-size-fits-all solution and actually create stuff themselves without just buying
    MODULES TO CLIP TOGETHER.

  21. One part of USB I’d really like to see phased out is the old 1.5 Mbit/s low-speed mode. It just increases latency for every other device on the same hub. Why do we still need that anachronism? Can’t modern UEFIs deal with 12 Mbit/s keyboard and mouse USB interrupt transfers?

    1. Every project based on V-USB needs it because of course not all micros have USB support and bit-banging low-speed, while not great, is at least possible. The way you say it sounds like if _you_ don’t plug in an old device then there is no problem and so you want _nobody_ to be able to ever do that again on new machines. I suppose that what you actually want is for there to be no more market for anything new like that… If you find out too late that you bought a cheaply factory-made keyboard or mouse with inherently worse latency because some misguided mfr. figured low-speed was ok, then *just don’t use those things* and in the future, stay away from the bottom shelf. Am I missing/overlooking something?

      1. A lot of the keyboards are still on 1.5Mb/s because they use bitbang USB. Their firmware still supports PS/2.

        The slowness only affects the physical port it is connected to. Just put all your slow devices on the same hub if you are out of ports. Computers these days have plenty of USB ports, so that is no longer an issue.

        Another person with the “I got mine, so screw you”.

  22. USB-C ultimately won’t take over until HDMI is ditched. And given how many TVs and monitors are in households around the world that still work fine but don’t have USB-C… it’s not gonna take over anytime soon.

  23. I think the idea of making all the ports do all the things is not understanding the realities of how things are built.

    There’s only so many PCIe lanes on a processor, so you would probably need to add a bunch of expensive switch chips, and a bunch of power switching chips to make every port capable of putting out 100w.

    Then everyone can put a 1500W PSU in their computer so it can run while they have all 10 of the USB-C ports loaded up charging a stack of laptops, and a couple soldering irons.
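
    The arithmetic behind that PSU remark, with assumed numbers of my own, shows why nobody builds every port to the full spec:

```python
# Back-of-the-envelope power budget if every type-C port could source
# the full 100 W that USB PD (up to rev 3.0) allows. SYSTEM_W is an
# assumed figure for the rest of the machine, not a measured one.

PORTS = 10
PD_MAX_W = 100   # 20 V @ 5 A
SYSTEM_W = 500   # assumed CPU/GPU/board draw

port_budget = PORTS * PD_MAX_W
total = port_budget + SYSTEM_W
print(port_budget, total)  # 1000 W for ports alone, 1500 W total
```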

    I think the coloured ports are a reasonable solution, and are certainly better than going by the names, which constantly get renamed on existing port types.

    That’s not a USB 3.0 port, It’s USB 3.2 gen 1.

    Also, as discussed in the GN video someone else linked above, the cabling and the work to install it in a case are fairly labour-intensive.
    On-board ports would mostly avoid the manual-labour expense, but it’s still a very high-speed serial link, which does have layout limitations to get the impedance correct, etc. That’s also true for standard USB, and has worked fine, but there are a lot more pins, which can have multiple purposes on Type-C, so it does still complicate things.

    1. With its variable meanings for certain pins, USB-C doubles down on the problem that Apple invented with Thunderbolt. With Thunderbolt, they wanted to extend the Mini DisplayPort so that it could be used for high-speed connections other than video displays. The result was a connector that looks exactly the same as Mini DisplayPort, except for the symbol printed next to it – similar to the “SS” logo on USB 3 ports, but without the color coding. But it’s not enough to see that the connector is there, because seeing a MiniDP/Thunderbolt connector means that you can connect another monitor and you MIGHT be able to connect, say, an Ethernet adapter.

      What makes USB-C even worse is that there are too many possible types of ports that can use this connector, so there will be no logo that the user can use to have any confidence that a given USB-C port will be usable for a specific purpose. The Very Bad consequence of this is that people will be led into buying devices they have every expectation will work, only to be met with Nothing Happening when the device is plugged in. Most of the time this will be relatively benign, where the device that doesn’t work didn’t cost that much and can be returned anyway.

      But when buying a computer, one is buying what they expect they will need. One does not automatically check that every possible combination of external devices they might one day add will work, right out of the box. They will simply say, “Okay, it’s got four USB-C ports, one of which is needed for power. This means I should be able to connect a wired network, an external monitor, and oh, maybe an SDI video capture device, which I know I will need, but not today.” And with USB 3, this has been pretty consistent – what you could do with one blue USB port you could do with another blue USB port. But the long-term result will be that people will have no confidence in what they can use their USB-C ports for.

  24. I’m considering replacing my USB-C phone with a micro-USB one so that Android Auto in my car doesn’t lose connection and cut out my Spotify and/or navigation every time I hit the slightest bump. Also so my phone doesn’t go flat because the connector has disengaged by being brushed lightly.

    Don’t even get me started on using Android Auto over USB-C on unsealed roads and cattle grates….

    Reversible? Cool. High current/fast charge? Nice. Mechanical design? Absolute crap.

    1. If I were you I’d try a better quality cable first. I have a USB-C cable that disconnects if I jog it in my phone, but plenty of others that are absolutely fine in the same phone.

  25. Maybe I’m a luddite, but I’m of the opinion that USB-A doesn’t truly *need* replacing. If it ain’t broke, don’t fix it. USB-C and its various protocols (Thunderbolt 3, for example, but commonly 10 Gbps 3.1 as well) certainly beats USB-A at a lot of things – data transfer, power delivery, video output, eGPU communication, etc. But the only advantage it has over run-of-the-mill USB-A when it comes to peripheral usage is that it is orientation-agnostic. Given that there are probably billions of peripherals out there that have USB-A, and it still works very well for them, is it really worth undergoing such a monumental effort to shift all of these ports for the sole benefit that you can plug them in more easily? Peripherals of the future such as keyboards, mice, printers, etc. are not going to need data throughputs or power delivery exceeding that of USB-A 3.0, or even 3.1, and even if they do, USB-A can still support more advanced USB protocols and be backwards compatible with the billions of USB-A devices that still exist.

    I have a (probably) unpopular prediction for all these “forward thinkers” who think USB-C is the future and we will eventually live in a world filled only with sleek little oval-shaped ports: USB-A is probably not dying anytime soon. And when I say anytime soon, I wouldn’t be surprised if it didn’t die anytime in the next fifty years. Remember, when USB-A was introduced, the reason it caught on so quickly was that there was a crazy number of various proprietary peripheral connectors that people could barely keep track of. USB-A caught on like wildfire because it *solved a problem* that actually existed. Now, I’m not saying USB-C doesn’t solve any issues – like I said, common implementations are much more effective for a whole host of problems, and one place where I think USB-C will really take over is in video I/O, as we’re seeing on more and more monitors, for example. However, USB-A as a peripheral connector doesn’t really have that many problems, to be honest. It’s not super thin, but thinness is a niche concern for high-end portable device designers and has little to no bearing on business use cases. The ease of plugging it in is a mild convenience but not groundbreaking. And the ultimate disadvantage, of having to use a ton of adapters to be backwards compatible, erases any gains that could be had from *exclusive* use of USB-C. So USB-A is not going anywhere. It’s staying on business computers for a long time, and as long as it is on business computers, so too will it remain on other consumer products.

  26. As an open hardware developer, I was not on board with USB-C at the start. The typical connectors one finds had no fewer than 18 pins because of the flippable nature of the connectors. I’ve since discovered that you can get USB 2.0 C connectors. They have 10 pins on them, leaving off the SuperSpeed pins. Since my projects top out at high speed (480 Mb/s), this is ideal for me. I’ve already made a couple of boards with these new connectors and I am sold on them. I like them much better than the micro-B connectors I’ve mainly used up to now, and I could use them for any conceivable project that doesn’t involve the SuperSpeed pairs, which includes even high-power projects.

  27. Personally, I’ve always liked the full size USB-B connectors, and would have preferred sticking with those while replacing USB-A with something more like them, maybe wider and with an indent on one side so they’re different enough. Why would I ever worry about a connector that’s thinner than my thumb being too thick?
