Raspberry Pi Grants Remote Access Via PCIe (Sort Of)

[Jeff] found a Raspberry Pi — well, the compute module version, anyway — in an odd place: on a PCI Express card. Why would you plug a Raspberry Pi into a PC? Well, you aren’t exactly. The card uses the PCI Express connector as a way to mount in the computer and connect to the PC’s ground. The Pi exposes its own network cable and is powered by PoE or a USB-C cable. So what does it do? It offers remote keyboard, video, and mouse (KVM) services. The trick is that you can then get to the PC remotely even if you need to access, say, the BIOS setup screen or troubleshoot an OS that won’t boot.
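
Under the hood, Pi-KVM style keyboard and mouse emulation rides on the Pi’s USB OTG gadget support: a key press is essentially an 8-byte HID report written to the gadget device. Here is a minimal sketch of that idea, assuming a HID keyboard gadget has already been configured via configfs and shows up as /dev/hidg0 (the device path and keycode are illustrative, and root access is normally required):

```python
# Send a single key press to the attached host over a USB HID keyboard gadget.
# Assumes the gadget is already configured (e.g. via configfs) as /dev/hidg0.

KEY_A = 0x04      # USB HID usage ID for the 'a' key
MOD_NONE = 0x00   # no modifier keys held

def send_key(dev_path: str, keycode: int, modifier: int = MOD_NONE) -> None:
    """Write a press-then-release pair of 8-byte HID boot-keyboard reports."""
    press = bytes([modifier, 0x00, keycode, 0, 0, 0, 0, 0])
    release = bytes(8)  # all zeros = no keys held
    with open(dev_path, "wb", buffering=0) as hid:
        hid.write(press)
        hid.write(release)

if __name__ == "__main__":
    send_key("/dev/hidg0", KEY_A)
```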

This isn’t a new idea. In fact, we’ve seen the underlying Pi-KVM software before, so if you don’t mind figuring out your mounting options for a Raspberry Pi, you probably don’t need this board. Good thing, too. Judging by the comments, they are hard to actually buy, perhaps due to the chip shortage.

While it seems seductive to have a remote solution that doesn’t depend on fiddly software — or even what operating system you are using — [Jeff] notes that latency is relatively high, so you probably won’t be happy with it for any gaming or video. But that’s not really what it is for.

It did make us think, though. The PCI Express slot has 12V and 3.3V power and ground connections. Some motherboards even provide 3.3V when the computer is off. What else could you mount inside the computer with one of these things? Or what else could you do with this Pi card? Networked USB maybe?
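
For the networked USB idea, one speculative route would be the Linux usbip tooling running on the Pi card. A rough sketch, assuming the usbip userspace tools and kernel modules are present on both the Pi and the client, with the bus ID and hostname as placeholders:

```python
# Export a USB device from the Pi and attach it from a remote client using
# the standard Linux usbip tools (requires root and the usbip kernel modules).
import subprocess

def export_device(busid: str) -> None:
    """On the Pi: start the usbip daemon and export one local USB device."""
    subprocess.run(["usbipd", "-D"], check=True)               # daemon, detached
    subprocess.run(["usbip", "bind", "-b", busid], check=True) # hand device to usbip-host

def attach_device(pi_host: str, busid: str) -> None:
    """On a remote client: attach the exported device over the network."""
    subprocess.run(["usbip", "attach", "-r", pi_host, "-b", busid], check=True)

if __name__ == "__main__":
    export_device("1-1.2")                      # e.g. a flash drive on the Pi's hub
    # attach_device("pi-card.local", "1-1.2")   # run this half on the client machine
```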

We’ve seen a Pi get surgery to include a PCI bus, too. Or, you can opt for the easier surgical method. While plugging one of these KVM boards into a modified Pi would be pointless, we also think it would be amusing.

28 thoughts on “Raspberry Pi Grants Remote Access Via PCIe (Sort Of)”

        1. Won’t work. The PCIe controller in the BCM2711 isn’t capable of being an endpoint. It was confirmed by the Raspberry Pi engineers as well. You’d need a non-transparent PCIe bridge to glue them together which defeats the point of it.

        1. I do not think it takes power from the PCI slot – as far as I know, the only power sources are on the bracket (PoE, or USB-C). And it does make sense, considering that the unit needs to work even when the PC it controls is powered off.

      1. They were fully compatible. But they ran extremely hot, were expensive, and there just wasn’t a lot of interest at a time when the Ultra 5 was the typical Solaris workstation.

        The intended use was to let you run Sun applications from your PC. Back when these were made, Solaris SPARC was a popular platform for semi design tools, so this let you have both your business desktop and engineering workstation in the same box.

        However there was no real hardware integration between the two different environments. They shared a case and power supply, and nothing more. You were expected to have both the Plug and the PC connected to a hub or switch with Ethernet, and you’d remote X from the PC into the Plug if you wanted a single monitor and keyboard. A slot adapter brought the keyboard/mouse bus and video out of the Plug so you could have a separate console for the Plug, but then the novelty is lost.

        Overall most users were better served by remoting into a large Sun server, or having a cheaper Ultra 5 or 10 desktop.

    1. We had a bunch of similar things, essentially Wintel/x86 computers on a board that were supposed to live inside our SPARC boxes. It never got further than the purchase, though, and x86 Dell servers were used for the Windows components instead.

  1. The original article is stupid. Why go to the trouble of mounting an R.Pi on a PCIe card just to ground it?! That’s really dumb. I think connecting it to the bus was the idea.
    Not sure what you’d gain, though; I think the RPi KVM is more useful.

    1. The idea is to have a proper mounting point inside the tower. It could be a box, sure, but fixing it in a slot is better than keeping a box stuck to the top of the computer with magnets, glue, or something.

      1. But surely the point with the Pi-KVM is to connect it to a common KVM switch and have access to all your servers, not just one of them?
        Otherwise it’s an expensive prospect doing one for each, and you might as well buy a server with out-of-band management in the first place?

    1. I can think of another reason: by having that as an IPMI device as opposed to something like an HP iLO card / Dell DRAC / Cisco CIMC, you have full control of the software running on it instead of trusting the manufacturer not to put a back door into it (tinfoil hat territory there) or to keep on top of security updates for it; instead, it’s treated like any other system on the network and gets updates the same way any other Pi does.

      (It’s been my experience with updating the DRAC and CIMC modules on servers that it usually involves having to reboot the host it’s attached to, which requires a scheduled outage and downtime.)

      And finally, I can also think that if it’s on a PCIe bus along with other Pis and maybe shared storage that uses PCIe, you have a reasonably modular cluster in a box without having to fabricate a chassis for it. But I’m pulling at straws now, so I’ll shut up. :D

  2. This is cool, I just wish it was possible to actually buy Raspberry Pi hardware.
    While I’m wishing, it would be nice if the people behind the PiKVM project would consider targeting SBCs that are actually attainable since RasPis look like they will be in short supply for the coming year (or more).

  3. What would be really cool is to use an SoC which can act as a PCIe endpoint, implement a graphics adapter and USB/UART over PCIe, and then connect that via Ethernet to the rest of the network. Kind of a plug-in BMC controller.
