If you’ve had any dealings with Cat 5 and Cat 6 cable, and let’s be honest, who hasn’t, you’ve probably wrestled with lengths anywhere from 1 meter to 25 meters if you’re hooking up a long haul. Network admins will be familiar with the 0.1 m variety for neat hookups in server cabinets. However, a Reddit community has recently taken things further.
It all started on r/ubiquiti, where user [aayo-gorkhali] posted a custom-built cable just over 2 inches long. The intention was to allow a Ubiquiti U6-IW access point to be placed on a wall. The tiny cable was used to hook up to the keystone jack that formerly lived in that position, as an alternative to re-terminating the wall jack into a regular RJ45 connector.
Naturally this led to an arms race, with [darkw1sh] posting a shorter example with two RJ-45 connectors mounted back to back with the bare minimum of cable crimped into the housings. [Josh_Your_IT_Guy] took a belt sander to one-up that effort, producing a cable measuring just over an inch in length.
[rickyh7] took things further, posting a “cable” just a half-inch long (~13 mm). In reality, it consists of just the pinned section of two RJ-45 connectors mounted back to back, wired together in the normal way. While electrically it should work, and it passes a cable tester check, it would be virtually impossible to actually plug it into two devices at once due to its tiny length.
We want to see this go to the logical end point, though. This would naturally involve hacking away the plastic casings off a pair of laptops and soldering their motherboards together at the traces leading to the Ethernet jack. Then your “cable” is merely the width of the solder joint itself.
Alternatively, you could spend your afternoon learning about other nifty hacks with Ethernet cables that have more real-world applications!
Yes this is a useful diagnostic tool for a dangling keystone jack, I just might make one for myself.
For diagnostics I’ve been using a loopback Ethernet plug: just one pair connecting the RX and TX of the same RJ45 plug, for 100M Ethernet.
Put it on your keychain to quickly test network hardware for fried ports. If you plug it into a port of a switch, the link LED should light up for that port. You can even test cables by putting them between the switch and the loopback plug. Unfortunately it does not work for gigabit, but it might be possible to create a gigabit version.
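The wiring of a plug like that can be sketched in a few lines. This is a non-authoritative sketch assuming the standard 10/100 MDI pinout (pins 1/2 carry the TX pair, pins 3/6 the RX pair); the dictionary and helper names are made up for illustration:

```python
# Sketch of a 10/100BASE-TX loopback plug: the TX pair is looped
# straight back into the RX pair of the same RJ45 plug.
# Assumes the standard MDI pinout: 1/2 = TX pair, 3/6 = RX pair.

LOOPBACK_WIRING = {
    1: 3,  # TX+ -> RX+
    2: 6,  # TX- -> RX-
}

def is_valid_loopback(wiring):
    """Check the TX pair lands on the RX pair with matching polarity."""
    return wiring.get(1) == 3 and wiring.get(2) == 6

print(is_valid_loopback(LOOPBACK_WIRING))  # True
```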
Great idea. I was thinking of doing something similar back in my network admin days, but these days I only rarely get the occasion to test anything, doing nixers as the “local IT guy”. Might revisit it though!
=) Got quite a chuckle out of this, thanks!
Same here, it just goes to show, on Reddit, size isn’t everything!
If you are going to connect the two computers directly then you can even get rid of the magnetics and really just wire them together.
This is what happens inside BAFO BF-110N and similar USB PC-to-PC cables. Two USB-to-Ethernet chips wired back to back.
That’s essentially what a passive DAC cable does, between SFPs.
pointless competition imo…
but it reminds me… my at&t fiber modem sits right behind my PC (router), so once i finished setting it up, i made a 10″ cable for it so it would be nice and neat. i noticed it was only connecting at 100baseT, not GigE, but the cheapest tier of fiber service is 100Mbits nominal and does better than that in bursts. i googled it and it is well known that this at&t fiber modem will sometimes fail to go into gigabit mode, and people had been trying desperate hacks like buying different ethernet cards for their PCs.
i tested the short cable, and it works with other devices. i went back to the 6′ cable i had pulled out of a bin, and it did GigE fine! i made another short cable, again it worked with everything but the modem. whatever heuristic it uses to determine if the link is gig-capable doesn’t like short cables! so now there’s a big coil of wire *shrug*
when you think about it, the idea that GHz-rate signaling, propagating at roughly 1 ft/ns, would be sensitive to cable length isn’t that surprising. but kind of a bummer.
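For what it’s worth, 1 ft/ns is roughly the speed of light in vacuum; in twisted pair the signal is slower. A rough calculation, using an assumed nominal velocity of propagation (NVP) of 0.64 rather than any particular cable’s datasheet:

```python
# Rough numbers behind the "1 ft/ns" intuition. Signals in Cat5 travel
# at roughly NVP * c; the 0.64 NVP here is an assumed typical value.

C_FT_PER_NS = 0.983  # speed of light, in feet per nanosecond
NVP = 0.64           # assumed nominal velocity of propagation for Cat5

def one_way_delay_ns(length_ft):
    """One-way propagation delay down the cable, in nanoseconds."""
    return length_ft / (C_FT_PER_NS * NVP)

for length in (0.83, 6.0, 328.0):  # 10 inches, 6 ft, ~100 m
    print(f"{length:7.2f} ft -> {one_way_delay_ns(length):7.2f} ns")
```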
Are you implying non-pointless competitions are a thing?
The olympics fit into this discussion somewhere.
Something something 15 year old… Something something performance enhancing drugs… something something competing for a banned country… Something something meaningless.
Are you saying that curling is pointless?!?!
But all that hard work… Pushing er… Ummm I mean the… Sweeping…er… Ok
😉
It has to do with reflection in the electrical signal. All cables have an amount of reflection at the joins. If the cable is too short the reflected signal will bounce back into the transmitter causing problems. Using a longer cable will attenuate the reflected signal.
My thoughts too. No problem with a race to the bottom though right? Be interesting to benchmark a short cable to see power consumption and data throughput
Yes, what is happening is that the reflection is arriving back at the transmitter before the entire packet has made it on to the wire. As long as the packet has completely left the interface, the reflection does no harm (in full duplex mode, in half duplex it might be seen as a collision).
The ethernet spec has both maximum (100m) and minimum (90cm) lengths given, as well as how much wiring can be untwisted at the connector (13mm)
There’s even a spec on how much must be solid core (90m) and how much can be stranded (5m on each end)
I’m personally surprised the minimum is 3 feet, having used shorter cables myself. But clearly there are a lot of effects at play
Came here for just this reason; While I have a small collection of ‘stubby’ ethernet cables, they are by and large intended for the remote end of my cable tester for when I’m tracing back a line or other diagnostic reasons.
It’s just a question of how good the front end is and how, uh, “not bad” the cable is. Short cables have very little attenuation so the equalizer has to be able to get very flat (constant losses) and the echo cancellation has to have good dynamic range. If you do the echo cancellation digitally and you don’t have enough dynamic range at the front end, the echo can swamp you.
The 3 foot minimum has to do with the time it takes to detect packet collisions in half duplex operation and when we were using hubs and repeaters. In switched networks in full duplex, there will not be collisions due to queuing at the switch port which won’t put two packets on the wire simultaneously. In half duplex both ends of the cable could transmit at the same time and the distance matters in order to catch a collision in time to detect and jam it.
The other issue is reflections coming back from the connection before the entire packet has been transmitted. These will add or subtract from the levels on the wire. This would be interpreted as a collision happening which might cause the interface to error out if it is in full duplex where a collision should never occur. The wiring standard goes back to much older technology.
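The half-duplex arithmetic above comes down to the 512-bit slot time: a collision must be detectable before a minimum-size frame finishes transmitting. A quick sketch of those numbers (the function name is just for illustration):

```python
# Back-of-the-envelope for the collision-detection argument: in half
# duplex, a collision must be detected within the 512-bit slot time,
# so the round trip across the whole segment has to fit inside it.

def slot_time_us(bit_rate_mbps, slot_bits=512):
    """Slot time in microseconds for a given bit rate in Mbit/s."""
    return slot_bits / bit_rate_mbps

print(slot_time_us(10))   # 51.2 us for 10BASE-T
print(slot_time_us(100))  # 5.12 us for 100BASE-TX
```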
Paragraph 1 is plausible.
Paragraph 2 – electrons are pretty fast, you know. I’m certain the reflection is going to return long before the full packet has been transmitted, even on long lengths of cable.
I think it has more to do with specification and design limitations. If you have a variable length to 100m starting from 90cm then your increments to design the electronics to cope with reflection can have a resolution of meter (close enough) so 100 programmable options. If you reduce that to 10cm then electronics has to have 1000 programmable options for the same performance, and just for the sake of really short cables.
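A quick back-of-the-envelope supports the objection that timing alone can’t be the constraint: even a minimum-size frame takes orders of magnitude longer to transmit than a reflection takes to return on a short patch cable. The 0.64 NVP below is an assumed, typical value:

```python
# Comparing frame transmit time against reflection round-trip time.
# NVP of 0.64 is assumed; 0.3 m/ns is the speed of light.

SPEED_M_PER_NS = 0.3 * 0.64  # ~0.19 m/ns in Cat5

min_frame_ns = 512 / 100e6 * 1e9          # 64-byte frame at 100 Mbit/s
round_trip_ns = 2 * 1.0 / SPEED_M_PER_NS  # reflection on a 1 m cable

print(f"frame: {min_frame_ns:.0f} ns, reflection: {round_trip_ns:.1f} ns")
```

The reflection arrives back hundreds of times before the frame has finished leaving the transmitter, whatever the cable length within spec.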
The length restriction comes from the amplitude of the reflection, not the timing. Heck, some of the reflection comes right away due to near-end crosstalk through the hybrid!
But that being said, 3 feet (so 6 feet there and back) is actually longer than GbE (and 100baseT’s) baud rate, so you could make an argument that 3 feet is designed to guarantee the reflection is delayed by at least a sample at the receiver. So it helps the digital design of the echo canceller a bit.
But it’s really just about how strong the reflection is. Longer cable = weaker reflection due to losses in the cable.
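The “delayed by at least a sample” claim above checks out numerically, again assuming an NVP around 0.64 for Cat5:

```python
# A 6 ft round trip (3 ft cable, there and back) takes slightly longer
# than one 125 Mbaud symbol period, so a 3 ft minimum does push the
# echo past one sample at the receiver. NVP of 0.64 is assumed.

symbol_period_ns = 1e9 / 125e6        # 8 ns per symbol at 125 Mbaud
round_trip_ns = 6.0 / (0.983 * 0.64)  # 6 ft round trip, ft per ns
print(round_trip_ns > symbol_period_ns)  # True
```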
Patch panels use much shorter cables. Who says they’re uncommon?
Well at least they’re seeing whose is smaller, that’s a bit of a change.
Gigabit Ethernet isn’t GHz. GHz signals through Cat5 die a horrible death over any serious distance.
GigE is 1000 Mbit/s, but it gets that by using 4 pairs bidirectionally with PAM5 encoding (with an effective throughput of 2 bits/baud) at the same symbol rate as 100baseT (125 Mbaud).
It’s not the negotiation that’s failing. Autonegotiation works exactly the same as 10/100 negotiation, just with extended pages. GigE contains both echo cancellation (because it’s bidirectional) and an adaptive equalizer to compensate for cable losses because, well, Cat5 is utter crap.
In other words: GigE’s *designed* for crap transmitting conditions. Give it perfect transmitting conditions, and if the front end was designed like crap (like apparently that one was) it goes deaf.
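The throughput arithmetic from the comment above works out exactly:

```python
# 1000BASE-T reaches 1 Gbit/s not by raising the symbol rate but by
# using all four pairs at the same 125 Mbaud as 100BASE-TX, with PAM-5
# carrying an effective 2 data bits per symbol per pair.

symbol_rate = 125e6   # baud, same as 100BASE-TX
bits_per_symbol = 2   # effective data bits per PAM-5 symbol
pairs = 4

throughput_bps = symbol_rate * bits_per_symbol * pairs
print(throughput_bps / 1e9)  # 1.0 (Gbit/s)
```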
I came across this discussion by accident and was amused to see cat 5 cable described as utter crap. I was on the 10baseT standards committee and we had to design to operate on cat 3 cable as that was the installed base at the time. Cat 5 was a fantastic step forward. Time moves on, data rates increase and volumes dictate what is economically viable. If we had not designed for cat 3 then we would not have created the volumes. We would not have won the LAN wars of the 80’s and you would be using Token ring on IBM’s clunky expensive cabling system.
My involvement ended in the early stages of 100baseT when the PHY was still in the FDDI committee and I have not read the 1000baseT standard. I was surprised that there should be a minimum cable length. The difference in cable compensation requirements between 0.7m and zero must be insignificant as is the 3.5ns delay when compared to encoding/decoding etc. In the context of this thread, the standard must allow for pcb implementations (less than zero) without transformers, which also affect the compensation. Perhaps this is considered an internal implementation issue and you can probably disable the compensation in the ICs.
From what I can see, cat 6 appears to be very similar to cat 5, the difference being improvements in tolerances and in crosstalk attenuation. Crosstalk and common mode conversion improvements are presumably achieved by increasing the twist rate. The cable attenuation will not be any better as this is down to the physics of skin effect. Insertion loss and structural return loss will be improved however, by more tightly controlled impedance.
Twisted pair has its limitations, and if you make it as expensive as fibre optics then why not just use that!
Is your short cable CAT6 or at least CAT5e?
It would appear you have stumbled into the wrong forum my friend…
I’m pretty sure “but why would you even do that?” are fighting words in these parts.
A belt sander, love it.
I have “spliced” Ethernet cables by letting two pairs connect to the RJ45 connector, then making a hole in the sheathing 10cm down, with one pair pulled out (with phone cable sheathing around it) and installed into an RJ11 connector, plugged into the modem’s phone port, to have POTS and Ethernet over the same wire.
Indeed, I did the same kind of thing myself. The house had some cat-5 going between rooms for a phone-line extension. At both ends, I removed the phone plate, attached the phone wires to a modular phone port, attached the unused pairs to a modular ethernet port, and plugged the two ports into a two-port plate. Works fine up to 100Mb.
“Works fine up to 100Mb.”
It’d be hard for it to work higher, considering gigabit uses all 4 pairs.
A couple of Broadcom 1000BASE-T adapters with the Ethernet@WireSpeed parameter available+enabled will easily work higher than 100Mbps in those conditions, by using a 2-pair variant of the spec.
That is a testament to how robust the Ethernet and CAT specifications are.
With that though your data device won’t connect over 100mbs. Might as well just terminate 2 pairs on a jack and use other pairs for voice.
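The pin bookkeeping behind this pair-splitting trick is simple: 100BASE-TX only occupies pins 1, 2, 3 and 6, leaving the other two pairs free for POTS, while 1000BASE-T needs all eight. A small sketch (the set names are illustrative):

```python
# 100BASE-TX uses only pins 1, 2, 3, 6; 1000BASE-T uses all four pairs,
# which is why the phone-sharing trick caps the link at 100 Mbit/s.

FAST_ETHERNET_PINS = {1, 2, 3, 6}  # 100BASE-TX
GIGABIT_PINS = set(range(1, 9))    # 1000BASE-T uses all eight pins

spare_for_phone = GIGABIT_PINS - FAST_ETHERNET_PINS
print(sorted(spare_for_phone))  # [4, 5, 7, 8]
```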
There’s an old (proprietary) standard for systematic cabling in offices and so on (by Kerpen) which uses standard 4-pair Ethernet cables but the plugs and sockets are quite special.
1. The smallest square plug connects to just one pair and you can plug up to four of those into one socket (POTS or any other 1-pair application).
2. The middle rectangular plug (2*[1.]) connects two pairs and two plugs can fit into one socket (100Base-TX).
3. The largest square plug (4*[1.]=2*[2.]) connects all four pairs (eg. 1000Base-T).
And you can mix and merge the different plugs as you like – eg. {2*100base-TX} or {100Base-TX + 2*POTS}.
Can’t remember the name right now and only found one image…
http://wwwhomes.uni-bielefeld.de/stwerk/House-Net/8021X/de/general/screenshots/Kerpen_%20anschluesse_1024.jpg
(in the lower left corner you can see one socket with one 2-pair plug inserted on the left)
I’ve got a single CAT5e F/UTP running the intercom & 2*10W LEDs for my front gate.
Too lazy to tear up & re-lay the run for the old single pair used for an old doorbell, I used the old cable to pull through a length of old shielded CAT5e I had lying around. The old run was so narrow, even that single cable barely made it through.
IIRC, it was something like…
+127v AC Br+B (Intercom & LEDs)
-127v AC O (Intercom)
-127v AC G (LEDs)
-/+12v DC O (Intercom receiver)
I wasn’t sure if it would cause some sort of audio interference, but a decade on & no issues yet.
And then there’s the CFTV, which runs off homemade PoE fed off an old PSU.
Grr, where’s the edit button HaD?!
Unless my cable came with a bonus twisted pair, the 127v neutral must be running on single strands. But at 24AWG, the current’s still within spec.
That kind of sounds shocking.
Thankfully not the fatal form – electrocution, well at least not “so far” or yet.
It sounds to me that perhaps you’re putting 127VAC (RMS), 180V peak, over wire that has a maximum insulation rating of 80V. And perhaps passing that cable through an environment (IP67) that it is not designed or rated for.
If that’s the case then I would mention that you can’t do that in my country because of the regulations introduced because of dead people.
Yeah, I know, I’m a right PITA on safety issues. Probably as I worked on systems up to 127kV and I spent a lot of time on “no break loads” where you can’t turn the power off to work on anything.
So just think of this as discouragement to others in a way. Because I don’t think that either of us will change.
No offence intended.
Does CAT5 really have a maximum insulation rating of 80V? I was always under the impression that it could still carry phone signals, and the POTS ringer voltage is 74-100V AC, depending on the country. (And I don’t know if that’s 100V peak-to-peak or RMS)
In my country (Australia) the POTS ring voltage was 60VAC.
No, Harting Cat5e is max. 125 V
Van Damme says Dielectric test 1500Vdc x 1 min 500Vdc x 1 min
So if you pick the right one, it’s probably fine.
Current carrying capacity also depends on duty cycle, which in the case of a ringer is pretty low. A ring is 90V RMS at 20 Hz, so the amount of time spent over 80V would be minimal. My guess is that the insulation rating is 80V continuous.
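The duty-cycle point is easy to illustrate with a typical North American ring cadence (2 s ringing, 4 s silent — an assumed, common value; cadences vary by country):

```python
# Ring-signal duty cycle for an assumed 2 s on / 4 s off cadence:
# the 90 V RMS ring voltage is only present a third of the time.

ring_on_s, ring_off_s = 2.0, 4.0
duty = ring_on_s / (ring_on_s + ring_off_s)
print(f"{duty:.0%}")  # 33%
```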
ABB VF drives use a custom back to back rj45 connector for the control panel when mounted on the device, it uses a standard cat 5 cable to mount it remotely
I used to carry around a very short 2″ crossover cable, an Ethernet coupler, and a long standard cable in case I needed a long crossover cable (computer to computer or switch to switch), well before Auto-MDIX was a thing.
That’s a good idea. I’ll have to look up auto-MDIX. I’m unfamiliar with it. Sounds like it just reverses the data lines in code.
Auto-MDIX has been standard even in low-end devices for the best part of a decade now. In fact, it’s not even required in most modern equipment as Gbit+ doesn’t have dedicated Rx/Tx pairs – *all* pairs are Rx/Tx.
It reverses the pairs that are used for TX and RX.
Old equipment like home routers (perhaps a Wi-Fi base) usually had 4 white sockets (RJ45) and one yellow socket. The yellow socket was reverse-wired so it could plug into an “upstream” device using a normal cable.
Alternatively we had normal cables in blue and crossover cables in red, but now cables are made in any colour, so the only way to check is to look at the connectors on the ends to see if it crosses over.
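That visual check can be written down as a pin map. This is a sketch for the 10/100 case only (the dictionary and helper names are illustrative): a crossover cable swaps the TX pair (pins 1, 2) with the RX pair (pins 3, 6), and Auto-MDIX performs the equivalent swap inside the PHY.

```python
# 10/100 crossover pin map: TX pair (1, 2) swapped with RX pair (3, 6),
# remaining pins wired straight through.

CROSSOVER = {1: 3, 2: 6, 3: 1, 6: 2, 4: 4, 5: 5, 7: 7, 8: 8}

def is_crossover(pin_map):
    """True if the TX pair is swapped with the RX pair."""
    return pin_map[1] == 3 and pin_map[2] == 6

print(is_crossover(CROSSOVER))  # True
```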
d.c. continuity (wiremap) tests of a patch cord do not say anything about its Ethernet performance.
High frequency NEXT (near-end crosstalk) certification margins are harder to pass in short cables, due to naturally lower RL (return loss), resulting in more XT contribution from the far-end.
Connector and termination quality are therefore much more critical in short links/cords, due to these high-frequency characteristics.
How do I buy six (6) of these?
I’ve seen something similar in a robot for a robotics competition using cat 6 to communicate between different control boards. We had some super cute cables.
This is the same guys from primary school who competed to see who can sharpen the smallest double sided pencils. Boys will be boys.
I’m a low voltage tech by trade and this should be listed as a FAIL OF THE WEEK. The “shortest patch cable” thing is honestly a meme in the low volt world. Has no real use.
In the LV world I have seen plenty of short jumpers. They are normally used on wall plates where you don’t have a lot of room to stuff a meter of cable behind a wall phone or camera. So yeah, these are a thing. Normally a few inches to allow for getting the device off the wall plate and unplugging it.
A use case for the no-cord Ethernet cord thing could be to hook two Raspberry Pis together, or to connect a Raspberry Pi to a wireless hotspot.
Does this improve internet speed? If not, that’s a lack of service.
No, for that you need “oxygen free” CAT 5e cables.
All this has happened before, and all this will happen again, heh. I’m too young for BBS but old enough to have seen this same competition (on the Web) before Reddit was a thing. New specs to comply with this time, though!