Designing For The Small Grey Screen

With the huge popularity of retrocomputing and of cyberdecks, we have seen a variety of projects that use a modern computer such as a Raspberry Pi bathed in the glorious glow of a CRT used as a monitor. The right aesthetic is easily achieved this way, but there’s more to using a CRT display than simply thinking about its resolution. A black-and-white CRT or a vintage TV in particular has limitations, inherent in how it operates, that call for attention to the design of what is displayed upon it. [Jordan “Ploogle” Carroll] has taken a look at this subject, using a 1975 Zenith portable TV as an example.

The first difference between a flat panel and a CRT is that, except in a few cases, the CRT has a curved surface and rounded corners, and the edges of the scanned area protrude beyond the edges of the screen. Thus the usable display area is less than the total scanned area, meaning that the action has to be kept away from the edges. Then there is the effect of a monochrome display on colour choice: the luminance contrast between adjacent colours must be considered alongside the colour contrast. And finally there’s the restricted bandwidth of a CRT display, particularly when it is fed via an RF antenna socket, which limits how much detail it can reasonably convey. The examples used are games, and it’s noticeable how well Nintendo’s design language works on this display. We can’t imagine Nintendo games being tested on black-and-white TV sets in 2022, so perhaps this is indicative of attention paid to design for accessibility.
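As a rough illustration of the luminance point (our sketch, not from [Jordan]’s write-up; we assume the ITU-R BT.601 luma weights that composite video uses, and the contrast threshold is a guess), a few lines of Python will tell you whether two colours that contrast nicely on an RGB panel still contrast on a grey screen:

```python
# Sanity-check colour pairs for a monochrome CRT: what matters is the
# luma (brightness) gap, not the colour difference.

def luma(rgb):
    """Approximate luma of an 8-bit RGB triple, ITU-R BT.601 weights."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def distinguishable(a, b, threshold=32):
    """Heuristic: demand a decent luma gap (threshold is our guess)."""
    return abs(luma(a) - luma(b)) >= threshold

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)
print(luma(red), luma(green), luma(blue))   # roughly 76, 150, 29
print(distinguishable(red, green))          # plenty of luma contrast

# The classic trap: full green and orange scream "different" in colour,
# but land on nearly the same shade of grey.
orange = (255, 128, 0)
print(distinguishable(green, orange))       # ~150 vs ~151 in luma
```

Green versus orange is exactly the sort of pairing that looks fine on the designer’s LCD and vanishes on the small grey screen.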

While they require a bit of respect due to the presence of dangerous voltages, there’s a lot of fun to be had bringing a CRT into 2022. Get one while you still can, and maybe you could have a go at a retro cyberdeck.

23 thoughts on “Designing For The Small Grey Screen”

  1. CRT monitors are awesome, I think.
    Especially the monochrome versions which have no mask.
    Their picture is so natural and soft! 😍
    Their resolutions can be much higher than that of color models, too.
    Monochrome security systems used to display about 1000 lines.
    And then there are/were models with long persistence, for a flicker-free experience.
    Just think of these green monitors..

    PS: If you’re going to use a recent mono TV for a Pi, please consider using an AV mod for the TV (bypass for the tuner), with an extra decoupling capacitor (100uf?).
    And please, please consider disabling colour burst in the config file.
    Don’t let PAL or NTSC color signals destroy the purity of a monochrome “AV” signal (VBS, more precisely). Composite/CVBS is horrible, but plain VBS is fine. :)
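    For reference, on the legacy (non-KMS) Raspberry Pi firmware those settings live in /boot/config.txt; something like the following (option names as documented for the legacy video driver — check your OS release, newer KMS-based systems configure composite out differently):

```
# Legacy (non-KMS) Raspberry Pi composite video options.
# Pi 4 only: composite output is off by default.
enable_tvout=1
# 0 = NTSC, 2 = PAL
sdtv_mode=2
# Omit the colour burst entirely: pure monochrome VBS out.
sdtv_disable_colourburst=1
```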

      1. And I have books about radio from the fifties because some things are better explained when something is new.

        That’s not always the case, though; those books don’t cover solid state or digital communication.

        1. There are also some advancements that these old books may not include yet.

          For example, b/w TV sets from the 60s/70s started to include an isolation transformer.
          It protected the user and the TV alike.
          For example, voltage fluctuations on the AC line no longer killed/damaged the TV so easily.
          Before that, TV tubes got their anode voltage directly from the AC line – which is very dangerous.
          Luckily, the antenna jack was insulated by its nature, at least (it was meant to receive RF, and RF receivers usually incorporate galvanic isolation).

          TVs with a plastic or bakelite case, like those classic portables with a handle, have a higher chance of working from ~12v internally.
          That means they have their own PSU that generates all the needed voltages from ~12v, which is why they also had a 12v input on the back.
          These TVs are well suited for video monitor conversion (bypassing the tuner).

          Unlike those killer TVs from the 1950s or the Junost series from Russia..
          They’re better left unmodified. And positioned near a fire extinguisher, hi.

  2. Two things about that article seem odd to me.

    Firstly, we see luminance contrast logarithmically and colour in a more linear way; that is why chroma and luma are separated in TV transmission and allocated different bandwidths. We don’t consciously compare the intensity of the different frequencies that make up a colour, we just see it as a colour.

    Secondly the spectrum that is used to compare the intensity of colours makes no sense.

    Red #FF0000, Green #00FF00 and Blue #0000FF for some reason seem less intense than Cyan #00FFFF, Magenta #FF00FF and Yellow #FFFF00.

    Well, is it any wonder, when you have chosen different intensities to start with?

    Because Red #FF0000, Green #00FF00, Blue #0000FF, Cyan #008080, Magenta #800080, Yellow #808000 and Grey #333333 all have the same intensity.

    I just discovered that the above won’t produce the right colours, as they have to be adjusted. The comparative human cone sensitivities are Red 564, Green 533, Blue 437 and Luma 498, but this doesn’t help much as it’s only half the equation, because although human blue and green are closely centred on actual blue and green, human red is centred at actual yellow.

    Anyway, the point being (although I got the example wrong): these colours can be made at equal intensities.
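To put numbers on the commenter’s point (a quick sketch using the BT.601 luma weights that a composite signal, and hence a b/w set, effectively sees — the helper names are ours):

```python
# Luma of the full-intensity primaries and secondaries, BT.601 weights.
WEIGHTS = (0.299, 0.587, 0.114)

def luma(rgb):
    return sum(w * c for w, c in zip(WEIGHTS, rgb))

for name, rgb in [("red", (255, 0, 0)), ("green", (0, 255, 0)),
                  ("blue", (0, 0, 255)), ("cyan", (0, 255, 255)),
                  ("magenta", (255, 0, 255)), ("yellow", (255, 255, 0))]:
    # red/green/blue come out near 76/150/29; cyan/magenta/yellow
    # near 179/105/226 -- far from equal intensity.
    print(f"{name:8s} {luma(rgb):7.2f}")

def match_luma(rgb, target):
    """Scale a colour so its luma equals target (no clipping check)."""
    scale = target / luma(rgb)
    return tuple(round(c * scale) for c in rgb)

# A cyan with the same luma as full red:
print(match_luma((0, 255, 255), luma((255, 0, 0))))   # (0, 109, 109)
```

So equal-luma palettes are perfectly possible, just not at the naive full-intensity values.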

    1. This is backward. The separation of luminance and chroma has nothing to do with better encoding for human perception, and everything to do with the fact that the FCC required color broadcast signals to be compatible with millions of then-existing B&W TV sets. The result was a compromise that mostly works because of the limits of human perception.

      The luminance signal is single-sideband AM, and the B&W signal is designed to have approximately equal horizontal and vertical resolution. Sound is added via a separate carrier that is frequency modulated; since it has a constant amplitude, AGC just subtracts it out of the luminance. (One purpose of the “back porch” in the NTSC signal is to make sure the AGC always sets the same baseline for a series of dark or light frames.)

      In order to add color the designers stole about 30% of the luminance bandwidth and added another subcarrier which is phase-shift modulated. Added to the back porch is a “color burst” tone which synchronizes a local oscillator so that the color phase shift can be detected along the line. This signal has horrible bandwidth, and certain color combinations smear badly, as much as several percent of the width of the scan line. This isn’t usually apparent in motion video of real-life images, but it’s very noticeable with the saturated colors and relatively static patterns often used by computers and video games, none of which existed when the NTSC signal was designed.

      The reason there is a “color killer” circuit in color TVs isn’t just to prevent stray color artifacts from appearing, but also to recover the bandwidth stolen for the color signal; otherwise, color TVs would be very noticeably worse than B&W sets at displaying B&W signals.

      1. The NTSC and PAL standards certainly have their limitations, but to this day whenever I dig into the details I marvel that the engineers of yore were able to pull it off in consumer-grade equipment.

        I can imagine the geeking that must have gone on – “OK… so I have to design a 3MHz phase-locked loop that grabs this here color burst and compares it to the rest of the line. And I have a budget of 3 vacuum tubes. Alright…. Hold my beer.”

        1. Weaving color broadcast into compatible B&W signals is one of the greatest hacks in all of history; the fact that they managed to make it work at all without obsoleting millions of existing B&W TV sets was a miracle.

      2. “In order to add color the designers stole about 30% of the luminance bandwidth”

        Not exactly: luminance and chrominance are actually spectrally interleaved. An analogue TV signal has no continuous spectrum; it consists of many narrow sidebands at integer multiples of the line frequency. The colour subcarrier in NTSC and PAL is locked to the line frequency and was carefully chosen so that the chrominance sidebands lie between the luminance sidebands, and so that the resulting modulation of the luminance signal is cancelled out in the picture tube and the human eye. The colour killer just disables the chrominance amplifier output if there’s no colour burst, as the chrominance synchronous detector would only produce (colourful) noise without a locked carrier.
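The numbers behind that interleaving, for the curious (standard values from the NTSC and PAL specs, sketched as a quick calculation):

```python
# NTSC: line rate is 4.5 MHz / 286, and the colour subcarrier sits at
# 455/2 times the line rate -- a half-integer multiple, so the chroma
# sidebands fall midway between the luma sidebands.
f_line_ntsc = 4_500_000 / 286            # ~15734.27 Hz
f_sc_ntsc = (455 / 2) * f_line_ntsc      # ~3.579545 MHz

# PAL: 15625 Hz line rate; a quarter-line offset plus a 25 Hz shift to
# avoid stationary dot patterns under PAL's alternating phase.
f_line_pal = 15_625
f_sc_pal = (1135 / 4) * f_line_pal + 25  # = 4433618.75 Hz exactly

print(f"NTSC subcarrier: {f_sc_ntsc:.2f} Hz")
print(f"PAL subcarrier:  {f_sc_pal:.2f} Hz")
```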

          1. The interleaving trick eliminates some interference, but no, really, they stole the bandwidth from the luminance signal, reducing its ability to respond to in-line luminance variations, in order to accommodate the color signal. This is a very basic and obvious result of applying information theory (thanks, Claude Shannon) to the whole scheme. The color signal smears and the luminance signal isn’t quite as tight, but for the most part we don’t notice it because we’re watching Gunsmoke or Magnum, P.I. When people who grew up on modern RGB and digital signaling see a ’70s-era NTSC signal they are usually horrified at how bad it looks. We all took it for granted in the day because there was nothing else, and it seemed like a miracle.

          1. You can actually make their heads explode if you play a tape in a crapped out VCR where the colors wander off an inch or two to the right.

      3. I don’t have it backwards, and it has nothing to do with the FCC or NTSC, because we don’t have the FCC or broadcast NTSC (Never The Same Color) in my country. We have (had) PAL. So although things might be different in your country, that doesn’t mean I have it backwards.

        And yes, absolutely: PAL B/G and PAL I (UK and AUS) have design considerations that definitely include human colour perception. I owned and operated a domestic electronics repair centre for decades, just prior to LCD screens. So I have probably fixed more CRTs than you have seen, right back to the nightmare era of colour adjustment/geometry on early tri-spot (mask) screens.

        I mean, why allocate more bandwidth to a part of the colour spectrum that humans have less sensitivity to?

        And here luma was double-sideband with about half of one sideband filtered off with a slope filter to make way for the chroma and FM audio signals, as they were rearranged to fit in the existing B/W bandwidth.

        I think the UK (System I) went for a 6MHz FM audio sub-carrier so they could “hear the bloody soccer in the pub”, and Aus (System B) went for a 5.5MHz FM audio sub-carrier, and that’s about the only difference.

        There is actually so much about this technology (the CRT TV era) that very few people will ever understand. Even the choice of the many frequencies involved has a reason; they’re not arbitrary. A lot of math went into these decisions.

        So here’s a puzzling thought for you. Very early colour CRT TVs would lose the correct colour balance/contrast with age, or put simply, end up with the wrong colours (they could be almost as bad as NTSC 😋). My business was the preferred place for customers to go for a number of reasons, but particularly for colour adjustment, even though I’m colour blind. This may seem illogical, but it’s not. A person with normal colour vision will simply turn the pots until it looks right “to them”. If their colour perception (eye cone peak sensitivities) is in the middle of the bell curve, then the resulting adjustments will be good for other people’s colour perception. If their colour perception is away from the centre of the bell curve, then it will look good to them but not to the customers. A colour blind person doesn’t attempt to do it by colour sight, and instead does it in accordance with the TV’s specifications.

    1. grey and blue at the same time until you look at the author’s name. Gibson’s was (implied) grey. Sawyer’s version was explicitly blue.
      Funny. Both Canadians, they both spelled “colour” incorrectly. Pandering to the audience, I suppose.
