Realtime Shadows On N64 Hardware

Although the Nintendo 64 console has, in the minds of many, been relegated to the era of ‘firmly obsolete graphics’, its graphics processor’s lineage traces directly to the best that SGI had to offer in the 1990s, and it supports a range of surprisingly modern features, including dynamic shadows. In a simple demo, [lambertjamesd] shows how this feature can be used.

As can be seen in the demonstration video (linked after the break), this demo features a single dynamic light that casts a shadow below the central object in the scene, while a monkey object floating around casts its own shadow, rendered into an auxiliary frame buffer. This auxiliary buffer is then blended into the main buffer, as explained by [ItzWarty] over at /r/programming on Reddit.
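To get a feel for that auxiliary-buffer pass, here is a minimal sketch in plain C – not [lambertjamesd]’s actual code, and with hypothetical names throughout – which stamps a stand-in silhouette into a small off-screen buffer, then darkens the main framebuffer (RGBA5551, the N64’s 16-bit pixel format) wherever the buffer is set. In the real demo, the off-screen pass would render the monkey mesh itself:

```c
#include <stdint.h>
#include <stdio.h>

#define SHADOW_W 32
#define SHADOW_H 32

/* Auxiliary buffer: nonzero = covered by the caster's silhouette. */
static uint8_t shadow_buf[SHADOW_H][SHADOW_W];

/* Stand-in for the off-screen render pass: stamp a crude circular
 * silhouette instead of rasterizing an actual mesh. */
static void render_silhouette(void)
{
    for (int y = 0; y < SHADOW_H; y++)
        for (int x = 0; x < SHADOW_W; x++) {
            int dx = x - SHADOW_W / 2, dy = y - SHADOW_H / 2;
            shadow_buf[y][x] = (dx * dx + dy * dy) < 12 * 12;
        }
}

/* Blend pass: darken main-framebuffer pixels (RGBA5551) wherever the
 * auxiliary buffer is set, placing the shadow at (gx, gy). */
static void blend_shadow(uint16_t *fb, int fb_w, int gx, int gy)
{
    for (int y = 0; y < SHADOW_H; y++)
        for (int x = 0; x < SHADOW_W; x++) {
            if (!shadow_buf[y][x])
                continue;
            uint16_t c = fb[(gy + y) * fb_w + (gx + x)];
            /* Halve each 5-bit channel: a cheap 50% multiply blend. */
            fb[(gy + y) * fb_w + (gx + x)] =
                  (uint16_t)((((c >> 11) & 0x1F) >> 1) << 11)
                | (uint16_t)((((c >> 6)  & 0x1F) >> 1) << 6)
                | (uint16_t)((((c >> 1)  & 0x1F) >> 1) << 1)
                | (c & 1u);
        }
}

int main(void)
{
    static uint16_t fb[64 * 64];
    for (int i = 0; i < 64 * 64; i++)
        fb[i] = 0xFFFF;                   /* white "ground" */
    render_silhouette();
    blend_shadow(fb, 64, 16, 16);         /* composite under the object */
    printf("center pixel after blend: 0x%04X\n", fb[32 * 64 + 32]);
    return 0;
}
```

The appeal of the approach is that the expensive part – rendering the caster – happens once into a tiny buffer, while the blend onto the ground is a cheap per-pixel operation.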

The shadow below the central object is the real deal, however: the main scene uses a shadow volume, a technique later used extensively by Doom 3. The primary reason the N64 didn’t use shadow volumes all over the place is the limitations the technique puts on the shadow casters in a scene: they need to be convex, and overlap between casters is likely to lead to artifacts and glitches.
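To make the convexity requirement concrete, here is a purely illustrative helper (not from the demo): each silhouette edge of the caster is extruded away from the light to form one side wall of the volume. On a convex caster the silhouette edges form a single closed loop, so the extruded walls never self-intersect; concave or overlapping casters break that guarantee, and that is where the artifacts come from:

```c
typedef struct { float x, y, z; } Vec3;

/* Extrude one silhouette edge (a, b) away from a point light to form one
 * side quad of a shadow volume. Hypothetical helper; `scale` controls
 * how far the volume extends past the caster. */
static void extrude_edge(Vec3 a, Vec3 b, Vec3 light, float scale, Vec3 quad[4])
{
    /* Directions from the light through each edge vertex. */
    Vec3 da = { a.x - light.x, a.y - light.y, a.z - light.z };
    Vec3 db = { b.x - light.x, b.y - light.y, b.z - light.z };
    quad[0] = a;
    quad[1] = b;
    quad[2] = (Vec3){ b.x + db.x * scale, b.y + db.y * scale, b.z + db.z * scale };
    quad[3] = (Vec3){ a.x + da.x * scale, a.y + da.y * scale, a.z + da.z * scale };
}
```

Cap the caster geometry at both ends and the result is a closed volume; anything inside it is in shadow.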

Doom 3 would address these limitations with a stencil buffer, further refining the basic dynamic lighting support found on the N64 and ultimately leading to the fancy video game graphics we have today. Which will, no doubt, look properly obsolete again in another decade, as usual.

22 thoughts on “Realtime Shadows On N64 Hardware”

  1. I think we have reached the stage where it doesn’t matter how much time passes: graphics have gotten about as good as a human eye can perceive, at least for those titles with really good graphics by today’s standards – a 4K monitor that pretty much fills the field of view has nearly imperceptible pixels at that viewing distance. Until proper full-3D holodeck-type tech comes along, you are stuck with 2D screens or sort-of-3D VR, and as it stands, VR – if you have the hardware – is not far behind 2D screens in resolution and sharpness…

    I think from here on out it will be the quality of the physics interactions, animations, and HUD/UI that makes games look dated, more than their graphics – at least while playing. I expect the still screenshot will keep improving a little, and there will be more interesting little details visible on the rare occasions ray-tracing tech really shows up to make them pop, but for the most part ray tracing doesn’t do much, as level design and faux shadow methods combine to make it rarely noticeable…

      1. Seriously? Unreal Engine 5’s Nanite tech already allows for nearly unlimited polygons. It has been demonstrated in many tech demos, including the publicly playable The Matrix Awakens. Euclideon is probably just vaporware at this point, and it looks weak compared to Nanite.

    1. I tend to agree for the most part – and disagree in some aspects – but I’d claim that it’s the lack of proper animation (as in “realistic” animation) that’s currently making games look like games. Yes, physics is a part of that, but after all, movies and games suffer from “artistically directed physics” anyway, looking stupid at least 99% of the time.
      The better animations get – which, unfortunately, means “get rid of those animation art directors you hired” – the more “believable” games (and movies alike) will become.

        1. I get the feeling Nitpicker has some very particular examples in mind that would have given the glib “fire the animators” comment some context. He just didn’t share them for some reason.

    2. I think this is why Nintendo will continue to succeed despite usually using outdated hardware.
      Their focus has always been on the games, and sooner or later any old, cheap hardware will be able to fulfill their needs and still rival the other “PC in a box” consoles.

      1. That is true; Nintendo’s focus has always been on unique hardware. Keeping up with the latest technology is not as important to them as offering something novel. Sometimes they get stuck on a gimmick, but sometimes it produces something competitive, precisely because no one else would think to do it. They only want their games playable on their own hardware. They are like the Willy Wonka of video games: weird, isolated, but fascinating.

    3. I wholeheartedly agree. Obviously games are always looking to improve physics when it gives them a leg up, but not many games that I’m aware of go out of their way to make physics the focal point anymore, since graphics have been the driving force behind sales.

      Hopefully, with VR getting more and more popular, the focus on physics to keep the immersion fluid will push games further than ever before. Holodecks feel too far away, but I don’t mind dreaming alongside you on that one.

      1. Without anything interrupting the current rate of progress, no doubt they will, because they most likely won’t be played in the same way at all – being displayed in a more holographic, 3D way… All the physics and animations will be better, with more realistically imperfect world maps – probably with some AI clutter and decor generation too, so there are more than eight posters/pictures on the walls and not every room has the same desks, etc. (the only way to get more variety without also needing ever more 3D modelers and artists to create the ever more diverse, world-suitable objects)…

        But purely graphically, at least on 2D screens, the best-looking games out there are crisp, with stupidly high polygon counts and textures so detailed that from the normal viewing distance while playing you can’t see any flaws – the eye simply isn’t capable of resolving the rougher edges, especially in an ever-moving image. (Go taking screenshots looking for flaws and you are going to find them, and actually be able to see them, but from a playing-the-game point of view that is entirely irrelevant…)

        (Heck, games from quite some time ago really aren’t showing their age the way they would have over the same time span, say, ten years earlier – and go back to the N64/PS1 era and boy, did the best-looking games of that era show their age in practically no time. Of course, many games even now are not pushing the “AAA, we must be the Crysis of this year” level of graphical effort; many are quite happy not bothering, as they still look pretty good only putting in the effort to match the best games of the Xbox 360 era, which takes substantially less work.)

  2. The article hints at it, but just to be completely clear: only the cuboid “casts” a real shadow. The monkey, Suzanne, is just rendered into an off-screen buffer and then blended onto the ground. If there were another object below it, that object would not be shadowed by the monkey. It’s a bit confusing at first to read “objects must be convex” and then see the monkey “cast” a shadow – it doesn’t.

  3. Not sure what you mean by “as good as a human eye can perceive”. We are far from that. I’ve been consistently underwhelmed by this generation of consoles in terms of graphical fidelity. Even that Matrix tech demo was underwhelming. I think the tools for photorealistic real-time rendering exist today, but the amount of manpower required to fill in every detail of, say, the cityscape in the Matrix demo is far more than a dev is willing to commit. That tech demo looks great from about 10 feet away. It’s only once you get close to the cars and look inside the windows that the illusion is shattered.

    1. “Inside” the windows breaks down because the windows don’t have an “inside” to them. The demo achieves the speed it does because the buildings don’t have interiors; the trick is the window texture. In real time, those flat textures are shifted just enough to make the building look like it has an inside.

      The same trick is actually used in Half-Life: Alyx to make it seem like bottles have water (or beer or whatever) in them.
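      The usual name for this trick is interior mapping; in its simplest form it is just a parallax offset, shifting the UV at which the “interior” texture is sampled along the view direction by a fake room depth. Below is a minimal, hypothetical sketch of that offset in C – real implementations live in shaders, and this is not the Matrix demo’s or Alyx’s actual code:

      ```c
      #include <stdio.h>

      typedef struct { float u, v; } UV;

      /* Shift the UV at which the "interior" texture is sampled along the
       * view direction, scaled by a fake room depth, so a flat texture
       * appears to sit behind the glass. vz is the view-direction
       * component pointing into the wall. */
      static UV fake_interior_uv(UV window_uv, float vx, float vy, float vz,
                                 float room_depth)
      {
          float t = room_depth / vz;  /* ray distance to the "back wall" */
          return (UV){ window_uv.u + vx * t, window_uv.v + vy * t };
      }

      int main(void)
      {
          /* Viewed head-on the sample point doesn't move; viewed from the
           * left it slides sideways, just as a real interior would. */
          UV head_on   = fake_interior_uv((UV){ 0.5f, 0.5f }, 0.0f, 0.0f, 1.00f, 0.25f);
          UV from_left = fake_interior_uv((UV){ 0.5f, 0.5f }, 0.3f, 0.0f, 0.95f, 0.25f);
          printf("head-on: (%.2f, %.2f)  from the left: (%.2f, %.2f)\n",
                 head_on.u, head_on.v, from_left.u, from_left.v);
          return 0;
      }
      ```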

      Will actual games look better? I hope so. Nanite (dynamic rescaling of meshes) combined with real-time retexturing works great for things that are further away. It all reduces the amount of work needed for things in the background, and that leaves more time budget for the up-close details.

      1. What is this trick, btw? Do you know the technique? I’m working on materials for my game lately, now that I have a proper lighting engine – this just involves PBR materials for the moment – so is it using a displacement map, or is it some custom shader work?

        As for dynamic mesh scaling and real-time (re-)texturing, these honestly just sound like a more game-designer-friendly spin on existing, well-known LoD methods, for example sparse octrees. Real-time retexturing is nothing you couldn’t already do in any engine via manual mipmap management, though it can be a bit of a headache across different hardware (the Basis Universal texture format has made this a lot easier for devs lately).
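        Whatever the engine calls it, the core of any such LoD scheme is picking a detail level from distance or projected size. Here is a minimal sketch with made-up numbers, assuming a simple halve-the-detail-per-octave rule:

        ```c
        #include <math.h>
        #include <stdio.h>

        /* Pick a detail level from distance: drop one level (halving the
         * detail) every time the distance doubles past base_distance.
         * Purely illustrative of the idea behind mip/LoD chains. */
        static int pick_lod(float distance, float base_distance, int max_level)
        {
            if (distance <= base_distance)
                return 0;
            int level = (int)floorf(log2f(distance / base_distance));
            return level > max_level ? max_level : level;
        }

        int main(void)
        {
            for (float d = 5.0f; d <= 160.0f; d *= 2.0f)
                printf("distance %6.1f -> LoD %d\n", d, pick_lod(d, 10.0f, 4));
            return 0;
        }
        ```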

        …with that said, I agree that the tech is at the point where it’s capable of making photorealistic games – it’s the artists and designers who have trouble with it. That’s why engines like UE5 are trying to make it easier.

    2. Oh, and what annoyed me about the Matrix demo was the eyes. Human eyes twitch constantly; it seems like everybody except animators knows that eyes don’t stay still and locked. When Mr. Anderson was staring down the barrel of the game camera, it just felt wrong.
