Infinitely Scrolling E-Ink Landscape Never Repeats

Traditional Chinese landscape scrolls can be a few dozen feet long, requiring the viewer to move along their length to take in the intricate detail in each section. [Dheera Venkatraman] replicated this effect with an E-Ink picture frame that displays an infinitely scrolling, Shan Shui-style landscape that never repeats.

[Image: E-Ink picture frame with infinitely scrolling landscape. Caption: A new landscape every time you look]

The landscape never repeats and is procedurally generated using a script created by [Lingdong Huang]. It consists of a single HTML file with embedded JavaScript, so you can run it locally with minimal resources, or view the online demo. It is inspired by historical artworks such as A Thousand Li of Rivers and Mountains and Dwelling in the Fuchun Mountains.
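To get a feel for how noise-driven procedural landscapes work in general, here is a small illustrative Python sketch (emphatically not [Lingdong]’s code, which is far more sophisticated): it layers a few octaves of 1-D gradient noise and prints a crude ASCII mountain ridge.

```python
import random

random.seed(42)
# A random slope at each integer lattice point, as in 1-D gradient ("Perlin-style") noise.
GRADS = [random.uniform(-1, 1) for _ in range(256)]

def fade(t):
    # Perlin's smoothstep keeps the curve smooth across lattice points.
    return t * t * t * (t * (t * 6 - 15) + 10)

def noise1d(x):
    i = int(x) & 255
    t = x - int(x)
    a = GRADS[i] * t                      # contribution from the left lattice point
    b = GRADS[(i + 1) & 255] * (t - 1)    # contribution from the right lattice point
    return a + (b - a) * fade(t)

def ridge(x, octaves=4):
    # Layer several frequencies of noise to get a mountain-like profile.
    return sum(noise1d(x * 2 ** o) / 2 ** o for o in range(octaves))

# Crude ASCII silhouette: one column per sample, higher value = taller peak.
COLS, ROWS = 72, 14
heights = [int((ridge(c * 0.05) + 1) / 2 * ROWS) for c in range(COLS)]
for row in range(ROWS, 0, -1):
    print("".join("^" if h >= row else " " for h in heights))
```

The real script is, of course, far richer, drawing mountains, trees, huts, and boats, and it is well worth a read.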

[Dheera]’s implementation uses a 10.3″ E-Ink panel mounted in an off-the-shelf picture frame, connected to a Raspberry Pi Zero running a forked version of [Lingdong]’s script. It does a decent job of avoiding the self-illuminated electronic look, and the result is a piece of decor you could easily stand and stare at for a long time.
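How the frames actually get from the script to the glass is down to the fork, but as a loose sketch of the kind of glue code involved (an assumption on our part, not [Dheera]’s method; the file paths and the push_to_panel() helper are placeholders for whatever driver the panel uses), one approach is to screenshot the page with headless Chromium and hand the greyscale result to the display:

```python
#!/usr/bin/env python3
"""Sketch of a slow frame loop for an E-Ink picture frame (illustrative only)."""
import subprocess
import time

from PIL import Image  # pip install pillow

PAGE = "file:///home/pi/shan-shui/index.html"  # hypothetical path to the generator
SHOT = "/tmp/frame.png"
WIDTH, HEIGHT = 1872, 1404                     # 10.3" panel resolution


def push_to_panel(img):
    """Placeholder: swap in the display call for your particular E-Ink driver."""
    img.save("/tmp/last_frame.png")


while True:
    # Render one view of the scroll with headless Chromium.
    subprocess.run(
        ["chromium-browser", "--headless", f"--screenshot={SHOT}",
         f"--window-size={WIDTH},{HEIGHT}", PAGE],
        check=True,
    )
    # The panel is greyscale (16 levels), so convert before pushing.
    push_to_panel(Image.open(SHOT).convert("L"))
    time.sleep(300)  # crawl along at one refresh every five minutes
```

Sleeping for minutes between refreshes suits E-Ink nicely, since the panel holds its image with no power draw in between, and it keeps a Pi Zero well within its comfort zone.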

Computer-generated art is making a lot of waves with the advent of AI models like DALL-E and Stable Diffusion. The ability to bring original art into existence with a simple phrase will have an undeniably profound long-term effect on the art world.

52 thoughts on “Infinitely Scrolling E-Ink Landscape Never Repeats”

    1. I agree. A video would be nice. The demo is useless, since it only scrolls one frame.

      And for the smart guys who answered before: there is this thing called timelapse.

      1. Thank you for the link! The generated landscapes are quite beautiful. It’s something I might enjoy on a panel that’s much larger than 10″, so I wonder whether they might look as nice on Samsung’s The Frame TV (QLED, but an anti-glare matte display in Art mode) vs. on an E Ink display.

  1. Not to be a jerk, but this is not a “landscape that never repeats”. The display is only capable of displaying 16^(1872×1404) distinct images, so that is the maximum number of frames it can possibly scroll through before it has to start repeating them. Just sayin’.

    1. That is such a large number that you’d have to be chewing through them at 2.35*10^3154660 frames per second to go through all combinations before the heat death of the universe, so I think the fact that it can’t refresh that quickly means there is, effectively, no repetition.

    2. The landscape doesn’t repeat. Frames might repeat, though.

      Mathematicians will now start to talk about “disjunctive sequences”, but that would get pretty unfunny and pretty boring pretty fast.

      Consider the number “pi” (π), where digits do re-appear, but the decimal representation never ends, nor enters a permanently repeating pattern.

      1. I will point out that the current value of pi is about £1.25 for a steak and kidney one.

        Frankly, with the number of frames available and the time taken to change frames, the practical difference between “it never repeats” and “you’ll probably never notice a frame repeating” is both trivial and meaningless ….

        …. unless you are really into that kind of thing (yes, I accept that theorists raise valid arguments, but they are irrelevant to most of us, not being theorists).

          1. The local farm shop bakery – they’re best straight from the oven.

            Don’t get many theorists around here …. mostly dairy farmers, who tend to deal purely in down to earth stuff.

      2. If I am looking at the display now and I see an image of a landscape, then it changes, and then I look at it again next week and see the exact same image that I saw the first time, then I’d describe that in English as “the landscape repeated”! :)

        If I say something and then I later say it again, then I am repeating myself – even if I subsequently say something else in between and after.

    3. At 0.0033 fps, each of your 16^(1872×1404) images should show up once every 10^3164777 seconds. The age of the universe is less than 4.4*10^17 seconds (an estimated 13.82 billion years). I think it’s a safe bet you won’t see the same frame twice in your lifetime.
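      If you want to sanity-check those exponents, a quick sketch using nothing but the 16 grey levels and the 1872×1404 resolution quoted above lands in the same ballpark:

      ```python
      import math

      W, H, LEVELS = 1872, 1404, 16                  # panel resolution and grey levels per pixel
      log10_frames = W * H * math.log10(LEVELS)      # log10 of 16^(1872*1404)
      print(f"distinct frames ~ 10^{log10_frames:.0f}")         # ~10^3164774

      fps = 0.0033                                   # roughly one refresh every five minutes
      log10_cycle = log10_frames - math.log10(fps)   # seconds to show every frame once
      print(f"full cycle time ~ 10^{log10_cycle:.0f} seconds")  # ~10^3164777
      ```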

  2. >The ability to bring original art into existence

    Is it original, or just unique?

    As far as I understood, the whole system is basically like making a Google image search with some keywords for what the image is supposed to contain, then selecting the top ten images out of the search results. Then you run an evolution algorithm over a bunch of noise until the result is statistically similar to the previously selected images, and finally have someone pick the best-looking result out of multiple iterations.

    At all points, the algorithm is not making original art, but comparing, “Does this look like Andy Warhol? No? Then does this look like Andy Warhol? No? Then does this… “

    1. This gets very meta very fast, but making art in the style of another artist is still original art. I don’t see any reason to not call this original art, other than some vague definition of original art being limited to being created by humans. The question is not “Does this look like Andy Warhol?” it’s “Does this look like *any* Andy Warhol?”

      In a sense, humans use a GAN-like process to refine their ability to draw a human figure, or to paint photorealistic images. You start with the basics when you’re young and continue to refine your technique until you are very good at it. If you make an impressionist-style painting, it will be compared to the famous impressionists of history. That will be how its qualities are defined. That doesn’t mean it’s not original.

    2. Go read the linked GitHub repo. It’s not AI; the author of this article just added that at the end, with no relation to the actual project they were highlighting. The algorithm used in this project seems to just use good old Perlin noise, as far as I can tell.

    3. DALL-E and similar trained NNs have effectively learnt in the same way human artists do: by looking at a whole lot of existing art, remembering what subjects look like in different styles, and then producing objects in a given style when asked. People have the idea that they are somehow “copy and pasting” from a set of sample images, but that is not the case. They are somewhat cargo-cultish in what they produce (e.g. a NN trained on a bunch of images with signatures will often generate images with a word-like squiggle, because it has learnt that art often has some sort of squiggle near a corner), but so are human artists (e.g. artists who grab random Japanese characters to slap into an image).
      Amusingly, “copy and paste from random images” is a human art style often used for concept art, and has a physical analogue in kitbashing.

  3. Playing around with the demo, I noticed a little traditional Chinese hut appear, with a clear label on its roof spelling “Pizza Hut”. Funny Easter egg :-)

    Unfortunately I can’t share the screen shot here.

  4. Just finished an artist collab event here in Hawai’i. Being the tech geek in the mix, this looks like a really fun piece for next year. The power pylons are a nice touch.

  5. This is cool! I do want to point out that Generative Art is not the same as AI art like what DALL-E or Stable Diffusion produces, though! Generative Art is programming the computer yourself, and it often takes good programming and maths skills (e.g. demoscene, VFX, graphics). And it’s often original. AI art uses ML models like GPT-3 to associate pictures with words and then tries to “predict” where pixels and colors should go. It is not original, because it’s only possible by sourcing and processing huge amounts of existing content and examples; it’s not programmed with generative or mathematical algorithms, but trained on millions of existing pictures, and it often needs tons of compute resources. This is Generative Art. It’s programmed by a human. All it needs is that HTML file, and it runs on low-end hardware.
