Can Digital Poison Corrupt The Algorithm?

These days, so much of what we see online is delivered by social media algorithms. The operations of these algorithms are opaque to us; commentators forever speculate as to whether they just show us what they think we want to see, or whether they try to guide our thinking and habits in a given direction. The Digital Poison device from [Lucretia], [Auxence] and [Ramon] aims to twist and bend the algorithm to other ends.

The concept is simple enough. The device consists of a Raspberry Pi 5 operating on a Wi-Fi network. The Pi is set up with scripts to endlessly play one or more select YouTube videos on a loop. The videos aren’t to be watched by anyone; the device merely streams them to rack up play counts and send data to YouTube’s recommendation algorithm. The idea is that as the device plays certain videos, it will skew what YouTube recommends to users sharing the same Wi-Fi network, based on perceived viewer behavior.
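The project’s actual scripts are on GitHub; purely as a sketch of the looping idea, something like the following would do it in Python, assuming mpv (with yt-dlp as its stream resolver) is installed on the Pi. The video ID and the choice of mpv here are our assumptions for illustration, not details from the project.

```python
import subprocess
import time

# Placeholder ID for illustration; the real project's playlist
# is defined in its own repository.
VIDEO_IDS = ["dQw4w9WgXcQ"]

def watch_url(video_id: str) -> str:
    """Build the YouTube watch URL for a given video ID."""
    return f"https://www.youtube.com/watch?v={video_id}"

def play_forever(ids):
    """Stream each video in turn, endlessly, via mpv (which calls yt-dlp)."""
    while True:
        for vid in ids:
            # --no-video saves CPU on a Pi; whether YouTube counts an
            # audio-only stream the same way as a normal view is an
            # open question.
            subprocess.run(["mpv", "--no-video", "--really-quiet",
                            watch_url(vid)])
            time.sleep(5)  # brief pause between plays

# play_forever(VIDEO_IDS)  # runs until killed
```

Whether such plays register as genuine views is, of course, the crux of the whole experiment.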

To achieve subtle influence, the device is built inside an unobtrusive container, the idea being that it could be quietly connected to a given Wi-Fi network and left to stream endlessly, in turn subtly influencing the viewing habits of other users on the same network.

It’s difficult to say how well this concept would work in practice. Sites like YouTube have robust per-user tracking that feeds into their recommendation algorithms, so activity from a random user on the same network might not have much influence. Conceptually, though, it’s quite interesting, and the developers have investigated ways to log the device’s operation and compare it to the recommendations fed to users on the network. Privacy provisions make this difficult, but it may be possible to pursue further research in this area. Files are on GitHub for the curious.

Ultimately, algorithms will always be a controversial thing as long as the public can’t see how they work or what they do. If you’re working on any projects of your own in this space, don’t hesitate to let us know!

[Thanks to Asher for the tip!]

22 thoughts on “Can Digital Poison Corrupt The Algorithm?”

  1. In practice, Google will soon ask you to click the reCAPTCHA “I’m a human” checkbox. After some more time it will show reCAPTCHA again, but this time asking you to select all the bicycles. Finally it will give you a special reCAPTCHA where you keep selecting the right thing, but they just don’t quite feel your humanity and ask you to try again and again (in other words, you’re now doing unpaid work, training their AI).

    1. And this is how Google literally forced me to switch to DuckDuckGo. Not because of any ideology or whatever, but because I, in the most concrete sense of the word, was unable to conduct a Google search at $work.
      I didn’t mind the odd captcha. I didn’t mind the odd AI training. And then the algorithm went one step too far, one time too many.

  2. The videos aren’t to be watched by anyone; the device merely streams them to rack up play counts and send data to YouTube’s recommendation algorithm.

    This is what most of the “See this incredible worker doing X” and other clickbait titles on YouTube are about. They’re not meant to be watched by anyone; they’re just there to rack up bot views so the uploaders can collect the ad money. That’s also why they show up in your feeds: there’s an amplification effect where bots increase the number of authentic viewers who are duped into watching copied and AI-regenerated videos.

    1. Mind, this has been going on for years now. The only new thing is the AI-compiled, voice-narrated videos like “This bear came to humans to ask for help – see the incredible story” that used to be made manually but are now completely automated.

  3. Just watch without being signed in and stream via yt-dlp to mpv. I largely discover content by search and bookmarked channels rather than by looking at what others watched.

    Spoiler alert: even on a clean session from a fresh IP, or even a VPN, there is always the most gangrenous politics injected into technology feeds. You do not escape the algorithm; it was always there. But you can just ignore it in your external player.

  4. Different computers and devices on my wireless LAN get drastically different recommendations from YT. It’s doubtful that just being on the same wireless network is going to do anything to influence it for a given user. There are so many spaces that serve many people per wireless access point that it wouldn’t really make any sense to do it that way anyway.

  5. Sharing my home LAN, my girlfriend, former roommate, and I would get served similar suggested videos, even if we went off and accessed YouTube elsewhere. It seemed that it would push the most videos from the most actively watched and interacted-with profile (mine) to everybody else’s, and then I would get a little bit of their more unique content pushed back to me. It probably didn’t help (or hurt) that my former roommate and I had shared interests in the tech and gaming space, plus some more esoteric, different interests elsewhere.

  6. It’s always interesting to me how many videos I first see in my recommendation bar and later see here on Hackaday due to an article. I always wonder how this is possible given the number of videos available on YT.

    I think we need a new cultural movement called “digital vegan”, formed by people who don’t consume algorithmically recommended stuff.

  7. Better: degooglify yourself. Adopt DuckDuckGo. Ditch Gmail, calendars, maps, etc. Don’t be logged into a Google account unless you absolutely have to log in for something, then log out when you’re done. Use extensions like NoScript, uBlock Origin, and Enhancer for YouTube; make sure googleanalytics.js and other Google scripts are on the blacklist.

    It turns out you won’t care what their algorithm does if you never look at it.

  8. RE: jacking up fake YouTube views – this has been done by Russian bot farms since, umm, I dunno, 2015 or so. Some accounts appear to have been opened a few days ago and go full speed watching videos non-stop, no sleep, etc.

    The second part is more subtle: the gazillion comments under “watched” YouTube videos. They almost always look AI-written, though occasional human-written stuff, replies to other replies, does appear. Every time politics is involved, out they come in droves, AI commenters, overwhelming everything with fake noise that makes it look like the majority of the comments contradict real-life facts. They are actually quite good at what they do, mixing real events with fake events and introducing all kinds of “facts” of untraceable origin. Why “subtle”? Because they do it in many languages, English being the most popular, obviously.

    I am surprised the real editors at YouTube are so myopic and don’t see the obvious, but I don’t work for YouTube. This was quite predictable – I recall how MySpace went about the same way and gradually became unusable, drowned in too much noise (though back then it was human-generated noise, but all the same). (BTW, Facebook is following the same MySpace trajectory – too much useless noise.)

  9. The Pi is set up with scripts to endlessly play one or more select YouTube videos on a loop.

    I have doubts as to whether this will have the intended effect. Presumably the programmers at YT know that no human is going to watch the same video (or select videos) in an endless loop, 24/7. So their programs are going to detect such activity, then somehow mark down the activity occurring on a certain account, IP address, or other fingerprint of the device accessing the content, with a note on it that says, “probably not a human”.

    It will be more interesting to see how YT handles providing content to those it has labeled non-human, IMO.
