73 Computer Scientists Created A Neural Net And You Won’t Believe What Happened Next

The Internet is a strange place. The promise of cyberspace in the 1990s was nothing short of humanity’s next great achievement. For the first time in history, anyone could talk to anyone else in a vast, electronic communion of enlightened thought and reasoned discourse. The Internet was intended to be the modern Library of Alexandria. It was beautiful, and it was the future. The Internet was the greatest invention of all time.

Somewhere along the way, someone realized people have the capacity to be idiots. It turns out nobody wants to learn anything when they can gawk at the latest floundering of their most hated celebrity. Nobody wants to have a conversation, because your confirmation bias is inherently flawed and mine is much better. Politics, religion, evolution, weed, guns, abortions, Bernie Sanders and Kim Kardashian. Video games.

A funny thing has happened since then. People started to complain they were being pandered to. They started to blame media bias and clickbait. People discovered that headlines were designed to get clicks. You’ve read Moneyball and know how the use of statistics changed baseball, right? Buzzfeed has done the same thing with journalism, and it’s working toward their one goal: getting you to click that link.

Now, finally, the Buzzfeed editors may be out of a job. [Lars Eidnes] programmed a computer to generate clickbait. It’s all done with a recurrent neural network trained on millions of headlines gathered from the likes of Buzzfeed and the Gawker network. Once every twenty minutes, a freshly generated story is posted to Click-O-Tron, the only news website you won’t believe. There’s even voting, like reddit, so you know the results are populist dross.
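For the curious, the gist looks something like this. To be clear, this is not [Lars Eidnes]’s actual code, just a toy sketch of a character-level recurrent network; PyTorch and the three-headline corpus are stand-ins of my choosing, where the real thing trains on millions of scraped headlines.

```python
import torch
import torch.nn as nn

# Tiny illustrative corpus; the real model was trained on millions of
# headlines scraped from Buzzfeed, Gawker, and friends.
headlines = [
    "You Won't Believe What Happened Next",
    "10 Secrets Doctors Don't Want You To Know",
    "This One Weird Trick Will Change Your Life",
]
text = "\n".join(headlines) + "\n"
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    """Character-level recurrent model: predict the next character."""
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, h=None):
        z, h = self.rnn(self.embed(x), h)
        return self.head(z), h

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
data = torch.tensor([stoi[c] for c in text])

# Train to predict each character from the ones before it.
for step in range(300):
    logits, _ = model(data[:-1].unsqueeze(0))
    loss = nn.functional.cross_entropy(logits.squeeze(0), data[1:])
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample a fresh "headline" one character at a time, starting from a
# newline (which separates headlines in the training text).
idx = torch.tensor([[stoi["\n"]]])
h, out = None, []
for _ in range(80):
    logits, h = model(idx, h)
    idx = torch.multinomial(logits[0, -1].softmax(-1), 1).unsqueeze(0)
    c = chars[idx.item()]
    if c == "\n" and out:
        break
    out.append(c)
print("".join(out))
```

Scale the training set up to those millions of headlines, sample once every twenty minutes, and you have the bones of Click-O-Tron.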

I propose an experiment. Check out the comments below. If the majority of the comments are not about how Markov chains would be better suited in this case, clickbait works. Prove me wrong.

72 thoughts on “73 Computer Scientists Created A Neural Net And You Won’t Believe What Happened Next”

  1. I’m not too impressed. The headlines seem like the same kind of computer-generated nonsense we’ve been seeing for a while now.

    I don’t know much about Markov chains but I’d bet they would be better than this.

  2. So, I did a Google search on the following tags:
    “bernie sanders kim kardashian evolution politics”
    Guess whose thread came up ninth in the search results, Brian.
    Then I decided to post a reply on this thread.
    What did it feel like when you lost all hope and your soul left your body?

      1. Technically, yes, but I used to work for a company whose name is the opposite of Questions, and their scheme and business model are phenomenally worse than Buzzfeed’s. For every “10 Best Something Something” article, you have to click 3 times to get through one picture. A Top 10 slideshow requires 30 clicks to complete, and they’ve built close to 25 different sites to slap this model onto. They have completely changed my definition of “clickbait” and given me a new high-water mark for “douchebaggery”.

    1. Yeah, they “deliver” shit that other websites have made, largely without attribution, but with copious additional ad revenue from their ad-infested website. It would be like me selling you a really tasty pizza by buying one from Papa John’s for $10 and convincing you to buy it off me for $15, without ever telling you where I got it.

  3. no chains, no matter how you mark them off. this is a terrible thing. it’s only measuring the dilution level and shouldn’t be used to justify.

    keep up the [necessarily inferior but residually appreciated] work.

  4. Well, I’m glad to hear clickbait is written by computers; I’d be worried for the sanity of any humans who had to write it.
    Now, can we create Bayesian filters for it, like we have for spam?
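The spam comparison above is apt: a naive Bayes classifier over headline n-grams is the same recipe the classic spam filters used. Here is a minimal sketch of that idea, assuming scikit-learn and a hand-labeled toy corpus, neither of which comes from the article:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hand-labeled toy corpus: 1 = clickbait, 0 = not. A usable filter
# would need thousands of labeled headlines, not six.
headlines = [
    ("You Won't Believe These 10 Shocking Facts", 1),
    ("This One Weird Trick Doctors Hate", 1),
    ("What Happened Next Will Amaze You", 1),
    ("Kernel 4.2 Released With Updated Scheduler Patches", 0),
    ("Measuring Thermal Drift In Quartz Oscillators", 0),
    ("City Council Approves Budget For Road Repairs", 0),
]
texts, labels = zip(*headlines)

# Bag-of-words (with bigrams) feeding a multinomial naive Bayes model.
clickbait_filter = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    MultinomialNB(),
)
clickbait_filter.fit(texts, labels)

# Probability that an unseen headline is clickbait; an RSS reader could
# simply drop anything that scores above a threshold.
score = clickbait_filter.predict_proba(["You Won't Believe This Kernel Patch"])[0][1]
print(f"P(clickbait) = {score:.2f}")
```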

  5. I think the content here is actually substantially better than what Markov chains could produce. A Markov chain could, at best, produce a mashup of previous clickbait headlines, not entirely new ones.
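That point is easy to make concrete. A word-level Markov chain can only emit word-to-word transitions it has already seen, so every output is a splice of the training headlines. A minimal sketch, with an illustrative corpus:

```python
import random
from collections import defaultdict

# Illustrative corpus; "^" and "$" mark headline start and end.
headlines = [
    "You Won't Believe What This Cat Did",
    "What This Doctor Did Will Shock You",
    "You Won't Believe What Happened Next",
]

# table[word] -> every word that ever followed it in the corpus
table = defaultdict(list)
for h in headlines:
    words = ["^"] + h.split() + ["$"]
    for a, b in zip(words, words[1:]):
        table[a].append(b)

# Walk the chain: each step only re-uses a transition seen in training,
# so the output is always stitched together from existing headlines.
word, out = "^", []
while True:
    word = random.choice(table[word])
    if word == "$":
        break
    out.append(word)
print(" ".join(out))  # e.g. "You Won't Believe What This Doctor Did Will Shock You"
```

Every line it prints is a remix of pieces it has seen before, which is exactly the “mashup, not entirely new” behavior described above.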

  6. Love this post!
    Re reddit “There’s even voting, like reddit, so you know the results are populist dross.”
    Had me in tears!

    Reddit could be great, but the most useful subreddits are completely dead, whereas you can find every kind of porn on there.

    1. When I saw the headline (and the opening couple of lines of text), I did wonder if they’d made the neural net so you could filter out anything with a high ‘clickbait’ factor from sites/feeds/Hackaday. But alas, we just have to rely on the crowd to carry pitchforks and see where the mob takes us.

  7. By far my favorite part of the linked article: “Ilya Sutskever and Geoff Hinton trained a character level RNN on Wikipedia, and asked it to complete the phrase “The meaning of life is”. The RNN essentially answered “human reproduction”.”

    1. And from the comments: “I fed MegaHAL all of Lewis Carroll’s works from Project Gutenberg, and after a little chat it suggested ‘What is the use of computers without pictures or conversation?’”

      I love this thing. Even if we’re using human pattern recognition (and confirmation bias) to cherry-pick the best examples, it’s still a wonderful little program.

  8. Markov chains will kill Obama and promote Saint Snowden to the presidency. Meanwhile, the NSA is busy decrypting Putin’s grocery list, written on a typewriter. Kim Kardashian, oh yeah.

  9. “This guy thinks his cat was drunk for five years” actually made me laugh out loud and wake my wife, because I had a cat that did sometimes act like that. Other than that, mostly crap.

  10. I would have definitely tried a Markov chain in this case. Previous click history is not relevant to choosing the “clickbait”, because current trends are what attract attention. A neural net is interesting, but is it overkill in this case?

      1. Markov chains are a bit too simple. They’re a useful tool, and they prove some interesting points, but they’re not meant for anything complex. Anything past fooling post-modern journals of textual anuspection is a bit much for them.

      1. The problem with your experiment isn’t that it worked, but that humans are smart enough to catch on to bull. Maybe you fool me once or twice, but eventually classical conditioning will tell me I’m being falsely lured into another poorly written article. Well, maybe. I keep landing on these around here.

        I kid.

        Seriously though, clickbait only works a few times before people realize they aren’t getting the satisfaction they expected. Maybe you made your money for the day, but you may eventually run out of customers.

  11. “Turns out nobody wants to learn anything when you can gawk at the latest floundering of your most hated celebrity…”

    I dunno about anyone else… but I learn metric shit-tons of stuff every day on the Internet. And I share it with the people around me, so they learn by association. Today we learned to strap a 2 x 4 to your tire, perpendicular across the outer circumference, to get out of mud. I learn all sorts of stuff on Hackaday every day… and I am no hacker… well, not in the broadest sense of the term. Who knows what I’ll learn tomorrow?

    1. And THIS is inherently the problem. The goal posts for what constitutes learning have shifted.

      Instead of learning being a process that leads to a slow but inevitable move toward mastery (even if you never achieve it), it has been redefined as the consumption of tidbits people think might be useful at some point.

  12. On ‘Justin Bieber’s campaign gun laws’:

    “This week’s YouTube training offers what appears to be a controversial opportunity for everyone to think, and tells all about their history.”
