LLM-Generated Newspaper Provides Ultimate In Niche Publications

... does this count as fake news?

If you’re reading this, you probably have some fondness for human-crafted language. After all, you’ve taken the time to navigate to Hackaday and read this, rather than ask your favoured LLM to trawl the web and summarize what it finds for you. Perhaps you have no such pro-biological bias, and you just don’t know how to set up the stochastic parrot feed. If that’s the case, buckle up, because [Rafael Ben-Ari] has an article on how you can replace us with a suite of LLM agents.

The AI-focused paper has a more serious aesthetic, but it’s still seriously retro.

He actually has two: a tech news feed, focused on the AI industry, and a retrocomputing paper based on SimCity 2000’s internal newspaper. Everything in both those papers is AI-generated; specifically, he’s using opencode to manage a whole dogpen of AI agents that serve as both reporters and editors, each in their own little sandbox.

Using opencode like this lets him vary the model by agent, handing cheaper tasks to small, locally-run models and saving the tokens of larger, more capable models for the computationally intensive work. With the right prompting, you could produce a niche publication covering exactly the topics that interest you, and none of the ones that don't. In theory, you could use this toolkit (the implementation of which [Rafael] has shared on GitHub) to replace your daily dose of Hackaday, but we really hope you don't. We'd miss you.
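If you're curious what that looks like in practice, here's a minimal sketch of the general pattern — and to be clear, this is not [Rafael]'s actual code or opencode's real API: each "role" gets its own model and its own private context, and an editor pass runs over the reporter's draft. The Agent class and the call_model() stub are purely illustrative; swap in whatever backend you actually use.

```python
# Illustrative sketch of per-role agents with independent contexts.
# call_model() is a stand-in for a real backend (a local model, an
# HTTP API, etc.); nothing here is opencode's actual interface.

from dataclasses import dataclass, field

@dataclass
class Agent:
    role_prompt: str   # the "hat" this agent wears
    model: str         # e.g. a small local model for cheap tasks
    history: list = field(default_factory=list)  # private context

    def run(self, task: str) -> str:
        # Each agent keeps its own message history, so prompts given
        # to one agent never leak into another's context.
        self.history.append({"role": "user", "content": task})
        reply = call_model(self.model, self.role_prompt, self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

def call_model(model: str, system: str, history: list) -> str:
    # Stub: replace with an actual model call.
    return f"[{model}] draft for: {history[-1]['content']}"

reporter = Agent("You are a reporter. Write a short news item.", "small-local-model")
editor = Agent("You are an editor. Tighten the copy.", "bigger-model")

draft = reporter.run("Summarize today's retrocomputing news.")
final = editor.run(f"Edit this draft:\n{draft}")
print(final)
```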

That’s news covered, and we’ve already seen the weather reported by “AI”. Now we just need an agentic sports section and some AI-generated funny papers, and that’d be the whole newspaper. If only you could trust it.

Story via Reddit.

4 thoughts on “LLM-Generated Newspaper Provides Ultimate In Niche Publications”

  1. This is one of the better uses of LLMs. I like it.

    I may be missing something here, but I really don’t get the point of the whole “containerized sandboxing” for LLMs. Why are we pretending there are “reporters” and “editors”? It’s the same LLM doing it behind the scenes. It should instead be scrape web -> see if something is relevant -> summarise -> repeat.

    1. I mean, if you change the context text (message history, whatever you call it), you get access to the LLM without any previous prompts having affected the output. It’s just a matter of, I guess, making the LLM wear different hats: “you’re an editor”, “you’re a reporter”, and so on.

      I am working on a Python XMPP chatbot system and I will be honest, I too got really caught up in anthropomorphizing LLM-run personalities. But then it got really complicated, and I eventually came to my senses.

      1. LLMs being forward-only in their generation, I’m not surprised that passing the text again as input yields some improvement. It lets the ending of the text affect the earlier parts, which is otherwise not possible.

        Whether “you are the editor” or “your job is to make the text clear and concise” as a prompt gives a better result is something worth testing.
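        Something like this hypothetical A/B harness would do for that test; run_model() is a stand-in for whatever backend you actually use:

        ```python
        # Hypothetical harness for A/B-testing system prompts; run_model()
        # is a stand-in for a real client (local model, HTTP API, etc.).
        PROMPTS = {
            "persona": "You are the editor.",
            "instruction": "Your job is to make the text clear and concise.",
        }

        def run_model(system_prompt: str, text: str) -> str:
            # Stub: replace with an actual model call.
            return f"[{system_prompt}] -> edited: {text}"

        draft = "Some rambling draft copy, in dire need of tightening up."
        for label, system_prompt in PROMPTS.items():
            print(f"--- {label} ---")
            print(run_model(system_prompt, draft))
        ```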


