Peering Down Into Talking Ant Hill

Watching an anthill has an air of fascination. Thousands of ants move about, communicating with one another as they work toward a goal as a collective whole. We humans project a complex inner world onto each of these tiny creatures to drive the narrative. But what if we could peer down into a miniature world where the ants spoke English? (PDF whitepaper)

Researchers at Stanford University and Google Research have released a paper about simulating human behavior using multiple Large Language Model (LLM) agents. The simulation has a few dozen agents that can move around a small town, run errands, and communicate with each other. Each agent has a short description that provides context to the LLM. In addition, agents have memories of objects, other agents, and observations that they can retrieve, which allows them to create a plan for their day. The memory is a time-stamped text stream that the agent reflects on, deciding what is important. The agent can also replan and figure out what it wants to do next.
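
The paper's implementation isn't reproduced here, but a minimal sketch of that memory stream, retrieving entries by a blend of recency and crude keyword relevance before pasting them into a prompt, might look something like the following. The class names and scoring weights are hypothetical.

```python
# Hypothetical sketch of a time-stamped memory stream with simple retrieval.
# Names and scoring are illustrative, not the paper's actual implementation.
import math
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    timestamp: float = field(default_factory=time.time)

class MemoryStream:
    def __init__(self):
        self.memories: list[Memory] = []

    def observe(self, text: str) -> None:
        """Append a new time-stamped observation to the stream."""
        self.memories.append(Memory(text))

    def retrieve(self, query: str, k: int = 5) -> list[Memory]:
        """Rank memories by recency plus crude keyword overlap with the query."""
        now = time.time()
        query_words = set(query.lower().split())

        def score(m: Memory) -> float:
            recency = math.exp(-(now - m.timestamp) / 3600.0)  # decays over ~an hour
            relevance = len(query_words & set(m.text.lower().split()))
            return recency + relevance

        return sorted(self.memories, key=score, reverse=True)[:k]

# Retrieved memories get pasted into the agent's next prompt:
stream = MemoryStream()
stream.observe("Isabella is planning a Valentine's Day party at the cafe.")
stream.observe("Klaus spent the morning studying at the library.")
context = "\n".join(m.text for m in stream.retrieve("What should I do this afternoon?"))
prompt = f"Relevant memories:\n{context}\n\nWhat do you do next?"
```

The real system also folds in an LLM-rated importance score for each memory, which this toy version omits.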

The question is, does the simulation seem life-like? In one fascinating example, the paper’s authors seeded one agent (Isabella) with the intent to throw a Valentine’s Day party. No other information was provided. Yet several agents arrived at her house later in the day to party: Isabella invited friends, and those agents passed the invitation along to others.

A web-accessible demo replays recorded data from an earlier run. It doesn’t showcase the influence a user can exert on the world when the simulation runs live, where thoughts and suggestions can be issued to an agent to steer its actions. You can, however, pause the simulation to view the conversations between agents. Overall, it is incredible how life-like the simulation can be. The language of the conversations is quite formal, and running the simulation burns significant amounts of computing power. Perhaps there could be a subconscious of sorts, where certain behaviors or observations are coded into the agent instead of querying the LLM for every little thing (which sort of sounds like what people do).
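
That subconscious is pure speculation on our part, but the idea is easy to picture: a lookup table of reflexive responses that gets checked before falling back to the expensive model call. The cue strings and the llm_call placeholder below are invented for illustration.

```python
# Speculative sketch of a "subconscious" layer: reflexive responses are cached,
# and only novel situations fall through to the (placeholder) LLM call.
REFLEXES = {
    "in bathroom, morning": "brush teeth",
    "in kitchen, hungry": "make breakfast",
    "phone ringing": "answer the phone",
}

def decide(situation: str, llm_call) -> str:
    """Return a cached reflex if one matches, otherwise ask the LLM."""
    for cue, action in REFLEXES.items():
        if cue in situation:
            return action
    return llm_call(f"Situation: {situation}\nWhat do you do next?")

# Example with a stubbed-out model call:
print(decide("in bathroom, morning routine", llm_call=lambda p: "(ask the LLM)"))
```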

There’s been an exciting trend of combining LLMs with a form of backing store, like combining Wolfram Alpha with ChatGPT. Thanks [Abe] for sending this one in!

14 thoughts on “Peering Down Into Talking Ant Hill”

  1. The mode of thinking they’re simulating is known as opportunistic behavior. Rather than having internal thought processes, these software agents have stored information which combines with external information and cues to produce behavior.

    It’s kinda like, when you walk into a room, and because of some distraction you forget why you went there. With your original intent missing, you stand there in the room and see a letter on your desk. You open and read the letter, you see it’s a bill, you know what to do with bills, so you open your banking app and pay the bill. You never planned or thought about doing that, but since you were there, and the letter was there… ping goes the pinball machine and the ball bounces towards the next interaction.

    The LLM uses a statistical model which tries to predict the next logical step from the present situation, so it is exactly like this. If you’re in a bathroom, brush your teeth. If you’re in the car, drive to work, etc.

    1. This is a good way to think of it, because the prompts to the LLM are always self-contained “one-shots”, but they do try to account for the obvious failure mode of “I forgot why I came to this room! Oh well, I guess I’ll just…”, and (hopefully) get the agents to maintain cohesive long-term personalities, plans, and routines by storing/including a self-image, goals, tasks, etc.

      It also tries to provide all “relevant” memories to the current context, ranked by a combination of the age of the memory and other details like who/what/where from the memory.

      There are also different types of small prompts that review/organize/summarize all these different internal details stored “in the agent”, which are called at different times as needed. Lots of small “introspection” updates like “summarize this conversation” or “break this goal down into tasks”, besides the main “What do you do next?” prompt (see the sketch after this thread).

      The full paper on arXiv is very approachable, fairly math-light (if you understand basic ideas like “key:value pairs” you’ll be fine), and absolutely fascinating to read. The overall experiment is really a test of whether any of the techniques they try actually help create agents that feel “cohesive”, and they even try removing different subsystems to see the effect on the agents’ intelligence.
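
To illustrate the prompt orchestration described in that reply, here is a minimal, hypothetical sketch in which each “introspection” step is its own self-contained one-shot prompt filled from the agent’s stored state. The template wording and the llm stand-in are assumptions, not the paper’s actual prompts.

```python
# Hedged sketch of the "small prompts" idea: each introspection step is a
# self-contained one-shot prompt built from the agent's stored state.
PROMPTS = {
    "summarize_conversation": "Summarize this conversation in one sentence:\n{conversation}",
    "decompose_goal": "Break this goal down into a short list of tasks:\n{goal}",
    "next_action": (
        "You are {name}. {self_description}\n"
        "Relevant memories:\n{memories}\n"
        "Current plan: {plan}\n"
        "What do you do next?"
    ),
}

def introspect(kind: str, llm, **state) -> str:
    """Fill the chosen template with the agent's state and ask the model once."""
    return llm(PROMPTS[kind].format(**state))

# Example with a stubbed model:
def fake_llm(prompt: str) -> str:
    return f"<model answer to: {prompt[:40]}...>"

print(introspect("decompose_goal", fake_llm, goal="host a Valentine's Day party"))
```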
