Procrastineering And Simulated Annealing

The software for the Supercon badge went down to the wire this year, with user-facing features being coded up as late as Thursday morning. This left “plenty of time” to flash the badges on Thursday afternoon, but they were admittedly still warm as the first attendees filed in on Friday morning. While I’ve always noted that the last minute is the best minute, this was a little close, and frankly there was an uncaught bug that we would have found if we’d had a few more hours to just play with it before showtime.

But we were by no means slacking. On the contrary, a few of us were putting in nights and full weekend days for six or eight weeks beforehand. The problem was hard, the path to a solution was never clear, and it changed depending on the immovability of the roadblocks we hit along the way. This is, honestly, a pretty normal hacker development pattern.

What was interesting to me was how similar the process was to simulated annealing. This is an optimization method where you explore more of the solution space in the beginning, when the metaphorical “temperature” is hot. Later, as you’re getting closer to a good solution, you want to refine in smaller and smaller steps – it cools down. This rate of “cooling” is a tremendously important parameter in practice.
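The hot-to-cold search pattern can be sketched in a few lines of Python. Everything here — the toy objective, the geometric cooling schedule, the parameter values — is an illustrative assumption, not anything from the badge firmware:

```python
import math
import random

def anneal(f, x0, temp=10.0, cooling=0.995, steps=3000, seed=42):
    """Minimize f(x) by simulated annealing.

    While 'hot', proposals are large and even worsening moves are often
    accepted; as temp cools geometrically, steps shrink and the search
    refines. All parameters here are illustrative defaults.
    """
    rng = random.Random(seed)
    x, best = x0, x0
    for _ in range(steps):
        # Proposal width tracks the temperature: wide exploration early on.
        candidate = x + rng.gauss(0, temp)
        delta = f(candidate) - f(x)
        # Always accept improvements; accept worse moves with probability
        # exp(-delta / temp), which shrinks as things cool.
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-12)):
            x = candidate
            if f(x) < f(best):
                best = x
        temp *= cooling  # the cooling rate is the crucial tuning knob
    return best

# Toy objective: a parabola with its minimum at x = 3.
result = anneal(lambda x: (x - 3.0) ** 2, x0=10.0)
```

Cool too fast and the search freezes in whatever neighborhood it happens to be occupying; cool too slowly and it never settles down — which is exactly the tension between exploring the solution space and hitting a freeze date.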

And this is exactly the way the badge development felt. We were searching in a very big solution space in the beginning, and many aspects of the firmware infrastructure were in flux. As it got closer and closer to a working solution, more and more of the code settled down, and the changes got smaller. In retrospect, this happened naturally, and you can’t always control or plan for the eureka moments, but I wonder if it’s worth thinking of a project this way. Instead of milestones, temperatures? Instead of a deadline, a freeze date.

One thought on “Procrastineering And Simulated Annealing”

  1. Simulated annealing was used in the US to assign paired digital channels during the transition from NTSC to ATV. The assignment of analog TV channels was determined by the spacing of co-channel, first-adjacent-channel, +/-2 through +/-5 channel, and +/-14 and +/-15 channel separations in a huge grid of interlocking and interdependent geographical and spacing limits. After decades of really smart communications engineers mining the spectrum, it was grid-locked.
    They had a requirement to provide a complete 1-on-1 digital TV channel allocation overlay with minimal interference. Each analog station got a second DTV channel for simulcasting.
    No matter what, nearly any dropped-in channel created interference with the analog channels. The question was how bad it was going to be. A full 0.1 km(!!!!!) grid analysis was done on all the terrain and on existing station power, antenna patterns, locations, and received-population centroids, starting with a random channel assignment applied as a paired starting point. Then, using interference “penalty” rankings, channels were randomly flipped, the analysis run again, and the changes either kept or abandoned — and iterate, and iterate, and iterate.
    The task was to find a possibly not perfect but highly optimal nationwide solution to assign paired channels with the analog stations.
    It ran on Sun UltraSPARC enterprise servers and took about a month to produce an optimized solution, which was then entered as a “seed” for a further simulated-annealing run.
    The way you know it’s getting close to an optimized solution is when the overall penalties stop dropping with further iterations. I forget how many full iterations, but IIRC maybe 8-ish.

    It worked. And worked really well. A solution to approximately 2000 simultaneous and interdependent variables.

    Name of the program?
    FCC Longley Rice
