Better Coding Through Sketching

Back in the late 1970s and early 1980s, engineering students would take a few semesters of drafting and there would usually be a week or two of “computer-aided drafting.” In those days, that meant punching cards that said RECTANGLE 20,30 or something like that and getting the results on a plotter. Then we moved on to graphical CAD packages, but lately, some have gone back to describing rather than drawing complex designs. Cornell University researchers are trying to provide the same options for coding. They’ve built a Jupyter notebook extension called Notate that allows you to sketch and handwrite parts of programs that interact with traditional computer code. You can see a video about the work below.

The example shows quantum computing, with sketches that generate quantum circuits, but the idea could be applied to anything. Naturally, there is machine learning involved.

Continue reading “Better Coding Through Sketching”

Blog Title Optimizer Uses AI, But How Well Does It Work?

[Max Woolf] sometimes struggles to create ideal headlines for his blog posts, and decided to apply his experience with machine learning to the problem. He asked: could an AI be trained to optimize his blog titles? It is a fascinating application of natural language processing, and [Max] explains all about what it does and how it works.

The machine learning framework [Max] uses is GPT-3, a language model that works with natural-seeming human language and can be tweaked in different ways. [Max] uses OpenAI’s GPT-3 API (which, by the way, is much easier to experiment with than one might think), and here is the basic workflow for his title optimizer, sketched roughly in code after the list:

  1. The optimizer takes as input a blog post title to optimize.
  2. OpenAI’s pre-trained GPT-3 engine is used to generate six alternate titles.
  3. For each of those alternate titles, a fine-tuned version of GPT-3 is consulted to judge how “good” they are based on custom training data. (“Good” in this context means “similar to titles of successful submissions on Hacker News”, but more on that in a moment.)
  4. The results are printed so the candidate titles can be compared.
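To give a feel for what that pipeline looks like in practice, here is a minimal, hypothetical sketch using OpenAI’s Python library. The model name, prompt wording, and fine-tune ID are placeholders assumed for illustration, not [Max]’s actual code, so treat it as a rough outline of the idea rather than his implementation.

```python
# A rough, hypothetical sketch of the title-optimizer pipeline described above.
# The model name, prompt wording, and fine-tune ID below are assumptions made
# for illustration; see [Max]'s post for his actual code.
import openai

openai.api_key = "sk-..."  # your OpenAI API key


def generate_alternate_titles(title, n=6):
    """Ask a stock GPT-3 model for n alternate phrasings of a blog post title."""
    prompt = (
        "Rewrite the following blog post title to be more compelling:\n\n"
        f"{title}\n\nRewritten title:"
    )
    response = openai.Completion.create(
        model="text-davinci-002",  # assumed base model
        prompt=prompt,
        max_tokens=32,
        n=n,
        temperature=0.9,
    )
    return [choice.text.strip() for choice in response.choices]


def score_title(title):
    """Ask a fine-tuned GPT-3 model, trained on Hacker News data, to rate a title."""
    response = openai.Completion.create(
        model="ft-personal-hn-titles",  # placeholder fine-tune ID
        prompt=f"{title}\n\n###\n\n",
        max_tokens=1,
        temperature=0,
    )
    return response.choices[0].text.strip()  # e.g. a "good"/"bad" label


original = "Blog Title Optimizer Uses AI, But How Well Does It Work?"
for candidate in [original] + generate_alternate_titles(original):
    print(score_title(candidate), "-", candidate)
```

The split between a general-purpose generator and a purpose-trained judge is the heart of the approach; the rest is plumbing.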

Continue reading “Blog Title Optimizer Uses AI, But How Well Does It Work?”

A 3D-Printed Nixie Clock Powered By An Arduino Runs This Robot

While it is hard to tell with a photo, this robot looks more like a model of an old-fashioned clock than anything resembling a Nixie tube. It’s the kind of project that could have been created by anyone with a little bit of Arduino tinkering experience. In this case, the 3D printer used by the Nixie clock project is a Prusa i3 (which is the same printer used to make the original Nixie tubes).

The Nixie clock project was started by a couple of students from the University of Washington who were bored one day and decided to have a go at creating their own timepiece. After a few prototypes and tinkering around with the code, they came up with a design for the clock that was more functional than ornate.

The result is a great example of how one can create a functional and aesthetically pleasing project with a little bit of free time.

Confused yet? You should be.

If you’ve read this far then you’re probably scratching your head and wondering what has come over Hackaday. Should you not have already guessed, the paragraphs above were generated by an AI (in this case Transformer), while the header image came courtesy of the popular DALL-E Mini, now rebranded as Craiyon. Both of them were given the most Hackaday title we could think of, “A 3D-Printed Nixie Clock Powered By An Arduino Runs This Robot”, and told to get on with it. This exercise was sparked by curiosity following the viral success of AI generators, and it posed the question of whether an AI could make a passable stab at a Hackaday piece. Transformer uses a prompt model in which the operator is given a choice of several sentence fragments, so the text reflects those choices, but the act of choosing could equally well have followed any of the options.

For a Hackaday writer the text is both reassuring, because it doesn’t manage to convey anything useful, and slightly shocking, because from just that single prompt it has created meaningful, clear sentences which on another day might have flowed from a Hackaday keyboard as part of a real article. It’s likely that we’ve found our way into whatever corpus trained its model, and that subject matter so Hackaday-targeted would cause it to zero in on that part of its source material, but despite that it’s unnerving to realise that a computer somewhere might just have your number. For now though, Hackaday remains safe at the keyboards of a group of meatbags.

We’ve considered the potential for AI garbage before, when we looked at GitHub Copilot.

Sharing Your Projects With The World: How?

So you just built a super-mega robot project that you want to share with the world. Super! But now you’re faced with an entirely new and different problem: documenting the process for the world to see. It’s enough to drive you back down into the lab.

  • What software should I use to create my project site?
  • How deep down the rabbit hole should I go when it comes to documenting the project?
  • What toppings do I want on my something-to-eat-while-hacking pizza?

We’re not going to get into the age-old “pineapple or no pineapple” debate, but it’s important to note that the topic of how to share a project with the world has as many choices as toppings, and just as many opinions. The answer will always be simple: Do what works best for you!

The purpose of this article is to give some options to somebody considering sharing their projects online. There isn’t enough room to talk about every single option available to a hacker, so be sure to fill in your favorite options in the comments below. Let’s dive in!

Continue reading “Sharing Your Projects With The World: How?”

Author with book

Learn All About Writing A Published Technical Book, From Idea To Print

Ever wondered what, exactly, goes into creating a technical book? If you’d like to know the steps that bring a book from idea to publication, [Sara Robinson] tells all about it as she explains what went into co-authoring O’Reilly’s Machine Learning Design Patterns.

Her post was written in 2020, but don’t let that worry you, because her writeup isn’t about the book itself so much as it is about the whole book-writing process, and her experiences in going through it. (By the way, every O’Reilly book has a distinctive animal on the cover, and we learned from [Sara] that choosing the cover animal is a slightly mysterious process, and is not done by the authors.)

It turns out that there are quite a few steps that need to happen — like proposals and approvals — before the real writing even starts. The book writing itself is a process, and like most processes to which one is new, things start out slow and inefficient before they improve.

[Sara] also talks a bit about burnout, and her advice on dealing with it is as insightful as it is practical: begin by communicating honestly how you are feeling to the people involved.

Over the years I’ve learned that people will very rarely guess how you’re feeling and it’s almost always better to tell them […] I decided to tell my co-authors and my manager that I was burnt out. This went better than expected.

There is a lot of code in the book, and it has its own associated GitHub repository should you wish to check some of it out.

By the way, [Sara] celebrated publication by making a custom cake, which you can see near the bottom of her blog post. This comes as no surprise seeing as she has previously managed to combine machine learning with her love of making cakes!

The Anxiety Of Open Source: Why We Struggle With Putting It Out There

You’ve just finished your project. Well, not finished, but it works and you’ve solved all the problems worth solving, and you have a thing that works for you. Then you think about sharing your creation with the world. “This is cool” you think. “Other people might think it’s cool, too.” So you have to take pictures and video, and you wish you had documented some more of the assembly steps, and you have to do a writeup, and comment your code, and create a repository for it, maybe think about licensing. All of a sudden, the actual project was only the beginning, and now you’re stressing out about all the other things involved in telling other people about your project, because you know from past experience that there are a lot of haters out there who are going to tear it down unless it’s perfect, or even if it is, and even if people like it they are going to ask you for help or to make one for them, and now it’s 7 years later and people are STILL asking you for the source code for some quick little thing you did and threw up on YouTube when you were just out of college, and of course it won’t work anymore because that was on Windows XP when people still used Java.

Take a deep breath. We’ve all been there. This is an article about finding a good solution to sharing your work without dealing with the hassle. If you read the previous paragraph and finished with a heart rate twice what you started with, you know the problem. You just want to share something with the world, but you don’t want to support that project for the rest of your life; you want to move on to new and better and more interesting projects. Here are some tips.

Continue reading “The Anxiety Of Open Source: Why We Struggle With Putting It Out There”

A Neural Network Can Now Be Your Writing Assistant

Writing is a difficult job, though as a primarily word-based site we may be a little biased here at Hackaday. Not only does a writer have to know the basics, like what a semicolon is and when to use one, but they also need to build sentences that convey information in a manner that is pleasant to read. As many commenters like to point out, even we struggle with this on occasion (lauded and scholarly as we are).

Wouldn’t it be better if we could let our computers do the heavy lifting for us? After all, a monkey with infinite time will eventually write Shakespeare and all that. Surely, a computer can be programmed to do all that fancy word assembly while we sit back and enjoy some coffee. Well, that’s what [Robin Sloan] set out to do with a recurrent neural network-powered writing assistant.

Alright, so it doesn’t actually write completely on its own. Instead, [Robin’s] software takes advantage of [JC Johnson’s] torch-rnn project, and integrates it into Atom to autocomplete sentences. [Robin] trained his neural network on hundreds of old issues of the sci-fi magazines Galaxy and IF Magazine, which are available at the Internet Archive. Once the server and corresponding Atom package are installed, a writer can simply push the Tab key and the sentence will be completed.
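The plumbing behind that Tab key is conceptually simple: the editor sends the text typed so far to a local server wrapping the trained torch-rnn model, and the server replies with a continuation. As a purely hypothetical illustration of that round trip (the endpoint name and response format below are invented, not [Robin]’s actual server API), a client could look something like this:

```python
# Hypothetical client for a local sentence-completion server, for illustration
# only. The /complete endpoint and the JSON response shape are assumptions,
# not the actual API of [Robin]'s torch-rnn server.
import requests

SERVER = "http://localhost:8080"


def complete_sentence(start_text, length=40):
    """Send the text typed so far and return a model-generated continuation."""
    response = requests.get(
        f"{SERVER}/complete",                        # invented endpoint name
        params={"start": start_text, "length": length},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()["completion"]             # assumed response field


if __name__ == "__main__":
    typed = "The ship drifted past the outer beacon and "
    print(typed + complete_sentence(typed))
```

In [Robin]’s setup the Atom package plays the role of this client, firing off the request when Tab is pressed and splicing the reply into the editor buffer.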

The results are interesting. [Robin] himself says “it’s like writing with a deranged but very well-read parrot on your shoulder.” While it’s not likely to be used as a serious writing tool anytime soon, the potential is certainly intriguing. When trained on relevant source material, the integration into software like Atom could be very useful. If a neural network can compose music, surely it can write some silly tech articles.

[thanks to Tim Trzepacz for the tip!]

Typewriter image: LjL (Public domain).