Assessing Developer Productivity When Using AI Coding Assistants

We have all seen the advertisements and glossy flyers for coding assistants like GitHub Copilot, which promise to use ‘AI’ to help you write code and complete programming tasks faster than ever. How much of that promise has actually worked out since Copilot’s introduction in 2021? According to a recent report by the code analysis firm Uplevel, there are no significant benefits, while GitHub Copilot also introduced 41% more bugs. Commentary from development teams suggests that while the coding assistant makes for faster code writing, debugging or maintaining that code afterwards often isn’t realistic.

None of this should be a surprise, of course, as it mirrors what we found when we covered this topic back in 2021. GitHub Copilot and its kin are effectively Large Language Models (LLMs) trained on codebases, and are best considered massive autocomplete systems targeting code. Much like autocomplete on a smartphone, the experience is often jarring and full of errors. Perhaps the fairest assessment of GitHub Copilot is that it can be helpful when writing repetitive, braindead code that requires very little understanding to get right, while it’s bound to helpfully carry in a bundle of sticks and a dead rodent like an overly enthusiastic dog when all you wanted was for it to grab that spanner.
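To make that ‘massive autocomplete’ framing concrete, here’s a toy sketch of the idea: a word-level bigram model that ‘learns’ from a few lines of code and then repeatedly predicts the most likely next token. This is of course nothing like Codex’s actual internals, which use transformer networks over subword tokens, but the generation loop has the same shape: predict, append, repeat, with no notion of whether the output is correct.

```python
# Toy illustration of code generation as autocomplete: a word-level
# bigram model trained on a tiny corpus. LLMs like Codex are vastly
# larger and use subword tokens, but the loop is conceptually the
# same: predict the next token, append it, repeat.
from collections import Counter, defaultdict

corpus = """
for i in range ( 10 ) : print ( i )
for item in items : print ( item )
for i in range ( len ( items ) ) : total += items [ i ]
""".split()

# Count which token follows which -- the model's entire "knowledge".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(prompt: str, max_tokens: int = 12) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        candidates = follows.get(tokens[-1])
        if not candidates:
            break
        # Greedily pick the statistically most likely continuation,
        # whether or not it makes sense in this context.
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(autocomplete("for i in"))
```

Run it and the output drifts from a sensible for loop into plausible-looking nonsense, which is exactly the failure mode at issue: the model optimizes for statistical likelihood, not correctness.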

Until Copilot and kin develop actual intelligence, it would seem that software developer jobs are still perfectly safe from being taken over by our robotic overlords.

Will A.I. Steal All The Code And Take All The Jobs?

New technology often brings with it a bit of controversy. When considering stem cell therapies, self-driving cars, genetically modified organisms, or nuclear power plants, fears and concerns come to mind as much as, if not more than, excitement and hope for a brighter tomorrow. New technologies force us to evolve perspectives and establish new policies in hopes that we can maximize the benefits and minimize the risks. Artificial Intelligence (AI) is certainly no exception. The stakes, including our very position as Earth’s apex intellect, seem exceedingly weighty. Mathematician Irving Good’s oft-quoted wisdom that the “first ultraintelligent machine is the last invention that man need make” describes a sword that cuts both ways. It is not entirely unreasonable to fear that the last invention we need to make might just be the last invention that we get to make.

Artificial Intelligence and Learning

Artificial intelligence is currently the hottest topic in technology. AI systems are being tasked with writing prose, making art, chatting, and generating code. Setting aside the horrifying notion of an AI programming or reprogramming itself, what does it mean for an AI to generate code? It should be obvious that an AI is not just a normal program whose code was written to spit out any and all other programs. Such a program would need to contain every possible program within itself. Instead, an AI learns from being trained. How it is trained raises some interesting questions.

Humans learn by reading, studying, and practicing. We learn by training our minds with collected input from the world around us. Similarly, AI and machine learning (ML) models learn through training. They must be provided with examples from which to learn. The examples that we provide to an AI are referred to as the data corpus of the training process. The robot Johnny 5 from “Short Circuit”, like any curious-minded student, needs input, more input, and more input.
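For a rough sense of what ‘training on a data corpus’ means in practice, here’s a minimal sketch in plain Python: fitting a two-parameter model to a handful of examples by gradient descent. A real model like Codex juggles billions of parameters and tokens rather than two numbers, but the loop is conceptually the same: show the model an example, measure how wrong it is, nudge the parameters, repeat.

```python
# Minimal sketch of "learning from examples": fit y = w*x + b to a
# tiny data corpus by gradient descent. Real models adjust billions
# of parameters instead of two, but the principle is identical:
# nudge the parameters so the model's output matches the examples.

# The "data corpus": input/output pairs the model must learn from.
corpus = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0), (4.0, 9.0)]  # y = 2x + 1

w, b = 0.0, 0.0          # untrained parameters
learning_rate = 0.01

for epoch in range(2000):
    for x, y in corpus:
        prediction = w * x + b
        error = prediction - y
        # Gradient of the squared error with respect to each parameter.
        w -= learning_rate * 2 * error * x
        b -= learning_rate * 2 * error

print(f"learned w={w:.2f}, b={b:.2f}")  # converges toward w=2, b=1
```

The model never sees the rule y = 2x + 1; it only sees the examples, and its quality is bounded by how good and how plentiful those examples are. That is why the choice of training corpus raises the interesting questions mentioned above.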



Hackaday Links: August 8, 2021

Do you have burning opinions about GitHub Copilot, the AI pair programmer that Microsoft introduced a few months ago? Are you worried about the future of free and open software? The Free Software Foundation is funding a call for white papers of 3,000 or fewer words that address either Copilot itself or the subjects of copyright, machine learning, or free software as a whole. If you need more background information first, check out [Maya Posch]’s excellent article on the subject of Copilot and our disappointing AI present. Submissions are due by 10AM EDT (14:00 UTC) on Monday, August 23rd.

There are big antique books, and then there are antiphonaries — these are huge tomes full of liturgical chants and things of that nature. When one of them needs a lot of restoration work, what do you do? You build an all-in-one housing, display case, and cart that carefully holds it up and open (YouTube). Otherwise, you have to have multiple gloved people being extra careful. Jump to about the 14-minute mark to see the device, which is mostly made from extruded aluminum.

In more modern news: you may be waiting out this chip shortage like everyone else, but does it require renting out a bunch of real estate in perpetuity? We didn’t think so. Here’s an aerial photo of a stockpile of Ford Super Duty trucks that are waiting for chips at a dead stop outside the Kentucky Speedway. Thousands of brand new trucks, exposed to the elements for who knows how long. What could go wrong?

While we’re asking questions, what’s in a name? Well, that depends. We’ve all had to think of names for everything from software variables to actual children. For something like a new exoplanet survey, you might as well make the acronym remarkable, like COol COmpanions ON Ultrawide orbiTS, or COCONUTS. Hey, it’s more memorable than calling them X-14 and -15, et cetera. And it’s not like the name isn’t meaningful and descriptive. So, readers: do you think this is the worst name ever, planetary system or otherwise? Does it shake your tree? We’re on the fence.

GitHub Copilot And The Unfulfilled Promises Of An Artificial Intelligence Future

In late June of 2021, GitHub launched a ‘technical preview’ of what they termed GitHub Copilot, described as an ‘AI pair programmer which helps you write better code’. Quite predictably, responses to this announcement varied from glee at the glorious arrival of our code-generating AI overlords, to dismay and predictions of doom and gloom as companies would surely soon be firing software developers en masse.

As is usually the case with such controversial topics, neither of these extremes is even remotely close to the truth. In fact, the OpenAI Codex machine learning model which underlies GitHub’s Copilot is derived from OpenAI’s GPT-3 natural language model, and features many of the same stumbles and gaffes that GPT-3 does. So if Codex, and with it Copilot, isn’t everything it’s cracked up to be, what is the big deal, and why show it at all?
