Natural Language AI In Your Next Project? It’s Easier Than You Think

Want your next project to trash talk? Dynamically rewrite boring log messages as sci-fi technobabble? Happily (or grudgingly) answer questions? That sort of thing and more can be done with OpenAI’s GPT-3, a natural language prediction model with an API that is probably a lot easier to use than you might think.

In fact, if you have basic Python coding skills, or even just the ability to craft a curl statement, you have just about everything you need to add this ability to your next project. It isn’t free in the long run (although initial use is free on signup), but for personal projects the costs will be very small.

Basic Concepts

OpenAI has an API that provides access to GPT-3, a machine learning model with the ability to perform just about any task that involves understanding or generating natural-sounding language.

OpenAI provides some excellent documentation as well as a web tool through which one can experiment interactively. First, however, one must create an account and receive an API key. After that is done, the doors are open.

Creating an account also gives one a number of free credits that can be used to experiment with ideas. Once the free trial is used up or expires, using the API will cost money. How much? Not a lot, frankly. Everything sent to (and received from) the API is broken into tokens, and pricing is from $0.0008 to $0.06 per thousand tokens. A thousand tokens is roughly 750 words, so small projects are really not a big financial commitment. My free trial came with 18 USD of credits, of which I have so far barely managed to spend 5%.
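For a rough sense of scale, consider a request with a 50-token prompt that comes back with a 200-token completion: 250 tokens in total, which at the top rate of $0.06 per thousand works out to one and a half cents, and at the cheapest rate to about a fiftieth of a cent.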

Let’s take a closer look at how it works, and what can be done with it!

How It Works

The API accepts requests in a variety of ways, and if you can craft a curl statement, use the command line, or write some simple Python (or node.js) code, good news! You have all you need to start trying ideas!

I will describe using the API in its most basic way, that of completion. That means one presents the API with a prompt, from which it will provide a text completion that attempts to match the prompt. All of this is done entirely in text, and formatted as natural language.

Using the API in this way can be thought of as implementing a kind of highly-capable autocomplete. The engine itself can be thought of as a sort of fantastically-complex state machine, while at the same time it is also not quite like anything else.

What Kind of Stuff is Possible?

Here’s a very simple example:

Prompt (sent to API):
Write a witty error message for a low battery condition.

Completion (response from API):
“You’re running on empty!”

The prompt is also how one “programs” the model, and designing a good prompt is a big part of getting good results.

Because the model is capable of understanding language structure and intent, a prompt can be worded as if one were giving a writing assignment to a middle schooler.

Here is a slightly more complex example that does exactly that:

Prompt:
Provide excited commentary for an event in a tabletop game about trains.

Event: A player named Rob sells a company.
Commentary:

Completion:
“Rob has decided to sell their company. This is a big move and will have big repercussions on the game. Who will buy the company? What will they do with it? Only time will tell.”

There are a few things to point out in the above example. The first line of the prompt describes what is to be done. The next two lines (“Event:” and “Commentary:”) set the model up on how the statement should be completed. Note that the “Commentary:” line is left blank in the prompt; the model will complete what was started.

The practice of crafting effective prompts is called prompt design, and it is key to getting all sorts of useful and different responses from the API. Let’s take a closer look at how prompt design affects results.
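To make that concrete, here is a minimal sketch of how such a prompt might be assembled in Python before being sent off; the helper function and the event text are made up for illustration:

# A minimal sketch: build an "Event:/Commentary:" style prompt from a game
# event, ready to be sent as the "prompt" field of a completion request.
# The function name and event text are invented for illustration.
def build_commentary_prompt(event):
    return (
        "Provide excited commentary for an event in a tabletop game about trains.\n\n"
        f"Event: {event}\n"
        "Commentary:"
    )

print(build_commentary_prompt("A player named Rob sells a company."))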

Getting Better Responses

If one is having trouble getting the model to respond in a useful way, it can be helpful to include multiple examples of what is wanted in the prompt itself to give the model a better idea of what is expected.

The API documentation has an excellent demonstration of this idea, and I will briefly restate it below:

Prompt (sent to API):
Suggest three names for a horse that is a superhero.

Completion (response from API):
Superman, Batman, Spiderman

This result isn’t really what we want. However, including some examples in the prompt itself yields markedly better results:

Prompt (sent to API):
Suggest three names for an animal that is a superhero.

Animal: Cat
Names: Captain Sharpclaw, Agent Fluffball, The Incredible Feline
Animal: Dog
Names: Ruff the Protector, Wonder Canine, Sir Barks-a-Lot
Animal: Horse
Names:

Completion (response from API):
Mighty Steed, Blaze the Wonder Horse, Thunderhoof

Doing this increases costs — recall that one pays per token, both in the prompt as well as in the output — but providing multiple examples in the prompt can be key to getting the best results in some cases, because it gives the model a much clearer idea of what is being requested, and how it should be formatted.
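In code, providing those examples simply means building a longer prompt string. Here is a minimal sketch (the data mirrors the superhero example above, and the final string would be passed as the prompt in the requests shown later in this article):

# A minimal sketch of a few-shot prompt: the instruction, a couple of worked
# examples, and an unfinished final entry for the model to complete.
examples = [
    ("Cat", "Captain Sharpclaw, Agent Fluffball, The Incredible Feline"),
    ("Dog", "Ruff the Protector, Wonder Canine, Sir Barks-a-Lot"),
]

prompt = "Suggest three names for an animal that is a superhero.\n\n"
for animal, names in examples:
    prompt += f"Animal: {animal}\nNames: {names}\n"
prompt += "Animal: Horse\nNames:"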

Again, it is helpful to think of the prompt as a writing assignment for a middle-schooler; a middle-schooler who can in turn be thought of as a fantastically-complex and somewhat variable state machine.

Same Prompt, Different Completions

For an identical prompt, the API doesn’t necessarily return the same results. While the nature of the prompt and the data the model has been trained on play a role, diversity of responses can also be affected by the temperature setting in a request.

Temperature is a value between 0 and 1, and is an expression of how deterministic the model should be when making predictions about valid completions to a prompt. A temperature of 0 means that submitting the same prompt will result in the same (or very similar) responses each time. A temperature above zero will yield different completions each time.

Put another way, a lower temperature means the model takes fewer risks, resulting in completions that are more deterministic. This is useful when one wants completions that can be accurately predicted, such as responses that are factual in nature. On the other hand, increasing the temperature — 0.7 is a typical default value — yields more diversity in completions.
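An easy way to get a feel for this is to send the same prompt at different temperatures and compare the results. A quick sketch, using the same Python completion call that appears later in this article:

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

prompt = "Write a witty error message for a low battery condition."

# temperature=0 should return the same (or nearly the same) completion every
# time; temperature=0.7 should give noticeably different wording on each run.
for temperature in (0.0, 0.7):
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=prompt,
        temperature=temperature,
        max_tokens=64,
    )
    print(temperature, response.choices[0].text.strip())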

Fine Tuning the Model

The natural language model behind the API is pre-trained, but it is still possible to customize the model with a separate dataset tailored for a particular application.

This function, called fine tuning, allows one to efficiently provide the model with many more examples than it would be practical to include in each prompt. In fact, once a fine tuning dataset has been provided, one no longer needs to include examples in the prompt itself. Requests will be processed faster, as well.

This probably won’t be needed except for narrow applications, but if you find that getting solid results for your project is relying on large prompts and you’d like it to be more efficient, fine tuning is where you need to look. OpenAI provides tools to make this process as easy as possible, should you require it.
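As a rough sketch of what that looks like in practice: the fine-tuning dataset is a file with one JSON object per line, each pairing a prompt with the completion the model should learn. The filename and example pairs below are invented for illustration; the exact upload and training commands are in OpenAI’s fine-tuning documentation.

import json

# Each line of the training file pairs a prompt with its desired completion.
training_examples = [
    {"prompt": "Event: A player named Rob sells a company.\nCommentary:",
     "completion": " Rob is cashing out! The whole board just changed."},
    {"prompt": "Event: A player named Ada builds a station.\nCommentary:",
     "completion": " Ada plants her flag! That station will pay off for turns to come."},
]

with open("commentary_finetune.jsonl", "w") as f:
    for example in training_examples:
        f.write(json.dumps(example) + "\n")

# The resulting file is then uploaded and trained with OpenAI's fine-tuning
# tools (see their documentation for the commands), and the fine-tuned model
# name is used in requests in place of the base engine.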

What Does The Code Look Like?

There is an interactive web tool (the playground, which requires an account) in which one can use the model to test ideas without having to code anything up. It also has the handy feature of generating a code snippet on request, for easy copying and pasting into projects.

Here is the very first example in this article, formatted as a simple curl request:

curl https://api.openai.com/v1/engines/text-davinci-002/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
  "prompt": "Write a witty error message for a low battery condition.",
  "temperature": 0.7,
  "max_tokens": 256,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0
}'

And the same, this time in Python:

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt="Write a witty error message for a low battery condition.",
    temperature=0.7,
    max_tokens=256,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)
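With the version of the openai library used here, the generated text can then be pulled out of the response object like so (a small sketch; the response also carries metadata such as token usage):

# The completion text lives in the first (and, by default, only) choice.
print(response.choices[0].text.strip())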

Installing the Python package also installs a utility that can be used directly from the command line for maximum convenience:

$ openai api completions.create -e text-davinci-002 -p "Write a simple poem about daisies." --stream -M 128

(Notes: --stream displays results as they are received, and -M 128 limits the reply to a maximum of 128 tokens.)

The prompt “write a simple poem about daisies” generated the following text for me, which would be different every time:

The daisy is a beautiful flower 
That grows in the meadow and in the pasture 
It has a yellow center and white petals 
That make it look like the sun 
The daisy is a symbol of innocence 
And purity and is loved by all

All of the above examples work the same way: they fire the prompt off to the OpenAI API (using one’s API key for access, which the above examples assume has been set as an environment variable named OPENAI_API_KEY), and receive a reply with the response.

Responsible Use

Worth highlighting is OpenAI’s commitment to responsible use, including guidance on safety best practices for applications. There is a lot of thoughtful information in that link, but the short version is to always keep in mind that this is a tool that is:

  1. Capable of making things up in a very believable way, and
  2. Capable of interacting with people.

It’s not hard to see that the combination has potential for harm if used irresponsibly. Like most tools, one should be mindful of misuse, but tools can be wonderful things as well.

Are You Getting Ideas Yet?

Using the API isn’t free in the long term, but creating an account will give you a set of free credits that can be used to play around and try a few ideas out, and using even the most expensive engine for personal projects costs a pittance. All of my enthusiastic experimentation has so far used barely two dollars USD worth of my free trial.

Need some inspiration? We’ve covered a few projects already that have waded in this direction. This robotic putting game uses natural language AI to generate trash talk, and the Deep Dreams podcast consists entirely of machine-generated fairytales as a sleep aid, and was created with the OpenAI API.

Now that you know what kinds of things are possible and how easy they are, perhaps you are already getting some ideas? Let us know about them in the comments!

25 thoughts on “Natural Language AI In Your Next Project? It’s Easier Than You Think”

  1. It is worth noting that, although it is significantly less advanced, one can download and run GPT-2 (the predecessor to GPT-3) locally. There are many helpful guides on the internet.

      1. The public version is under 400 megabytes, and you can find the GPT-2 reduced model at https://github.com/openai. Keep in mind that you can use that model as the starting point for your own training work on a larger version based on training sets that you have acquired yourself. Look around and you will find papers on how this “jump starting” can be done. Just imagine what you could do with all of libgen and the Project Gutenberg texts, as well as the usual sort of blather on the open web. Who cares if it takes you 5 years to train it? It would still be a better investment than bitcoin mining.

    1. That is good to know, especially if one’s application can run serviceably on something older. Some applications neither really need nor benefit from running on the latest and greatest.

  2. Inside implies contained within, when this is plainly contained without.

    “Add a faddy external dependency to your project while the service lasts”

    I mean you see us hauling cloud services and server dependent IoT over the coals on the daily and think we’re dumb enough to adopt this? C’mon.

  3. It’s worth noting that there are multiple alternatives to GPT-3. GPT-J[0], by EleutherAI, can be downloaded and run, or you can use their API (or Google Colab); it seems to work better with more creative prompts (i.e. the classic unicorn test) than focused prompts like the examples in this article, but it can still be finetuned. GPT-2 is also available to download and use.[1] If you need to do anything related to Natural Language Processing other than completing prompts (but apparently it can do that too (untested)[2]), such as generating summaries, classifying text, etc., BERT (see a specific example at [3]) can probably be used to do it, although I’ve never used it so I can’t comment on its effectiveness. All of these links are to Hugging Face’s documentation because their library[4] is likely the easiest way for beginners to use these tools, but there are plenty of other ways.

    If you don’t want to use a transformer based model, you can always go back to Markov chains[5], I guess. To be fair, they do have decent quality (as in, you’ll immediately know it’s an AI, but it’s mostly coherent), and they ran fine on my smartphone all the way back in 2015.

    DISCLAIMER: I have not tested any of the transformer models locally due to my current PC’s specs, but I have used online demos, such as https://6b.eleuther.ai/ for GPT-J and TalkToTransformer (which seems to have been replaced with a different model) for GPT-2.

    [0]: https://huggingface.co/docs/transformers/model_doc/gptj
    [1]: https://huggingface.co/gpt2
    [2]: https://mayhewsw.github.io/2019/01/16/can-bert-generate-text/
    [3]: https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France
    [4]: https://pypi.org/project/transformers/
    [5]: https://medium.com/mlearning-ai/getting-started-with-text-generation-using-markov-chains-661800599a23

    1. @Jerry said: “…TalkToTransformer (which seems to have been replaced with a different model) for GPT-2.”

      https://talktotransformer.com/

      Now redirects to:

      https://app.inferkit.com/demo

      Which seems a bit rough to me. Your link to the EleutherAI GPT-J 6 billion parameter demo works better IMO. Then there’s this: 7 Talk to Transformer Alternatives:

      https://www.topbestalternatives.com/talk-to-transformer/

      YMMV… Still, putting this NLP stuff to work in reality costs significant money, either in facilities (hardware + software development/admin) or in paying for cloud-based services. Micropayments are misleading; they add up quickly in NLP.

      1. I remember TalkToTransformer working better, which is why I said I think they changed the underlying model.

        RE: Cost

        To some degree, yeah, but it depends on scale. If you only have one user, it probably won’t be too bad. (For GPT-3, you could generate the equivalent of Shakespeare’s collected works on a finetuned version of the largest model for less than $200, based on OpenAI’s numbers.) Also, apparently someone figured out a way to strip down GPT-2 so it would have the same performance, but maintain low latency on a current gen iPhone. I haven’t tried it, but it seems promising.

        Also, if anyone needs something targeted that, once trained, runs cheaply, check out GANs, which can do pretty well. Before any of the transformer models were available, someone used one to generate plausible Shakespeare dialogues. (Not full plays due to coherence issues, but I think it could do entire scenes.)

          1. OK, I get the cost part. But most times these NLP services target large organizations, like customer service chat bots for corporations and governments. That’s where the costs scale up. But I guess that makes sense if the chat bots eliminate enough meat bags drawing paychecks.

  4. Well, I frankly hate chatty computers!!!

    I got this ’tude from watching Star Trek TOS and how Capt Kirk dispatched both The Changeling and M5, and Capt Picard and Riker dispatched the Bynars. I mean, “Damn, that woman was Gorgeous, with a capital G!”. LoL. But I digress… Computers should be seen, not heard, so let’s reduce them ALL to their fundamental and minimalist necessary blinkenlights and be done with them!!!

    This satire brought to you from a rantingmoron7, on GMail.

    stay well, folks.
