If you’re interested in using large language models (LLMs) in a project, but aren’t plugged directly into the fast-developing world of artificial intelligence (AI), knowing which tool or software to use can be daunting. Luckily, [Max Woolf] created simpleaichat, which comes complete with examples and documentation while keeping code complexity to a minimum.
As [Max] puts it, the main motivations behind the project are to provide useful tools while making it easier for non-engineers to peer through the breathless hyperbole and see just how AI-based apps actually work. This project was directly inspired by [Max]’s own real-world software experiences in this area, particularly his frustrations with popular and much-hyped frameworks in which “Hello World” feels a lot more like Hell World.
simpleaichat is a Python package that provides easy and powerful ways to interface with the API of OpenAI, makers of ChatGPT. Now, it is true that OpenAI’s models are not open source and access is not free, but they are easily among the most capable and cost-effective services of their kind.
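To give a flavor of what “minimal code complexity” looks like, here’s a rough sketch of using simpleaichat’s `AIChat` class. This assumes `pip install simpleaichat` and an `OPENAI_API_KEY` in your environment; the persona string and the `SIMPLEAICHAT_DEMO` guard are just illustrations, so check the package’s own docs for the details.

```python
# Hypothetical sketch of simpleaichat usage; assumes `pip install simpleaichat`
# and an OPENAI_API_KEY environment variable. Persona and guard variable are
# illustrative, not part of the package.
import os

def build_system_prompt(persona: str) -> str:
    # simpleaichat accepts a system prompt to steer the assistant's behavior
    return f"You are {persona}. Answer in one short sentence."

if os.environ.get("SIMPLEAICHAT_DEMO"):
    from simpleaichat import AIChat  # deferred import so the sketch loads without the package
    ai = AIChat(system=build_system_prompt("a helpful retrocomputing enthusiast"))
    print(ai("What is a Z80?"))
```

The appeal is that a working chat session is a couple of lines, rather than the pile of boilerplate some frameworks demand.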
Prefer something a little more open, and a lot more private? There’s always the option to run an LLM locally on your own machine, possibly with the help of a tool like text-generation-webui or gpt4all. A locally run LLM won’t match the quality of OpenAI’s offerings, but it can still do the job. It’s also possible to give these local LLMs an interface that mimics OpenAI’s API, so there are loads of possibilities.
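Because those OpenAI-compatible interfaces speak the same chat-completions protocol, pointing a request at a local server is mostly a matter of changing the URL. Here’s a standard-library-only sketch; the base URL, port, and model name are assumptions that depend entirely on how your local server is configured.

```python
# Sketch of querying a local LLM through an OpenAI-compatible endpoint,
# using only the standard library. Base URL, port, and model name are
# assumptions -- check your own server's documentation.
import json
import os
import urllib.request

def build_chat_request(base_url: str, prompt: str, model: str = "local-model"):
    # OpenAI-style chat completion payload
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return req, payload

# Only fire the request if you've told the sketch where a server lives
if os.environ.get("LOCAL_LLM_URL"):
    req, _ = build_chat_request(os.environ["LOCAL_LLM_URL"], "Hello!")
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Swap the base URL between a local server and OpenAI’s and the rest of your code shouldn’t care, which is exactly what makes the mimicry so handy.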
Are you getting ideas yet? Share them in the comments, or keep them to yourselves and submit a tip once your project is off the ground!
Langchain and Flowise (a GUI for Langchain) are two excellent options that are much more flexible and have great communities behind them.
The hype around LLMs is completely overblown. They are not really AI, and will mainly be remembered for proving that the Turing test isn’t good enough. Unlike humans and every animal down to nonsocial insects, an LLM does not actually have a model of its world encoded. It just takes what (presumably) humans have said and rearranges it in similar patterns to create output. It lies to you because it has no basis for even considering truth, and this is an un-feature that can’t really be fixed. You could argue that they are useful for creating frameworks that a human can then check and refine, but the problem there is that they do pass the Turing test, so it is very seductive for humans to just trust their untrustworthy output.
I don’t really get the whole “just rearranging what humans have said in similar patterns” type of argument against LLMs being hype-worthy.
That’s what people do all over the place. So yeah, we haven’t created a digital intelligence that will come up with something revolutionary like Einstein, but almost all people won’t do that either. People just parrot things they hear without any critical thinking about it. At least ChatGPT attempts to reduce dissonant concepts in its answers. If you told it to respond to gay marriage using the teachings of Jesus as a guide, it probably wouldn’t try to convince you that gay people are evil and should suffer. Try that with your average evangelical Christian in America and you would get a different response.
Even in music, especially any type that makes use of samples, musicians use pieces from all over the place to create something unique out of things that other people created but have just been rearranged. It is incredibly powerful and useful, especially due to the ease with which it can access the relevant info for you.
No disagreement on people trusting too much, but that’s common with any computer-generated result.
This. Oftentimes we do not need “innovation” or even creativity in text and images. In fact, a lot of the work people do with these is just rearranging them: translation, summarizing topics for a different audience, etc. And the fact that people can easily access those things without being a professional is huge.
For example, musicians can make album covers without being graphic artists.
It’s like the invention of the smartphone. Sure, it didn’t really change our lives fundamentally; we still work similar jobs, eat the same, have the same general standard of living. But it made a few things (like finding your way and communication) much easier.
I’d argue that creativity is a little more than humans rearranging things they’ve already seen or heard before. There’s outsider art, people who make art without the internet, and abstract concepts embedded within art that don’t automatically translate from just the notes or brush strokes that have come before. Just because similarities can be drawn between a human making a mood board for a design project and an image generator doesn’t mean it’s a one-to-one equivalency. Also, comparing an image generator to a smartphone is odd, because one is a physical product that you pay a service fee to use, where all the content on said device carries fees or ad revenue that pays content or app creators, while image generators often have a service fee but scrape content off the internet from artists without permission and reap the profit generated from the value the artists created. Because if there was no value in the art scraped to begin with, why bother adding it into the model?
“People just parrot things they hear without any critical thinking about it.”
People who do that are usually not seen as useful, or asked for advice, or anything really. That’s why there are way too many scripts to follow, because people assume other people are stupid (and many don’t mind being exactly that, because it’s easier).
“At least chatgpt attempts to reduce dissonant concepts in its answers.” It will say a lot of self-contradictory things even in one answer, when you ask it anything of mild complexity. When it creates coherence it is fake, by just claiming things are a certain way, when they aren’t.
IOW, chat AI has reached the verbal capacity of many politicians.
You’re underestimating LLMs. They pretty much solved NLP. High-quality natural language interfaces, text summarisation, semantic search over unstructured text – it’s all quite a big deal. Of course none of this is “AI”, but who cares? The real-world use cases are very exciting, once you know the limitations.
Unfortunately it doesn’t really solve it, since it doesn’t understand many queries correctly, and will give false matches. Summaries are also not really correct. None of this can work without enough grounding in a proper concept framework of some kind.
I’ve heard these viewpoints before. But I watched an AI expert being interviewed, and the presenter was giving your viewpoint. He was saying it isn’t really intelligent as it is just predicting the next word, doesn’t have a true model of the world, and isn’t really creative as it is just combining existing knowledge. The response to all of this from the AI expert was: do you mean, just like humans? And he is right; every criticism that you have also applies to us. Today ChatGPT has done a bunch of work for me in a fraction of the time it would have taken me, and did it better. And it took us 200,000 years to get here, while LLMs have done it in a few years. And this isn’t an end point, this is the beginning – and it outperforms us on many metrics, with all the shortcomings you mention. But besides all this, isn’t good enough, good enough?
“Do you mean just like humans?” is used so often to side-step addressing the actual criticism. Many lazy people do that, but there are many kinds of complete nonsense answers that humans don’t give – not on average, and not the majority.
It’s the same excuse as they use for computer vision, where they say humans will be fooled too, but AI gets fooled in completely unpredictable and “random” ways, whereas for humans you can usually guess why they misinterpreted an image. For AI it’s often due to random “noise” that was trained as being a significant pattern/feature when it’s entirely irrelevant.
With Ollama and Termux I’m running a Llama2 model on my Android phone.
Ollama is mostly for Mac users with M1 or M2 chips, but it also runs on Linux machines if you compile it yourself (which I did on Android with Termux – just a few simple commands and you are good to go). It’s very easy to use.
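Once an Ollama server is up, talking to it is just local HTTP. Here’s a standard-library sketch against Ollama’s REST API; the default port 11434 and the `llama2` model tag are assumptions based on a stock install, and the `OLLAMA_DEMO` guard is just there so the sketch doesn’t fire a request unless you ask it to.

```python
# Sketch of querying a locally running Ollama server over its REST API.
# Port 11434 is Ollama's default; the model tag should match whatever you
# pulled (e.g. with `ollama run llama2`). OLLAMA_DEMO is an illustrative guard.
import json
import os
import urllib.request

def build_generate_request(prompt: str, model: str = "llama2",
                           host: str = "http://localhost:11434"):
    # Ollama's /api/generate endpoint; stream=False returns one JSON object
    payload = {"model": model, "prompt": prompt, "stream": False}
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return req, payload

if os.environ.get("OLLAMA_DEMO"):
    req, _ = build_generate_request("Why is the sky blue?")
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["response"])
```

On a phone, that server and this client can be the same device, which is a neat party trick for something that fits in your pocket.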
Python again. Dependency hell. No, thank you, I’ll stick to llama.cpp…