One-click Install Local AI Applications Using Pinokio

A pixellated image of Pinokio

Pinokio is billed as an autonomous virtual computer, which could mean anything really, but don’t click away just yet, because this is one heck of a project. AI enthusiast [cocktail peanut] (and other undisclosed contributors) has created a browser-style application which embeds a virtual Unix-like environment, regardless of the host architecture. A Discover page loads registered applications from GitHub, each offering a one-click install that is ‘simply’ a JSON script describing the dependencies and execution flow. The idea is that rather than manually running commands and satisfying dependencies yourself, it’s all wrapped up for you: a single click downloads and installs everything needed to run the application.
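
To give a feel for what that one-click flow is actually wrapping up, here’s a minimal sketch (in Python, purely for illustration — it is not an actual Pinokio script) of the kind of chores such a script has to orchestrate: fetch the code, build an isolated environment, install dependencies, and launch the app. The repository URL, paths, and entry point are hypothetical placeholders.

```python
# Illustrative sketch of the steps a one-click installer automates.
# The repo URL, paths and entry point below are hypothetical placeholders,
# not an actual Pinokio script.
import subprocess
import sys
from pathlib import Path

APP_REPO = "https://github.com/example/some-ai-app.git"  # hypothetical app
APP_DIR = Path("some-ai-app")

def run(cmd):
    """Run a command and fail loudly, as an install script should."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Fetch the application code.
if not APP_DIR.exists():
    run(["git", "clone", APP_REPO, str(APP_DIR)])

# 2. Create an isolated Python environment so dependencies don't clash.
venv = APP_DIR / "venv"
if not venv.exists():
    run([sys.executable, "-m", "venv", str(venv)])

# 3. Install the app's dependencies into that environment.
pip = venv / "bin" / "pip"  # venv/Scripts/pip.exe on Windows
run([str(pip), "install", "-r", str(APP_DIR / "requirements.txt")])

# 4. Launch the app, typically serving a local web UI.
python = venv / "bin" / "python"
run([str(python), str(APP_DIR / "app.py")])
```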

But what applications? we hear you ask. AI ones, lots of them. The main driver seems to be using the Pinokio hosting environment to make deploying AI applications directly onto your machine easy. One click installs the app, another downloads the models and whatever else is needed from the likes of HuggingFace and friends. A final click launches the app, and a browser window opens, giving you a web UI to control the locally running AI backend.
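
For the curious, the “download models” step is usually little more than pulling the weights from the Hugging Face Hub into a local cache. A minimal sketch using the huggingface_hub Python package (the repo id here is just an example, not what any particular app fetches) looks something like this:

```python
# Minimal sketch: fetch a model's files from the Hugging Face Hub
# into the local cache. The repo id below is just an example.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="runwayml/stable-diffusion-v1-5",             # example model repo
    allow_patterns=["*.safetensors", "*.json", "*.txt"],  # skip files we don't need
)
print("Model files cached at:", local_dir)
```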

Many chat-type models are supported, as are Stable Diffusion and many other fun time sinks. Running AI applications on your own hardware, with your own data, and privately, is a total breeze. Unless an application needs external API access, no internet connection is needed. No sign-ups and no subscription costs! There are some obvious gotchas: AI applications need a lot of resources, so you will need plenty of RAM and CPU cores to get anything working, and for the vast majority of applications, a modern GPU with plenty of VRAM. Our testing showed that some apps needed a minimum of 4 GiB of VRAM to even start, while a few ran on the CPU alone. It just depends. We reckon you’ll need at least 8 GiB to run the older Stable Diffusion 1.5 model, but that will not come as a great shock to some of you.
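
As a rough point of reference for those numbers, here’s a minimal sketch of what an image-generation app is doing behind its web UI: loading Stable Diffusion 1.5 with the diffusers library in half precision so it fits in a few GiB of VRAM. The model id and prompt are just examples, not what any particular Pinokio app actually runs.

```python
# Minimal local Stable Diffusion 1.5 sketch using the diffusers library.
# fp16 roughly halves VRAM use versus fp32; CPU fallback works but is slow.
import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32  # fp16 wants a GPU

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model repo
    torch_dtype=dtype,
).to(device)

image = pipe("a pixellated wooden puppet using a computer").images[0]
image.save("puppet.png")
```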

For a bit more of an intro to how all this works, and what you can do with it, check out the docs. The project is open source, but we haven’t located the source yet. Perhaps more testing is being performed first? Finally, there is an active Discord as well, if you get stuck.

AI is not news around these parts; here’s a little something that lets you chat with a locally hosted LLM. If this project isn’t self-contained enough for you, why not check out the AI in A Box?

27 thoughts on “One-click Install Local AI Applications Using Pinokio”

    1. Well on macOS one of the first things the app does is to remove the xattr com.apple.quarantine from itself, so I’m going to guess no. Besides, if it’s all sandboxed how’s the AI supposed to go rogue and take over the world ;-)

  1. Has anyone else noticed how much AI smells like Ethereum at this point? Like, there’s all these tools and tutorials and arcana for you to get into, but the question remains carefully unanswered WHY you would do any of it.

    I mean, other than pumping up the hundreds of billions’ worth of rich people’s investments staked on the assertion that this stuff is valuable somehow.

    1. My wife is a cake decorator. Recently she needed an image modified to make it taller and said “Help!” Well, I started to try to imitate the style and colors, but I’m not great at image manipulation… After about 5 minutes I said, wait a second… Two minutes later I had a new image that expanded the existing one in the original style, and it was by far better than anything I could have done.

      Day to day, I’m a programmer. When I have technical questions I’ll often start at Copilot which 95% of the time gets me to the right path. Sometimes it’s just blatantly wrong, but I’ve gotten pretty good at detecting that, especially since I always check the references. I’d say it saves me a few hours a week.

      Is it 100% there yet? No. Is it making a difference for me personally? Yes.

      1. But it’s not like, if your wife had the same problem three years ago, she’d have just given up and quit the cake industry. Perhaps you’d have spent a few hours growing your Pixelmator or Inkscape skills, which would have been a good investment (in yourself, rather than in Gulf states’ sovereign wealth funds).

        AI tools do add value. The AI tools in Photoshop add at least as much real value as the clone tool did in the nineties. But the thing is, that clone tool didn’t take hundreds of billions of dollars just to get to an early beta stage, and it doesn’t have stupendous running costs per click. It would have been less impactful if it did.

        If you’re getting use out of AI right now, that’s good. But if it’s a question of becoming dependent on AI, it’s relevant that (1) this is a service someone else has to provide for you – “personal computing” doesn’t exist in this future –  and (2) you haven’t begun to pay what it really costs, let alone what it will cost to turn a profit for those providers.

        I kinda feel like we’re talking about a $25,000 donut, and the only question that gets asked is “do you like donuts?”

        (also the donut may be radioactive)

        1. The image manipulation is an ancillary function that is rarely used in my life. Spending a few hours learning something like that is a rabbit hole I didn’t want to go down. I guess I could have gone on Fiverr and had someone else do it, but the AI tool did the job quickly and cost me less time and money.

          As for programming, I understand core concepts and the questions to ask, but I don’t always know the syntax or foibles. Finding syntax is generally easy, finding foibles is not always so easy. I am willing to do deep dives on that, but for establishing a baseline it’s very helpful.

          And as for $25,000 donuts, everything starts out that way. Things will get cheaper.

          America is all about not paying the real cost for anything.

          And as for programmers… well, I think our jobs are going to change MASSIVELY in the next 10 years. It’s going to be a hard adjustment and a lot of people are going to lose their jobs.

        2. You asked “why” but then rejected the answer. People can derive value from AI. That’s the answer. You’re stretching the question from “why do people use AI” to “what about the future cost, what about investing in yourself” like my dad telling me why it’s morally superior to write on paper.

    2. I’ve had over 2500 conversations with ChatGPT…the only LLM I can easily count conversations on…and thousands more on other LLMs.

      Literally all of them about software engineering, science, philosophy and other technical stuff. I’ve increased my abilities and knowledge easily twice as fast in the past 2 years as in the 2 years before the widespread intro of LLMs.

      If you’re not getting anything out of “AI” then that’s on you.

      1. I’m in the same boat. Access to ChatGPT 4 has vastly accelerated my research, programming and helped speed up debugging immeasurably. I hasten to add, I don’t use it for writing, I don’t need that. There is a skill to spotting when it’s talking nonsense, but that does seem rare, and kicking it up the @ss usually gets you what you need.

    3. Data Scientist here.
      We use LLMs to classify texts, and it’s better than any XGBoost model we previously used.
      So there alone is a use case that would lead any decently sized company to happily shell out millions if that means it’s even a little bit better than what came before.

    4. “Why would anyone want to remove the background from a video?”

      “Why would anyone want to generate captions for images to assist the blind?”

      “Why would anyone want to compress videos more efficiently?”

      “Why would anyone want to accelerate physics simulation?”

      Yeah, I can’t imagine why anyone would want to do any of these things, and these open source projects are totally just something rich people are hyping up so they can make money off FOSS somehow.

  2. Maybe I’m not sufficiently informed but… sometimes I try to use Perplexity and Microsoft Copilot, because they don’t require accounts or other login information, and it feels like talking to idiots. As the months go by there is no improvement: it is totally useless to reformulate the questions, and totally useless to try to explain to him that he didn’t understand or is repeating for the umpteenth time the nonsense he reported previously. What is even more incredible, he often tells you that he cannot get access to what you ask for, so he invites you to find it yourself. I’ll skip talking about the links he randomly suggests… a real delirium.
    I don’t have much knowledge in this field, but I imagine that what is offered for public use is just a decoy, the real potential is kept confidential, and those versions are certainly much more surprising; in any case, what is pretentiously passed off as artificial intelligence is far from being that.

    1. I’ve had minor success. GitHub Copilot and ChatGPT can make decent code snippets for common problems. No success beyond a few lines though. Copilot at least has context from the rest of the code.

      For general questions or poring over data, sometimes too much context is needed, which might be fixable by fine-tuning (additional document training) on a pre-trained model. I’ve seen fine-tuning work very well for image recognition. Haven’t tried it for docs yet.

      Some complex questions simply fail because the LLM is too eager to spit out the most likely answer. I’ve also seen it get confused on conflicting names – ask Bing when a song was released and you better hope that a more famous band doesn’t have the same song name! In these cases it may be possible to squeeze out the right answer with a lot of prompt engineering, but it’s rarely worth my effort.

      Sometimes I break the problem into pieces, have it solve small bits, then figure out where the LLM is choking. Like when I was writing a joke poem: I wanted city names from a particular region that rhymed with other words. It’s great at listing cities, mediocre at rhyming, and utterly fails when asked for both at once. But it’s decent when you list the cities first, then filter for the rhymes. That’s entering “not worth my effort” territory when I’m writing a stupid poem, but it can still be worthwhile when processing GBs of text.

      Yeah the updates do very little. There’s a noticeable diff between ChatGPT-3.5 and 4 but that’s a big model (and cost) difference. Updates are typically tuning params, or hidden prompts so that Bing AI self-censors and says “have a nice day”.

    2. “He”

      You’re not talking to a person. You’re talking to an AI inference engine. Under the hood it is basically the Curve Fit function in Excel extended into N dimensions.

      You’re mad at a spreadsheet for not understanding you.

    1. Because it’s obscure, doesn’t hint at its abilities, requires perfectly typed commands, and it’s easy to blow up the whole system… shall I continue? From the user’s perspective it’s an entirely different computer; they know nothing.

    2. I’m happy to work with terminal applications.

      AI is a problem because getting CUDA installed and the right combination of Python, Cython, Conda, Django, etc. installed and working takes hours and the “simple” installer bash script is never compatible with my configuration. I once spent an afternoon modifying the original Style Transfer model runner because it assumed two separate partitions on the host system.

      I’ll take a nice and tidy installer with a GUI for apps and stick to the terminal for my own stuff, thanks.

  3. Lots of software projects showing up on Hackaday eh? For some reason I thought it was hardware-hacks only. If that rule has changed though, let me know as I have PLENTY of Raspberry Pi related projects to submit.

  4. Hhhhhhhhmmmmm, it almost tells you to run it. Smells like a rootkit/worm/virus fooling you into installing it. The fact that it bypasses the Mac sandbox makes it all too clear.
    So I imagine that at some point the ocean will be full of laptops running uninstallable AIs.
    And the last giveaway… its name. Classic liar!
