In the old days of the Internet, FTP was sufficient for downloading the occasional file. But with the widespread use of computer audio and video, it became easy to swamp an FTP server, so eventually BitTorrent was born. The idea was that you would download bits and pieces of a file from different places and, in theory, other people would download the bits and pieces you already had when they needed them.

Now Petals wants to apply the same method to language models. These AI language models are all the rage, but they require significant computing resources. The idea behind Petals is like BitTorrent: you host a small part of the model (about 8 gigabytes, which is small compared to the 352 gigabytes the full model requires), and other people host the other parts.
Of course, if you are privacy-minded, that means some of your data goes out onto the public network, but for your latest chatbot experiments, that might not be a big problem. You can install Petals in an Anaconda environment or run a Docker image if you don’t want to set anything up. If you just want to access the distributed network’s chatbot based on BLOOMZ-176B, you can do that online.
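If you would rather call the swarm from your own code than use the web demo, the Python client is only a few lines. The sketch below follows the general usage pattern from the Petals documentation; the exact class and model names used here (AutoDistributedModelForCausalLM and bigscience/bloom-petals) are assumptions that can differ between releases, so check the project’s README for the current ones.

```python
# Minimal sketch: generate text over the public Petals swarm.
# Assumes `pip install petals transformers torch` inside your Anaconda
# environment; class and model names may vary between Petals releases.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

MODEL_NAME = "bigscience/bloom-petals"  # distributed checkpoint of BLOOM-176B

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# Only the small input/output layers load locally; the heavy transformer
# blocks are served by other peers in the swarm.
model = AutoDistributedModelForCausalLM.from_pretrained(MODEL_NAME)

inputs = tokenizer("A robot may not injure a human being", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```

If you want to donate a GPU instead of just consuming, the package also ships a server entry point (run as python -m petals.cli.run_server plus a model name in recent releases) that hosts one slice of the model’s blocks for everyone else.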