Maybe instead of selling advertising websites could start selling process cycles on their visitor’s computers.
I run adblock because hell if most ads aren’t just annoying eyesores that I wouldn’t click on anyway. But I’d certainly be willing to let sites snag some of my background cycles for their render farms, computer simulations, etc.
Now to make it so that people can’t damage your computer with this.
while this might be an interesting conceptual demonstration, java is generally quite slow compared to lower-level languages like c/c++, and certainly when executed in the confines of a browser. it doesn’t look like they have any performance benchmarks in their article, but my guess is that java running in a browser is probably anywhere from 10 to 1000x slower than similar code written in c and executed natively. (it’s been a while since i’ve compared java/c speeds, so maybe the gap has narrowed a little in the last few years, but i’d imagine it’s still quite large.)
this would essentially mean that you’d need anywhere between 10 and 1000 machines running a java client to do the work of a single machine running code built with a high-performance computing language and libraries. that being said, i’m certain there are ways you could get more performance out of java, maybe by implementing a special set of libraries that do certain computations particularly efficiently and natively on a machine, but it’s still likely a very large source of inefficiency in the current model.
still, a neat idea! i especially like the whole ‘let me compute a tiny bit for you using idle cycles in a very controlled way instead of showing me tons of ads’ idea a commenter above just suggested. i suppose in any case, even if the efficiency was only 1%, that’s potentially a lot of cycles that were otherwise unused.
This might gain a few more users than a traditional distributed computing client, by avoiding the need for users to install additional software.
@silic0re & angus
And the benefit here is that rather than having people download a client, all they need to do now is navigate to a website to join the computing cloud.
I also think a conventional application which runs in the background and starts automatically is likely to contribute a lot more CPU time than a web page which the user has to deliberately load and leave open.
The speed difference is absolutely irrelevant. We are not talking about substituting this for natively run code, we are talking about reaching a different demographic. If the code is not run this way that doesn’t mean it will be run in C, it means that it will not be run at all.
It is like arguing that there is no point for the Salvation Army to stand on the sidewalk ringing their bells because it would be much more efficient if people just mailed in checks.
Imagine if for example, someplace like Facebook or Addicting Games implemented something like this behind the scenes. (Disregard any legal issues/ToS issues/ etc for now.) While none of the users would actually contribute enough to notice a slowdown of the site, each and every user would provide some small amount of processing power. This by itself would not be much, but now compare the demographic of users on that site to the demographic of users running any major distributed computing client. I think you will find an exponentially higher number of users behind the website.
This is definitely an interesting idea, and I can see it taking off if it is approached in the right way.
I like this idea. I’m gonna go look for the source code to see if I can rewrite this in Python. Pointless, yes, and I realize that a standard browser won’t run Python, but I’m going to try to rebuild the distribution portion of this. Just for the hell of it :D I’m also into networking with Python.
If only *all* of the major browsers supported web workers, and did so in a way that they were scheduled properly so they don’t make browsing take a back seat to computing the universe.
Maybe this is what Google’s Native Client is about?
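The web-worker route mentioned above can be sketched roughly like this. Everything here is a hypothetical stand-in: the `worker.js` filename, the `{ start, end }` work-unit format, and `sendResultToServer`; the “number crunching” is just a range sum so the unit logic is easy to follow.

```javascript
// Pure compute step for one work unit, runnable anywhere (not just a
// browser): sum the integers in [start, end) as a stand-in for real work.
function computeUnit(unit) {
  let sum = 0;
  for (let i = unit.start; i < unit.end; i++) sum += i;
  return sum;
}

// Browser wiring (worker.js would contain computeUnit plus this line),
// which keeps the crunching off the main thread so the page stays responsive:
//   self.onmessage = (e) => self.postMessage(computeUnit(e.data));
//
// Main page:
//   const w = new Worker('worker.js');
//   w.onmessage = (e) => sendResultToServer(e.data); // hypothetical upload
//   w.postMessage({ start: 0, end: 1000000 });
```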
I think if you embedded a small applet in all pages on the net, that would help smooth the processing requirements out. You said to expect weird spikes to the processor, but if implemented the right way that would never happen. It would only need to share a small portion from each computer connected. Eventually, with all pages exhibiting this, it would be a continuous sharing of resources, if that makes sense to anyone else but me.
Uhm, how about loading enough work units to consume 10 seconds of time on a “standard” pc that would have to be computed to post a comment on a blog?
No worries about security and exploitability.
And if it calls a trusted math library in ActiveX form, all the better!
Aww damn, I got beaten to it :P
Methinks the way to make it fully invisible to the user and not hog their system is, if it’s implemented using setInterval, to start at an interval of 0 between iterations and increase the interval ’til it’s higher than the average time it takes to run the function.
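That back-off scheme might look something like the sketch below. It’s only an illustration of the idea: the `doWorkUnit` function, the doubling back-off, and the smoothing factor are all assumptions, not anything from the article.

```javascript
// Pure helper: given the current delay between iterations and a running
// average of how long one work unit takes, return the next delay. Keep
// backing off (doubling) until the pause exceeds the average run time,
// i.e. roughly a 50% duty cycle at most.
function nextDelay(currentDelay, avgRunMs) {
  if (currentDelay > avgRunMs) return currentDelay; // backed off enough
  return Math.max(1, currentDelay * 2);             // double the pause
}

// Driver using a re-armed setTimeout rather than setInterval, so a slow
// work unit can't queue up overlapping callbacks.
function runThrottled(doWorkUnit) {
  let delay = 0;     // start with no pause, as suggested
  let avgRunMs = 0;  // exponential moving average of unit run time
  function step() {
    const t0 = Date.now();
    doWorkUnit(); // hypothetical work function
    const elapsed = Date.now() - t0;
    avgRunMs = avgRunMs === 0 ? elapsed : 0.8 * avgRunMs + 0.2 * elapsed;
    delay = nextDelay(delay, avgRunMs);
    setTimeout(step, delay);
  }
  step();
}
```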
“Uhm, how about loading enough work units to consume 10 seconds of time on a “standard” pc that would have to be computed to post a comment on a blog?”
I like this idea and might just implement it on the registration form of the forums I’m developing ;)
There’s an interesting moral(?) question to this though: if sites could sell processor time on their visitors’ computers for someone else’s computational needs, then unless they make this very clear from the outset they are effectively stealing both bandwidth and processor performance from the end user. It’s analogous to a viral folding client.
i really don’t see this being all that useful, that is, unless the computation being done takes significantly longer than a couple of ms. for most problems, it would be faster to compute locally on one decently fast machine. if the problem is not embarrassingly parallel, or if there is a lot of data to move around, there will be too much overhead. for this to work you’d need problems that are very simple to specify and whose answers are also really simple (read: brute-force crypto & genetic algorithms).
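To illustrate the “simple to specify, simple to answer” point: a brute-force search distributes as work units that are just two numbers each, and each answer is a single candidate or “nothing in this range”. A rough sketch, where the `{ start, end }` unit format and the `predicate` callback are hypothetical stand-ins for the real expensive check (e.g. “does this key decrypt the ciphertext?”):

```javascript
// Split a search space of `total` candidates into at most `n` range
// units. Each unit is tiny to transmit: just two integers.
function makeUnits(total, n) {
  const size = Math.ceil(total / n);
  const units = [];
  for (let start = 0; start < total; start += size) {
    units.push({ start, end: Math.min(start + size, total) });
  }
  return units;
}

// What one client would run: scan its assigned range for a candidate
// satisfying `predicate`, returning it or null. The answer is as small
// as the unit itself, so network overhead stays negligible.
function scanUnit(unit, predicate) {
  for (let i = unit.start; i < unit.end; i++) {
    if (predicate(i)) return i;
  }
  return null; // nothing in this range
}
```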
nonetheless, very fascinating research.
this was actually implemented in the svn testing version of BeEF (bindshell.net) about 3 years ago. Proof of concept was good but unfeasible, as GPUs and standard CPUs are a lot faster. It was estimated you’d need 2 million+ hosts to get any benefit, and then you’re limited by connectivity. If a host goes down you lose their solution.
Nice try though
That’s good news to hear. I’m wondering whether it will really work in practice.
Think what could be achieved if Google or Facebook asked their users to contribute some of their browser power for the benefit of some data-processing projects.
There are many academic projects that could use this IMO, mainly in the field of bioinformatics.
Hi, everything you have described is present in the Ciurmy platform, in particular the map step; the reduce step is instead done by the user.
Look at http://www.ciurmy.com, where you can sell the computing power of your devices, using the browser for the calculation.
Ciurmy is an open platform where coders can implement JavaScript code for the analysts and makers who need it.
As a reward, the coders earn part of the commissions.