I imagine that someday any citizen will be able to buy a small computer and connect it to the Internet just to rent CPU time to the public. It will be similar to the way power utilities allow customers to sell solar power back to the grid whenever homes produce more energy than they use.

I realize that something like this is already being done for music file sharing services. And the SETI project can access your unused CPU time to search for ET. I'm talking about an expansion of what already exists. The business model and the legal hurdles are probably bigger obstacles than the technology.

Imagine buying a computer and plugging it into your Internet connection at home. The first menu that comes up allows you to choose between private computing (just you) or public, meaning the world can use your computing power on demand. And you get a discount on your own "computing utility" bill when your CPU is used by others. Depending on pricing and demand, you might get a positive investment return on your capital expense for the computer.

In California, solar customers can reduce their energy bill by the amount of power they "sell" back to the grid. But consumers can't legally sell any excess energy they produce above their own billing level. I assume lobbyists are to blame for this ridiculous situation. For now, let's happily imagine that our hypothetical computing grid doesn't have that limitation.

Someday all of your important files will be stored in the cloud. For many of us, that's already the case. It's time to move our CPU needs to the cloud too. In the future, if you can't afford a computer, you can pay a low monthly fee to have access to spare computing power on the Internet. I'm guessing that might cost $5 per month for the basic package, with a premium subscription service that offers higher speeds. The service should be cheap because most computing power on the planet sits idle most of the day.

With this business model, everyone on earth would have access to the equivalent of a supercomputer in the cloud for a few bucks per month plus whatever they pay for basic Internet access. You'll never have to upgrade your computer, upgrade your software, install anti-virus software, or worry about any of the headaches of computer ownership.

Citizens would need little more than a smart screen with a browser that can connect to the computing grid. That's still a computer, but it can be fairly basic. It just needs a browser.

For this model to work on a large scale you'd need to have WiFi in airplanes and everywhere else citizens need to access the Internet, but we're well on the way to that world.

It's not clear to me that a large company or even a government needs to be involved in building the system I'm describing. You could probably get there with an open software project. In fact, it's probably the only way to get there because large companies have a stranglehold on the status quo.

Data privacy is a huge issue with this sort of business model, obviously. But I wonder if spreading your data and CPU usage across multiple processors and servers might actually give you better security than your current system in which all of your private stuff is conveniently organized on one computer so hackers can easily find it. Instead of having your credit card number stored in one location, the number might be broken up across several servers. If one server gets hacked, the thieves only get a partial number. And they wouldn't have any way to know which servers have the rest of your digits.

By analogy, no one would try to steal your car if they knew it was disassembled and the parts were hidden all over your home. The analogy breaks down because crooks could steal and sell car parts. But if a hacker had only two digits of your credit card number it wouldn't be worth much.
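The split-the-card-number idea is essentially what cryptographers call secret sharing. A minimal sketch in Python using XOR shares (the scheme, the sample card number, and the function names are illustrative, not a production design):

```python
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, n: int) -> list[bytes]:
    # n-1 purely random shares, plus one share that XORs them back to the secret
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def recover(shares: list[bytes]) -> bytes:
    # XOR of all shares reproduces the secret; anything short of all n is noise
    return reduce(xor_bytes, shares)

card = b"4111111111111111"          # a hypothetical card number
shares = split_secret(card, 4)      # store each share on a different server
assert recover(shares) == card      # all four servers together can rebuild it
```

With this construction a thief who compromises one server gets random bytes, not two digits of your number, which is actually stronger than the partial-number version of the idea.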

You may now commence shredding this idea.

Mar 3, 2014
You've ignored Moore's law - computer hardware always gets cheaper. A Raspberry Pi (credit card sized computer) sells for $35. It can surf the web, watch YouTube videos, run office software, etc. Why would you bother to pay monthly for those resources?

[Well, there's the supercomputer power, no viruses, no software hassles, no physical security risks, no upgrading, no backing up files, no rebooting. Sold yet? -- Scott]
Mar 3, 2014
For the most part, this model only works for CPU-intensive tasks that require very little data and RAM. I can't think of many applications like that.
Mar 3, 2014
Frankly, I don't see what problem this idea solves. There are tons of barriers to realizing this:

- universal internet access is required (very far from true right now)
- standardization of CPU cycle outsourcing protocols
- people preferring one-time costs over recurring costs
- even at light speed, network delivery is not fast enough for local CPU tasks
- cannibalizes many businesses

In the unlikely event that you solve all of the above, you still haven't solved a real-world problem. Common users do not have a shortage of CPU power, nor is there a problem in access to affordable computing.

I personally like the current mixed situation. Everything that can efficiently be outsourced to the cloud will be outsourced; everything that makes more sense locally stays local.

Also note that client hardware is not as static as you somewhat suggest. A new wifi standard requires new hardware. A new resolution (4K) requires new hardware. And so on...
Mar 3, 2014
Someday, in the magical Star Trek: TNG world we seem to think we are moving towards, all the data and CPU will be in the cloud.

Until then, companies go out of business, raise their rates, cut back their service programs, shift their business plans, etc., and people all seem to have their own leanings on this kind of thing.

Because of that, at minimum, people like me will resist putting our data somewhere else for some start-up to lose for all eternity.

Which leaves me with: will people like me ever cease to exist, and as such, will the magical future necessarily mean all data and CPU will be in the cloud?
Mar 3, 2014
The car analogy reminds me of some pranks I've heard about.

Some pranksters used to take cars apart and reassemble them in places they would normally be unable to fit, such as a living room or inside a public building, carried through normal doorways with no loading dock.

Another thing that needs to change for this idea to work well is how desktop programmers think about performance.
In the current state, computers have more power than they need, so software programmers do not always optimize or utilize their resources as well as they could. New patches come out, and your computer runs slower.
With enough patches and upgrades, a small computer that is "just a browser" will have trouble running the browser.

Hackers might only be able to get part of a number from a server. But depending on the system, there may be clues left about where to find the other numbers, or about which machine(s) log in to fetch the full number and what to look for there.
New obstacles just mean new challenges to overcome. Hopefully the new obstacles don't leave things easier to crack than before they were in place.

Another potential issue with network computing would be performance. It could be difficult to get the response times gamers are used to on their own machines when they have to wait for the network to handle more than it already does. (Of course a gamer like that would probably opt to continue upgrading their own machine rather than use the super-low-budget public computing for those very same reasons, and this group would likely help serve the public computing network while not actively gaming.)

Another thing currently in place that is part of the suggested public networking is Remote Desktop services. There are apps and websites that allow you to control your PC from anywhere on the net through a browser. There are Terminal Servers that host software and store data that can be accessed from anywhere on the Internet, at least once the proper credentials are given. Those are private services, though, not public.

I am not sure though, how much easier or faster it would be to split up work from the current Remote Desktop model to utilize multiple machines. That really depends on the task you are computing.
Mar 3, 2014
Shocking that nobody mentioned bitcoin. This is the underlying mechanism for new bitcoins being created -- people pitch in CPU time to hash transactions in the block chain, and the reward is a lottery for new bitcoins. Of course... examining the pitfalls for this existing CPU-for-rent paradigm makes it pretty clear that a more-general CPU rental paradigm has a lot of potential abuse to wade through.
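The bitcoin mechanism this commenter describes boils down to proof-of-work hashing: grind through nonces until a hash meets a difficulty target. A toy version in Python (greatly simplified; real mining hashes a block header against a far higher difficulty):

```python
import hashlib

def mine(block_data: bytes, difficulty: int) -> int:
    """Find a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each extra zero digit makes the search ~16x harder; that tunable
# difficulty is what turns spare CPU time into a lottery for the reward.
nonce = mine(b"example transactions", 4)
digest = hashlib.sha256(b"example transactions" + str(nonce).encode()).hexdigest()
assert digest.startswith("0000")
```

The work is trivially parallel and verification is one hash, which is exactly why it suits rented, untrusted CPUs better than most workloads do.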
Mar 3, 2014
There's no market because wholesale computing power is so cheap that those who need it can just buy it (or rent from the cloud, where the costs are less than maintaining a SETI-like infrastructure).

The idea of pushing your heavy compute tasks up to the cloud seamlessly already happens whenever you ask your phone to do voice recognition.


[So...there is no market for it...and it is already being done for voice. Got it. -- Scott]
Mar 3, 2014
This is one of your better ideas, actually, but it needs to go a step further to make it viable.

As I mentioned in an earlier post, I worked in AI computing for a number of years. The AI computing vision of the future was a little different from what you propose, as it went into software as well as hardware (CPU cycles).

I'll try to give a brief outline of how it was supposed to work, and hopefully not bore anyone.

The concept revolves around a piece of middleware called an "Object Request Broker (ORB)." But first, you need access to "objects," which are tiny pieces of code that publish what they do to the ORB.

Then, programs need to be built that call the objects they need via the ORB. So let's say you are building a program that needs a sort routine. All you have to code is a request to the ORB for that object at that point in the process. The ORB knows where those objects reside, and calls them.

So where does the object run? That's similar to your shared CPU cycles idea. The ORB not only knows where objects are; it also knows where there are CPU cycles available on its network. The ORB "instantiates" the object (places a copy of it on a computer) and runs it when it's needed, then deletes it when it's no longer needed.

The billing for this would be based on the number of objects you used and for how long you used them. Instead of paying for software, you'd be paying for usage. You'd also get money back when your computer was used for instantiating the objects. That is a very simplified overview of what was envisioned in Object Oriented Programming (OOP).
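The publish/request/instantiate cycle described above can be sketched as a toy in-process broker. This is illustrative Python, not CORBA's actual API; a real ORB would also locate hosts with spare cycles, marshal arguments over the network, meter usage for billing, and handle security:

```python
class ObjectRequestBroker:
    """Toy broker: objects publish what they do; programs request them by name."""

    def __init__(self):
        self._registry = {}  # object name -> factory that "instantiates" it

    def publish(self, name, factory):
        self._registry[name] = factory

    def request(self, name, *args):
        # A real ORB would place a copy of the object on a networked host
        # with idle CPU, run it there, bill for the usage, then delete it.
        obj = self._registry[name]()  # instantiate on demand
        return obj(*args)

orb = ObjectRequestBroker()
orb.publish("sort", lambda: sorted)      # publish a sort "object"
assert orb.request("sort", [3, 1, 2]) == [1, 2, 3]
```

The calling program never knows, or cares, which machine actually ran the sort; that indirection is what would let idle home CPUs join the pool.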

So your idea was a good one, Scott. But it never came to pass as a widespread utility available to everyone, although it does exist in some object-oriented programs used within companies.

Why not, you ask? One word: standardization.

For this to work, you need to have an accepted standard ORB with an accepted way of both publishing and calling objects. You need the ORB also to handle security, since it would be gaining access to your computer (not an easy task).

Put one hundred geeks into a room and you'll get 200 different versions of what such a system would look like. Things like what the ORB should and shouldn't do, and how it should do it; how billing would work; how security would be ensured, etc. etc.

And then you'd have to get all the software manufacturers to utilize whatever standard method you came up with - when all of them think they have a much better way to do all this than what the standards committee came up with. Look at all the dead ends Microsoft foisted on an innocent development community on their way to SOAP.

Plus, it's a very tough thing to do. The ORB is being asked to do a LOT. It's not an easy piece of software to build, above and beyond the difficulty of trying to agree on what the ORB should do and how it should do it.

Moreover, there are performance issues. Running software on your computer happens rapidly because your bus is able to transfer information rapidly over short distances. Hook your computing into the Internet, where programs run and communicate piecemeal, and the Internet becomes the 405 freeway at rush hour.

So we still have a long way to go before your vision becomes a reality, Scott. But I do think it will happen someday, although we're going to have to wait a while.
Mar 3, 2014
"But consumers can't legally sell any excess energy they produce above their own billing level."

That used to be true - but isn't any more. If a solar customer produces more energy than they consume across a year, then the excess is bought by the utility at an average wholesale rate (at least for PG&E customers in California).
Mar 3, 2014
Would it be ironic to point out that computers were first dumb terminals connected to a central mainframe where everyone bought or reserved time? When personal computing arrived, it was hailed as a great freedom and innovation that removed the issues of central planning that mainframes caused. The modern Internet cloud is becoming something of a hybrid between the two models.

Ultimately, however, I believe that until there is a need for such raw CPU power, it's doubtful the conditions for the distributed CPU model you describe will arrive. The Internet model covers just about everything an average user needs; one doesn't need vast CPU power to surf the web, and the Internet protocol already assumes you are leveraging multiple CPUs around the world to get what you want. I already have the world's computers and their CPUs dancing at my fingertips, and all I need pay is a bit of suffering to the advertising world.
Mar 3, 2014
Scott, you're right that this is all technically feasible. I looked into starting a business like this ten years ago, basically a for-profit version of SETI@Home. You have correctly identified that there are idle CPU cycles out there, and that it would be technically possible to offer them for sale.

The problem is that there are no buyers. Most problems that regular people want solved are bound by latency (speed-of-light communication issues), not by CPU power. (CPUs are so cheap, and communication is so expensive, that you really just want to put the extra CPU power physically right next to where you want the computation done.) And most valuable problems are serial, not easily decomposed into massive parallelism.

Yes, you can think of a tiny fraction of all problems that might want access to such a CPU cloud (decryption, SETI, bitcoins). But the reality is that the market of customers wanting to buy this service is tiny. The CPUs are idle, not because they haven't yet been made available, but simply because they aren't worth anything.

[I think the latency issue is solvable. For starters, you could always default to the nearest computers to shorten distances and minimize hardware hops. Second, you could intelligently push data in anticipation of the user's next move. Third, if your own computer is on the network, it can be your default for simple stuff that needs a fast response. I'll bet there are a bunch of ways to attack latency in this model. -- Scott]
Mar 3, 2014
There is a book - can't remember which one - which talks about Google et al. creating the next utilities. It describes how soon computing power will be on tap, just as expected as electricity when we flick a switch or water when we turn on a tap.

These companies are building massive data centres. Lots of them. If you put all the PCs in the world together they probably wouldn't have as much power or storage capacity as some of them.

That is both the enabler and the killer of your idea.

The phone in our pocket now has more computing power than we had in our PCs a decade ago. But soon it won't need it. It can just be a "thin-client" (remember those) device with the computing done elsewhere.

But the internet of things is about to be bigger than every cellphone, PC and tablet ever built. Our traffic lights connected to our advertising signs, connected to (how does the song go?)...

So our pitiful little bit won't matter enough to set up all the systems to reward us. Like the Golgafrinchans choosing leaves as currency then discovering it had no value because there was too much of it around.

But perhaps I'm wrong. In the internet of things, we can all have a fuel cell and there will be no need for powerstations. We can all share computing power and have no need for datacentres.

The age of the megalithic corporation, providing for mass markets is dead. We can all connect, share everything from ideas to services and sort things out for ourselves.

But the big corporations - and governments, the ultimate big corporation - will do everything they can to stop it. We're dismantling their power base and they won't like that.

So it "can't be done for national security reasons". It can't be done "because it is cheaper to do it our way". It can't be done because "we have to provide power/healthcare/democracy for everyone and only we can be trusted to do it".

It will get chipped away at over time. But it will take several technology revolutions and perhaps a real one before the change can be completed. Perhaps the cyborgs might manage it.
Mar 3, 2014
Your idea seems to contain something of a contradiction.

If we already have a situation where most computing resources sit idle and therefore could be "outsourced" to the cloud for a remarkably low price, then it wouldn't be worth it for most people to buy a computer for the purpose of "outsourcing" it to the cloud to lower their bill or whatever.

Consider bitcoin mining. Is it possible for someone to mine BTC on their own PC using their spare computing resources? Sure. Are you likely to make any significant money whatsoever doing that? No. Economies of scale kick in. The people who make money mining BTC are operating vast farms of machines focused entirely on BTC mining.
Mar 3, 2014
I'm struggling to understand the use case as the consumer. When do I want this mega-boost of CPU? What am I doing that needs so much CPU that my $200 (and tomorrow $50) CPU can't do?
I don't think users care about CPU power, per se, they care about getting things done.

To wit:

[Speed is just one of the benefits, and I think more apps would be built to use the extra speed if consumers had an option to use it. But the more immediate benefits involve avoiding viruses, avoiding software upgrades, having every software app immediately available on demand, no tech support, no rebooting, no frozen screens, no trips to the Apple Genius bar with your laptop (I have been there six times in the past year), and no backing up data. There's a question of whether the system I described will work, given latency issues, but there is no question that it's an immense potential improvement for the consumer, and cheaper too. -- Scott]
Mar 3, 2014
Scott: the $$ savings you are suggesting are so trivial it's not worth the risk - not in the climate of very skilled free-market hackers that exists today - and government hackers, who are even scarier, because you can at least prosecute the free-market guys in the rare case where they are within the reach of the law. The government guys, instead of getting prosecuted for doing immoral things, get promotions from the government - and they're no more within the reach of the law than a hacker in Russia is, because the government hackers have been given immunity behind a veil of secrecy. As long as you hack and leak for the right political party, you're immune.

What you are suggesting seems like an incredibly bad and naïve idea.

In a world where everyone was a good guy and engineers were just making things better for everyone by making everything more efficient: your idea is great.

In the world we live in it isn't. Too many bad guys.
Mar 3, 2014
On the security issue ...
Over time, fewer and fewer humans will be involved as bots take over the daily support of the cloud. I think we are headed for a time when cloud transactions are the MOST secure, because AIs with full access and police authority continuously, dispassionately watch over it and each other for us. Hello, Kurzweil ...
Mar 3, 2014
I don't foresee this happening. The first question is - what is it that the user will be "renting" from you?

1. Specific software applications and the computing power to support them? - This is already offered by salesforce.com, NetSuite and a million app vendors

2. Software applications that currently are typically run within your own computer? - There are fewer and fewer of these around, but two I can think of are Microsoft Office (word, excel, etc.) and Adobe Creative Suite (InDesign, Photoshop, etc.). These software vendors are already offering hosted versions of these programs

3. Non-application software, i.e. your desktop environment, operating system, security programs, etc.? - Again, there are already companies out there providing this service on a hosted basis. It is called Desktop as a Service and/or Infrastructure as a Service

As a result, basically pretty much all packaged software along with the associated computing power is available on a hosted basis from the software vendors. They do charge for it, though the fees have mostly been shrinking over time. Nevertheless, individuals could not offer to provide these hosted packages to other individuals without violating copyrights. Sure, the software vendors could become Napster-ized, but I am not sure the risk/reward equation works from either the consumer or provider side.

So, if it isn’t packaged software, what would people be renting my computer power for – custom programs? How many people are out there writing custom programs that need more computing power than they have at their disposal? Are they going to trust the security of their crown jewels to a million unknown users around the globe? Are those users going to feel safe renting out their CPU to somebody wanting to run some unproven code on it?

Throw in the low cost of getting hosted CPU power from commercial vendors, and I just don’t see this happening.
Mar 3, 2014
There is a certain class of computational problems that can benefit from massive distributed processing. They are "time to solution" problems such as cryptography, SETI, some system simulations, etc. For the things most people do on their PC, the communication latency (the speed of light is a !$%*!$) and the partitioning and synchronization overhead would dominate and slow down the results on a distributed computing model.

I think it's a given that the idle compute cycles on your desk (or even your phone) will become a commodity that can be sold to people and organizations with huge computing needs, but it won't be of much benefit for web surfing, digital media, or doing your taxes.
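The latency point can be made with back-of-envelope arithmetic; the figures below are round, order-of-magnitude assumptions, not measurements:

```python
# Rough figures; both constants are order-of-magnitude assumptions.
c_fiber_km_per_ms = 200       # light in fiber travels ~200 km per millisecond
distance_km = 1000            # a fairly nearby data center

round_trip_ms = 2 * distance_km / c_fiber_km_per_ms  # ignoring routing overhead
local_ram_ns = 100            # one main-memory access on a desktop

ratio = (round_trip_ms * 1_000_000) / local_ram_ns
assert round_trip_ms == 10.0
assert ratio == 100_000.0     # the network hop is ~100,000x slower than local RAM
```

No amount of cheap remote CPU closes a five-orders-of-magnitude gap for interactive work, which is why only batch, "time to solution" jobs fit the model.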
Mar 3, 2014
Progress isn't linear -- it goes in waves (booms and busts.) There are times when computing power outpaces applications, and then the demand (and profit) flows to the people innovating new apps. Then there will be times when the potential for apps outstrips computing power. This idea would flourish in that environment.
Mar 3, 2014
A better analogy is coal-burning electricity, which is centralized because it benefits from economies of scale that solar panels do not. I.e., the cost of deploying and managing one solar cell on my roof is roughly the same as putting that same solar cell alongside a grid of many others in a centrally managed spot, whereas coal burning is significantly more efficient with a bigger furnace and requires large fixed costs to manage the purchase and delivery of the coal, etc.

CPUs require care and feeding in the form of high-speed, reliable networks, power, and cooling. Except for power, these are all significantly more efficient when done at scale. Without all of that, the value you would extract from a single CPU is so low as not to be worth it.
