Will Cloud Computing Ever Be Up to the Challenge?
As the news broke yesterday about Microsoft’s failure to protect the data of some Sidekick mobile phone users, the temptation to blame the whole concept of cloud computing was too great for many. Chief information officers tend, in any case, to fall into the camp of “if you want something run right, run it yourself.” For many of them, the notion that you would let someone else provide your critical business technology infrastructure “as a service” is mind-boggling. You can’t see it. You can’t operate it. You may not know who provided the component technologies from which it’s built. And you can’t always do much to manage it.
So why would you venture into cloud computing? Answer: Because it’s cheaper to operate. It’s easier to scale. It’s faster to provision. And you don’t have to find, hire, manage, and retain increasingly expensive technical talent to run it. It really is the better model — until, that is, that critical infrastructure suddenly isn’t there any more.
Today’s infrastructure-as-a-service providers know that they have to be continuously available. I’ve read my share of service level agreements, and all of them give guarantees that they will always be there (although the fine print often admits that they will merely do their best). Their engineering teams use well-proven, high-availability designs. Their operations teams monitor the infrastructure continuously and try to anticipate situations that would take them off the air. They have redundancy, excess capacity, backups, and backups for the backups. Their reputation depends on it.
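Those “always there” guarantees are worth translating into numbers. As a rough illustration (the figures below are generic examples, not drawn from any particular provider’s agreement), even a “three nines” availability guarantee still permits several hours of outage a year:

```python
# Illustrative arithmetic, not from any specific SLA: how much annual
# downtime a stated availability percentage actually allows.

def allowed_downtime_hours(availability_pct: float,
                           hours_per_year: float = 8760.0) -> float:
    """Hours of outage per year still consistent with the stated availability."""
    return hours_per_year * (1.0 - availability_pct / 100.0)

for nines in (99.0, 99.9, 99.99):
    hrs = allowed_downtime_hours(nines)
    print(f"{nines}% uptime allows ~{hrs:.2f} hours of downtime per year")
```

At 99% availability that is roughly 87.6 hours a year; at 99.9%, about 8.8 hours; only at 99.99% does the allowance drop below an hour. The guarantee and the customer’s intuition of “always there” can be quite far apart.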
So why do we have a continuing pattern of service outages in “the cloud”? This week’s Sidekick meltdown is no outlier. Given the glitches already experienced with Google’s Gmail, PayPal, and Salesforce.com, it’s safe to assume no one is immune.
I see four reasons:
- It’s new. This approach to infrastructure is still evolving quickly, with early providers still discovering and working out bugs.
- It’s software dependent. Cloud services rest on software, which is far less reliable in practice than the hardware it runs on, requires frequent patching and updating, and is a bear to manage at scale and at high levels of complexity.
- It’s experimental. We are all, to some extent, dabbling in infrastructure as a service and (remember this) experiments sometimes fail.
- It’s big. Our existing tools were built for a much smaller scale of operation in much less heavily tenanted environments.
The first three conditions wouldn’t be so troubling if it weren’t for the fourth. Consider that if you do something dumb in your in-house infrastructure, you only hurt yourself. If you make the same error in the cloud, you can take down a lot of your invisible co-tenants. Application designers haven’t learnt to think that way yet.
Cloud computing is still the shape of the future. The infrastructure-as-a-service model has too much going for it to retreat, and momentum will only build as more companies spot the clear economic potential.
For the moment, however, early adopters should keep their eyes open about the risks. Understand that you’re sharing a journey of discovery with the providers — and plan accordingly.
John Parkinson serves as chief technology officer for one of the world’s largest credit bureaus, and writes frequently on the effective management of information technology.