The high-profile Amazon Web Services outage during last year's Sydney storm is a timely reminder of the importance of managed services.

Many people, including those in the media, were stunned last year when the de facto standard in cloud computing, Amazon Web Services, suffered a two-hour outage during a fierce Sydney storm. With a range of online services – from food delivery to financial websites – taken down along with Amazon, outages like these show why businesses need additional data protection and application availability strategies for mission-critical services.

The cloud is good, but it will go down. 

Being the market leader, Amazon tends to cop the wrath of the industry whenever there is a problem, but in fairness, outages like this are rare and have affected all the major industry players at one time or another. The reliability of public clouds is generally very good, but in a multi-tenant architecture there's only so much reliability engineering that can be done for any single customer.

Customers must accept this as a fundamental risk of public clouds and add their own layer of reliability engineering to compensate. They should also have data backup strategies in place to ensure data is not stored solely on one public cloud.

Enter Managed Services Providers

What's the answer to better reliability for organisations without the in-house IT capability to support it? Take a look at the thriving managed service provider (MSP) industry, which supports clients on their journey to the cloud and also offers highly available hosting and disaster recovery services for business-critical applications.

MSPs care about your application's uptime and take a proactive approach to business continuity. If a problem does occur, it will be rectified in minutes, not hours.

Talk to an MSP about their hosting facilities and how their expertise can help improve service reliability, even across public clouds.

Posted on January 17, 2016
