Gone are the days when CIOs and their teams focused on managing their data centers and obsessed over high availability, disaster recovery and latency across the wide area network. Data centers still exist for many organizations, although their management is often outsourced or their footprint is shrinking. Some technology teams have a private cloud strategy, using DevOps-style tools to automate environment provisioning and harness infrastructure as code in their own data centers; others are experimenting with public cloud offerings for sandboxed development, or have gone all in and committed critical production services to their chosen cloud partner. And not everyone has just one partner.
The terms multicloud and hybrid cloud can be confusing as they are often used interchangeably. BMC differentiates between them by defining hybrid cloud as a combination of private and public cloud (which could include multicloud) and multicloud as an environment where multiple public clouds are used.
In a recent Gartner survey of public cloud users, 81% of respondents said they are working with two or more providers. But why would technology teams choose to increase the complexity of their operations by having multiple locations in which to build, change and run their services? According to Gartner, there are a number of reasons an organization may purposefully choose to pursue a multicloud strategy:
- They operate in multiple geographies and have complex challenges to solve around availability, performance, data sovereignty, regulatory requirements and labor costs
- They are concerned about the dominance of mega vendors (e.g. AWS, Azure, GCP)
- They want to, or their regulators guide them to, minimize vendor lock-in
- They want to take advantage of the best-of-breed capabilities for their particular platform or market needs
Modern application architectural practices, particularly modular, containerized and microservice-based applications, make it possible for the services behind a single customer journey to run in different places; ‘applications’ themselves are becoming increasingly fragmented, and in today’s development world their constituent pieces rarely exist on a single ‘machine’.
Top benefits of going multicloud
Splunk lists its six top benefits of going multicloud:
- Improved reliability
- Cost savings
- Performance optimization
- The ability to avoid vendor lock-in
- The adoption of best-of-breed products
- Lower risk of DDoS attacks
But there are challenges too: what about data governance and compliance? What about ensuring staff have the requisite skills for each of the chosen clouds? What if the DevOps toolchain varies from platform to platform, or from team to team? How do we keep control of all of these costs? And, perhaps most serious of all, how do we ensure our data and services are safe? What does DevSecOps look like in multicloud?
Whilst you may question the assertion that multicloud is inherently more reliable, Splunk is making the point from a security perspective: “It can make it more difficult for hackers to take down all an organization’s services if they are distributed across multiple clouds.” I assume this is asserted because the need for different credentials effectively creates multiple ‘walls’. Splunk goes further, though, and also explains where some of the cost savings may be found:
“In a multicloud strategy a passive cloud can be the fallback solution when a primary cloud is taken down or has performance issues. This can help reduce or eliminate downtime until the primary cloud is brought back online. Improved reliability and less downtime also lead to more cost savings for businesses.”
"A multicloud strategy can help reduce the risk of DDoS attacks"
I question whether performance optimization wouldn’t actually be easier in a simpler environment, i.e. with fewer endpoints in different places to manage, but I agree with Gartner that avoiding vendor lock-in and having access to the best tool for the job are good reasons to spread the risk and adopt multicloud. Splunk also explains how a multicloud strategy can help in a Distributed Denial of Service (DDoS) incident:
“A multicloud strategy can help reduce the risk of DDoS attacks by spreading traffic over multiple clouds. A company that has its services distributed across multiple clouds becomes harder to fall victim to a devastating DDoS attack because they are less reliant on one cloud.”
But the organization is still losing that service? And it’s the same with failover: the cost and effort of maintaining high availability across a secondary or tertiary cloud seem higher than for a simpler cloud architecture, and there are other solutions for DDoS, such as Akamai, which would prevent the loss of any service in this kind of attack.
Cloud security is a huge topic, and it’s important to remember that having a multicloud strategy does not mean you’re outsourcing security to your vendors. You should expect and verify that they have the best possible protocols and tools in place, but, ultimately, security is the job of the technology teams (note: not just the security teams; DevSecOps demands that everyone consider themselves accountable for security every day).
Zero Trust and cloud
Adam Stern, CEO and founder of Infinitely Virtual, recommends incorporating zero trust security models, including least privilege access. He says:
“Cloud risk management is especially critical in multicloud environments. Multi-factor authentication needs to be mandated as a matter of course. That is, engineers who manage every component within a given infrastructure (routers, firewalls, storage systems, etc.) must be subject to multifactor authentication.”
But Tripwire goes further than this in its Multicloud Security Practices Guide:
“Usually, cloud providers are responsible for the security of their own infrastructure, and they should be able to provide your organization with some of the capabilities you need in order to protect your data while it’s in their infrastructure. Those capabilities include multi-factor authentication vectors, encryption technologies, and identity and access management.
Your organization will usually be responsible for how you use your data in their infrastructure. Any software that your organization develops or acquires from a third party should be patched and otherwise security hardened by your organization.”
Consolidating your cloud security services
Your engineers are having to consider new tools and solutions that allow their code to run anywhere, so that their applications are portable in and across multiple clouds. You now have fragmented, ephemeral, immutable services and infrastructure, and you need to make sure your chosen cloud providers can support your identity and access management policies, particularly from a machine identity perspective, now that you have containers and microservices running across different clouds.
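To make that machine identity point concrete, here is a minimal illustrative sketch in Python (the hostname and file paths are hypothetical) of one workload authenticating to another over mutual TLS, using certificates issued from an internal CA that is trusted consistently in every cloud:

```python
import http.client
import ssl

# Hypothetical hostname and file paths, for illustration only.
SERVICE_HOST = "payments.internal.example.com"   # peer service, possibly in another cloud
CLIENT_CERT = "/etc/identity/workload.crt"       # this workload's machine identity (certificate)
CLIENT_KEY = "/etc/identity/workload.key"        # ...and its private key
CA_BUNDLE = "/etc/identity/internal-ca.pem"      # internal CA trusted across all clouds

# Verify the peer against the shared internal CA, and present our own
# certificate so the peer can verify us in return (mutual TLS).
ctx = ssl.create_default_context(cafile=CA_BUNDLE)
ctx.load_cert_chain(certfile=CLIENT_CERT, keyfile=CLIENT_KEY)

conn = http.client.HTTPSConnection(SERVICE_HOST, context=ctx)
conn.request("GET", "/health")
print(conn.getresponse().status)
```

However the two workloads are hosted, the authentication works the same way because it depends on the certificates themselves, not on any provider-specific identity service.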
In his article ‘Mastering the Art of Multicloud Security’, John Wilson, Fujitsu’s EMEIA Cloud Security Offerings Manager for Enterprise & Cyber Security, quotes Andras Cser, a principal analyst covering security and risk at Forrester Research, as saying that securely managing multicloud environments requires a different set of tools, such as cloud console and configuration monitors, identity data integration, and access managers.
Whilst all the major cloud providers are aggressively launching more sophisticated solutions, Wilson warns that adopting a specific set of vendor security controls is not always the best approach:
“Organizations need to be aware that cloud providers will actively push their native controls with a view to tying a customer into their cloud environment and charging them for access to those controls. Sometimes that might be the most economical solution, but in a multicloud world it’s often better to use central controls to ensure security policies can be consistently applied across the estate.”
Are your TLS certificates covered across all clouds?
DevOps-centric solutions for cloud management, like OpenStack, Terraform and Cloud Foundry, support a cloud operating model that is multicloud and cloud agnostic. However, all too often, these approaches overlook SSL/TLS certificates, which serve as machine identities to enable machine-to-machine authentication and secure communication. So whilst AWS Certificate Manager (assuming AWS is part of your multicloud platform) will provision, manage, and deploy public and private SSL/TLS certificates for use with supported AWS services and your internal connected resources, what about your other clouds? And your public certificates?
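As a rough illustration of what “covered across all clouds” means in practice, the sketch below (Python standard library only, with made-up hostnames) checks how many days remain on the certificate presented by each endpoint, regardless of which cloud serves it or which certificate authority issued it:

```python
import socket
import ssl
import time

# Made-up endpoints spread across different clouds; replace with your own.
ENDPOINTS = [
    "shop.example.com",        # e.g. served from AWS behind ACM
    "api.example.net",         # e.g. served from Azure
    "reports.example.org",     # e.g. served from GCP
]

def days_until_expiry(host: str, port: int = 443) -> int:
    """Connect to host:port, fetch the presented TLS certificate and return days left."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    not_after = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((not_after - time.time()) // 86400)

for host in ENDPOINTS:
    try:
        print(f"{host}: {days_until_expiry(host)} days until expiry")
    except (OSError, ssl.SSLError) as exc:
        print(f"{host}: could not check certificate ({exc})")
```

A real inventory would come from automated discovery or a certificate management service rather than an ad hoc script, but the principle of checking every endpoint, whichever cloud serves it, is the same.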
It’s important to think about TLS certificates as a key component of securing cloud workloads, and certificate processes (sources, APIs, life cycle management for renewals) must also support cloud portability of applications. Beyond portability, there are other reasons to centrally control the certificate authorities in use rather than rely on whatever each cloud provider offers without an abstraction layer. For instance, security teams need a way to enforce and verify that policy is being followed and that weak certificates are not being used on business-critical applications running in the cloud. Data privacy is one of the most important things security is there to protect, but without visibility into the certificates being used to secure cloud workloads, security teams not only get blamed for outages, they also can’t adequately inspect traffic.
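By way of example only, a central policy check might look something like the following sketch, which uses the third-party cryptography package to flag certificates that fall outside an assumed policy (RSA keys below 2048 bits, or signatures that do not use a SHA-2 family hash); the thresholds and endpoint are illustrative, not recommendations from any of the vendors quoted above:

```python
import socket
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa

# Illustrative policy thresholds; set these to whatever your security team mandates.
MIN_RSA_BITS = 2048
ALLOWED_SIG_HASHES = {"sha256", "sha384", "sha512"}

def certificate_policy_issues(host: str, port: int = 443) -> list[str]:
    """Fetch the certificate presented by host:port and return a list of policy violations."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)

    issues = []
    public_key = cert.public_key()
    if isinstance(public_key, rsa.RSAPublicKey) and public_key.key_size < MIN_RSA_BITS:
        issues.append(f"RSA key too small: {public_key.key_size} bits")
    if cert.signature_hash_algorithm and cert.signature_hash_algorithm.name not in ALLOWED_SIG_HASHES:
        issues.append(f"weak signature hash: {cert.signature_hash_algorithm.name}")
    return issues

print(certificate_policy_issues("shop.example.com"))   # hypothetical endpoint
```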
As cloud and DevOps mature, organizations should research their options upfront, before adopting any one cloud provider’s solutions, and take the time to make the right choices so that applications remain portable and provide the requisite business agility in the future.
John Morgan, VP/GM of the Security Business Unit at F5, speaks on the importance of machine identities in deploying and scaling applications securely from the data center to the cloud.