The acceptance of public cloud has grown significantly over the last few years. This is not surprising, given the cost effectiveness, high availability and on-demand scalability that public cloud services offer. For start-ups and smaller companies in particular, moving to the public cloud means they don't need to invest in expensive infrastructure. The public cloud takes care of infrastructure concerns and also offers an extensive catalogue of applications and services that such companies can use as building blocks for their own applications. Add to this the reliability and flexibility of the public cloud, and it's not hard to see why this is such a compelling proposition for many companies today. All of these factors, together with growing confidence in the public cloud, especially with regard to security and privacy, have led to a steady increase in its adoption.
However, despite the benefits enumerated above, several constraints still hold back the adoption of public cloud. One of the most obvious is the infrastructure overhaul that the move requires. Migrating established business processes to the cloud often involves re-architecting your entire ecosystem, with all the associated costs in time and money. It is to avoid such upheaval that many organizations set up their own private cloud: the hardware stays within the organization's environment, but a software layer on top of it offers cloud services, so that the work, in effect, never leaves the boundaries of the organization.
Another reason holding organizations back from public cloud platforms is the compliance and regulatory burden in industries such as healthcare. Many industries must also meet privacy standards, especially with regard to customer data, which obliges them to keep certain processes and data within the organization and out of the public domain. This means either hosting such processes on servers in a tightly controlled environment protected by firewalls from outside traffic, or using private cloud services.
Yet another reason is the latency requirements associated with certain high-priority processes. Such processes typically require real-time decision making and action and are therefore hosted locally. For instance, at any given point in time a typical manufacturing plant is generating many different types of data. Not all of it is equally critical: much of it is meant only for long-term retention or for strategic decision making. At least part of it, however, is genuinely critical and requires an immediate response; for a manufacturing plant, anything to do with the shop floor might fall into this category. For such processes, moving to a public cloud brings a risk of latency that may be completely unacceptable.
Thus, every organization has processes that can benefit immensely from public cloud services, as well as processes that, for regulatory or latency reasons, need to be hosted either in the local environment or on a private cloud. It is this mix of imperatives that explains the increasing adoption of hybrid cloud over the last few years. Hybrid cloud models allow organizations to combine data and services from different cloud models into a seamlessly integrated computing environment that strikes the desired balance between flexibility and control.
Hybrid clouds, while great in theory, have long suffered from one major drawback: until very recently, the private and public cloud arenas were served by different players. As a result, companies have historically had a tough time integrating their public and private cloud ecosystems, because moving an application from a private to a public cloud environment meant reworking its entire cloud-native architecture. This has been a serious obstacle for organizations, paralyzing important cloud decisions: every such decision, whether to move to a private cloud or to move from private to public cloud, carried a cost of re-architecture.
However, with the introduction of Microsoft Azure Stack, all of this has changed. Microsoft Azure Stack is an extension of Azure that brings Azure to your on-premises environment. Azure Stack lets you install a private cloud that looks and behaves just like Azure, offering the same services, tools and applications. It is almost impossible for a developer or an operations person to tell whether they are working on a public or a private cloud dashboard: they get a uniform view and can operate the same way in either environment. The decision of whether a given application or process should run on the private or the public cloud can then be made purely on business, technical and regulatory grounds, and workloads can move seamlessly between the two. There is no need to re-architect applications for the public cloud, because Azure and Azure Stack share a consistent API surface area. This also means that experiences, tooling, operations, deployment and configuration can be common across public and private clouds, increasing efficiency. Azure Stack can even be used in disconnected mode for deployments that are not always connected to Azure.
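To make the "consistent API surface" point concrete, here is a minimal sketch of an Azure Resource Manager (ARM) template that provisions a storage account. The template itself is illustrative (the parameter name and the `apiVersion` shown are assumptions; Azure Stack supports a subset of Azure's resource provider API versions, so you would check the versions available in your stamp), but the key idea is that the same template format targets both clouds unchanged:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[parameters('storageAccountName')]",
      "apiVersion": "2016-01-01",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "Storage",
      "properties": {}
    }
  ]
}
```

The same tooling deploys this to either environment: the Azure CLI, for example, can be pointed at an Azure Stack deployment by registering its Resource Manager endpoint with `az cloud register` and selecting it with `az cloud set`, after which the usual deployment commands work against the private cloud exactly as they do against public Azure.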
Azure Stack has unlocked new hybrid use cases such as edge and disconnected solutions, and applications that need to meet policy requirements. It has also brought the cloud application model on premises. Together, these open the door to massive growth in hybrid cloud adoption in the near future.
About Apurva Vaidya
Apurva Vaidya is a Principal Architect at Sogeti, specializing in the Data Center, Cloud and Endpoint computing domains. He works as part of the solutions team and focuses on Hewlett Packard projects as a CTO, supporting teams in delivering better software solutions. Apurva holds a graduate degree in Information Technology and a postgraduate degree in Software Engineering. He began his career 13 years ago and has experience with technologies that add business value for customers. He has worked in diverse areas such as Product Design & Development, R&D for Emerging Technologies, and Architecture & Design of Complex IT solutions. Apurva has been a speaker at multiple conferences and has deep knowledge of storage and cloud technologies.
More on Apurva Vaidya.