Most organisations are busy with digital transformation and, for the same underlying reasons, with a transition to the cloud or the use of cloud functionality, usually guided by a cloud strategy. Unfortunately, most organisations apply “traditional” thinking to that cloud strategy, which undermines its success.
In the traditional days, there was a data centre with four walls and a minimal number of entry points, ideally one or maybe two. All incoming traffic passed through the firewall. Things from inside the walls were trusted and considered safe; things from outside were checked, filtered and considered dangerous and untrusted.
Some organisations had dual or twin data centres, or some other concept involving more than one location. In that case the network connections between locations were protected, or rather, the walls were moved: they no longer enclosed a single data centre but the whole area, creating a new safe “inside” and unsafe “outside” world.
Of course, this already led to discussion in the past, so in some cases network segments were created: VLANs were introduced and the physical network was divided into smaller chunks, with traffic restricted between them. In general, these chunks were still quite large.
With the adoption of public cloud, many organisations still apply the model above and extend their network “into the cloud”, which is a very bad idea. They use connections such as “Direct Connect” and “ExpressRoute” to do this, thinking this is convenient: the walls are simply moved, and applications and functionality in the cloud can easily connect with those on-premises and vice versa.
But this approach severely undermines the goals of the cloud strategy. The first problems with the traditional approach:
- Attackers in most cases come from “inside”: an employee, a contractor, a subcontractor
- There is no single “one or two points of access” anymore; there are multiple proxies, VPNs and networks, and there might even be a weakly protected WiFi connection involved that somehow has access to the network in order to use an application or reach a resource
- Once “in”, an attacker faces no significant further boundaries
- Scaling is difficult with a multi-cloud strategy: pulling in a new “segment”, such as an additional cloud service provider, is not a walk in the park and requires additional expense
- Apart from the previous point, scaling is difficult if all your traffic needs to go over “private routes” such as an ExpressRoute or Direct Connect connection; it will soon become too much, because the amount of data exchanged between functions, applications and other endpoints is growing exponentially
- Agility and flexibility were primary goals, yet setting up or moving these “outer walls” takes serious effort and time
Needless to say, giving a subcontractor access to do maintenance on an application that is reached through a public network is a far lighter on-boarding process than giving someone access to your whole network.
Digital business innovates at the speed of software, but networking has innovated at the speed of hardware (Gartner, 2019). Digital transformation is about speed, team autonomy, cost reduction and, perhaps most importantly, new business models and new business designs: new interactions between businesses, people and, last but not least, “things”. There will be more traffic to the public cloud than to the on-premises data centre, and more sensitive data in the public cloud than on-premises.
The data centre is no longer the pivot at the centre of your organisation; it is just another ecosystem the organisation uses. With public networks, scaling is easier and no extra steps are involved. It does mean that the whole traditional model used up till now needs to change: from trusting your own network to a zero-trust policy, and that is also not a walk in the park. The zero-trust model requires proper identification (an IP address or physical location is not enough) and encryption of (sensitive) data. Identification applies to everything: users, applications and devices.
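As a minimal sketch of what “identification counts for everything” can look like in code: access is granted only if a request carries a verifiable, signed identity claim, never because of where it comes from. The HMAC-signed token and shared secret below are simplifying assumptions; real zero-trust deployments would use tokens issued by an identity provider (for example JWTs with asymmetric signatures) covering user, application and device.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical shared signing key; in practice this would live in a secrets
# manager, and the token would be issued by an identity provider.
SECRET = b"demo-signing-key"

def sign_token(claims: dict) -> str:
    """Create a signed token carrying identity claims (user, app, device)."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token: str):
    """Return the claims only if the signature checks out.
    Note: network location plays no role in this decision."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or unknown: deny, regardless of source network
    return json.loads(base64.urlsafe_b64decode(payload))

token = sign_token({"user": "alice", "device": "laptop-42", "app": "billing"})
claims = verify_token(token)
```

The point of the sketch is the decision logic: a tampered token is rejected even if it arrives over a “trusted” private link, and a valid token is accepted even over the public internet.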
After taking this step, it no longer matters whether something is routed over a public network, and the full flexibility of the cloud can be used, ranging from SaaS providers to features offered by cloud service providers. Using the Microsoft Graph API? No problem: it already has a public endpoint and can be used safely over public networks, and that kind of flexibility is wanted for everything.
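To make the Graph API example concrete, a call to its public endpoint is just an HTTPS request carrying a bearer token; identity travels with the request, so no private network path is needed. The token string below is a placeholder, in reality it would be obtained from Azure AD (for example via an MSAL client credentials flow).

```python
import urllib.request

ACCESS_TOKEN = "eyJ..."  # placeholder; a real token comes from Azure AD

# Public Microsoft Graph endpoint, reachable over the public internet.
req = urllib.request.Request(
    "https://graph.microsoft.com/v1.0/me",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
# urllib.request.urlopen(req) would perform the call; with a valid token the
# request is authenticated and encrypted (TLS) end to end.
```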
APIs can be used not only to expose business and application functions to the outside world, but also as a safe way to provide application and business functions to other ecosystems (a multi-cloud approach: one cloud environment hosting the functionality, used both from on-premises and from another cloud provider’s environment).
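A minimal sketch of such an API, under the assumption of a hypothetical “pricing” business function and a placeholder token check: the same endpoint serves on-premises systems, other clouds and external consumers alike, because the access decision is made per request on identity, not on network origin.

```python
import json

def pricing_api(environ, start_response):
    """Hypothetical business function exposed as a WSGI application.
    Any WSGI server (e.g. wsgiref.simple_server) behind TLS can host it."""
    auth = environ.get("HTTP_AUTHORIZATION", "")
    # Placeholder check; a real service would validate a signed token
    # against an identity provider, as in a zero-trust setup.
    if auth != "Bearer valid-demo-token":
        start_response("401 Unauthorized", [("Content-Type", "application/json")])
        return [b'{"error": "unauthorized"}']
    body = json.dumps({"product": "widget", "price": 9.99}).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]
```

Because the contract is just HTTPS plus an identity token, consuming this function from a second cloud provider requires no ExpressRoute, Direct Connect or VPN, only credentials.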
This way the full flexibility, speed, agility and probably many more of the cloud goals are achieved, and the cloud’s full potential is released. The heavy maintenance of the “traditional” model and way of working is eliminated. Implementing zero trust is a challenge, but the payoff is huge.