Gone are the days of monolithic applications: microservices architecture is the current industry trend, enabling us to build applications as collections of loosely coupled services. Microservices architecture enables the continuous delivery and deployment of large, complex applications. And where virtual machines abstract the hardware, containers add another layer of abstraction, decoupling the OS from the applications. Containerization is a software development approach that packages an application or service, together with its dependencies and configuration, as a container image. That image can then be deployed on almost any infrastructure with very few changes. Packed with other such benefits, containers are proving to be the need of the hour today.
The process of deploying and managing such microservices includes identifying the hosts, linking the services through agreed interfaces, rescheduling failed services, scaling the instances and load balancing requests between the running instances. This becomes challenging when an application consists of many such microservices. Various software platforms provide clustering, orchestration and scheduling facilities, such as Kubernetes, Docker Swarm and Mesosphere DC/OS.
Azure Service Fabric is a distributed systems platform that enables you to build and manage scalable, reliable microservices and containers. Service Fabric also addresses the significant challenges of developing and managing cloud-native applications. Microservices can be deployed as containers or as processes, and Service Fabric can orchestrate both. Service Fabric is mature on Windows as well as Linux, and thanks to its "Any Cloud, Any OS" concept it is well suited to hybrid cloud solutions.
Service Fabric provides a cluster of virtual machines on which to deploy containers. These virtual machines can come from a local datacenter, Azure, AWS or any other cloud vendor. They act as the nodes that form a Service Fabric cluster. Applications and containers hosted on the cluster run on these nodes and are managed by the internal load balancer provided by the Service Fabric environment. Such a cluster can be created using the Azure portal or standalone packages.
To test apps and containers, or simply to get a feel for Service Fabric, Azure provides party clusters that can run your applications for an hour. Microsoft also provides an open-source Service Fabric SDK, which integrates with various Microsoft development tools and helps expedite application development. It also lets you run a local Service Fabric cluster that behaves almost the same as a Service Fabric cluster in the Azure cloud, which speeds up testing of applications before they are deployed to production. The cluster's connection endpoint is used to connect to the cluster and publish applications to it.
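As a minimal sketch, assuming the Service Fabric SDK and its PowerShell module are installed, connecting to the local development cluster looks like this:

```powershell
# With no arguments, Connect-ServiceFabricCluster connects to the local
# development cluster started by the Service Fabric SDK.
Connect-ServiceFabricCluster

# Verify the connection by querying cluster health and listing the nodes.
Get-ServiceFabricClusterHealth
Get-ServiceFabricNode
```

The same cmdlets work against a cloud cluster once you pass its connection endpoint, as shown later in this article.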
Azure Service Fabric offers two methods to deploy applications to a cluster:
Compose Deployment (instance definition):
A docker-compose.yml file describes a deployable set of containers, including their properties and configurations, for example environment variables and ports. You can also specify deployment parameters, such as placement constraints, resource limits and DNS names, in the docker-compose.yml file. This feature is currently in preview and is expected to reach general availability soon.
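As an illustration, a minimal docker-compose.yml for such a deployment might look like the following; the service name and image are hypothetical:

```yaml
version: '3'

services:
  helloworld:
    # Hypothetical image name; replace with your own registry/image.
    image: myregistry.azurecr.io/helloworld:latest
    ports:
      # Map host port 80 to container port 80.
      - "80:80"
    environment:
      - GREETING=Hello from Service Fabric
```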
Azure Service Fabric tools such as the PowerShell cmdlets, az and sfctl are used to connect to the cluster and deploy applications from the YAML file. For example, to deploy a helloworld container on the cluster:
Connect-ServiceFabricCluster -ConnectionEndpoint "ServiceFabric01.ContosoCloudApp.net:19000"
New-ServiceFabricComposeDeployment -DeploymentName hellodeployment -Compose helloworld.yml
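The equivalent deployment can be sketched with sfctl, assuming the same hostname and YAML file as above (sfctl talks to the cluster's HTTP management endpoint, port 19080 by default):

```shell
# Select the target cluster by its HTTP management endpoint.
sfctl cluster select --endpoint http://ServiceFabric01.ContosoCloudApp.net:19080

# Create the compose deployment from the same YAML file.
sfctl compose create --deployment-name hellodeployment --file-path helloworld.yml

# Check the deployment status.
sfctl compose status --deployment-name hellodeployment
```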
Service Fabric app model (type definition)
The Service Fabric application model uses service types and application types, where you can have many application instances of the same type.
The Cluster Resource Manager takes care of scaling and load balancing the applications.
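In this model, an application type and its service types are described declaratively in manifests. A minimal sketch of an ApplicationManifest.xml follows; all type and package names are hypothetical:

```xml
<!-- Minimal sketch of an ApplicationManifest.xml; names are hypothetical. -->
<ApplicationManifest ApplicationTypeName="HelloWorldAppType"
                     ApplicationTypeVersion="1.0.0"
                     xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <!-- Reference the service package defined in its ServiceManifest.xml. -->
  <ServiceManifestImport>
    <ServiceManifestRef ServiceManifestName="HelloWorldServicePkg"
                        ServiceManifestVersion="1.0.0" />
  </ServiceManifestImport>
  <DefaultServices>
    <!-- A stateless instance of the imported service type;
         InstanceCount="-1" means one instance per node. -->
    <Service Name="HelloWorldService">
      <StatelessService ServiceTypeName="HelloWorldServiceType" InstanceCount="-1">
        <SingletonPartition />
      </StatelessService>
    </Service>
  </DefaultServices>
</ApplicationManifest>
```

Many application instances of the same application type can then be created from this single type definition.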
Scaling in Azure Service Fabric:
- Based on the instance count of the services: setting the instance count to "-1" makes Service Fabric run one instance of the service on every node, so the service scales automatically as nodes join the cluster.
- Auto-scaling based on performance counters monitored from the VMs in the virtual machine scale sets.
- Manually adding nodes (VMs) to the cluster.
- The above can be achieved either by adding code to the service itself to monitor its scaling requirements, or manually using ASF tools such as the PowerShell cmdlets.
- Load balancing triggers can be defined using timers during the cluster creation.
- Load balancing thresholds can also be defined for a cluster.
- When a threshold exceeds the defined value, load balancing is triggered on timer expiration.
- Cluster balancing can also be controlled by defining "Activity Thresholds".
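Manually scaling a stateless service's instance count can be sketched with the PowerShell cmdlets; the cluster endpoint matches the earlier example and the service name is hypothetical:

```powershell
# Connect to the cluster (endpoint as in the compose deployment example).
Connect-ServiceFabricCluster -ConnectionEndpoint "ServiceFabric01.ContosoCloudApp.net:19000"

# Scale a hypothetical stateless service to one instance per node (-1),
# so it grows and shrinks with the number of nodes in the cluster.
Update-ServiceFabricService -Stateless fabric:/HelloWorldApp/HelloWorldService -InstanceCount -1
```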
Apart from clustering and deployment, Service Fabric provides additional programming models:
- Reliable Services – a lightweight framework for writing services that integrate with the Service Fabric platform and benefit from the full set of platform features.
- Reliable Actors – a framework based on the actor design pattern that implements virtual actors. It is built on top of Reliable Services and is particularly useful when you need the runtime to instantiate large numbers of (preferably small) objects.
Both programming models support stateless and stateful services: the stateful version maintains state in persistent storage implemented directly in the service, whereas the stateless version does not.
Using a Service Fabric cluster gives the services running on it high availability, reliability and scalability. Service Fabric dissociates the application from its infrastructure and lets developers focus on the business logic, bringing value to customers at a faster pace.
*contributed by Amit Jain and Ankit Todankar.
About Apurva Vaidya
Apurva Vaidya is a Principal Architect at Sogeti, specializing in the Data Center, Cloud and Endpoint computing domains. He works as part of the solutions team and focuses on Hewlett Packard projects as a CTO, helping teams deliver better software solutions. Apurva graduated in Information Technology and holds a postgraduate degree in Software Engineering. He began his career 13 years ago and has experience with technologies that add business value for customers. Apurva has worked in diverse areas like Product Design & Development, R&D for Emerging Technologies, and Architecture & Design of Complex IT solutions. He has been a speaker at multiple conferences and has deep knowledge of storage and cloud technologies.
More on Apurva Vaidya.