Amr Alashaal, Regional Vice President – Middle East at A10 Networks, talks about making cloud-native applications easily accessible in a hybrid cloud.
To stay ahead of the competition, organizations are constantly looking for ways to drive innovation with speed and agility while maximizing operational and economic efficiency. To that end, they have been migrating their applications to multi-cloud and hybrid cloud environments for quite some time.
Initially, these applications were moved to the cloud using a “lift-and-shift” approach, retaining their original monolithic architecture. However, such monolithic applications are unable to fully exploit the benefits offered by the cloud, such as elasticity and distributed computing, and are also difficult to maintain and scale.
Consequently, as the next evolutionary step, organizations have started to rearchitect their existing applications or develop new ones as cloud-native applications.
A related aspect of deploying cloud-native applications is automation. Organizations deploying such applications have had success in automating the deployment of the underlying infrastructure on which these cloud-native applications run, as well as the initial deployment of the applications themselves. They have, however, struggled with the subsequent steps, such as making these applications accessible to end users, scaling the applications up and down, or moving them from one cloud to another.
The main reason for this is that the load balancers used to front-end these applications and make them accessible to end users were designed with monolithic applications in mind, and hence are unable to keep pace with the agile manner in which cloud-native applications are deployed.
These load balancers were designed for a deployment process in which network resources for the applications are provisioned manually by network and security teams, a process that could take days if not weeks, and then configured manually on the load balancer. This clearly inhibits the automation that organizations aspire to.
Further compounding this problem is the fact that, when deploying applications in multi-cloud and hybrid cloud environments, each public cloud provider has its own custom load balancer and management system. For example, AWS has its own Elastic Load Balancing solution, which is different from Microsoft’s Azure Load Balancer. This makes the task of automating application deployment even more complex and time-consuming. It also makes applying a consistent set of policies across the different cloud environments more error-prone, as each load balancer has its own separate configuration.
To keep pace with cloud-native applications, one needs an application access solution that enables the load balancer to dynamically manage new cloud-native applications as they are deployed and scaled.
One way to achieve this is by deploying a connector agent that connects the load balancer to the cloud-native applications. Such a connector could monitor the lifecycle of the cloud-native applications, and automatically update the load balancer with information to route traffic to these applications. This would help eliminate the delays associated with the manual process.
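As a rough sketch of how such a connector agent could work, the Python example below (assuming a Kubernetes environment and the official Kubernetes Python client) watches Service lifecycle events and relays the routing information to a load balancer. The `update_load_balancer` function is a hypothetical placeholder for whatever management API the load balancer actually exposes.

```python
# Minimal sketch of a connector agent: watch Kubernetes Services and
# notify a load balancer whenever an application is added, changed or removed.
# update_load_balancer is a hypothetical placeholder, not a real product API.
from kubernetes import client, config, watch

def update_load_balancer(event_type, name, namespace, ports):
    # Placeholder: a real connector would call the load balancer's management
    # API here to add, update or remove the virtual service and its backends.
    print(f"{event_type}: {namespace}/{name} -> ports {ports}")

def main():
    # Use in-cluster credentials when running as a pod, or a local kubeconfig otherwise.
    try:
        config.load_incluster_config()
    except config.ConfigException:
        config.load_kube_config()

    v1 = client.CoreV1Api()
    w = watch.Watch()

    # Stream Service lifecycle events (ADDED / MODIFIED / DELETED) and relay
    # the routing information to the load balancer as they happen.
    for event in w.stream(v1.list_service_for_all_namespaces):
        svc = event["object"]
        ports = [p.port for p in (svc.spec.ports or [])]
        update_load_balancer(event["type"], svc.metadata.name,
                             svc.metadata.namespace, ports)

if __name__ == "__main__":
    main()
```

Because the connector reacts to lifecycle events as they occur, the load balancer's configuration stays in step with the applications without anyone raising a ticket or editing a configuration by hand.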
This approach would work when deployed in a single cloud, but to truly make it work in multi-cloud and hybrid cloud environments, the solution would need to be available in different form factors, such as physical, virtual and container, so it can be deployed in both public and private clouds. A solution that works consistently across the different cloud environments also provides the associated benefit of allowing a consistent set of access policies to be applied to an application, irrespective of the cloud in which it is running.
Finally, the solution should have deep integration with automation tools such as Terraform, Ansible and Helm, so that the whole application deployment process can be automated.
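As an illustration only, the short Python sketch below shows the kind of end-to-end glue such integration enables: a single routine that drives Terraform, Ansible and Helm in sequence to provision infrastructure, configure it, and roll out the application. The file, playbook and Helm release names used here are hypothetical examples.

```python
# Illustrative sketch of an end-to-end deployment pipeline.
# The Terraform configuration, Ansible playbook and Helm chart referenced
# here are hypothetical examples of what such a pipeline might drive.
import subprocess

def run(cmd):
    # Run a command and fail fast if any step of the pipeline breaks.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def deploy():
    # 1. Provision the underlying cloud infrastructure with Terraform.
    run(["terraform", "init"])
    run(["terraform", "apply", "-auto-approve"])

    # 2. Configure the provisioned hosts and services with Ansible.
    run(["ansible-playbook", "configure.yml"])

    # 3. Deploy (or upgrade) the cloud-native application with Helm.
    run(["helm", "upgrade", "--install", "my-app", "./chart"])

if __name__ == "__main__":
    deploy()
```

With the connector handling application access automatically, a pipeline like this can take a cloud-native application from provisioning to production without manual hand-offs between teams.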