Scalable software deployment in the cloud

What is special about deployment in cloud environments?

Cloud environments, as built by root360, are no longer a static set of servers but code ("Infrastructure as Code"). This code determines how the individual components of the environment interact. The components are services such as databases, web servers, caches, firewalls or CDNs; they are integrated via APIs and should remain interchangeable. Some services - e.g. web servers - should also be able to scale automatically. Such flexible and scalable cloud hosting environments also demand a rethinking of deployment processes, since there is no longer "the one" (web) server, but any number of virtualized (web) servers or instances ("multi-server") that can be created and removed automatically.

What requirements result from this?

We have identified the following requirements from our experience:

  • Support (auto)scalability
    In automatically scaling cloud hosting environments, more or fewer instances become available as the load increases or decreases (for example, with the number of visitors). The deployment system must support this scaling so that new instances can be quickly provisioned with the correct application software version and instances that are no longer needed can be removed cleanly.

  • Support of common protocols
    The application code needs to reach the instances, whether from a version control system such as Git or SVN, via downloads over HTTP or FTP, as a complete file copy via rsync, or from a continuous integration system such as Jenkins. Ideally, a deployment system supports these sources and more.

  • Customizable deployment processes
    Every application has its own subtleties. Whether it involves the subsequent installation of modules, special configuration in the database or clearing certain caches: the deployment system must be able to support such (application-specific) adjustments.

  • Independence from the cloud provider
    A deployment system should function independently of a specific cloud provider such as AWS (Amazon Web Services). This makes it possible to deploy the system with other cloud providers or even on premises if necessary.

  • Independence from external sources
    A malfunction or failure of external systems (e.g. GitHub, a customer's server) on which the application source code is stored must not affect the stability of the application or the scalability of the cloud environment. For this reason, the deployment must provide a high degree of independence from third-party systems.

  • Stability of the process
    In general, deployment processes must be highly stable, as they are essential to the stability of the cloud environment. It must be clear to users how they trigger the process and what result they can expect.

  • High speed of deployment
    Time is money - also in deployment. The faster the deployment system operates, the faster a new application version goes online or an additional web instance is added.

  • Source code security
    Regardless of whether deployment targets the cloud or conventional systems, a deployment system must ensure that no unauthorized access can occur while source code is transferred and stored.

What does the root360 solution for cloud deployment look like?

With our scalable deployment, we provide a method that can be integrated into traditional deployment mechanisms. In this generic approach we do not differentiate between a Magento, Shopware or PHP/Symfony/Java application.

Let's take a look at the workflow of a deployment for scalable environments:

  1. Download and prepare the source code (e.g. Git via SSH)

  2. Cache the source code as an archive in central storage (e.g. AWS S3)

  3. Upload the source code to the application instances

  4. Start specific installation routines and activate the new source code

  5. Add the fully installed instances to the load balancer (e.g. AWS ELB)

This workflow is started via a script on a so-called Jump Server. A Jump Server is the access point to your root360 cloud environment and is reachable via SSH.
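
To make the sequence more concrete, the following minimal sketch shows what a driver script of this kind could look like. It is not the actual root360 tooling (which is not public); the repository URL, the S3 bucket, the Salt targeting and the state name "deploy" are purely illustrative assumptions.

    #!/usr/bin/env python3
    """Hypothetical jump-server deployment driver, shown for illustration only.

    This is not the actual root360 tooling. The repository URL, the S3 bucket,
    the Salt targeting and the state name are assumptions. Requires git, boto3
    and a Salt master on the jump server.
    """
    import subprocess
    import tarfile
    import tempfile
    from pathlib import Path

    import boto3

    GIT_URL = "git@github.com:example/app.git"        # assumed application repository
    S3_BUCKET = "example-deploy-cache"                # assumed central artifact cache
    RELEASE_KEY = "releases/app-latest.tar.gz"


    def main() -> None:
        workdir = Path(tempfile.mkdtemp(prefix="deploy-"))

        # 1. Download and prepare the source code (Git via SSH).
        src = workdir / "src"
        subprocess.run(["git", "clone", "--depth", "1", GIT_URL, str(src)], check=True)

        # 2. Cache the source code as a compressed archive in central storage (S3).
        archive = workdir / "release.tar.gz"
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(str(src), arcname="app")
        boto3.client("s3").upload_file(str(archive), S3_BUCKET, RELEASE_KEY)

        # 3.-5. Distribute the archive to the application instances, run the
        # installation routine and activate the release; here this is delegated
        # to an assumed Salt state ("deploy") that would also handle the
        # load-balancer registration.
        subprocess.run(["salt", "-G", "role:web", "state.apply", "deploy"], check=True)


    if __name__ == "__main__":
        main()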

Technically, the process runs through a modular system of several self-contained steps, each working with defined input data and generating defined output data. This allows the entire process to be adapted to specific conditions (additional process steps, application sources, installation processes). This flexibility makes it possible, for example, to replace services such as AWS S3 or the AWS load balancer with other mechanisms by adapting the corresponding step.
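
As an illustration of this modular idea, the following sketch shows one possible way to model self-contained steps with defined input and output data in Python; the class names and the context structure are assumptions, not the actual implementation.

    """Sketch of the modular step idea; class names and the context structure are assumptions."""
    from dataclasses import dataclass, field
    from typing import Protocol


    @dataclass
    class Context:
        """Defined input/output data that is passed from step to step."""
        data: dict = field(default_factory=dict)


    class Step(Protocol):
        def run(self, ctx: Context) -> Context: ...


    class FetchFromGit:
        def run(self, ctx: Context) -> Context:
            ctx.data["workdir"] = "/tmp/src"                  # placeholder for a real git clone
            return ctx


    class CacheInS3:
        def run(self, ctx: Context) -> Context:
            ctx.data["artifact"] = "s3://bucket/app.tar.gz"   # placeholder for a real upload
            return ctx


    def run_pipeline(steps: list, ctx: Context) -> Context:
        # Every step only reads from and writes to the shared context, so a step
        # such as CacheInS3 can be swapped for another storage backend without
        # touching the rest of the pipeline.
        for step in steps:
            ctx = step.run(ctx)
        return ctx


    if __name__ == "__main__":
        print(run_pipeline([FetchFromGit(), CacheInS3()], Context()).data)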

Behind the overall process is a collection of Python-based tools and the configuration management system SaltStack. In particular, the SaltStack returner system is used extensively via a dedicated Python library.
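
The root360 library itself is not public, but the general shape of a custom SaltStack returner is documented by Salt. The sketch below only illustrates that interface; the module name deploy_log and the logged fields are assumptions.

    """Sketch of a custom SaltStack returner, e.g. saved as _returners/deploy_log.py.

    This only shows the general shape of the returner interface as documented
    by SaltStack; the module name and the logged fields are assumptions.
    """
    import json
    import logging

    log = logging.getLogger(__name__)

    __virtualname__ = "deploy_log"


    def __virtual__():
        return __virtualname__


    def returner(ret):
        """Called by Salt with the job result of each minion.

        'ret' contains keys such as 'id' (the minion), 'fun', 'jid' and 'return';
        a deployment driver can collect these results to decide whether an
        instance may be activated in the load balancer.
        """
        log.info("deployment result from %s: %s",
                 ret.get("id"), json.dumps(ret.get("return"), default=str))

Such a returner could then be selected when running a job, for example with salt -G 'role:web' state.apply deploy --return deploy_log.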

By caching the application in a central, highly available and secure storage location, (auto)scalability is guaranteed at all times, independently of the external data source (e.g. GitHub). In addition to the proximity of this storage to the instances, performance also benefits from data compression.
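
On the instance side this means that a freshly created instance only has to fetch the cached archive from the central storage. The following sketch illustrates this with boto3; the bucket name, key and target directory are assumptions.

    """Sketch of the instance-side bootstrap on scale-out; bucket, key and
    target directory are assumptions."""
    import tarfile

    import boto3

    S3_BUCKET = "example-deploy-cache"
    RELEASE_KEY = "releases/app-latest.tar.gz"


    def bootstrap(target_dir: str = "/var/www/releases/incoming") -> None:
        # Only the central storage is contacted here, so an outage of the
        # external source (e.g. GitHub) does not block automatic scaling.
        boto3.client("s3").download_file(S3_BUCKET, RELEASE_KEY, "/tmp/release.tar.gz")

        # The archive is gzip-compressed, which keeps transfer times short.
        with tarfile.open("/tmp/release.tar.gz", "r:gz") as tar:
            tar.extractall(target_dir)


    if __name__ == "__main__":
        bootstrap()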

Due to the modular structure of the deployment system, various protocols can be supported and the system can be extended much like a plug-in system. Application-specific installation routines can also be implemented. Furthermore, the deployment system is decoupled from the production components of the cloud environment by modules that function largely independently; a disruption of the deployment system therefore does not necessarily threaten the operation of the application in the cloud environment or its scalability. The stability of the processes is ensured by the use of standard libraries and highly available configuration management systems (SaltStack). This also provides independence from cloud-provider-specific systems and even makes it possible to apply the procedure in on-premises data centers or with (cloud/hosting) providers other than Amazon AWS.

In addition to fully encrypted data transfer (e.g. Git over SSH, HTTPS and ZeroMQ), Amazon AWS technologies also help secure the software's source code. Extensive firewalling, authentication and authorization systems form the basis of a comprehensive security concept for all components and data.

Examples

Below are two scenarios in which we use Amazon Web Services (AWS) as an example. The first shows a simple workflow with Git, while the second shows a more complex scenario with a continuous integration system such as Jenkins.

Workflow for a simple setup with Git

  1. Developer checks code into GitHub

  2. Developer starts deployment process on Jump Server

  3. Deployment system downloads the source code from GitHub

  4. Deployment system prepares the source code for deployment and uploads it to central storage (e.g. AWS S3)

  5. Deployment system transfers the prepared source code to the application instances without activating it yet

  6. A customer-specific installation routine is started; if it succeeds, the new source code is activated

  7. If the application instance has just been created and this is its initial deployment, a successful deployment triggers its registration with the load balancer (e.g. AWS ELB)
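
For step 7, registration with a classic AWS ELB could look roughly like the following sketch, which reads the instance ID from the EC2 metadata service and registers it via boto3; the load balancer name is an assumption and the actual root360 step may differ.

    """Sketch of step 7: registering a freshly deployed instance with a classic
    AWS ELB; the load balancer name is an assumption."""
    import urllib.request

    import boto3

    ELB_NAME = "example-web-elb"


    def activate_in_elb() -> None:
        # Ask the EC2 metadata service for this instance's ID (IMDSv1 for
        # simplicity; newer instances may require an IMDSv2 token).
        with urllib.request.urlopen(
            "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
        ) as resp:
            instance_id = resp.read().decode()

        # The instance is only added to the ELB after a successful installation,
        # so the load balancer never routes traffic to a half-deployed instance.
        boto3.client("elb").register_instances_with_load_balancer(
            LoadBalancerName=ELB_NAME,
            Instances=[{"InstanceId": instance_id}],
        )


    if __name__ == "__main__":
        activate_in_elb()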

Workflow for a more complex setup with Jenkins

  1. Developer checks code into GitHub

  2. A Git hook launches a Jenkins job to test and build the application

  3. Jenkins successfully assembles the application and makes it available for download

  4. Jenkins starts deployment process on Jump Server

  5. Deployment system downloads the build artifact from Jenkins

  6. Deployment system prepares the source code for deployment and uploads it to central storage (e.g. AWS S3)

  7. Deployment system transfers the prepared source code to the application instances without activating it yet

  8. A customer-specific installation routine is started; if it succeeds, the new source code is activated

  9. If the application instance has just been created and this is its initial deployment, a successful deployment triggers its registration with the load balancer (e.g. AWS ELB)
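
Step 4 of this workflow can be realized, for example, by having Jenkins open an SSH connection to the Jump Server after a successful build. The following sketch shows one possible form of such a trigger; the host name, user, script path and artifact URL are assumptions.

    """Sketch of a Jenkins post-build trigger; host name, user, script path and
    artifact URL are assumptions."""
    import subprocess


    def trigger_deployment(build_url: str) -> None:
        # Jenkins exposes the finished build for download (step 3); the URL is
        # handed to the deployment script so it knows what to fetch (step 5).
        subprocess.run(
            [
                "ssh",
                "deploy@jump.example-customer.cloud",   # assumed jump server
                "/usr/local/bin/deploy-app",            # assumed deployment script
                "--source", build_url,
            ],
            check=True,
        )


    if __name__ == "__main__":
        trigger_deployment(
            "https://jenkins.example.com/job/app/lastSuccessfulBuild/artifact/app.tar.gz"
        )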
