This guide provides supporting information for using the root360 cloud platform.

Prepare secure access via SSH

To access an environment via SSH (see https://root360.atlassian.net/wiki/spaces/KB/pages/2014353011), you need to provide us with your OpenSSH public key. You can send this key to service@root360.de.
If you don't already have an OpenSSH key, you can create one by following our instructions at https://root360.atlassian.net/wiki/spaces/KB/pages/2014353044.
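As a quick sketch, a key pair can be generated with the standard OpenSSH tool ssh-keygen; the file name and comment below are placeholders, and in practice you should use a real passphrase instead of an empty one:

```shell
# Create the SSH directory (if missing) and a fresh ed25519 key pair.
# "id_root360" and the comment are placeholders; replace -N "" with a
# real passphrase in practice.
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
ssh-keygen -t ed25519 -C "you@example.com" -f "$HOME/.ssh/id_root360" -N ""

# Only the public key is sent to service@root360.de:
cat "$HOME/.ssh/id_root360.pub"
```

The private key file (without the .pub suffix) stays on your workstation and is never shared.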

Software deployment is usually realized through a publicly accessible Git repository (for example, on GitHub). For this purpose, you need to store the deploy key (provided by root360) with read-only privileges in the repository. We will discuss further details with you during your project migration.


Root360 provides the environments as required. The following abbreviations are used:

  • PROD = production environment, live environment

  • TEST = test environment, also called staging or acceptance system

Network of an environment

An environment always consists of three zones: the Public Network (DMZ), the Application Network (private), and the Gateway Network (DMZ).

1. Public Network (DMZ)

In this zone, load balancers are provided for public access via HTTP and/or HTTPS. The ports used are preconfigured by root360. Optionally, SSL offloading (termination of HTTPS and forwarding as HTTP) can be activated on the load balancers.

2. Application Network (private)

All services are provided within this zone. In addition, application instances and database instances are separated into independent subnets. Communication with public services on the Internet is not restricted and is routed via the (NAT) gateway.

For all instances, the public IP address is therefore always the public IP address of the (NAT) gateway or the AWS gateway.

3. Gateway Network (DMZ)

The gateway in this zone routes SSH access into the application zone and regulates Internet traffic from the application instances.

Access an environment

Access to an environment is via SSH over the bastion host, either from a console (Linux / OSX) or via PuTTY. The necessary configuration is documented in the following tutorial: https://root360.atlassian.net/wiki/spaces/KB/pages/2014353011 .

The most important commands are:

  • r3 instance list => lists all running EC2 instances by role, with names, IPs, and other metadata such as instance type and start time

  • r3 deploy => starts the deployment of the source distribution (currently, Docker containers are deployed by a separate command, r3 container deploy!)

For more information about the CLI suite, visit https://root360.atlassian.net/wiki/spaces/KB/pages/227672104

If agent forwarding is configured correctly, you can switch directly from the gateway to an instance using ssh XX.XX.XX.XX.
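Agent forwarding is usually enabled in the SSH client configuration; a sketch of a matching ~/.ssh/config entry is shown below (host name, user name, and key path are placeholders, not actual root360 values):

```
# ~/.ssh/config -- sketch with placeholder values
Host r3-gateway
    HostName gateway.example.com     # bastion host of your environment (assumption)
    User myuser                      # user name agreed with root360 (assumption)
    IdentityFile ~/.ssh/id_root360
    ForwardAgent yes                 # required for the direct hop to instances
```

With this entry in place, ssh r3-gateway reaches the gateway, and from there ssh XX.XX.XX.XX reaches an instance without copying the private key onto the gateway.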

An SSH port forward (tunnel) can be set up to access services in the Application Network, such as the database, ElastiCache, SOLR, or similar, by following this FAQ: https://root360.atlassian.net/wiki/spaces/KB/pages/2014351825 . This allows every user whose OpenSSH key is present on the gateway to connect from their workstation to these services.
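As an illustration, such a tunnel could look like the following sketch, assuming a MySQL database listening on port 3306; all host names, the user name, and the port are placeholders:

```
# Forward local port 3306 through the gateway to a database host inside
# the Application Network (all names below are placeholders):
ssh -N -L 3306:db.internal:3306 myuser@gateway.example.com

# While the tunnel is open, the service is reachable locally, e.g.:
#   mysql -h 127.0.0.1 -P 3306 -u appuser -p
```

The -N flag opens the tunnel without starting a remote shell, which is convenient for long-running forwards.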


Deployment is controlled by the https://root360.atlassian.net/wiki/spaces/KB/pages/2014350252 tool on the bastion host.
Note that all environments that use auto-scaling groups use this deployment process. Environments with single instances can also deploy their application without the root360 CLI suite, if desired.

Deployment always runs in the following steps:

  1. Updating the software sources via Git, GitHub, or similar (configured by root360)

  2. Saving the new release in the root360 environment, but not yet on the application instances

  3. Optional: https://root360.atlassian.net/wiki/spaces/KB/pages/2014352844 to run pre-deploy-hooks

  4. Distribution of the new release to all running application instances of the selected roles, e.g. "web" or "backend". The production path (e.g. /var/www/) on the application instances does not yet point to that release.

  5. Installation of the software via the Bash script install.sh (see https://root360.atlassian.net/wiki/spaces/KB/pages/2014352817 for how to use it). Possible applications are e.g.
    - adding the DB configuration to certain configuration files,
    - deleting local cache directories,
    - running Composer,
    - or installing CRON jobs.

  6. Pointing the production path (e.g. /var/www/) on each instance to the new, installed release via a symlink

  7. Optional: https://root360.atlassian.net/wiki/spaces/KB/pages/2014352844 to run post-deploy-hooks
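Step 5 above can be sketched as a minimal install.sh. Everything below is a hypothetical illustration: the paths, file names, and the __DB_HOST__ placeholder are assumptions, not the actual root360 contract, and the sketch builds its own demo release directory so it runs standalone.

```shell
#!/bin/bash
# Hypothetical install.sh sketch showing the typical steps named above.
set -euo pipefail

# --- Demo fixture so this sketch runs standalone; a real install.sh
# --- would operate on the release directory it is started in.
RELEASE_DIR="$(mktemp -d)"
mkdir -p "$RELEASE_DIR/config" "$RELEASE_DIR/var/cache"
echo "db_host = __DB_HOST__" > "$RELEASE_DIR/config/database.conf"
touch "$RELEASE_DIR/var/cache/stale.entry"

DB_HOST="db.internal.example"   # in practice provided by the environment

# 1. Add the DB configuration to certain configuration files
sed -i "s/__DB_HOST__/${DB_HOST}/" "$RELEASE_DIR/config/database.conf"

# 2. Delete local cache directories
rm -rf "${RELEASE_DIR}/var/cache/"*

# 3. Run Composer, if the release ships a composer.json
if [ -f "$RELEASE_DIR/composer.json" ]; then
    composer install --no-dev --working-dir="$RELEASE_DIR"
fi

# 4. Install CRON jobs, if the release ships a crontab file
if [ -f "$RELEASE_DIR/config/crontab" ]; then
    crontab "$RELEASE_DIR/config/crontab"
fi

echo "install.sh finished for $RELEASE_DIR"
```

Because install.sh runs on every application instance during step 5, such a script should be idempotent: running it twice against the same release must not break the installation.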

CRON jobs are installed on the instances in step 5 through install.sh. An example implementation in Bash is shown at https://root360.atlassian.net/wiki/spaces/KB/pages/2014352817 . The CRON jobs are started in parallel on all instances. If this is not intended or leads to problems, a separate role, e.g. "cron", can be provided. The configuration of this role allows only one active instance, which avoids parallel execution.
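The symlink switch in step 6 above can be illustrated generically; the directory layout below is a placeholder created in a temporary directory, since the real paths are managed by the root360 tooling:

```shell
# Demonstrate a release switch via symlink, as in step 6 above.
# All paths are placeholders created in a temp dir for illustration.
BASE="$(mktemp -d)"
mkdir -p "$BASE/releases/2024-05-01" "$BASE/releases/2024-05-02"

# Initial state: the "production path" points at the old release.
ln -s "$BASE/releases/2024-05-01" "$BASE/current"

# Switch: -n treats the existing symlink as a plain file so it is
# replaced instead of descended into; -f overwrites it.
ln -sfn "$BASE/releases/2024-05-02" "$BASE/current"
readlink "$BASE/current"
```

Because only the symlink target changes, the web server sees either the old or the new release, never a half-installed mix.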


In the standard configuration, most logs, such as those of Apache2 or NGINX, are stored centrally on the (NAT) gateway for all instances. The logs are available under

/var/log/remote/..

The logs are transmitted by the instances in real time. By default, all logs are kept for 30 days. The https://root360.atlassian.net/wiki/spaces/KB/pages/2014350570 contains further details on the handling of the logs.

Other interesting links:

  1. https://root360.atlassian.net/wiki/spaces/KB/pages/2014352589

  2. https://root360.atlassian.net/wiki/spaces/KB/pages/2014352884

  3. https://root360.atlassian.net/wiki/spaces/KB/pages/2014353481

  4. https://root360.atlassian.net/wiki/spaces/KB/pages/2014351632
