This guide provides supporting information to use the root360 cloud platform.



Prepare secure access via SSH

To access an environment via OpenSSH or PuTTY, you need to provide us with your OpenSSH public key. You can send this key to service@root360.de.
If you do not already have an OpenSSH key, you can create one according to our instructions at Create an OpenSSH SSH Key.
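A key pair can be created with ssh-keygen. The sketch below generates one non-interactively for demonstration; the key path and e-mail comment are placeholders, and in everyday use you would keep the default path (~/.ssh/id_rsa) and set a real passphrase instead of -N "":

```shell
# Create a 4096-bit RSA key pair in a temporary directory (placeholder path).
keydir=$(mktemp -d)
ssh-keygen -t rsa -b 4096 -C "you@example.com" -f "$keydir/id_rsa_root360" -N "" -q

# Send ONLY the public key (.pub) to service@root360.de -- never the private key:
cat "$keydir/id_rsa_root360.pub"
```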

Software deployment is usually realized through a publicly accessible git repository (for example on GitHub). For this purpose, you need to store the deploy key (provided by root360) with read-only privileges in the repository. We will discuss further details with you during your project migration.

Environments

root360 provides environments as required. The following abbreviations are used:

  • PROD = production environment, live environment

  • TEST = test environment, also called staging or acceptance system

Network of an environment

An environment always consists of three zones: the Public Network (DMZ), the Application Network (private), and the Gateway Network (DMZ).

1. Public Network (DMZ)

In this zone, load balancers are provided for public access via HTTP and/or HTTPS. The ports used are preconfigured by root360. Optionally, SSL offloading (terminating HTTPS and forwarding the traffic as HTTP) can be activated on the load balancers.

2. Application Network (private)

All services are provided within this zone. In addition, application instances and database instances are separated into independent subnets. Communication with public services on the Internet is not restricted and is routed via the (NAT) gateway.

Note

For all instances, the outgoing public IP address is therefore always that of the (NAT) gateway or the AWS gateway.

3. Gateway Network (DMZ)

The gateway in this zone routes SSH access into the application zone and regulates the Internet traffic of the application instances.


Access an environment

Access to an environment is provided via SSH through the bastion host, either from a console (Linux/macOS) or via PuTTY. The necessary configuration is documented in the following tutorial: Access an environment via OpenSSH or Putty.

The most important commands are:

  • r3 instance list => lists all running EC2 instances by role, with name, IP, and further metadata such as instance type and start time

  • r3 deploy => starts the deployment of the source distribution (currently, Docker containers are deployed by a separate command, r3 container deploy)

For more information about the CLI suite, see (Archived) root360 Cloud Management CLI Suite (r3).

If agent forwarding is configured correctly, you can switch directly from the gateway to an instance using ssh XX.XX.XX.XX.
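Agent forwarding can be enabled per host in ~/.ssh/config. A minimal sketch, in which the host alias, address, and user name are placeholders (the actual gateway address is provided by root360):

```shell
# ~/.ssh/config -- host alias, address, and user are placeholders
Host r3-gateway
    HostName gateway.example.com   # bastion host address provided by root360
    User myuser
    ForwardAgent yes               # forward the local SSH agent to the gateway
```

With this entry, ssh r3-gateway logs in to the bastion host, and from there ssh XX.XX.XX.XX reaches an instance without any private key ever being copied to the gateway.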

An SSH port forward (tunnel) can be set up to access services in the Application Network, such as a database, ElastiCache, SOLR, or similar, by following this FAQ: How to access a database via SSH tunnel? This allows every user whose OpenSSH key is present on the gateway to connect from their workstation to these services.
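Following that FAQ, a tunnel invocation could look like the sketch below; the host names, ports, and database endpoint are placeholders for illustration, not actual root360 values:

```shell
# Forward local port 3307 through the gateway to a database
# inside the Application Network (all names and ports are placeholders):
ssh -N -L 3307:database.internal:3306 user@gateway.example.com

# In a second terminal, connect through the tunnel
# as if the service were running locally:
mysql --host 127.0.0.1 --port 3307 --user appuser -p
```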

Deployment

Deployment is controlled by the CLI-Suite (root360) tool on the bastion host.
Note that all environments that use auto-scaling groups use this deployment process. Environments with single instances can also deploy their application without the root360 CLI suite, if so desired.

Deployment always runs in the following steps:

  1. Updating the software sources via Git, GitHub, or similar (configured by root360)

  2. Saving the new release in the root360 environment, but not yet on the application instances

  3. Optional: Using deployment hooks to run pre-deploy-hooks

  4. Distributing the new release to all running application instances of the selected roles, e.g. "web" or "backend". The production path (e.g. /var/www/) on the application instances does not yet point to this release.

  5. Installing the software using the bash script install.sh (how to use install.sh is described at Using install.sh during deployments). Possible applications are e.g.
    - adding the DB configuration to certain configuration files,
    - deleting local cache directories,
    - running Composer,
    - or installing CRON jobs.

  6. Pointing the production path (e.g. /var/www/) on each instance to the new, installed release via a symlink

  7. Optional: Using deployment hooks to run post-deploy-hooks
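Step 6 above is essentially an atomic symlink switch. The sketch below demonstrates it with temporary stand-in directories; on a real instance the paths would be the release directory and the production path (e.g. /var/www/):

```shell
# Stand-in directories for two releases (placeholders for the real paths).
demo=$(mktemp -d)
mkdir "$demo/release-1" "$demo/release-2"

ln -sfn "$demo/release-1" "$demo/production"   # initial release is live
ln -sfn "$demo/release-2" "$demo/production"   # -n replaces the symlink itself
readlink "$demo/production"                    # now points at release-2
```

Because only the symlink is replaced, requests never see a half-installed release: they are served either entirely from the old directory or entirely from the new one.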

CRON jobs are installed on the instances in step 5 through install.sh. An example implementation in Bash is shown at Using install.sh during deployments. The CRON jobs are started in parallel on all instances. If this is not intended or leads to problems, a separate role, e.g. "cron", can be provided. This role is configured to allow only one active instance, which avoids parallel execution.
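A minimal install.sh along the lines of step 5 and the CRON installation above might look like the sketch below. All file names and paths are illustrative assumptions, not the actual root360 layout; in a real deployment, the tool invokes the script inside the new release directory:

```shell
# Sketch of an install.sh (deployment step 5); paths are illustrative.
install_release() {
    local release_dir="$1"

    # Add the DB configuration (kept outside the repository) to the release
    if [ -f /etc/app-secrets/database.php ]; then
        cp /etc/app-secrets/database.php "$release_dir/config/database.php"
    fi

    # Delete local cache directories shipped with the source
    rm -rf "$release_dir/var/cache"/*

    # Run Composer if the project uses it
    if [ -f "$release_dir/composer.json" ] && command -v composer >/dev/null; then
        composer install --no-dev --working-dir="$release_dir"
    fi

    # Install CRON jobs for the application user
    if [ -f "$release_dir/deploy/crontab" ] && command -v crontab >/dev/null; then
        crontab -u www-data "$release_dir/deploy/crontab"
    fi
}

# Example invocation on a stand-in release directory:
release=$(mktemp -d)
mkdir -p "$release/var/cache"
touch "$release/var/cache/stale-file"
install_release "$release"
```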

Logging

In the standard configuration, most logs, such as those of Apache2 or NGINX, are stored centrally on the (NAT) gateway by all instances. On that instance, the logs are available in

Code Block
/var/log/remote/..

The logs are transmitted in real time by the instances. By default, all logs are kept for 30 days. Standard Logging (root360) contains further details on the handling of the logs.

Other interesting links:

  1. Backup coverage and data retention

  2. Configure own domain via DNS

  3. Script Snippets

  4. Why use www-data user?
