Script Snippets
Preconditions
You need access to the jump host with your personal OpenSSH access key, see Access an environment via OpenSSH or PuTTY.
The root360 platform offers handy script snippets for specific use cases within the root360 environment. The following scripts are available:
asg-protection
Description
For scripts with a long runtime, it may happen that the executing server is deleted during execution by our automatic scaling. This script activates or deactivates a protection mechanism for the executing server that prevents this deletion. The script returns an exit code; if it is not equal to 0 (number zero), an error occurred and the protection was not activated/deactivated. Possible reasons are:
The server is currently down
The hardware of the server is currently being modified (enlarged, reduced, restarted, etc.)
The script was called too often in a short period of time (> 50 times per second) and gets throttled
Usage
Enable protection:
if ! asg-protection on; then
    # Commands if protection could not be activated
    # Ex: exit
fi
Disable protection:
if ! asg-protection off; then
    # Commands if protection could not be disabled
    # Ex: exit
fi
Get current state:
asg-protection get
Return code is 0 when ASG protection is enabled
Return code is 1 when ASG protection is disabled
Return code is 5 when the server is a single server
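The return codes of `asg-protection get` can be dispatched in a script, for example with a `case` statement. The following sketch uses a stand-in function instead of the real command, so the reported state is an assumption for illustration only:

```shell
# Sketch of dispatching on the exit codes documented above.
# 'asg_protection_get' is a stand-in for `asg-protection get`;
# here it simply pretends that protection is disabled.
asg_protection_get() { return 1; }

# Capture the exit code without aborting under 'set -e'.
asg_protection_get && rc=0 || rc=$?

case "$rc" in
  0) state="enabled" ;;
  1) state="disabled" ;;
  5) state="single server" ;;
  *) state="unknown (error)" ;;
esac
echo "ASG protection state: $state"
```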
check-log-registration
Description
There are cases where the application cannot be customized to register its log files itself. In these cases, this script can be used. It recursively finds all text files ending with ".log" or ".json" in a given path and registers them if they have not been registered yet. It is optimized for use as a cron job. The script has to be run as www-data, which happens automatically when it is called from install.sh. If you want to test the script manually, you have to add the prefix sudo -u www-data
The script register-log, which is called by check-log-registration, checks whether a log file exists to avoid unnecessary configs. This will lead to unregistered logs during the installation/deployment/autoscaling process, as the log file paths may not exist yet. We therefore recommend touching the log file so that the register-log action succeeds.
Example - just one logfile:
touch "${PWD}"/<logfile>
Example - multiple logfiles:
touch "${PWD}"/{<logfile>,<logfile2>,<logfile3>}
NOTE: ${PWD} is the current checkout directory during deployments, please see Using install.sh during deployments
Usage
Register log files in a path:
Register logs for specific roles:
get-application-env
Description
If the server is used to host an application, the script returns all environment parameters (e.g. database access credentials, memcache URL, etc.). It supports output in text and JSON format.
You need to run this script directly on the application instance. The script will not return environment parameters on a natgw, because no application runs on a natgw.
This command is not usable on our Docker setups.
Usage
Return environment parameters for all server roles:
Return environment parameters only for role "role2":
Return environment parameters in JSON format:
Return environment parameters as Bash commands:
This can be used to set up test environment variables for install.sh testing, e.g. run like this as www-data:
For additional options, see Command Help:
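Since the exact invocation is shown by the command help above, here is only a hedged sketch of how the text output, assumed to be key=value lines, could be turned into shell variables. The sample data is made up; on a real instance you would substitute the script's actual output:

```shell
# Made-up sample standing in for `get-application-env` text output.
sample='DB_HOST=db.internal.example
DB_USER=app'

# Read key=value lines into exported shell variables.
# A here-document (not a pipe) keeps the variables in the current shell.
while IFS='=' read -r key value; do
  export "$key=$value"
done <<EOF
$sample
EOF

echo "$DB_HOST"
```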
get-instances-by-role
Description
This script returns a list of all servers in a project that belong to a role. It supports output in text and json format.
Usage
Return all servers with the same role as the current server:
Return all servers of the specified role:
Return all servers of the specified role in the text format:
Return all servers of the specified role and project:
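A common pattern is to loop over the returned server list, e.g. to run a command on each host. The sketch below uses made-up sample data, one hostname per line, in place of a real get-instances-by-role call:

```shell
# Made-up sample standing in for get-instances-by-role text output.
hosts='app01
app02
app03'

# Iterate over the list, one host per line, skipping blanks.
while read -r host; do
  [ -n "$host" ] || continue
  echo "would run command on $host"
done <<EOF
$hosts
EOF
```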
hook-memcache-flush.sh
Description
This script can be used as a deploy hook to delete all data from a memcache cluster after a project deployment. It automatically detects all existing memcache clusters. If only one cluster exists, it is flushed. If several clusters exist, you are prompted to select the cluster to be flushed.
Usage
Flush with automatic cluster detection:
Flush a specific cluster:
register-log
Description
Upon request, we provide a logging system that collects log files on a central log server and stores them for analysis for a defined period of time. This script can be used to register log files dynamically. This is necessary, for example, for log files with a dynamic path (e.g. containing a date or partner ID).
The logfile must end with ".log" or ".json" and contain text.
Currently up to 6600 log files can be registered.
Usage
register logs:
After the registration of a log file the system reads the full file and sends it to the logserver.
The option '-k' prevents our cleanup routines from removing the configuration for the logfile when the log file does not exist.
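As a hedged example of such a dynamic path, the snippet below builds a date-based logfile name that could then be passed to register-log. The directory and the commented '-k' invocation are assumptions based on the description above, not documented defaults:

```shell
# Build a date-based log path; the directory is a made-up example.
logfile="/var/log/app/app-$(date +%Y-%m-%d).log"
echo "$logfile"

# It could then be registered like this ('-k' keeps the config even
# if the file disappears later):
# register-log -k "$logfile"
```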
The script register-log checks whether a log file exists to avoid unnecessary configs. This will lead to unregistered logs during the installation/deployment/autoscaling process, as the log file paths may not exist yet. We therefore recommend touching the log file so that the register-log action succeeds.
Example - just one logfile:
touch "${PWD}"/<logfile>
Example - multiple logfiles:
touch "${PWD}"/{<logfile>,<logfile2>,<logfile3>}
NOTE: ${PWD} is the current checkout directory during deployments, please see Using install.sh during deployments
Usage at install.sh
run-once-per-role.sh
Description
This script is intended to run on all servers with the same role (e.g. an Auto-Scaling Group) and ensures that the given command is only run once.
This is done by collecting all servers with the given role, sorting them by their IP address, and only running the command when the local IP address matches the first address in the generated list.
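The election described above can be sketched as follows. The IP addresses are made-up sample data, and per-octet numeric `sort` stands in for however the script actually orders the addresses:

```shell
# Made-up sample: local IP plus all IPs returned for the role.
local_ip="10.0.0.2"
role_ips="10.0.0.5
10.0.0.2
10.0.0.9"

# Sort numerically per octet and take the first address.
first_ip=$(printf '%s\n' "$role_ips" | sort -t . -k1,1n -k2,2n -k3,3n -k4,4n | head -n 1)

# Only the server whose local IP sorts first runs the command.
if [ "$first_ip" = "$local_ip" ]; then
  echo "local server is elected; running command"
else
  echo "another server runs the command; skipping"
fi
```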
Usage
The first argument of the script has to be the role name that is deployed.
After the role name you can add any command you want to run.
Be aware that special characters in Bash (&, ;, |, $, and so on) need to be escaped if they are part of the command to run.
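For example, single quotes keep characters like &, | and $ literal, so the whole command arrives at the script as one argument. The role name "app" and the command below are made-up examples:

```shell
# Single-quote the command so &, | and $ are not interpreted by the
# calling shell before the script sees them:
cmd='echo "a & b" | tr a-z A-Z'

# It could then be passed on, e.g.:
# run-once-per-role.sh app bash -c "$cmd"

# Running it locally yields the upper-cased string:
bash -c "$cmd"
```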
Use-Case install.sh
Using run-once-per-role.sh you can run a command via install.sh that should only run once per deployment, e.g. flushing caches.
This example code in install.sh runs the clear-cache function of 'artisan':
install.sh:
Use-Case Cronjob
With run-once-per-role.sh you can install a cron job on all systems of a role, and the job will still run on only one system.
The following line can be copied into the crontab and will run the cron-start function of 'artisan' only once:
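A hypothetical crontab line following this pattern could look like the one below; the schedule, role name, and artisan invocation are assumptions for illustration, not documented defaults:

```shell
# m h dom mon dow   command
*/15 * * * * run-once-per-role.sh app php artisan cron-start
```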
varnish-flush.sh
Description
This script deletes data from a Varnish cache.
Usage
Run the script on the jump server and pass the URL to be deleted.
Delete a dedicated URL, e.g. an image or HTML page
Delete all elements below a URL, e.g. foobar/*
Delete all elements with unknown URL depth, e.g. foobar/*/bla/*
Delete a dedicated object for all domains
Delete all elements below a URL, e.g. foobar/*, from a dedicated role, e.g. varnish01
Related tutorials
Related components
root360 Knowledge Base - This portal is hosted by Atlassian (atlassian.com | Privacy Policy)