The root360 platform offers handy script snippets for specific use cases within the root360 environment.
You need access to the jump host with your personal OpenSSH access key. See: Access an environment via OpenSSH or PuTTY.
The following scripts are available:
For scripts with a long runtime, it may happen that the executing server is deleted during execution by our automatic scaling. This script activates or deactivates a protection mechanism that prevents this deletion. Check the script's return code: if it is not equal to 0 (the number zero), an error occurred and the protection was not activated or deactivated. Possible reasons are:
The server is currently down
The hardware of the server is currently being modified (enlarged, reduced, restarted, etc.)
The script was called too often in a short period of time (> 50 times per second) and gets throttled
if ! asg-protection on; then
    # commands to run if protection could not be activated
    # e.g.: exit
fi
if ! asg-protection off; then
    # commands to run if protection could not be disabled
    # e.g.: exit
fi
asg-protection get
The return code is 0 when ASG protection is enabled.
The return code is 1 when ASG protection is disabled.
The return code is 5 when the server is a single server.
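As a sketch of how these calls can be combined, the following Bash function enables protection, runs a long-running task, and disables protection again. The asg-protection helper is the one documented above; the wrapper name run_protected is our own assumption, not part of the platform:

```shell
#!/bin/bash
# Sketch only: wrap a long-running task in ASG scale-in protection.
# "asg-protection" is the helper documented above; run_protected is a
# hypothetical wrapper name.
run_protected() {
    if ! asg-protection on; then
        echo "could not enable ASG protection" >&2
        return 1
    fi
    "$@"                               # run the long-running task
    local rc=$?
    if ! asg-protection off; then
        echo "could not disable ASG protection" >&2
    fi
    return $rc
}
```

Called e.g. as run_protected /usr/local/bin/long-import.sh, the protection stays active for the whole runtime of the task.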
There are cases where the application cannot be customized so that the log file is registered by the application itself. In these cases, this script can be used. It recursively finds all text files ending with ".log" or ".json" in a given path and registers them if they have not been registered yet. It is optimized for use as a cron job. The script has to be run as www-data, which is done automatically when using the command in install.sh. If you want to test the script manually, you have to add the prefix sudo -u www-data.
Register log files in a path:
/usr/local/bin/check-log-registration /full/path/to/log-folder/ [role1 [role2 [roleN]]]
Register logs for specific roles:
/usr/local/bin/check-log-registration /full/path/to/log-folder/ web backend
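Since the script is optimized for use as a cron job, a crontab entry could look like the following sketch (path and role are placeholders, and the five-minute interval is an assumption):

```
*/5 * * * * sudo -u www-data /usr/local/bin/check-log-registration /full/path/to/log-folder/ web
```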
If the server is used to host an application, the script returns all environment parameters (e.g. database access credentials, memcache URL, etc.). It supports output in text and JSON format.
You need to run this script directly on the application instance. The script will not return environment parameters on a NAT gateway because no application runs on a NAT gateway. This command is not usable on our Docker setups.
Return environment parameters for all server roles:
$ sudo get-application-env
Environment for role "rolle1"
variable              value
GIT_SSH               /usr/local/bin/r3-deploy/custom-git-ssh.sh
DATABASE_NAME         db
REDIS_SESSION_PORT    6379
REDIS_SESSION_HOST    redis.server.0001.euc1.cache.amazonaws.com
DATABASE_PASSWORD     password
DATABASE_HOST         db.server.eu-central-1.rds.amazonaws.com
DATABASE_USER         dbuser

Environment for role "rolle2"
variable              value
GIT_SSH               /usr/local/bin/r3-deploy/custom-git-ssh.sh
DATABASE_NAME         db2
REDIS_SESSION_PORT    6379
REDIS_SESSION_HOST    redis2.server.0001.euc1.cache.amazonaws.com
DATABASE_PASSWORD     password
DATABASE_HOST         db2.server.eu-central-1.rds.amazonaws.com
DATABASE_USER         dbuser
Return environment parameters only for role "role2":
$ sudo get-application-env rolle2
Environment for role "rolle2"
variable              value
GIT_SSH               /usr/local/bin/r3-deploy/custom-git-ssh.sh
DATABASE_NAME         db2
REDIS_SESSION_PORT    6379
REDIS_SESSION_HOST    redis2.server.0001.euc1.cache.amazonaws.com
DATABASE_PASSWORD     password
DATABASE_HOST         db2.server.eu-central-1.rds.amazonaws.com
DATABASE_USER         dbuser
Return environment parameters in JSON format:
$ sudo get-application-env --output json
{
  "rolle1": [
    ["GIT_SSH", "/usr/local/bin/r3-deploy/custom-git-ssh.sh"],
    ["DATABASE_NAME", "db"],
    ["REDIS_SESSION_PORT", 6379],
    ["REDIS_SESSION_HOST", "redis.server.0001.euc1.cache.amazonaws.com"],
    ["DATABASE_PASSWORD", "password"],
    ["DATABASE_HOST", "db.server.eu-central-1.rds.amazonaws.com"],
    ["DATABASE_USER", "dbuser"]
  ],
  "rolle2": [
    ["GIT_SSH", "/usr/local/bin/r3-deploy/custom-git-ssh.sh"],
    ["DATABASE_NAME", "db2"],
    ["REDIS_SESSION_PORT", 6379],
    ["REDIS_SESSION_HOST", "redis2.server.0001.euc1.cache.amazonaws.com"],
    ["DATABASE_PASSWORD", "password"],
    ["DATABASE_HOST", "db2.server.eu-central-1.rds.amazonaws.com"],
    ["DATABASE_USER", "dbuser"]
  ]
}
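The JSON output can be post-processed with standard tools. As a sketch (assuming jq is available on the instance; the function name env_var_from_json is our own, not part of the platform), this pulls a single variable for one role out of the structure shown above:

```shell
#!/bin/bash
# Sketch: extract one variable for one role from the JSON output.
# Usage: sudo get-application-env --output json | env_var_from_json ROLE VAR
# Assumes jq is installed; env_var_from_json is a hypothetical helper name.
env_var_from_json() {
    jq -r --arg r "$1" --arg v "$2" \
        '.[$r][] | select(.[0] == $v) | .[1]'
}
```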
Return environment parameters as Bash commands:
$ sudo get-application-env rolle2 --output bash
echo "Environment for role rolle2"
export DATABASE_NAME='db2'
export DATABASE_HOST='db2.server.eu-central-1.rds.amazonaws.com'
export DATABASE_PASSWORD='xxxxxxxxxxxxxxxx'
export DATABASE_USER='dbuser'
export GIT_SSH='/usr/local/bin/r3-deploy/custom-git-ssh.sh'
export REDIS_SESSION_PORT='6379'
export REDIS_SESSION_HOST='redis2.server.0001.euc1.cache.amazonaws.com'
This output can be used to set up environment variables for testing install.sh, e.g. run as www-data like this:
$ eval "$(sudo get-application-env rolle2 --output bash)"
Environment for role rolle2
$ cd /var/www/rolle2/
$ sudo -Eu www-data bash install.sh
For additional options, see Command Help:
$ get-application-env -h
This script returns a list of all servers in a project that belong to a role. It supports output in text and JSON format.
Return all servers with the same role as the executing server:
get-instances-by-role
# output
[{"ip": "10.xxx.xxx.xxx", "name": "host1", "id": "i-yyyyyy"}, {"ip": "10.xxx.xxx.xxx", "name": "host2", "id": "i-yyyyyy"}, {"ip": "10.xxx.xxx.xxx", "name": "host3", "id": "i-yyyyyy"}]
Return all servers of the specified role:
get-instances-by-role role
# e.g. get-instances-by-role web
Return all servers of the specified role in text format:
get-instances-by-role --output text role
# e.g. get-instances-by-role --output text web
# output
name      ip              id
hostname  10.xxx.xxx.xxx  i-xxxxxxxxxx
Return all servers of the specified role and project:
get-instances-by-role --project project role
# e.g. get-instances-by-role --project proj2 web
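The JSON output lends itself to scripting. As a sketch (assuming jq is available on the jump host; role_ips is a hypothetical helper name, not part of the platform), this collects the IP addresses of all instances of a role:

```shell
#!/bin/bash
# Sketch: list the IP addresses of all instances of a role.
# Assumes jq is installed; role_ips is a hypothetical helper name.
role_ips() {
    get-instances-by-role "$1" | jq -r '.[].ip'
}
```

This can then be used e.g. as `for ip in $(role_ips web); do ssh "$ip" uptime; done` to run a command on every instance of the role.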
This script can be used as a deploy hook to delete all data from a memcache cluster after a project deployment. It automatically detects all existing memcache clusters. If only one cluster exists, it is flushed. If more than one cluster exists, the cluster to be flushed is chosen via an interactive selection.
Flush based on automatic detection:
hook-memcache-flush.sh -e environment -p project
# e.g. hook-memcache-flush.sh -e test -p portal
Flush a specified cluster:
hook-memcache-flush.sh -u cluster-url
# e.g. hook-memcache-flush.sh -u test-r3-45git3.00y7nk.cfg.euc1.cache.amazonaws.com:11211
Upon request, we provide a logging system that collects log files on a central log server and stores them for analysis for a defined period of time. This script can be used to register log files dynamically. This is necessary, for example, for log files with a dynamic path (e.g. containing a date or partner ID).
The logfile must end with ".log" or ".json" and contain text.
Currently up to 6600 log files can be registered.
Register logs:
register-log -h

/usr/local/bin/register-log [-h] [-k] [path_to_log] [role]
  -h     : show this help
  -k     : prevents our cleanup routines from removing the configuration for the logfile when the log file does not exist
  [role] : for instances with multiple roles, please use this option and define a role
After the registration of a log file, the system reads the full file and sends it to the log server.
The option '-k' prevents our cleanup routines from removing the configuration for the logfile when the log file does not exist.
Example:
register-log -k /full/path/to/logfile [role] # you should incorporate pwd
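For a dynamic path as mentioned above (e.g. a date in the filename), the registration can be scripted. A sketch under assumptions: register-log is the helper documented above, while the app-YYYY-MM-DD.log naming and the function name register_daily_log are hypothetical examples:

```shell
#!/bin/bash
# Sketch: register today's log file for a role.
# "register-log" is the helper documented above; the app-YYYY-MM-DD.log
# naming and register_daily_log are hypothetical examples.
register_daily_log() {
    local dir="$1" role="$2"
    register-log -k "${dir}/app-$(date +%F).log" "$role"
}
```

This could be called e.g. as register_daily_log /var/www/web/logs web from a daily cron job.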
This script is intended to run on all servers with the same role (e.g. an Auto-Scaling group) and ensures that the given command is only run once.
This is done by collecting all servers with the given role, sorting them by their IP address, and only running CMD when the local IP address matches the first address in the generated list.
The first argument of the script has to be the role name that is deployed.
After the role name you can add any command you want to run.
Be aware of the escaping of special characters in Bash (&, ;, |, $ and so on) that is necessary if they are part of the command to run.
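The selection logic described above can be illustrated with the following sketch. This is not the actual implementation of run-once-per-role.sh; it assumes jq is available, assumes `hostname -I` returns the local IP first, and uses a plain lexical `sort` as an approximation of the real ordering:

```shell
#!/bin/bash
# Illustrative sketch of the "first IP wins" selection described above.
# NOT the actual implementation of run-once-per-role.sh; assumes jq.
is_role_leader() {
    local role="$1" leader local_ip
    leader="$(get-instances-by-role "$role" | jq -r '.[].ip' | sort | head -n1)"
    local_ip="$(hostname -I | awk '{print $1}')"
    [ "$leader" = "$local_ip" ]
}
```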
Using run-once-per-role.sh you can run a command via install.sh that should only run once per deployment, e.g. flush caches.
This example code in install.sh runs the cache:clear command of 'artisan':
install.sh:
#!/bin/bash
[...]
run-once-per-role.sh "${ROLE}" /var/www/web/public/bin/artisan cache:clear
[...]
With run-once-per-role.sh you can install a cron job on all systems of a role, and it will still run on only one system.
The following line can be copied into the crontab and will run the cron:start command of 'artisan' on only one server:
* * * * * /usr/local/bin/run-once-per-role.sh 'web' /var/www/web/public/bin/artisan cron:start
This script deletes data from a Varnish cache.
Run the script on the jump server, passing the URL(s) to be deleted.
Delete a dedicated URL, e.g. an image or HTML page:
varnish-flush.sh http://example.com/test.html http://example.com/foobar.png
Delete all elements below a URL, e.g. foobar/*:
varnish-flush.sh http://example.com/foobar/*
Delete all elements with unknown URL depth, e.g. foobar/*/bla/*:
varnish-flush.sh http://example.com/foobar/*/bla/*
Delete a dedicated object for all domains:
varnish-flush.sh http://.*/test.html http://example.com/foobar.png
Delete all elements below a URL, e.g. foobar/*, from a dedicated role, e.g. varnish01:
varnish-flush.sh -r varnish01 http://example.com/foobar/*