Using install.sh during deployments

When does the install.sh get executed?

The install.sh is run on each application instance to install the project, completing either the deployment of a new release or the provisioning of a new instance. Since the necessary steps are configured by the project itself, the install.sh must be part of the source code.

An installation via install.sh typically includes the following steps:

  1. Configure the application with the appropriate endpoints of the environment

  2. Install crons

  3. Execute the necessary PHP commands

The steps can be adapted as desired to the requirements of individual projects.

Background information

  1. The install.sh is executed as the www-data user. All file operations therefore run in the context of the web server, which avoids access problems.

  2. The install.sh must exit successfully (exit code 0), otherwise the deployment is canceled for the affected instance. You should therefore either add set -eo pipefail at the very beginning of your script or append || exit $? to every command that is critical to the deployment (usually all of them), so that the deployment exits with the exit code of the failed command. If you do neither, the deployment or the provisioning continues even after an unsuccessful command, which makes debugging much harder. A short sketch of both variants follows this list.

  3. The install.sh is always executed in the root directory of the current repository checkout.
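
A minimal sketch of the two error-handling variants from point 2 (the PHP command is only a placeholder):

#!/bin/bash

# Variant A: abort on the first failing command or pipeline
set -eo pipefail
envsubst < .env.dist > .env
/usr/bin/php bin/console cache:warmup

# Variant B: without set -eo pipefail, propagate the exit code
# of every critical command manually
#envsubst < .env.dist > .env || exit $?
#/usr/bin/php bin/console cache:warmup || exit $?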

Environment variables

install.sh is executed using bash and is given a number of environment variables (“envs”). You can see all environment variables that are visible to install.sh by executing sudo get-application-env on an application instance.

There are three classes of envs:

  1. Envs you set via r3 secret.

  2. Envs that we customize for your roles together with you and that we place into our configuration management. They take precedence over envs from r3 secret if their names conflict.

  3. Default envs that are always set (they overwrite envs from classes 1 and 2; a short usage sketch follows this list):

    • COMPOSER_HOME: Required by composer.

    • R3_ENV: prod/stage/test/... Use this one instead of $ENV.

    • ENV: deprecated, as in some cases $ENV is overwritten by the shell. Use $R3_ENV instead.

    • GIT_SSH: Only used if specially configured by us in case you need to clone git repos with install.sh. Sometimes required by e.g. composer.

    • HOME: Some deployment tools that may be used by install.sh scripts require a $HOME variable. It is set to /tmp/{{ role }}.

    • npm_config_cache: Required by npm.

    • REGION: AWS-region the current instance runs in.

    • ROLE: Role that is being deployed.

    • ROLES: Space separated list of roles running on the current instance.
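
A minimal sketch of how these default envs are typically consumed inside install.sh (the echo commands stand in for real deployment steps):

#!/bin/bash

# branch on the environment; use R3_ENV, not the deprecated ENV
if [[ "${R3_ENV}" == "prod" ]]
then
    echo "deploying role ${ROLE} in region ${REGION} (production)"
else
    echo "deploying role ${ROLE} in region ${REGION} (non-production)"
fi

# ROLES is space separated, so it can be iterated directly
for role in ${ROLES}
do
    echo "role ${role} runs on this instance"
done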

What to avoid in your install.sh

Clearing your central cache

Each new instance runs the install.sh and would flush your cache. So if autoscaling starts new instances, each of them flushes the cache, which works against the autoscaling that was triggered by high CPU load in the first place. Instead, use deployment hooks for flushing caches.

Creating database dumps/snapshots

During deployment, each instance would create a dump, which results in multiple dumps at the same time and/or several dumps during autoscaling. Instead, use deployment hooks for creating snapshots/dumps.

Creating cronjobs without run-once-per-role on autoscaling systems

The install.sh is run on each instance during deployment and on each new instance that is created via autoscaling. If you create cronjobs without “run-once-per-role”, every instance will have the same cronjob, so your cronjob will be executed multiple times, which might cause harm to your application. A short sketch of the difference follows.
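
A minimal sketch of the difference, using the run-once-per-role.sh helper from the examples below (the PHP command is only a placeholder):

# problematic on autoscaling systems: every instance of the role runs the job
echo "*/5 * * * * /usr/bin/php /var/www/web/cron.php" >> project-crontab

# safe: run-once-per-role.sh ensures only one instance of the role runs the job
echo "*/5 * * * * /usr/local/bin/run-once-per-role.sh 'web' /usr/bin/php /var/www/web/cron.php" >> project-crontab

crontab project-crontab || exit $?
rm project-crontab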

Examples

In all examples you will notice that we use some form of template (.dist) and replace the application config with this file once all placeholders in the template have been replaced. This is the most resilient way to create application configs: you can ship a config file for local testing in your release, and the Root360 deployment simply overwrites it via install.sh. The same release can therefore be used for local testing and on the Root360 Platform.

Generic example

An example install.sh might look like this:

#!/bin/bash

# store current working directory (repository checkout) for later
install_dir="${PWD}"

# inject environment variables (e.g. db/redis/ses endpoints) to .env
envsubst < .env.dist > .env || exit $?

# install CRON if role "backend" is installed
if [[ "${ROLE}" == "backend" ]]
then
    # cron example
    echo "* * * * * cd /var/www/ && /usr/bin/php cron.php > /dev/null 2>&1" >> project-crontab

    # cron example with custom logging
    echo "* * * * * date >> /var/log/application/cron.log && cd /var/www/ && /usr/bin/php cron.php >> /var/log/application/cron.log 2>&1" >> project-crontab

    # cron example to register dynamic application log files stored in a dedicated log folder
    echo "* * * * * /usr/local/bin/check-log-registration /var/www/${ROLE}/logs/" >> project-crontab

    # cron example to run a command on only one instance per role
    echo "* * * * * /usr/local/bin/run-once-per-role.sh 'web' /var/www/web/public/bin/artisan cron:start" >> project-crontab

    crontab project-crontab || exit $?
    rm project-crontab
fi

# register custom application log file
register-log -k "${install_dir}/log_dir/custom_application.log"

 

Shopware6 example for .env.dist and install.sh

.env.dist

## Database Config
DB_USER=$DATABASE_USER
DB_PASSWORD=$DATABASE_PASSWORD
DB_HOST=$DATABASE_HOST
DB_NAME=$DATABASE_NAME
DB_PORT=$DATABASE_PORT

## Elasticsearch Config
ES_ENABLED=true
ES_AWS=true
ES_VERSION=6.8.0
ES_CLUSTER=MASTER
ES_MASTER_HOST=https://$ELASTICSEARCH_HOST:$ELASTICSEARCH_PORT

## Redis Config
REDIS=true
REDIS_SESSION_HOST=$REDIS_HOST
REDIS_SESSION_PORT=$REDIS_PORT

install.sh

#!/bin/bash

# inject environment variables (e.g. db/redis/ses endpoints) to .env
envsubst < .env.dist > public/.env || exit $?

if [[ "${ROLE}" == "backend" ]]
then
    echo "*/15 * * * * /usr/local/bin/check-log-registration /var/www/backend/public/var/log backend" >> project-crontab
    crontab project-crontab || exit $?
    rm project-crontab
fi

if [[ "${ROLE}" == "frontend" ]]
then
    echo "*/15 * * * * /usr/local/bin/check-log-registration /var/www/frontend/public/var/log frontend" >> project-crontab
    crontab project-crontab || exit $?
    rm project-crontab
fi

cd ./public || exit $?

# password protection for stage environments including ip whitelisting
if [[ "${R3_ENV}" == "stage" ]] && [[ "${ROLE}" == "backend" ]]
then
    echo "
<RequireAny>
    AuthType Basic
    AuthName \"Protected Area\"
    AuthUserFile /var/www/${ROLE}/public/.htpasswd
    # Whitelisted IP
    Require ip [insert IP address]
    Require valid-user
</RequireAny>
" >> .htaccess || exit $?
fi

 

 

Magento2 Example for dist.env.php and install.sh

dist.env.php

install.sh

 

 

TYPO3 example for dist.env.php and install.sh

dist.env.php

install.sh

 

See Scripts Snippets for an explanation of the scripts register-log, check-log-registration, and run-once-per-role.sh used in the examples.

Explanation of envsubst

The envsubst program replaces placeholder variables with the values of the corresponding environment variables for the role and writes the result into the configuration file used by the application. In the example, a template configuration file .env.dist provides the placeholders. The environment variables available to each project can be inspected by executing sudo get-application-env on the respective application instance.
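
A minimal, self-contained sketch of what envsubst does (variable name and value are placeholders):

# .env.dist contains the line:  DB_HOST=$DATABASE_HOST
export DATABASE_HOST="db.example.internal"

envsubst < .env.dist > .env

# .env now contains the line:   DB_HOST=db.example.internal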

Explanation of "Installing CRONs"

The example shows how to install a CRON. In the example, the CRON is installed only for the "backend" role. With this methodology, you can restrict any command to specific roles; a short sketch follows.
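
A minimal sketch of the same role gating extended to several roles (the "worker" role and both PHP commands are hypothetical):

case "${ROLE}" in
    backend)
        # install the backend crontab
        echo "* * * * * /usr/bin/php /var/www/backend/cron.php" >> project-crontab
        crontab project-crontab || exit $?
        rm project-crontab
        ;;
    worker)
        # hypothetical second role with its own CRON
        echo "*/10 * * * * /usr/bin/php /var/www/worker/queue.php" >> project-crontab
        crontab project-crontab || exit $?
        rm project-crontab
        ;;
esac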

Explanation of "Running Required PHP Commands"

For Shopware, for example, you may want the attributes to be regenerated. A command like the following should be added:
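
A hedged sketch, assuming a Shopware 5 project in which attribute models are regenerated via the sw:generate:attributes console command (adjust path and command to your project):

# regenerate Shopware attribute models; abort the deployment on failure
/usr/bin/php bin/console sw:generate:attributes || exit $?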


