This script sets up the etcd daemon on the master nodes. It starts by copying a script, etcd_setup.sh, to each master node and then executing it there. A .variables file is also copied to each controller; it contains instance-specific variables derived from the environment details created during the first step: the number of master nodes in the cluster and the Kubernetes cluster subnet address.

NOTE: The following steps run on each controller node serially (i.e. one after another).

The script performs the following functions:

- The Kubernetes yum repository is set up, and docker, etcd, and kubectl are installed. The docker daemon is started and enabled at this point.
- SELinux is set to permissive mode. This is bad practice, and at a later date I'll look to create policies that allow SELinux to remain enabled.
- The key material for etcd is copied to the /etc/etcd/ directory.
- The /etc/etcd/etcd.conf file is updated with the correct details for the Kubernetes cluster. Some of this is simply filled in from local environment parameters (e.g. the hostname or local IP address). The ETCD_INITIAL_CLUSTER parameter must include the IP addresses of all members of the etcd cluster, which is why the number of master nodes must be passed to each master node as a variable (see the first sketch after this list).
- The etcd package from the EPEL repository doesn't enable most of the settings we require by default, so having updated /etc/etcd/etcd.conf, the /etc/systemd/system/etcd.service file is created with the startup parameters we need (second sketch below).
- The /var/lib/etcd/default.etcd directory (if it exists) is removed so that when we start etcd it uses the brand new configuration we've just created.
- After a systemctl daemon-reload, the etcd daemon is started and enabled.
- A final check is performed to ensure that the etcd member nodes are working. This will only report a fully healthy cluster on the final master node, because the installation runs serially across the master nodes.
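As a rough illustration of the ETCD_INITIAL_CLUSTER step, the snippet below builds the member list from the two values supplied in the .variables file and patches /etc/etcd/etcd.conf. The variable names (MASTER_COUNT, K8S_SUBNET) and the assumption that the masters sit on consecutive addresses starting at .10 are hypothetical; the real script's naming and addressing scheme may differ.

```bash
#!/bin/bash
# Hypothetical sketch only: variable names and the master IP numbering scheme
# are assumptions, not taken from the real etcd_setup.sh.
source /root/.variables   # assumed to define MASTER_COUNT and K8S_SUBNET (e.g. "10.240.0")

INITIAL_CLUSTER=""
for i in $(seq 0 $((MASTER_COUNT - 1))); do
  ip="${K8S_SUBNET}.1${i}"                        # e.g. 10.240.0.10, 10.240.0.11, ...
  member="controller-${i}=https://${ip}:2380"
  INITIAL_CLUSTER="${INITIAL_CLUSTER:+${INITIAL_CLUSTER},}${member}"
done

LOCAL_IP=$(hostname -I | awk '{print $1}')

# Fill in the node-specific and cluster-wide values, uncommenting defaults as needed.
sed -i \
  -e "s|^#*ETCD_NAME=.*|ETCD_NAME=$(hostname -s)|" \
  -e "s|^#*ETCD_INITIAL_CLUSTER=.*|ETCD_INITIAL_CLUSTER=\"${INITIAL_CLUSTER}\"|" \
  -e "s|^#*ETCD_INITIAL_ADVERTISE_PEER_URLS=.*|ETCD_INITIAL_ADVERTISE_PEER_URLS=\"https://${LOCAL_IP}:2380\"|" \
  -e "s|^#*ETCD_ADVERTISE_CLIENT_URLS=.*|ETCD_ADVERTISE_CLIENT_URLS=\"https://${LOCAL_IP}:2379\"|" \
  /etc/etcd/etcd.conf
```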
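The unit file written to /etc/systemd/system/etcd.service could look something like the following. This is a sketch rather than the actual file: it simply points etcd at /etc/etcd/etcd.conf (etcd reads its ETCD_* settings from the environment), and the TLS-related settings are assumed to live in that conf file alongside the cluster parameters.

```bash
# Hypothetical sketch of the systemd unit created by the script.
cat > /etc/systemd/system/etcd.service <<'EOF'
[Unit]
Description=etcd key-value store
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
User=etcd
WorkingDirectory=/var/lib/etcd/
# All ETCD_* settings (peer/client URLs, TLS material, initial cluster) come from here.
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```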
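The restart-and-verify sequence might then look like this. The certificate file names under /etc/etcd/ are assumptions (the text only says key material is copied there), and the etcdctl flags shown are for the v2 etcdctl API that the EPEL etcd package defaults to; a v3 etcdctl would use --cacert/--cert/--key and "endpoint health" instead.

```bash
# Discard any stale data directory so etcd starts from the new configuration.
rm -rf /var/lib/etcd/default.etcd

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

# Final check: on the last master this should report every member as healthy;
# on earlier masters some peers won't have been installed yet.
# File names below are assumed, not taken from the real script.
etcdctl \
  --ca-file=/etc/etcd/ca.pem \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --endpoints=https://127.0.0.1:2379 \
  cluster-health
```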