SaltStack is part of the next evolution of infrastructure management tools that System Administrators have in their toolbox for provisioning and managing an ever growing fleet of servers.
The SaltStack project launched in 2011. We first posted about SaltStack in August 2013, the same year that GitHub's Octoverse ranked the saltstack/salt repository third among all public repositories in both the "issues closed" and "merged pull requests" categories.
On Nov. 8, 2013, Salt Cloud was merged into the main Salt repository and is included as part of the SaltStack 2014.1.0 Hydrogen release.
Salt Cloud is a tool for provisioning and managing cloud servers within and across supported cloud providers. For example, a system administrator can provision five new web servers within an AWS U.S. West Coast region and three new application servers within a Rackspace London region, all from a single node configured with Salt Cloud.
This post describes how to provision Amazon EC2 instances with Salt Cloud. I also describe how to provision several instances in parallel with a single command using Salt Cloud’s Map feature.
This post uses CentOS but, apart from some minor installation details, everything covered applies to any distribution that is available on EC2 and can run a current version of SaltStack.
Besides AWS EC2, SaltStack supports other Cloud Providers such as Digital Ocean, GoGrid, Google Compute Engine, OpenStack and Rackspace. A Feature Matrix provides a table of features supported for each Cloud Provider.
All interaction between the instance that runs salt-cloud, the Salt Cloud command line tool, and the instances it provisions occurs over SSH. A Salt Master is not required for Salt Cloud. If you'd like to manage provisioned instances using Salt states and modules, you will need to set up a Salt Master, which is not covered by this post.
Installation
The salt-cloud command line tool is shipped with the salt-master 2014.1.0 RPM package available as part of EPEL. It should be installed on an instance within EC2.
$ yum install salt-master
The SaltStack team maintains an Ubuntu Personal Package Archive that covers all current versions of Ubuntu, and Salt is also available in the standard openSUSE 13.1 release. The excellent documentation at docs.saltstack.com contains instructions on how to install Salt on other distributions and platforms.
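For example, on an Ubuntu node the packages might be installed along these lines (a sketch that assumes the ppa:saltstack/salt archive name):
$ sudo add-apt-repository ppa:saltstack/salt
$ sudo apt-get update
$ sudo apt-get install salt-master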
salt-cloud depends on Apache libcloud, a Python library that interacts with more than 30 cloud service providers. Use pip to install the stable version of apache-libcloud.
$ pip install apache-libcloud
If pip is not available, you may need to install the python-pip package first. If you’d like to have apache-libcloud installed in an isolated Python environment, first check out virtualenv.
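As a rough sketch, an isolated install might look like this (the /opt/salt-cloud path is just an example):
$ pip install virtualenv
$ virtualenv /opt/salt-cloud
$ source /opt/salt-cloud/bin/activate
$ pip install apache-libcloud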
EC2 Security Groups
Each instance provisioned by salt-cloud needs to belong to at least one AWS EC2 Security Group which allows incoming traffic on port 22/tcp originating from the instance running salt-cloud. I have described how to create Security Groups in a previous post using the awscli tool.
$ aws ec2 create-security-group \
    --group-name MySecurityGroupSaltCloudInstances \
    --description "The Security Group applied to all salt-cloud instances"
$ aws ec2 authorize-security-group-ingress \
    --group-name MySecurityGroupSaltCloudInstances \
    --source-group MySecurityGroupSaltCloud \
    --protocol tcp --port 22
The authorize-security-group-ingress command allows any EC2 node within the MySecurityGroupSaltCloud Security Group to reach any EC2 node within MySecurityGroupSaltCloudInstances on port 22/tcp. In my setup, the instance running salt-cloud belongs to the MySecurityGroupSaltCloud Security Group; you will need to create an equivalent Security Group for the instance that runs salt-cloud in your environment.
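To confirm that the ingress rule is in place, you can inspect the group with awscli (assuming the group names used above):
$ aws ec2 describe-security-groups \
    --group-names MySecurityGroupSaltCloudInstances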
EC2 Keypairs
salt-cloud relies on SSH to upload and apply salt-bootstrap. An SSH public and private key will need to be generated on the instance that runs salt-cloud. The public key will also need to be uploaded to AWS EC2 as a keypair. I’ve also described how to do this in a previous post.
To create a private and public SSH key:
$ ssh-keygen -f /etc/salt/my_salt_cloud_key -t rsa -b 4096
$ aws ec2 import-key-pair --key-name my_salt_cloud_key \
    --public-key-material file:///etc/salt/my_salt_cloud_key.pub
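To verify that the keypair was registered (again assuming the key name above):
$ aws ec2 describe-key-pairs --key-names my_salt_cloud_key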
Salt Cloud Profiles
Salt Cloud Profiles define some general configuration items for a group of salt minions that will be provisioned and managed by salt-cloud.
Within the /etc/salt/cloud.profiles file below, I've created a profile titled base_ec2_private which uses the my_ec2_ap_southeast_2_private_ips provider that I will define next. The only other option I need to specify is the AMI ID of the image that the minions will be running. ami-e7138ddd is the AMI ID of the CentOS 6.5 image released by CentOS.org and available within the AWS ap-southeast-2 region.
base_ec2_private:
  provider: my_ec2_ap_southeast_2_private_ips
  image: ami-e7138ddd
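If you are unsure which AMI IDs or instance sizes are available, salt-cloud can enumerate what a provider offers once the provider below is configured:
$ salt-cloud --list-images my_ec2_ap_southeast_2_private_ips
$ salt-cloud --list-sizes my_ec2_ap_southeast_2_private_ips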
Salt Cloud Providers
A salt-cloud provider defines the set of attributes, such as credentials, region and SSH details, that salt-cloud uses to provision instances with a given cloud provider, in this case AWS EC2.
Below is the /etc/salt/cloud.providers file which defines the my_ec2_ap_southeast_2_private_ips provider used by my base_ec2_private profile.
my_ec2_ap_southeast_2_private_ips:
  # ip address salt-cloud should connect to
  ssh_interface: private_ips
  # aws credentials
  id: @AWS_ACCESS_KEY_ID@
  key: '@AWS_SECRET_ACCESS_KEY@'
  # ssh key
  keyname: my_salt_cloud_key
  private_key: /etc/salt/my_salt_cloud_key
  # aws location
  location: ap-southeast-2
  availability_zone: ap-southeast-2a
  # aws security group
  securitygroup: MySecurityGroupSaltCloudInstances
  # instance size
  size: Micro Instance
  # delete aws root volume when minion is destroyed
  del_root_vol_on_destroy: True
  # user salt-cloud connects as over ssh
  ssh_username: root
  # rename on destroy
  rename_on_destroy: True
  # salt-cloud driver
  provider: ec2
I have defined a few attributes wrapped in @ symbols that need to be updated to suit your environment:
- @AWS_ACCESS_KEY_ID@: The AWS Access Key ID which belongs to an IAM account that has enough EC2 privileges to provision new instances. Although salt-cloud does support AWS IAM roles, they are only applied to provisioned EC2 minions. Static AWS access and secret keys are still used by salt-cloud to deploy minions.
- @AWS_SECRET_ACCESS_KEY@: The AWS secret key that belongs to the AWS Access Key ID.
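Once real credentials are in place, a quick read-only call such as listing locations will confirm that salt-cloud can talk to EC2:
$ salt-cloud --list-locations my_ec2_ap_southeast_2_private_ips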
Creating your first salt-cloud minion
First, you may want to set up your SSH key within your SSH agent.
$ eval `ssh-agent`
$ ssh-add /etc/salt/my_salt_cloud_key
Next, call salt-cloud, passing the name of a profile configured within /etc/salt/cloud.profiles and, as the final argument, the name of your new minion.
$ salt-cloud --profile=base_ec2_private my_first_minion
salt-cloud uses your SSH agent to pull down salt-bootstrap, which safely detects the minion's distribution, installs the salt-minion package, and pre-seeds the salt-master with the minion's key if you have set up a salt-master.
If successful, we can query the instance with salt-cloud:
$ salt-cloud --action=show_instance my_first_minion
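To list every instance salt-cloud knows about across configured providers, the --query option works without naming an instance:
$ salt-cloud --query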
salt-cloud also supports other actions such as querying and setting AWS EC2 tags:
$ salt-cloud --action=get_tags my_first_minion
$ salt-cloud --action=set_tags my_first_minion environment=devel \
    role=webserver
We can enable and disable EC2 Termination Protection:
$ salt-cloud --action=show_term_protect my_first_minion
$ salt-cloud --action=enable_term_protect my_first_minion
$ salt-cloud --action=disable_term_protect my_first_minion
We can also reboot the minion:
$ salt-cloud --action=reboot my_first_minion
If you have set up a salt-master, you should be able to run standard salt modules via the salt command line:
$ salt my_first_minion cmd.run '/sbin/ip address show'
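test.ping is a quick way to confirm that the master has accepted the minion's key and the minion is responding:
$ salt my_first_minion test.ping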
And of course, you could apply state.highstate if your salt-master's states have been set up:
$ salt my_first_minion state.highstate
Finally, we can destroy the instance with the --destroy option:
$ salt-cloud --destroy my_first_minion
Salt Cloud Maps
We have covered provisioning a single EC2 instance with salt-cloud. We can now scale this out to create multiple instances with a single salt-cloud command by using Salt Cloud Maps.
Within the /etc/salt/cloud.map file, I've defined three web servers that all inherit the base_ec2_private profile.
base_ec2_private:
  - web1_prod
  - web2_prod
  - web3_prod
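Map files can also override profile settings on a per-minion basis. As a sketch based on the map file syntax (the role grain is just an illustrative name), a single entry could seed a grain at provisioning time:
base_ec2_private:
  - web1_prod:
      minion:
        grains:
          role: webserver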
To provision all three instances, I simply pass the --map option with the location of the map file. By also including --parallel, all instances within the map will be provisioned at the same time.
$ salt-cloud --map=/etc/salt/cloud.map --parallel
Once provisioned, we can query all the instances within the map with salt-cloud.
$ salt-cloud --map=/etc/salt/cloud.map --query
To terminate all servers within the map we pass the --destroy option.
$ salt-cloud --map=/etc/salt/cloud.map --destroy