New Ansible Playbook with Roles & Vault to create an instance in AWS from Anywhere!

Rangaswamy P V
15 min read · May 6, 2020

After the first tutorial, Ansible101, on installing and setting up Ansible on an Ubuntu box, let us now write a playbook to create an instance in AWS. In this tutorial your Ansible host can be in any cloud (GCP / AWS / Azure / Alibaba Cloud) or even on your local laptop, and a new instance will be created in your AWS account.

Ansible File Structure

Ansible will have the following file structure. Create the following directories under “/etc”. You may need sudo permission to do so

ansible, group_vars, group_vars/all, roles

You also need the following files

ansible.cfg and hosts
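
If you are creating these by hand, a minimal sketch of the commands (using the default paths from this article) would be:

$ sudo mkdir -p /etc/ansible/group_vars/all /etc/ansible/roles
$ sudo touch /etc/ansible/ansible.cfg /etc/ansible/hosts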

After manually creating the above, the “/etc/ansible” file structure will look similar to

/etc/ansible
  f ansible.cfg
  f hosts
  d group_vars
    d all
  d roles

If you followed the pip install method mentioned in Ansible101 for the Ansible install, then download the repo as shown. Copy the “ansible.cfg” file from the folder “confighost”, modify it according to your needs and save it in the “/etc/ansible” folder

$ git clone https://github.com/rangapv/Ansible.git
$ cd ./Ansible
$ ls ./confighost
ansible.cfg  hosts  var_aws.yml
$ sudo cp ./confighost/ansible.cfg /etc/ansible/

The “ansible.cfg” is the configuration file which Ansible uses to perform all its operations. Here I will list the minimal entries that are required; feel free to add more as necessary.

“sudo_user” is the current user on the Ansible box that is used to ssh into the remote client

“remote_user” is the login user on the remote client

[defaults]
.
.
sudo_user = rangapv07
.
.
# uncomment this to disable SSH key host checking
host_key_checking = False
# default user to use for playbooks if user is not specified
remote_user = rangapv08
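
A couple of other “[defaults]” entries can be handy as well. The lines below are a sketch with assumed paths from this setup (they are not taken from the repo’s ansible.cfg), so adjust or omit them as needed.

# optional extras for [defaults] -- assumed values, adapt to your environment
inventory = /etc/ansible/hosts
private_key_file = /home/rangapv08/ans/AldoCloudKEY.pem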

Also copy the “hosts” file from the folder “confighost” of the downloaded repo, modify it according to your needs and save it to the “/etc/ansible” folder. The “hosts” file will have the IPs and hostnames of the remote machines that you want Ansible to ping and ssh into for installing or configuring the required software on the client box, as mentioned in a playbook.

$ sudo cp ./confighost/hosts /etc/ansible/

Make sure the “hosts” file has the format shown below. The name in “[]” is the group-name for the list of servers that you want your Ansible Playbook tasks to apply to.

[kubeadm]
kubeadm1 ansible_ssh_host=35.35.35.35
[aws]
aws1 ansible_ssh_host=52.34.195.122
aws2 ansible_ssh_host=54.149.189.56
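
As a quick sanity check (a sketch, assuming ssh access to the listed hosts is already set up with the keys and remote_user configured above), you can ping a group from the inventory:

# should report "pong" for each reachable host in the group
$ ansible aws -m ping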

The group_vars directory will have all the variable definitions that the playbook uses to execute the tasks. Typically the access_key values go in here, the instance_type values go in here, etc. Since it contains sensitive information about the account, I would recommend you secure the file in an encrypted format using “ansible-vault”. The steps are outlined in the next section.

Securing the Variables file using Ansible Vault:

Let’s see how to use ansible-vault to safeguard the values in the “group_vars/all” directory. In order to create an encrypted file, use the create subcommand as shown

$ cd /etc/ansible/group_vars/all
$ sudo ansible-vault create var_aws.yml

You will be prompted for a password; enter your new password and confirm it. This will create a new file named var_aws.yml; store your sensitive data in it and save the file.

Every time you need to change the values in it, use the edit subcommand

$ sudo ansible-vault edit var_aws.yml

The console will ask for the password; enter the password you used at creation time above and add or delete variables as per your needs…
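
If you instead start from a plain-text var_aws.yml (for example the template copied from the repo later in this article), you can encrypt the existing file in place; a minimal sketch:

$ cd /etc/ansible/group_vars/all
$ sudo ansible-vault encrypt var_aws.yml     # prompts for a new vault password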

A typical “var_aws.yml” file will look similar to this

$ sudo ansible-vault view var_aws.yml
Vault password:
ami_image: ami-00fda5ddc3bc9876a
key_name: AldoCloudKEY
aws_access_key: A***********88F
aws_secret_key: x0000****************00Uu/BTK
.
.
partial list

The next section talks about roles and their definition, so let’s jump in.

Ansible Roles:

Any repeated tasks on a few instances/nodes can be executed remotely via roles, and you can apply roles to a subset of instances. In this example we will write two roles: first an “iptable” role, which applies an iptables setting on the newly created instance, and second a “restart” role, which restarts the instance.

Roles directory structure: make a directory with your role name, in this case “iptable” and “restart”. Under the “iptable” directory make a sub-directory named “tasks”. The “tasks” directory will have two files: one is “main.yml”, which includes the file name that has the tasks defined, in this case “ipt.yml”. You may need sudo permission to do so. Repeat similarly for the second role, “restart”.

First Role:
$ mkdir /etc/ansible/roles ---- directory "roles" name is mandatory
$ mkdir /etc/ansible/roles/iptable --- name of "iptable" is user defined
$ mkdir /etc/ansible/roles/iptable/tasks -- sub-folder "tasks" name is mandatory
$ vi /etc/ansible/roles/iptable/tasks/main.yml --- filename "main.yml" is mandatory
$ vi /etc/ansible/roles/iptable/tasks/ipt.yml --- filename "ipt.yml" is user defined
Second Role:
$ mkdir /etc/ansible/roles/restart --- name of "restart" is user defined
$ mkdir /etc/ansible/roles/restart/tasks -- sub-folder "tasks" name is mandatory
$ vi /etc/ansible/roles/restart/tasks/main.yml --- filename "main.yml" is mandatory
$ vi /etc/ansible/roles/restart/tasks/rest.yml --- filename "rest.yml" is user defined

The roles folder will have the following structure once you execute the above steps…

/etc/ansible/roles/
  iptable
    tasks
      main.yml
      ipt.yml
  restart
    tasks
      main.yml
      rest.yml

Before we start writing the main Ansible Playbook, let’s look at the AWS requirements for Ansible in the next section.

AWS pre-requisites:

Ansible needs the following parameters to talk to AWS:

aws access key
aws secret key
subnet id
ami id
security group
ssh key to login into the instance
…etc., at a minimum

All these values need to be populated either directly in the playbook or in the “group_vars/all” folder. The following steps detail the ways to configure these parameters…

boto3 Configuration Steps:

If you followed Ansible101 for the Ansible setup, then you will have both Ansible and Python with boto3 installed correctly.

In order for Ansible to talk to AWS dynamically, we will use the boto3 library of Python.

To make life simpler I have downloaded the ec2.py and ec2.ini files from the web and stored them in my personal repo (make sure you are using the latest version). Download the repo; the sub-folder “awsboto” will have the ec2.py and ec2.ini files

$ git clone https://github.com/rangapv/Ansible.git
$ cd ./Ansible/awsboto
$ ls
ec2.py  ec2.ini
$ chmod +x ec2.py

The ec2.py file need not be modified; you can use it as-is once downloaded in the above step. The ec2.ini file needs the key-value pair that is generated in the AWS console as shown below

You need to generate and keep handy the following key-value pair…

aws_access_key_id
aws_secret_access_key

Generate AWS access keys:

Go to your AWS console and select/click the IAM service. Once in this tab, go to the “Access management” option on the left panel, then select/click the Users option. On the right you will see the list of users; select your user.

The user summary will have a security credentials tab; select/click it. You will see the “Create access key” button. Click it and the system will generate the Access Key and Secret Access Key values, which you can note down or download as a .csv file to store safely.

These are the values that go into the following key-value pair in your ec2.ini file, as well as in the /etc/ansible/group_vars/all/var_aws.yml file.

aws_access_key_id = AKI*****************A
aws_secret_access_key = TX1*******************3
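
As an alternative to hard-coding them in ec2.ini, boto3 also picks up credentials from environment variables; a minimal sketch (values elided as above):

$ export AWS_ACCESS_KEY_ID=AKI*****************A
$ export AWS_SECRET_ACCESS_KEY=TX1*******************3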

When you run the ec2.py file as shown below you should see some output and not an error; an error means that the access key pair values are wrong. In some cases you will see the list of hostnames currently in the AWS account.

rangapv08@instance1$ ./ec2.py --list
{
  "_meta": {
    "hostvars": {}
  }
}

*ec2.py and ec2.ini can be placed in the “/etc/ansible” folder for dynamic inventory management. You can use them instead of the hosts file discussed above, although in this example they are optional and one can omit both ec2 files. One still needs the access keys…

So far we have created the Ansible file structure, created AWS access keys, and downloaded and modified the ec2.py and ec2.ini files.

We now need the ami_image, vpc_subnet_id, security_group and also the ssh key to create and log into the new instance on AWS.

ssh-key for logging into the instance: Create an ssh key using the ssh-keygen command available on the command line, or from the key-pair option on the left panel of the AWS console, and keep a copy (*.pem) of it on your Ansible Playbook box, so that it can be transferred to the newly created AWS instance at creation time.

-rw------- 1 rangapv08 rangapv08 1692 May  4 11:01 AldoCloudKEY.pem
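
ssh will refuse to use a private key that is readable by other users, so tighten the permissions on the copy kept on the Ansible box; a sketch, assuming the key lives at the path used later in this article:

$ chmod 400 /home/rangapv08/ans/AldoCloudKEY.pem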

The “ssh_key” variable used in the playbook (aws.yml) references this key, and its definition must be present in your “group_vars/all/var_aws.yml” file with a value pointing to the key location on the local Ansible host box.

All the variables used in the playbook need to be present in the file under the “group_vars/all” directory, or you need to provide them on the command line at the time of executing the playbook.

Log into your AWS console and make a note of the ami_image, key_name, vpc_subnet_id and security_group.

In this example, for my AWS account, the “var_aws.yml” file has these key-values: “key_name: AldoCloudKEY” is the private ssh key that was created in AWS, and “image: ami-00fda5ddc3bc9876a” is the “ami_image” that is present in my AWS region. This AMI should have all the prerequisites like Python/awscli etc. or any other requirement that you may need for the new instance. The “vpc_subnet_id” and “security_group” must exist in the AWS account/region.

The template for “var_aws.yml” can be found in the sub-folder “confighost” of the repo below.

$ git clone https://github.com/rangapv/Ansible.git
$ cd ./Ansible/confighost
$ ls
var_aws.yml  hosts  ansible.cfg
$ sudo cp ./var_aws.yml /etc/ansible/group_vars/all/

You may want to use it as a template and modify the values in the “var_aws.yml” file, such as ansible_ssh_private_key_file, key_name, image, region, zone etc., as per your requirements.

The “group_vars/all/var_aws.yml” file will look as shown below after populating it with the AWS variable values matching your AWS account

$ sudo ansible-vault view var_aws.yml
Vault password:
ami_image: ami-00fda5ddc3bc9876a
key_name: AldoCloudKEY
aws_access_key: A***********88F
aws_secret_key: x0000****************00Uu/BTK
wait: yes
group: launch-wizard-1
count: 1
aws_zone: us-west-2c
instance_type: t2.medium
vpc_subnet_id: subnet-e44865bc
ssh_key: /home/rangapv08/ans/AldoCloudKEY.pem
assign_public_ip: yes
region: us-west-2

Let’s Write the main Ansible Playbook:

This is the yaml definition for the main Ansible Playbook for AWS instance creation:

#aws.yaml
- name: Create instance(s)
  hosts: localhost
  connection: local
  gather_facts: no
  vars:
    nodes: "['{{ adm1 }}']"
  tasks:
    - name: Launch instances
      ec2:
        aws_secret_key: "{{ aws_secret_key }}"
        aws_access_key: "{{ aws_access_key }}"
        key_name: "{{ key_name }}"
        instance_type: "{{ instance_type }}"
        image: "{{ ami_image }}"
        volumes:
          - device_name: /dev/sda1
            volume_size: 20
            volume_name: "{{ item }}"
            volume_type: gp2
            Name: "{{ item }}"
            delete_on_termination: true
        wait: yes
        group: "{{ group }}"
        exact_count: "{{ count }}"
        count_tag: { Name: "{{ item }}" }
        vpc_subnet_id: "{{ vpc_subnet_id }}"
        assign_public_ip: yes
        instance_tags:
          Name: "{{ item }}"
          instance_type: "{{ instance_type }}"
        region: "{{ region }}"
        zone: us-west-2c
      with_items:
        - "{{ nodes }}"
      # when: tr.stdout == "[]"
      ignore_errors: yes
      register: ec2
    - name: ec2 facts
      ec2_instance_facts:
        aws_secret_key: "{{ aws_secret_key }}"
        aws_access_key: "{{ aws_access_key }}"
        region: "{{ region }}"
        filters:
          "tag:Name": "{{ nodes }}"
      register: efacts
    - name: start
      ec2:
        aws_secret_key: "{{ aws_secret_key }}"
        aws_access_key: "{{ aws_access_key }}"
        instance_tags:
          Name: "{{ item }}"
        assign_public_ip: yes
        # exact_count: 1
        region: "{{ region }}"
        instance_type: "{{ instance_type }}"
        state: running
      with_items:
        - "{{ nodes }}"
    - name: Wait for SSH to come up
      wait_for: host={{ item.public_ip_address }} port=22 delay=60 timeout=320
      with_items: "{{ efacts.instances }}"
    - name: Add host to groupname
      add_host: hostname={{ item.public_ip_address }} groupname=ec2_instances ansible_hostname={{ item.public_ip_address }}
      with_items: "{{ efacts.instances }}"

- name: Manage
  hosts: ec2_instances
  connection: ssh
  gather_facts: yes
  become: true
  become_user: ubuntu
  become_method: sudo
  remote_user: ubuntu
  vars:
    tag1:
    ansible_ssh_private_key_file: "{{ ssh_key }}"
    ansible_user: ubuntu
  roles:
    - { role: iptable }
  post_tasks:
    - name: Restart Instances
      include_role:
        name: restart

*Any variable hard-coded in the playbook takes precedence over the values in the group_vars folder.

To invoke the playbook

$ sudo ansible-playbook ./aws.yaml --extra-vars '{"adm1":"KubeAdm42"}' --ask-vault-pass -vvv

I am using “sudo” because the “var_aws.yml” file is encrypted and stored in the /etc/ansible/group_vars folder, which is owned by root.

--extra-vars is the command-line argument through which Ansible accepts the instance name(s) to create; right now I am creating one instance named KubeAdm42 on AWS.

--ask-vault-pass: will prompt for the password that we used to create the var_aws.yml file.

-vvv: turns on verbose debug-level output for ansible-playbook, useful for investigation.
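
Before a full run, you can optionally do a quick syntax pass over the playbook with the same vault prompt; a minimal sketch:

$ sudo ansible-playbook ./aws.yaml --syntax-check --ask-vault-pass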

The next section talks about the roles defined in the above playbook.

Roles Definition: The file structure of the “iptable” role is as shown

rangapv08@instance1:/etc/ansible/roles/iptable/tasks$ ls
main.yml ipt.yml

The yaml definition for main.yml is

---
- name: my ports
  import_tasks: ipt.yml

The yaml definition for ipt.yml is

# - name: create the iptables include file to be run at reboot
---
- stat:
    path: $HOME/iptab.sh
  register: iptm
- file:
    path: $HOME/iptab.sh
    state: touch
    mode: "0777"
  when:
    - iptm.stat.exists == False
- blockinfile:
    dest: $HOME/iptab.sh
    mode: u+rwx
    block: |
      sudo iptables -P FORWARD ACCEPT
  when:
    - iptm.stat.exists == False
- stat:
    path: /etc/init.d/iptab.sh
  become: yes
  register: itm
- command: sudo cp $HOME/iptab.sh /etc/init.d/
  when: itm.stat.exists == False
- command: sudo update-rc.d iptab.sh defaults
  when: itm.stat.exists == False
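
Once the playbook has run (including the restart at the end), a quick way to confirm the role took effect is to ssh into the new instance and inspect the FORWARD chain. A sketch using the key path and public IP from my run; substitute your own:

$ ssh -i /home/rangapv08/ans/AldoCloudKEY.pem ubuntu@54.245.76.167 'sudo iptables -S FORWARD'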

Post Tasks: After applying the roles, if one needs to perform certain other tasks on the instances, we can add post tasks. The following is the role used for restarting the server in the post_tasks section of the playbook “aws.yml”.

rangapv08@instance1:/etc/ansible/roles/restart/tasks$ ls
main.yml rest.yml

The yaml definition for main.yml

---
- name: Call for restarts
  import_tasks: rest.yml

The yaml definition for rest.yml

---
- local_action:
    module: ec2
    aws_secret_key: "{{ aws_secret_key }}"
    aws_access_key: "{{ aws_access_key }}"
    instance_type: "{{ instance_type }}"
    instance_tags:
      Name: "{{ tag1 }}"
    region: "{{ region }}"
    assign_public_ip: yes
    state: restarted
- wait_for_connection:
    delay: 60
    timeout: 320

I have provided a personal repo link below which has the yaml definitions for the roles and the instance creation playbook “aws.yml” (in the “createaws” subfolder of the repo). The roles folders can be used as-is after transferring them to the “/etc/ansible” folder. The “aws.yml” can also be used after making very minimal changes relating to your AWS account

$ git clone https://github.com/rangapv/Ansible.git
$ cd ./Ansible
$ ls
createaws confighost awsboto
$ cd ./createaws
$ ls
aws.yml iptable restart

Ansible Playbook execution …

Once we are ready to execute the playbook, make sure there are no local copies of ansible.cfg, hosts or the roles folder in your current directory after the git clone and after transferring them to the “/etc/ansible” folder as described from the beginning of this article, since Ansible checks the current directory for these files before checking the “/etc/ansible” folder (unless you would like it that way). It is even better to have “aws.yaml” in a separate folder as a standalone file before executing

rangapv08@instance-63:~/ans$ sudo ansible-playbook ./aws.yaml --extra-vars '{"adm1":"KubeAdm44"}' --ask-vault-pass -vvv
ansible-playbook 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/rangapv08/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.5/dist-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.5.2 (default, Apr 16 2020, 17:47:17) [GCC 5.4.0 20160609]
Using /etc/ansible/ansible.cfg as config file
Vault password:
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
[WARNING]: Found both group and host with same name: l1
[WARNING]: While constructing a mapping from /home/rangapv08/ans/aws.yaml, line 11, column 8, found a duplicate
dict key (instance_type). Using last defined value only.
[DEPRECATION WARNING]: ec2_instance_facts is kept for backwards compatibility but usage is discouraged. The module
documentation details page may explain more about this rationale.. This feature will be removed in a future
release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
statically imported: /etc/ansible/roles/iptable/tasks/ipt.yml
PLAYBOOK: aws.yaml *************************************************************************************************
2 plays in ./aws.yaml
PLAY [Create instance(s)] ******************************************************************************************
META: ran handlers
TASK [Launch instances] ********************************************************************************************
task path: /home/rangapv08/ans/aws.yaml:9
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: rangapv08
<127.0.0.1> EXEC /bin/sh -c 'echo ~rangapv08 && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/rangapv08/.ansible/tmp `"&& mkdir /home/rangapv08/.ansible/tmp/ansible-tmp-1588653685.2242649-2648-183740267817422 && echo ansible-tmp-1588653685.2242649-2648-183740267817422="` echo /home/rangapv08/.ansible/tmp/ansible-tmp-1588653685.2242649-2648-183740267817422 `" ) && sleep 0'
Using module file /usr/local/lib/python3.5/dist-packages/ansible/modules/cloud/amazon/ec2.py
<127.0.0.1> PUT /home/rangapv08/.ansible/tmp/ansible-local-2631h4insvgo/tmpxp5j0kjw TO /home/rangapv08/.ansible/tmp/ansible-tmp-1588653685.2242649-2648-183740267817422/AnsiballZ_ec2.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/rangapv08/.ansible/tmp/ansible-tmp-1588653685.2242649-2648-183740267817422/ /home/rangapv08/.ansible/tmp/ansible-tmp-1588653685.2242649-2648-183740267817422/AnsiballZ_ec2.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /home/rangapv08/.ansible/tmp/ansible-tmp-1588653685.2242649-2648-183740267817422/AnsiballZ_ec2.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/rangapv08/.ansible/tmp/ansible-tmp-1588653685.2242649-2648-183740267817422/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => (item=KubeAdm44) => {
"ansible_loop_var": "item",
"changed": true,
"instance_ids": [
"i-0c640276f00ad6f93"
],
"instances": [
{
"ami_launch_index": "0",
"architecture": "x86_64",
"block_device_mapping": {
"/dev/sda1": {
"delete_on_termination": true,
"status": "attached",
"volume_id": "vol-052ebfdda59e65360"
}
"volume_id": "vol-052ebfdda59e65360"
}
},
"dns_name": "ec2-54-245-76-167.us-west-2.compute.amazonaws.com",
"ebs_optimized": false,
"groups": {
"sg-eaf37d92": "launch-wizard-1"
},
"hypervisor": "xen",
"id": "i-0c640276f00ad6f93",
"image_id": "ami-00fda5ddc3bc9876a",
"instance_type": "t2.medium",
"kernel": null,
"key_name": "AldoCloudKEY",
"launch_time": "2020-05-05T04:41:29.000Z",
"placement": "us-west-2c",
"private_dns_name": "ip-172-0-175-104.us-west-2.compute.internal",
"private_ip": "172.0.175.104",
"public_dns_name": "ec2-54-245-76-167.us-west-2.compute.amazonaws.com",
"public_ip": "54.245.76.167",
"ramdisk": null,
"region": "us-west-2",
"root_device_name": "/dev/sda1",
"root_device_type": "ebs",
"state": "running",
"state_code": 16,
"tags": {
"Name": "KubeAdm44"
},
"tenancy": "default",
.
.
.
.
.
.
TASK [iptable : command] *******************************************************************************************
task path: /etc/ansible/roles/iptable/tasks/ipt.yml:24
<54.245.76.167> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<54.245.76.167> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/rangapv08/ans/AldoCloudKEY.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o ControlPath=/home/rangapv08/.ansible/cp/6094280c23 54.245.76.167 '/bin/sh -c '"'"'echo ~ubuntu && sleep 0'"'"''
<54.245.76.167> (0, b'/home/ubuntu\n', b'')
<54.245.76.167> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<54.245.76.167> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/rangapv08/ans/AldoCloudKEY.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o ControlPath=/home/rangapv08/.ansible/cp/6094280c23 54.245.76.167 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp `"&& mkdir /home/ubuntu/.ansible/tmp/ansible-tmp-1588653793.4196217-2839-211445028710762 && echo ansible-tmp-1588653793.4196217-2839-211445028710762="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1588653793.4196217-2839-211445028710762 `" ) && sleep 0'"'"''
<54.245.76.167> (0, b'ansible-tmp-1588653793.4196217-2839-211445028710762=/home/ubuntu/.ansible/tmp/ansible-tmp-1588653793.4196217-2839-211445028710762\n', b'')
Using module file /usr/local/lib/python3.5/dist-packages/ansible/modules/commands/command.py
<54.245.76.167> PUT /home/rangapv08/.ansible/tmp/ansible-local-2631h4insvgo/tmpmbd4bnf3 TO /home/ubuntu/.ansible/tmp/ansible-tmp-1588653793.4196217-2839-211445028710762/AnsiballZ_command.py
<54.245.76.167> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/rangapv08/ans/AldoCloudKEY.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o ControlPath=/home/rangapv08/.ansible/cp/6094280c23 '[54.245.76.167]'
<54.245.76.167> (0, b'sftp> put /home/rangapv08/.ansible/tmp/ansible-local-2631h4insvgo/tmpmbd4bnf3 /home/ubuntu/.ansible/tmp/ansible-tmp-1588653793.4196217-2839-211445028710762/AnsiballZ_command.py\n', b'')
<54.245.76.167> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<54.245.76.167> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/rangapv08/ans/AldoCloudKEY.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o ControlPath=/home/rangapv08/.ansible/cp/6094280c23 54.245.76.167 '/bin/sh -c '"'"'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1588653793.4196217-2839-211445028710762/ /home/ubuntu/.ansible/tmp/ansible-tmp-1588653793.4196217-2839-211445028710762/AnsiballZ_command.py && sleep 0'"'"''
<54.245.76.167> (0, b'', b'')
<54.245.76.167> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<54.245.76.167> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/rangapv08/ans/AldoCloudKEY.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-
with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o ControlPath=/home/rangapv08/.ansible/cp/6094280c23 -tt 54.245.76.167 '/bin/sh -c '"'"'/usr/bin/python /home/ubuntu/.ansible/tmp/ansible-tmp-1588653793.4196217-2839-211445028710762/AnsiballZ_command.py && sleep 0'"'"''
<54.245.76.167> (0, b'\r\n{"changed": true, "end": "2020-05-05 04:43:14.601192", "stdout": "", "cmd": ["sudo", "update-rc.d", "iptab.sh", "defaults"], "rc": 0, "start": "2020-05-05 04:43:14.469222", "stderr": "insserv: warning: script \'iptab.sh\' missing LSB tags and overrides", "delta": "0:00:00.131970", "invocation": {"module_args": {"creates": null, "executable": null, "_uses_shell": false, "strip_empty_ends": true, "_raw_params": "sudo update-rc.d iptab.sh defaults", "removes": null, "argv": null, "warn": true, "chdir": null, "stdin_add_newline": true, "stdin": null}}, "warnings": ["Consider using \'become\', \'become_method\', and \'become_user\' rather than running sudo"]}\r\n', b'Shared connection to 54.245.76.167 closed.\r\n')
<54.245.76.167> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<54.245.76.167> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/rangapv08/ans/AldoCloudKEY.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o ControlPath=/home/rangapv08/.ansible/cp/6094280c23 54.245.76.167 '/bin/sh -c '"'"'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1588653793.4196217-2839-211445028710762/ > /dev/null 2>&1 && sleep 0'"'"''
<54.245.76.167> (0, b'', b'')
changed: [54.245.76.167] => {
"changed": true,
"cmd": [
"sudo",
"update-rc.d",
"iptab.sh",
"defaults"
],
"delta": "0:00:00.131970",
"end": "2020-05-05 04:43:14.601192",
"invocation": {
"module_args": {
"_raw_params": "sudo update-rc.d iptab.sh defaults",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"rc": 0,
"start": "2020-05-05 04:43:14.469222",
"stderr": "insserv: warning: script 'iptab.sh' missing LSB tags and overrides",
"stderr_lines": [
"insserv: warning: script 'iptab.sh' missing LSB tags and overrides"
],
"stdout": "",
"stdout_lines": []
}
META: ran handlers
TASK [restart : wait_for_connection] *********************************************************************************************************************************
task path: /etc/ansible/roles/restart/tasks/rest.yml:13
wait_for_connection: attempting ping module test
<54.245.76.167> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<54.245.76.167> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/rangapv08/ans/AldoCloudKEY.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o ControlPath=/home/rangapv08/.ansible/cp/6094280c23 54.245.76.167 '/bin/sh -c '"'"'echo ~ubuntu && sleep 0'"'"''
<54.245.76.167> (0, b'/home/ubuntu\n', b'')
.
.
.
PLAY RECAP *********************************************************************************************************
54.245.76.167 : ok=7 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
localhost : ok=5 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

There you go! You now have a working Ansible Playbook to create a new AWS instance.
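
You can also cross-check from the awscli side that the tagged instance exists and is running; a sketch, assuming the aws CLI is configured with the same access keys and region:

$ aws ec2 describe-instances --region us-west-2 \
    --filters "Name=tag:Name,Values=KubeAdm44" \
    --query "Reservations[].Instances[].[InstanceId,State.Name,PublicIpAddress]"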


Rangaswamy P V

Works on Devops in Startups, reach me @rangapv on X/twitter or email: rangapv@gmail.com