Friday 29 March 2019

How to persist the root volume of a running EBS-backed EC2 instance by Raj Gupta

By default, the DeleteOnTermination attribute is set to true for the root volume of an instance, but to false for all other volumes. 



So, to persist the root volume of a running EBS-backed EC2 instance, use the below steps:

1. You can see in the picture below that, for my production EC2 server, the Delete on termination value is True.



But I want to set Delete on termination = false, so that if our EC2 server is terminated, its root volume will not be deleted.


2. Now take any Linux or Windows system on which the AWS CLI is already installed.
In my case I am using a Linux system.

Save the below content in the file mapping.json:


[root@ip-172-31-85-11 ~]# vi mapping.json

[
  {
    "DeviceName": "/dev/xvda",
    "Ebs": {
      "DeleteOnTermination": false
    }
  }
]

Here /dev/xvda is the root device name of my production server, and DeleteOnTermination is set to false. (JSON does not allow inline comments, so keep the file exactly as shown.)

After saving the file mapping.json, run the below AWS CLI command:


[root@ip-172-31-85-11 ~]# aws ec2 modify-instance-attribute --instance-id i-0694f343c5515ce78 --block-device-mappings file://mapping.json

i-0694f343c5515ce78 is the instance ID of our production server.
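The whole of step 2 can be scripted. A minimal sketch (the device name and instance id are the examples from this post, and the final API call is left commented out because it needs AWS credentials):

```shell
# Write the block-device mapping to mapping.json. The device name and
# instance id below are the examples from this post -- substitute your own.
DEVICE_NAME="/dev/xvda"
INSTANCE_ID="i-0694f343c5515ce78"

cat > mapping.json <<EOF
[
  {
    "DeviceName": "${DEVICE_NAME}",
    "Ebs": {
      "DeleteOnTermination": false
    }
  }
]
EOF

# Validate the file before calling the API (JSON must not contain comments).
python3 -m json.tool mapping.json > /dev/null && echo "mapping.json is valid JSON"

# The actual call (commented out here; requires AWS credentials):
# aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" \
#   --block-device-mappings file://mapping.json
```

You can confirm the change afterwards with `aws ec2 describe-instance-attribute --instance-id <id> --attribute blockDeviceMapping`.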


3. Now you can see Delete on termination = false in the picture below for our production server.






Note:- 

If you want to preserve your root volume at EC2 creation time, just uncheck the Delete on termination box in the Add Storage section when creating the EC2 server, as shown in the pic below. In my case it is checked; you just need to uncheck it.




Thursday 28 March 2019

How to make all Objects in AWS S3 bucket public by default by Raj Gupta



We can use the AWS Policy Generator to generate a bucket policy for our bucket.
Select the options as below, then click on "Add Statement".


The above example allows (Effect: Allow) anyone (Principal: *) to access (Action: s3:GetObject) any object in the bucket (Resource: arn:aws:s3:::<bucket-name>/*).


Then select "Generate Policy"



Then we will get a policy like the one below.
This policy will allow anyone to read every object in our S3 bucket (raj03282019); to reuse it, replace raj03282019 with the name of your own bucket:

{
  "Id": "Policy1553770384193",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1553769495927",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::raj03282019/*",
      "Principal": "*"
    }
  ]
}
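The same policy can be generated from the command line for any bucket, without the Policy Generator. A minimal sketch (the Sid value here is an arbitrary label I chose, not the generator's auto-generated id):

```shell
# Generate the same public-read policy for any bucket. BUCKET is the example
# bucket from this post -- substitute your own.
BUCKET="raj03282019"

cat > bucket-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::${BUCKET}/*"
    }
  ]
}
EOF

# Confirm the file parses as JSON.
python3 -m json.tool bucket-policy.json > /dev/null && echo "policy OK for ${BUCKET}"

# To apply it from the CLI instead of the console (requires credentials):
# aws s3api put-bucket-policy --bucket "$BUCKET" --policy file://bucket-policy.json
```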


Now go to your AWS S3 console. At the bucket level (raj03282019), click on Permissions, then select Bucket Policy. Paste the generated policy into the editor and hit Save.




Now all the objects in the bucket (raj03282019 in my case) will be public by default.

Wednesday 27 March 2019

How to download an entire S3 bucket to your local system by Raj Gupta

We have many options to do this, but the best one is to use the AWS CLI.





  • Step 1
Download and install the AWS CLI on your local machine as per your operating system.
In my case the local operating system is Linux:
[root@ip-172-31-88-124 ~]# sudo pip install --upgrade awscli
If you are using a Windows system, then follow the below link


  • Step 2
Configure AWS CLI
[root@ip-172-31-88-124 ~]# aws configure
AWS Access Key ID [None]: <your-access-key-id>    ----Enter your Access Key
AWS Secret Access Key [None]: <your-secret-access-key>  ---Enter your Secret Access Key
Default region name [None]:   -----Press Enter for the default value
Default output format [None]:  ----------Press Enter for the default value

  • Step 3
In my S3 bucket (raj03272019) there are 2 files (pic1.png, pic2.png), and all of them will be downloaded to my local system at the same time.




  • Step 4
Sync the S3 bucket with the following command:

[root@ip-172-31-88-124 ~]# ls     
[root@ip-172-31-88-124 ~]#  aws s3 sync s3://raj03272019 /root
download: s3://raj03272019/pic1.png to ./pic1.png
download: s3://raj03272019/pic2.png to ./pic2.png
[root@ip-172-31-88-124 ~]# ls
pic1.png  pic2.png           ------both pics are downloaded to the local system


s3://raj03272019 >> the S3 bucket that you want to download

/root >> the path on your local system where you want to download the files. In my case I downloaded everything under /root, but you can download to any location you want.
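The sync step can be wrapped in a small helper that previews the transfer first. `s3_pull` is a hypothetical function name, not part of the AWS CLI; the `--dryrun` flag is a real AWS CLI option that lists what would be downloaded without transferring anything:

```shell
# A small helper around "aws s3 sync". By default it only prints the command
# with --dryrun appended; pass "go" as the third argument to really transfer.
s3_pull() {
  bucket="$1"; dest="$2"; mode="${3:-preview}"
  cmd="aws s3 sync s3://${bucket} ${dest}"
  if [ "$mode" = "go" ]; then
    $cmd                                  # real transfer (needs credentials)
  else
    echo "would run: ${cmd} --dryrun"     # --dryrun lists files without downloading
  fi
}

s3_pull raj03272019 /root
```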





Tuesday 26 March 2019

How to upload docker logs from EC2 instance to CloudWatch before shutdown by Raj Gupta

When you are using an auto scaling group (ASG) on AWS, a docker container running in an EC2 instance sometimes exits for an ambiguous reason and the instance may get removed from the ASG. This makes debugging the failure difficult, since the ASG terminates the instance and thereby erases all the evidence of what went wrong. So, below is a way to write docker logs to CloudWatch before the container exits.



1. First, attach a role to the EC2 server on which docker is running, so that it has permission to write logs to CloudWatch.



2. Now create a Log Group from the CloudWatch dashboard.



3. Now log in to your EC2 server and enter your credentials in the file /etc/init/docker.override


[root@ip-172-31-46-121 ~]# vi /etc/init/docker.override

env AWS_ACCESS_KEY_ID=<your-access-key-id>

env AWS_SECRET_ACCESS_KEY=<your-secret-access-key>

After that, save and close the file.


4. Now run the below command to write your docker logs to the CloudWatch log group Raj:

[root@ip-172-31-46-121 ~]# docker run -it --log-driver="awslogs" --log-opt awslogs-region="us-east-1" --log-opt awslogs-group="Raj" --log-opt awslogs-stream="log-stream" ubuntu:14.04 bash
root@4adc2e0120e6:/#

5. Initially, if there is no activity in the container, the log group in CloudWatch has no data, as below:

root@4adc2e0120e6:/#



6. Once we start any activity in the container, all of its logs go to the log group in CloudWatch. For example, in our case, when we exit the container, that log goes to CloudWatch:

root@4adc2e0120e6:/# exit
exit



Like that, every activity performed in the container is recorded in the CloudWatch log.
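The `docker run` flags from step 4 can be assembled from variables, so the same invocation is easy to reuse for another region, log group or stream. A sketch using this post's example values:

```shell
# Assemble the awslogs options from variables. The values below are the
# examples from this post -- substitute your own region/group/stream.
REGION="us-east-1"
LOG_GROUP="Raj"
LOG_STREAM="log-stream"

LOG_OPTS="--log-driver=awslogs --log-opt awslogs-region=${REGION} --log-opt awslogs-group=${LOG_GROUP} --log-opt awslogs-stream=${LOG_STREAM}"

# Print the full command; drop the leading "echo" to actually start the container.
echo docker run -it ${LOG_OPTS} ubuntu:14.04 bash
```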

Saturday 23 March 2019

Devops Project-4 - Using Git, Jenkins, Ansible, DockerHub and Docker to deploy on a docker container by Raj Gupta



In Part-01 we create a Docker image on the ansible server through a Jenkins job and push it to DockerHub.

  1. Docker should be installed on the ansible server
  2. You should be logged in to Docker Hub on the ansible server
  3. The dockeradmin user should be part of the docker group
In Part-02 we create the create_docker_container.yml playbook. It is initiated by the Jenkins job, run by ansible, and executed on the docker host.

In Part-03 we improve the process to store previous versions of docker images.
So far we have used the latest docker image to build a container, but what happens if the latest version is not working? The easiest solution is to maintain a version for each build. This can be achieved using environment variables.

Take 3 RHEL EC2 servers as below, for the Jenkins server, Ansible server and Docker host.





On the Jenkins server: -  Do the same setup for the Jenkins server as project 1 (install jenkins, git, maven and java on the RHEL Jenkins server).

On the Ansible server install ansible as below:

yum install ansible
ansible --version

Adding clients to the ansible master:

cd /etc/ansible
vi hosts

then add all the clients' private IPs at the top of this file and save it

[web]
172.31.38.44      ...like this

then, to check connectivity between the master and clients, run the below on the master

[root@ip-172-31-18-161 ~]# ansible -m ping all

On the Docker_host server: -  Do the same setup as project 3.

To check and start Docker on the RHEL server, run the below commands:

docker --version
service docker start
service docker status
usermod -aG docker dockeradmin     ----- the dockeradmin user must be part of the docker group




Note: Docker must be installed on both the Ansible server (master) and the Docker host server (client), and the below command must also be run on both servers (master and client):

[root@ip-172-31-89-34 docker]# docker login --username=rajguptaaws --password=<your-dockerhub-password>    

Install the below 2 plugins in Jenkins:
Publish over SSH and Deploy to container

Then create a job in Jenkins and fill in the below in the different tabs:
Source Code Management:
  • Repository : https://github.com/rajkumargupta14/hello-world.git
  • Branches to build : */master
Build:
  • Root POM:pom.xml
  • Goals and options : clean install package

Before doing the below step, create the entry for ansible_server in Manage Jenkins -> Configure System -> Publish over SSH, then click on ADD and fill in the details.
For more details check the project 3

Note: create the below directory on the ansible server
[root@ip-172-31-40-233 ~]# mkdir /opt/docker
Post Steps
  • Send files or execute commands over SSH
    • Name: ansible_server
    • Source files : webapp/target/*.war
    • Remove prefix : webapp/target
    • Remote directory : //opt//docker
  • Send files or execute commands over SSH
    • Name: ansible_server
    • Source files : Dockerfile
    • Remote directory : //opt//docker
    • Exec Command:
cd /opt/docker
docker build -t raj_demo4 .
docker tag raj_demo4 rajguptaaws/raj_demo4
docker push rajguptaaws/raj_demo4
docker rmi raj_demo4 rajguptaaws/raj_demo4
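The build-and-push sequence in the Exec Command can be sketched as a small shell function. `build_and_push` is a hypothetical helper; passing `echo` as its argument previews the docker commands without running them:

```shell
# The image build-and-push sequence above, as a function taking the docker
# command to use as $1 -- pass "echo" to preview, "docker" to really run.
build_and_push() {
  docker_cmd="$1"
  local_tag="raj_demo4"
  hub_tag="rajguptaaws/raj_demo4"
  $docker_cmd build -t "$local_tag" .         # build from the Dockerfile in .
  $docker_cmd tag "$local_tag" "$hub_tag"     # tag for Docker Hub
  $docker_cmd push "$hub_tag"                 # push to Docker Hub
  $docker_cmd rmi "$local_tag" "$hub_tag"     # clean up local copies
}

# Dry run: print the commands instead of executing them.
build_and_push echo
```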


1.    Login to the Docker host and check images and containers. (no images and containers yet)
[dockeradmin@ip-172-31-40-233 ~]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
[dockeradmin@ip-172-31-40-233 ~]$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
2.    Login to docker hub and check; you shouldn't find images for raj_demo4 yet.
3.    Execute Jenkins job
4.    Check images in Docker Hub. Now you should be able to see the new image pushed to the rajguptaaws Docker_Hub.
Note:- if you get an access issue, then run the below commands

[ec2-user@ip-172-31-35-164 ~]$ sudo -i
[root@ip-172-31-35-164 ~]# cd /opt/docker
[root@ip-172-31-35-164 docker]# chown -R dockeradmin:dockeradmin /opt/docker
[root@ip-172-31-35-164 docker]#

Part-02 : Deploy Containers

In the ansible server

[root@ip-172-31-35-164 ~]# cd /opt
[root@ip-172-31-35-164 opt]# mkdir playbooks
[root@ip-172-31-35-164 opt]# ls
containerd  docker  playbooks
[root@ip-172-31-35-164 opt]# cd playbooks/
[root@ip-172-31-35-164 playbooks]# pwd
/opt/playbooks
[root@ip-172-31-35-164 playbooks]# su - dockeradmin
Last login: Fri Feb  1 11:59:39 UTC 2019 on pts/1
[dockeradmin@ip-172-31-35-164 ~]$ cd /opt
[dockeradmin@ip-172-31-35-164 opt]$ ls
containerd  docker  playbooks
[dockeradmin@ip-172-31-35-164 opt]$ sudo chown dockeradmin:dockeradmin playbooks
[dockeradmin@ip-172-31-35-164 opt]$ cd playbooks/
[dockeradmin@ip-172-31-35-164 playbooks]$ sudo vi create_docker_container.yml

Then copy the below code

- hosts: all
  tasks:
  - name: stop previous version docker
    shell: docker stop raj_demo4
  - name: remove stopped container
    shell: docker rm -f raj_demo4
  - name: remove docker images
    shell: docker image rm -f rajguptaaws/raj_demo4
  - name: create docker image
    shell: docker run -d --name raj_demo4 -p 8090:8080 rajguptaaws/raj_demo4
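The four shell tasks in the playbook amount to the following plain shell sequence. `deploy` is a hypothetical helper; the `|| true` guards mirror the first run, when there is no previous container or image to remove, and passing `echo` previews the commands:

```shell
# The four playbook tasks above, as a plain shell function. Pass the docker
# command to use as $1 -- "echo" dry-runs the sequence, "docker" runs it.
deploy() {
  docker_cmd="$1"
  image="rajguptaaws/raj_demo4"
  name="raj_demo4"
  $docker_cmd stop "$name"         || true   # stop previous version
  $docker_cmd rm -f "$name"        || true   # remove stopped container
  $docker_cmd image rm -f "$image" || true   # remove old image
  $docker_cmd run -d --name "$name" -p 8090:8080 "$image"   # start new container
}

# Dry run: print the commands instead of executing them.
deploy echo
```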


If you get a permission issue, then run the below commands on the ansible server:

[dockeradmin@ip-172-31-35-164 opt]$ sudo chown dockeradmin:dockeradmin playbooks
[dockeradmin@ip-172-31-35-164 opt]$ cd playbooks/
[dockeradmin@ip-172-31-35-164 playbooks]$ ls -ld
drwxr-xr-x. 2 dockeradmin dockeradmin 78 Feb  1 12:37 .

To run the playbook manually, use the below command:
[dockeradmin@ip-172-31-35-164 playbooks]$ ansible-playbook -v create_docker_container.yml

Now Add this script to Jenkins job.
  • Choose "configure" to modify your jenkins job.
    • Under post build actions
      • Send files or execute commands over SSH
        • Exec Command:
cd /opt/playbooks
ansible-playbook -v create_docker_container.yml
1.    Execute Jenkins job.
2.    You should see a new container on your docker host, and you can access it from a browser on port 8090:
http://54.152.73.16:8090/webapp/      -----Take the public ip of docker host







Part-03 : Deploy with Version Control Containers

We use 2 variables:
  • BUILD_ID - the current build id of Jenkins (every time you click on Build Now it creates a new build id: 1, 2, 3, …)
  • JOB_NAME - the name of the project for this build; this is the name you gave your job when you first set it up (in our case hellow-world)
Add the below part:

Send files or execute commands over SSH

Name: ansible_server
Source files    : Dockerfile
Remote directory      : //opt//docker

cd /opt/docker
docker build -t $JOB_NAME:v1.$BUILD_ID .
docker tag $JOB_NAME:v1.$BUILD_ID rajguptaaws/$JOB_NAME:v1.$BUILD_ID
docker tag $JOB_NAME:v1.$BUILD_ID rajguptaaws/$JOB_NAME:latest
docker push rajguptaaws/$JOB_NAME:v1.$BUILD_ID
docker push rajguptaaws/$JOB_NAME:latest
docker rmi $JOB_NAME:v1.$BUILD_ID rajguptaaws/$JOB_NAME:v1.$BUILD_ID rajguptaaws/$JOB_NAME:latest
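A quick way to see how these tags expand is to substitute example values for the Jenkins variables. JOB_NAME and BUILD_ID are set by Jenkins at build time; the values below are illustrative:

```shell
# Simulate how the tag names expand for a given Jenkins build. In a real
# build, Jenkins sets JOB_NAME and BUILD_ID in the environment.
JOB_NAME="hellow-world"
BUILD_ID="7"

VERSIONED_TAG="${JOB_NAME}:v1.${BUILD_ID}"
HUB_TAG="rajguptaaws/${JOB_NAME}:v1.${BUILD_ID}"
LATEST_TAG="rajguptaaws/${JOB_NAME}:latest"

echo "$VERSIONED_TAG"   # hellow-world:v1.7
echo "$HUB_TAG"         # rajguptaaws/hellow-world:v1.7
echo "$LATEST_TAG"      # rajguptaaws/hellow-world:latest
```

Each build therefore pushes a uniquely versioned tag alongside latest, so an older image can always be pulled back if latest breaks.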




Now make the changes in the ansible code as below (the changed lines are the image names):

- hosts: all
  tasks:
  - name: stop previous version docker
    shell: docker stop raj_demo4
  - name: remove stopped container
    shell: docker rm -f raj_demo4
  - name: remove docker images
    shell: docker image rm -f rajguptaaws/hellow-world:latest
  - name: create docker image
    shell: docker run -d --name raj_demo4 -p 8090:8080 rajguptaaws/hellow-world:latest

Devops Project-3 - Using Git, Jenkins and Docker on AWS | CI/CD on containers by Raj Gupta


So both the CI (build) and CD (deploy) parts are done using Jenkins on Docker containers.





For this project we require two EC2 servers:

  1. Jenkins server (take a RHEL EC2)
  2. Docker_Host server (take an Amazon Linux EC2)
Do the same setup for the Jenkins server as project 1 (install jenkins, git, maven and java on the RHEL Jenkins server).
Now create the second Docker_Host EC2 server; for this, take an Amazon Linux EC2 server and do the below on it.
  1. Launch an Amazon Linux EC2 instance for the Docker host
  2. Install docker on the EC2 instance and start the service
[root@ip-172-31-89-34 ~]# yum install docker
[root@ip-172-31-89-34 ~]# service docker start
  3. Create a new user for Docker management and add it to the docker (default) group
[root@ip-172-31-89-34 ~]# useradd dockeradmin
[root@ip-172-31-89-34 ~]# passwd dockeradmin
[root@ip-172-31-89-34 ~]# usermod -aG docker dockeradmin

  4. Write a Dockerfile under /opt/docker
[root@ip-172-31-89-34 ~]# mkdir /opt/docker
[root@ip-172-31-89-34 ~]# cd /opt/docker
[root@ip-172-31-89-34 docker]# vi Dockerfile


# Pull base image
FROM tomcat:8-jre8

# Maintainer
MAINTAINER "valaxytech@gmail.com"

# Copy the war file onto the container
COPY ./webapp.war /usr/local/tomcat/webapps/






  5.  Login to the Jenkins console and add the Docker server so that Jenkins can execute commands on it



Manage Jenkins -> Configure System -> Publish over SSH -> then click on ADD and fill in the below

SSH server name: you can give any name of your choice, say docker here
Hostname: here you give the private IP address of the docker server.
Username: you give the username you created on the docker server, in our case dockeradmin

Then click on the Advanced button, tick "Use password authentication, or use a different key", and give the password of the user you created on the docker server (dockeradmin), in our case raj123456

Passphrase / Password:    raj123456

Now log in to the docker server and, in the below file, make sure PasswordAuthentication is set to yes.

[root@ip-172-31-86-168 ~]# vi /etc/ssh/sshd_config


# EC2 uses keys for remote access
PasswordAuthentication yes
#PermitEmptyPasswords no



[root@ip-172-31-86-168 ~]# service sshd restart
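The same edit can be scripted. A sketch against a sample copy of the file (on the real server you would run the `sed` against /etc/ssh/sshd_config itself and then restart sshd, as above):

```shell
# Reproduce the default commented-out line in a sample copy of the config.
printf '#PasswordAuthentication no\n#PermitEmptyPasswords no\n' > sshd_config.sample

# Uncomment the setting and force it to "yes".
sed -E -i 's/^#?PasswordAuthentication.*/PasswordAuthentication yes/' sshd_config.sample

grep PasswordAuthentication sshd_config.sample   # PasswordAuthentication yes
```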



Then in Jenkins create new project

6.    Create the Jenkins job and give the below in the different tabs of Jenkins
A) Source Code Management
Repository : 
https://github.com/rajkumargupta14/hello-world.git
Branches to build : */master
B) Build Root POM: pom.xml
Goals and options : clean install package
C) send files or execute commands over SSH Name: docker_host
Source files : webapp/target/*.war
Remove prefix : webapp/target
Remote directory : //opt//docker
Exec command : docker stop valaxy_demo; docker rm -f valaxy_demo; docker image rm -f valaxy_demo; cd /opt/docker; docker build -t valaxy_demo .  (the trailing "." is required: it tells docker build to use the current directory as the build context)
D) send files or execute commands over SSH
Name: docker_host
Exec command : docker run -d --name valaxy_demo -p 8090:8080 valaxy_demo

  1. Login to the Docker host and check images and containers. (no images and containers yet)
[root@ip-172-31-89-34 docker]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
[root@ip-172-31-89-34 docker]# docker ps    ----this gives the active containers
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@ip-172-31-89-34 docker]# docker ps -a  ----this gives both active and inactive containers
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@ip-172-31-89-34 docker]#


  2. Execute the Jenkins job (click on Build Now)

If you get the below error:
ERROR: Exception when publishing, exception message [Permission denied]
Build step 'Send files or execute commands over SSH' changed build result to UNSTABLE
 
Then run the below commands on the docker server
[root@ip-172-31-89-34 docker]# chown -R dockeradmin:dockeradmin /opt/docker
[root@ip-172-31-89-34 docker]# docker login --username=rajguptaaws --password=<your-dockerhub-password>    (you must be logged in to your dockerhub account)

  3. Check images and containers again on the Docker host. This time an image and a container get created through the Jenkins job.
[root@ip-172-31-89-34 docker]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
valaxy_demo         latest              c61d83d3a824        3 minutes ago       462MB
tomcat              8-jre8              7ee26c09afb3        6 days ago          462MB
  4. Access the web application from a browser; it is running on the container at <docker_host_Public_IP>:8090
http://3.88.174.154:8090/webapp/