
Wednesday, 24 July 2019

How to add and remove nodes from a Docker swarm by Raj Gupta

Note:- Swarm management commands can be run only on a manager (master) node, not on a worker. Running one on a worker node gives an error.

It is good to keep an odd number of masters (3, 5, 7); a swarm of n managers tolerates the loss of (n-1)/2 of them.
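That rule of thumb comes from Raft quorum: a majority of managers must stay reachable, so n managers tolerate floor((n-1)/2) failures. A tiny sketch to check the numbers:

```shell
# Fault tolerance of a Raft-based manager quorum: floor((n-1)/2).
fault_tolerance() {
    echo $(( ($1 - 1) / 2 ))
}

# Odd counts are recommended because adding one more (even) manager
# adds quorum overhead without adding tolerance.
for n in 3 5 7; do
    echo "$n managers tolerate $(fault_tolerance "$n") failure(s)"
done
```

Note that 4 managers tolerate the same single failure as 3, which is why the even counts buy you nothing.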


Master:-

If you want to know the worker or manager join token, run the below commands.

Key to join as worker:-

[root@ip-172-31-40-90 ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4z0bipbsxzoy2ccn5m22eiems1w0a5du8rlyt6nbvdq3pfegm8-1z76g4wwivmk1s0cp8dethxav 172.31.40.90:2377

Key to join as master:-

[root@ip-172-31-40-90 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4z0bipbsxzoy2ccn5m22eiems1w0a5du8rlyt6nbvdq3pfegm8-5go59kirhawvq08kdot5umv0m 172.31.40.90:2377

[root@ip-172-31-40-90 ~]#
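These join tokens are credentials: anyone holding one can join your swarm. If a token leaks, it can be rotated on a manager with the standard `--rotate` flag of `docker swarm join-token`. A hedged sketch (guarded so it is safe to run on a machine with no Docker daemon):

```shell
# Rotate both join tokens on a manager node. Old tokens stop working;
# nodes that already joined are unaffected.
rotate_tokens() {
    docker swarm join-token --rotate worker
    docker swarm join-token --rotate manager
}

# Only attempt it when a Docker daemon is actually reachable.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    rotate_tokens
else
    echo "docker daemon not reachable; rotation skipped"
fi
```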


[root@ip-172-31-40-90 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
q0yqpe0fsihtmoaj1834s1tnf     ip-172-31-35-189    Ready               Active                                  18.06.1-ce
rby7qdb8hc3ebuuy78vpl0i4v *   ip-172-31-40-90     Ready               Active              Leader              18.06.1-ce
pzcz2fs38ks9vrwecn30txuv9     ip-172-31-43-91     Ready               Active                                  18.06.1-ce
[root@ip-172-31-40-90 ~]#

After Worker02 left

[root@ip-172-31-40-90 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
q0yqpe0fsihtmoaj1834s1tnf     ip-172-31-35-189    Ready               Active                                  18.06.1-ce
rby7qdb8hc3ebuuy78vpl0i4v *   ip-172-31-40-90     Ready               Active              Leader              18.06.1-ce
pzcz2fs38ks9vrwecn30txuv9     ip-172-31-43-91     Down                Active                                  18.06.1-ce
[root@ip-172-31-40-90 ~]#

To remove the Down node (Worker02):

[root@ip-172-31-40-90 ~]# docker node rm pzcz2fs38ks9vrwecn30txuv9
pzcz2fs38ks9vrwecn30txuv9
[root@ip-172-31-40-90 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
q0yqpe0fsihtmoaj1834s1tnf     ip-172-31-35-189    Ready               Active                                  18.06.1-ce
rby7qdb8hc3ebuuy78vpl0i4v *   ip-172-31-40-90     Ready               Active              Leader              18.06.1-ce
[root@ip-172-31-40-90 ~]#
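Instead of copying the node ID by hand, all Down nodes can be removed in one pass. A sketch (assumes it runs on a manager; `docker node ls --format` and `docker node rm` are standard CLI calls, and the guard makes it safe to run anywhere):

```shell
# List "<id> <status>" pairs, keep the IDs whose status is Down,
# and remove them. xargs -r skips the rm when nothing matches.
prune_down_nodes() {
    docker node ls --format '{{.ID}} {{.Status}}' \
        | awk '$2 == "Down" { print $1 }' \
        | xargs -r docker node rm
}

# Only attempt it when a Docker daemon is actually reachable.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    prune_down_nodes
else
    echo "docker daemon not reachable; nothing to prune"
fi
```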

Now to remove an active node forcefully:

[root@ip-172-31-40-90 ~]# docker node rm -f q0yqpe0fsihtmoaj1834s1tnf
q0yqpe0fsihtmoaj1834s1tnf
[root@ip-172-31-40-90 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
rby7qdb8hc3ebuuy78vpl0i4v *   ip-172-31-40-90     Ready               Active              Leader              18.06.1-ce
[root@ip-172-31-40-90 ~]#


Now, to add both worker nodes back, run the below command on both worker machines:

docker swarm join --token SWMTKN-1-4z0bipbsxzoy2ccn5m22eiems1w0a5du8rlyt6nbvdq3pfegm8-1z76g4wwivmk1s0cp8dethxav 172.31.40.90:2377

After running the above command on both workers:

ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
af3y6rhgp328s777kw26g7co2     ip-172-31-35-189    Ready               Active                                  18.06.1-ce
rby7qdb8hc3ebuuy78vpl0i4v *   ip-172-31-40-90     Ready               Active              Leader              18.06.1-ce
u9sg7rw8yvb75o5ybrnppeglo     ip-172-31-43-91     Ready               Active                                  18.06.1-ce
[root@ip-172-31-40-90 ~]#

Both workers are added back to the cluster.

Worker01:-

Even after being removed forcefully by the master, the worker itself still shows the swarm as active. To make it inactive, run the below command on the worker:

[root@ip-172-31-35-189 ~]# docker swarm leave
Node left the swarm.
[root@ip-172-31-35-189 ~]#

[root@ip-172-31-35-189 ~]# docker swarm join --token SWMTKN-1-4z0bipbsxzoy2ccn5m22eiems1w0a5du8rlyt6nbvdq3pfegm8-1z76g4wwivmk1s0cp8dethxav 172.31.40.90:2377
This node joined a swarm as a worker.
[root@ip-172-31-35-189 ~]#





Worker02:-

To leave the cluster 

[root@ip-172-31-43-91 ~]# docker swarm leave

Node left the swarm.
[root@ip-172-31-43-91 ~]# docker info
Swarm: inactive

[root@ip-172-31-43-91 ~]# docker swarm join --token SWMTKN-1-4z0bipbsxzoy2ccn5m22eiems1w0a5du8rlyt6nbvdq3pfegm8-1z76g4wwivmk1s0cp8dethxav 172.31.40.90:2377
This node joined a swarm as a worker.
[root@ip-172-31-43-91 ~]#
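A gentler alternative to removing a worker outright is to drain it first, so the swarm reschedules its tasks before the node leaves; `docker node update --availability drain` is the standard command for this. A sketch (the node name is the Worker02 hostname from the examples above; substitute your own):

```shell
# Hostname taken from this post's examples; yours will differ.
NODE=ip-172-31-43-91

# Run on a manager: mark the node as drained so its tasks move elsewhere.
drain_node() {
    docker node update --availability drain "$1"
}

# Only attempt it when a Docker daemon is actually reachable.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    drain_node "$NODE"
else
    echo "docker daemon not reachable; showing the command only"
fi
```

Once drained, `docker swarm leave` on the worker and `docker node rm` on the manager proceed without interrupting running services.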



Tuesday, 26 March 2019

How to upload docker logs from EC2 instance to CloudWatch before shutdown by Raj Gupta

When you use an Auto Scaling group (ASG) on AWS, a Docker container running on an EC2 instance may exit for some ambiguous reason and the instance may be removed from the ASG. This makes debugging the failure difficult, since the ASG terminates the instance and erases all the evidence of what went wrong. So, below is the way to write Docker logs to CloudWatch before the instance exits.



1. First, attach a role to the EC2 server on which Docker is running, so that it has permission to write logs to CloudWatch.



2. Now create a Log Group from the CloudWatch dashboard.



3. Now log in to your EC2 server and enter your credentials in the file /etc/init/docker.override.


[root@ip-172-31-46-121 ~]# vi /etc/init/docker.override

env AWS_ACCESS_KEY_ID=AKIAIUMPX5TCNGRG5RXA

env AWS_SECRET_ACCESS_KEY=qOjKwGQBxwOmZy/yVY/UcROUsVIcMw8pn1RBJLBB

Then save and close the file.


4. Now run the below command to send your Docker logs to the CloudWatch log group Raj.

[root@ip-172-31-46-121 ~]# docker run -it --log-driver="awslogs" --log-opt awslogs-region="us-east-1" --log-opt awslogs-group="Raj" --log-opt awslogs-stream="log-stream" ubuntu:14.04 bash
root@4adc2e0120e6:/#

5. Initially, if there is no activity in the container, the log group in CloudWatch has no data, as below:

root@4adc2e0120e6:/#



6. Once there is activity in the container, all of its logs go to the log group in CloudWatch. For example, in our case exiting the container sends that event to CloudWatch:

root@4adc2e0120e6:/# exit
exit



In this way, all activity in the container is recorded in the CloudWatch log group.
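Instead of passing `--log-driver` and `--log-opt` on every `docker run`, the awslogs driver can also be made the daemon-wide default in /etc/docker/daemon.json (these are documented daemon.json keys; the region and group below reuse this post's values). A sketch:

```json
{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "us-east-1",
    "awslogs-group": "Raj"
  }
}
```

Restart Docker after editing the file (for example `service docker restart`) so new containers pick up the default.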

Saturday, 23 March 2019

Devops Project-4 - Using Git, Jenkins, Ansible, DockerHub and Docker to deploy on a Docker container by Raj Gupta



In Part-01 we create a Docker image on the Ansible server through a Jenkins job and push it to DockerHub.

  1. Docker should be installed on ansible server
  2. Should login to "docker hub" on ansible server
  3. Docker admin user should be part of docker group
In Part-02 we create the create_docker_container.yml playbook. It is initiated by a Jenkins job, run by Ansible, and executed on the Docker host.

In Part-03 we improve the setup to keep previous versions of Docker images.
So far we used the latest Docker image to build a container, but what happens if the latest version is not working? The easiest solution is to maintain a version for each build, which can be achieved using environment variables.

Take 3 EC2 RHEL servers as below for the Jenkins server, Ansible server and Docker host.





On the Jenkins server: do the same setup as project 1 (install Jenkins, Git, Maven and Java on the RHEL Jenkins server).

On the Ansible server install Ansible as below:

yum install ansible
ansible --version

Adding clients to the Ansible master:

cd /etc/ansible
vi hosts

Then add all clients' private IPs at the top of this file and save it:

[web]
172.31.38.44      ...like this

Then, to check connectivity between master and clients, paste the below on the master:

[root@ip-172-31-18-161 ~]# ansible -m ping all

On the Docker_host server: do the same setup as project 3.

After installing Docker on the RHEL server (see project 3), verify it, start the service, and add dockeradmin to the docker group with the below commands:

docker --version
service docker start
service docker status
usermod -aG docker dockeradmin     ----- the dockeradmin user should be part of the docker group




Note:- Docker must be installed on both the Ansible server (master) and the Docker host server (client), and the below command must be run on both servers (master and client):

[root@ip-172-31-89-34 docker]# docker login --username=rajguptaaws --password=aurangabad    

Install the below 2 plugins in Jenkins:
Publish Over SSH and Deploy to container

Then create a job in Jenkins and fill in the below in the different tabs:
Source Code Management:
  • Repository : https://github.com/rajkumargupta14/hello-world.git
  • Branches to build : */master
Build:
  • Root POM:pom.xml
  • Goals and options : clean install package

Before doing the below step, create the entry for ansible_server under Manage Jenkins → Configure System → Publish over SSH, then click on Add and fill in the details.
For more details check the project 3

Note: create the below directory on the Ansible server.
[root@ip-172-31-40-233 ~]# mkdir /opt/docker
Post Steps
  • Send files or execute commands over SSH
    • Name: ansible_server
    • Source files : webapp/target/*.war
    • Remove prefix : webapp/target
    • Remote directory : //opt//docker
  • Send files or execute commands over SSH
    • Name: ansible_server
    • Source files : Dockerfile
    • Remote directory : //opt//docker
    • Exec Command:
cd /opt/docker
docker build -t raj_demo4 .
docker tag raj_demo4 rajguptaaws/raj_demo4
docker push rajguptaaws/raj_demo4
docker rmi raj_demo4 rajguptaaws/raj_demo4


1.    Log in to the Docker host and check images and containers (no images or containers yet):
[dockeradmin@ip-172-31-40-233 ~]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
[dockeradmin@ip-172-31-40-233 ~]$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
2.    Log in to Docker Hub and check; you shouldn't find any images for raj_demo4.
3.    Execute Jenkins job
4.    Check images in Docker Hub. Now you should be able to see the new image pushed to the rajguptaaws Docker Hub account.
Note:- if you get an access issue, run the below commands:

[ec2-user@ip-172-31-35-164 ~]$ sudo -i
[root@ip-172-31-35-164 ~]# cd /opt/docker
[root@ip-172-31-35-164 docker]# chown -R dockeradmin:dockeradmin /opt/docker
[root@ip-172-31-35-164 docker]#

Part-02 : Deploy Containers

In the ansible server

[root@ip-172-31-35-164 ~]# cd /opt
[root@ip-172-31-35-164 opt]# mkdir playbooks
[root@ip-172-31-35-164 opt]# ls
containerd  docker  playbooks
[root@ip-172-31-35-164 opt]# cd playbooks/
[root@ip-172-31-35-164 playbooks]# pwd
/opt/playbooks
[root@ip-172-31-35-164 playbooks]# su - dockeradmin
Last login: Fri Feb  1 11:59:39 UTC 2019 on pts/1
[dockeradmin@ip-172-31-35-164 ~]$ cd /opt
[dockeradmin@ip-172-31-35-164 opt]$ ls
containerd  docker  playbooks
[dockeradmin@ip-172-31-35-164 opt]$ sudo chown dockeradmin:dockeradmin playbooks
[dockeradmin@ip-172-31-35-164 opt]$ cd playbooks/
[dockeradmin@ip-172-31-35-164 playbooks]$ sudo vi create_docker_container.yml

Then copy the below code

- hosts: all
  tasks:
  - name: stop previous version docker
    shell: docker stop raj_demo4
  - name: remove stopped container
    shell: docker rm -f raj_demo4
  - name: remove docker images
    shell: docker image rm -f rajguptaaws/raj_demo4
  - name: create docker image
    shell: docker run -d --name raj_demo4 -p 8090:8080 rajguptaaws/raj_demo4
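One caveat with this playbook: on a fresh Docker host the first run fails, because there is no raj_demo4 container to stop yet. Marking the cleanup tasks with Ansible's standard ignore_errors keyword lets the play continue; a sketch of the same playbook with that change:

```yaml
- hosts: all
  tasks:
  - name: stop previous version docker
    shell: docker stop raj_demo4
    ignore_errors: yes
  - name: remove stopped container
    shell: docker rm -f raj_demo4
    ignore_errors: yes
  - name: remove docker images
    shell: docker image rm -f rajguptaaws/raj_demo4
    ignore_errors: yes
  - name: create docker image
    shell: docker run -d --name raj_demo4 -p 8090:8080 rajguptaaws/raj_demo4
```

Only the cleanup tasks are marked; if the final docker run fails, the play still fails loudly, which is what you want.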


If you get a permission issue, run the below commands on the Ansible server:

[dockeradmin@ip-172-31-35-164 opt]$ sudo chown dockeradmin:dockeradmin playbooks
[dockeradmin@ip-172-31-35-164 opt]$ cd playbooks/
[dockeradmin@ip-172-31-35-164 playbooks]$ ls -ld
drwxr-xr-x. 2 dockeradmin dockeradmin 78 Feb  1 12:37 .

To run the playbook manually, use the below command:
[dockeradmin@ip-172-31-35-164 playbooks]$ ansible-playbook -v create_docker_container.yml

Now add this script to the Jenkins job.
  • Choose "Configure" to modify your Jenkins job.
    • Under post build actions
      • Send files or execute commands over SSH
        • Exec Command:
cd /opt/playbooks
ansible-playbook -v create_docker_container.yml
1.    Execute the Jenkins job.
2.    You should see a new container on your Docker host and be able to access it from a browser on port 8090:
http://54.152.73.16:8090/webapp/      -----Take the public IP of the Docker host
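A quick way to confirm the deployment from a shell (the IP is the example public IP from this post; substitute your Docker host's):

```shell
# Example public IP from this post; yours will differ.
HOST=54.152.73.16

# Smoke-test the deployed webapp; curl -f makes HTTP errors
# (404, 500, ...) fail the command instead of passing silently.
check_app() {
    curl -fsS -o /dev/null "http://$1:8090/webapp/" && echo "app is up"
}

# Call it manually after a deploy, e.g.:
#   check_app "$HOST"
```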







Part-03 : Deploy with Version Control Containers

We use 2 variables:
  • BUILD_ID - the current build id of Jenkins (every time you click on Build Now, it creates a new build id: 1, 2, 3, 4, 5, 6, ...)
  • JOB_NAME - the name of the project of this build. This is the name you gave your job when you first set it up, in our case hellow-world.
Add the below part:

Send files or execute commands over SSH

Name: ansible_server
Source files    : Dockerfile
Remote directory      : //opt//docker

cd /opt/docker
docker build -t $JOB_NAME:v1.$BUILD_ID .
docker tag $JOB_NAME:v1.$BUILD_ID rajguptaaws/$JOB_NAME:v1.$BUILD_ID
docker tag $JOB_NAME:v1.$BUILD_ID rajguptaaws/$JOB_NAME:latest
docker push rajguptaaws/$JOB_NAME:v1.$BUILD_ID
docker push rajguptaaws/$JOB_NAME:latest
docker rmi $JOB_NAME:v1.$BUILD_ID rajguptaaws/$JOB_NAME:v1.$BUILD_ID rajguptaaws/$JOB_NAME:latest




Now make the below changes in the Ansible playbook (the image references now use the versioned repository's latest tag):

- hosts: all
  tasks:
  - name: stop previous version docker
    shell: docker stop raj_demo4
  - name: remove stopped container
    shell: docker rm -f raj_demo4
  - name: remove docker images
    shell: docker image rm -f rajguptaaws/hellow-world:latest
  - name: create docker image
    shell: docker run -d --name raj_demo4 -p 8090:8080 rajguptaaws/hellow-world:latest

Devops Project-3 - Using Git, Jenkins and Docker on AWS | CI/CD on containers by Raj Gupta


So both the CI (build) and CD (deploy) parts are done using Jenkins on Docker containers.





For this project we require two EC2 servers:

  1. Jenkins server (take a RHEL EC2)
  2. Docker_Host server (take an Amazon Linux EC2)
Do the same setup for the Jenkins server as project 1 (install Jenkins, Git, Maven and Java on the RHEL Jenkins server).
Now create the second, Docker_Host EC2 server: take an Amazon Linux EC2 and do the below on the Docker server.
  1. Launch an Amazon Linux  EC2 instance for Docker host
  2. Install docker on EC2 instance and start services
[root@ip-172-31-89-34 ~]# yum install docker
[root@ip-172-31-89-34 ~]# service docker start
  3. Create a new user for Docker management and add him to the docker (default) group
[root@ip-172-31-89-34 ~]# useradd dockeradmin
[root@ip-172-31-89-34 ~]# passwd dockeradmin
[root@ip-172-31-89-34 ~]# usermod -aG docker dockeradmin

  4. Write a Dockerfile under /opt/docker
[root@ip-172-31-89-34 ~]# mkdir /opt/docker
[root@ip-172-31-89-34 ~]# cd /opt/docker
[root@ip-172-31-89-34 docker]# vi Dockerfile


# Pull base image
FROM tomcat:8-jre8

# Maintainer
MAINTAINER "valaxytech@gmail.com"

# copy war file on to container
COPY ./webapp.war /usr/local/tomcat/webapps






  5.  Log in to the Jenkins console and add the Docker server to execute commands from Jenkins



Manage Jenkins → Configure System → Publish over SSH → then click on Add and fill in the below

SSH server name:- any name of your choice, say docker
Hostname:- the private IP address of the Docker server
Username:- the user you created on the Docker server, in our case dockeradmin

Then click on the Advanced button, tick "Use password authentication, or use a different key", and give the password of the user you created on the Docker server (dockeradmin), in our case raj123456

Passphrase / Password:-    raj123456

Now log in to the Docker server and change the highlighted line in the below file:

[root@ip-172-31-86-168 ~]# vi /etc/ssh/sshd_config


# EC2 uses keys for remote access
PasswordAuthentication yes
#PermitEmptyPasswords no



[root@ip-172-31-86-168 ~]# service sshd restart



Then in Jenkins create new project

6.    Create a Jenkins job and give the below in the different tabs of Jenkins
A) Source Code Management
Repository : 
https://github.com/rajkumargupta14/hello-world.git
Branches to build : */master
B) Build Root POM: pom.xml
Goals and options : clean install package
C) send files or execute commands over SSH Name: docker_host
Source files : webapp/target/*.war
Remove prefix : webapp/target
Remote directory : //opt//docker
Exec command : docker stop valaxy_demo; docker rm -f valaxy_demo; docker image rm -f valaxy_demo; cd /opt/docker; docker build -t valaxy_demo .   (the trailing . must be given)
D) send files or execute commands over SSH
Name: docker_host
Exec command : docker run -d --name valaxy_demo -p 8090:8080 valaxy_demo
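A note on that Exec command chain: the semicolons deliberately ignore failures, which matters on the very first run when there is no valaxy_demo container to stop. An equivalent sketch that ignores only the cleanup errors but stops if the build or run fails:

```shell
deploy_container() {
    # Cleanup may fail on the first run; that's fine.
    docker stop valaxy_demo 2>/dev/null || true
    docker rm -f valaxy_demo 2>/dev/null || true
    docker image rm -f valaxy_demo 2>/dev/null || true
    # Build and run must succeed, so chain them with &&.
    cd /opt/docker && \
        docker build -t valaxy_demo . && \
        docker run -d --name valaxy_demo -p 8090:8080 valaxy_demo
}

# Meant to be pasted into the Jenkins Exec command box or called
# manually on the Docker host:
#   deploy_container
```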

  7. Log in to the Docker host and check images and containers (there are no images or containers yet):
[root@ip-172-31-89-34 docker]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
[root@ip-172-31-89-34 docker]# docker ps    ----this lists running containers
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@ip-172-31-89-34 docker]# docker ps -a  ----this lists both running and stopped containers
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@ip-172-31-89-34 docker]#


  8. Execute the Jenkins job (click on Build Now)

If you get the below error:
ERROR: Exception when publishing, exception message [Permission denied]
Build step 'Send files or execute commands over SSH' changed build result to UNSTABLE
 
Then run the below commands on the Docker server:
[root@ip-172-31-89-34 docker]# chown -R dockeradmin:dockeradmin /opt/docker
[root@ip-172-31-89-34 docker]# docker login --username=rajguptaaws --password=aurangabad@22    (logging in to the Docker Hub account is mandatory)

  9. Check images and containers again on the Docker host. This time an image and a container get created through the Jenkins job:
[root@ip-172-31-89-34 docker]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
valaxy_demo         latest              c61d83d3a824        3 minutes ago       462MB
tomcat              8-jre8              7ee26c09afb3        6 days ago          462MB
  10. Access the web application from a browser; it is running on the container at <docker_host_Public_IP>:8090
http://3.88.174.154:8090/webapp/