Infra as Code – GitOps – part 1: Terraform on the Scaleway provider

STEP 1 install Terraform

On Ubuntu:

curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -

sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"

sudo apt-get update && sudo apt-get install terraform

Verify it's installed by using:

$ terraform version

STEP 2 Create API credentials in the Scaleway admin console

STEP 3 create a project folder, then create a Terraform declarative configuration file:
vi scaleway.tf

Add the following content to it to deploy a General Purpose C1 instance running the Ubuntu Bionic base image in the fr-par-1 zone:

terraform {
  required_providers {
    scaleway = {
      source = "scaleway/scaleway"
    }
  }
  required_version = ">= 0.13"
}

provider "scaleway" {
  access_key      = "YOUR ACCESS KEY"
  secret_key      = "YOUR SECRET KEY"
  organization_id = "YOUR ORGANIZATION ID"
  zone            = "fr-par-1"
  region          = "fr-par"
}
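Hardcoding keys in scaleway.tf is risky if the file ever gets committed. The Scaleway provider can also read credentials from environment variables; a sketch with placeholder values (substitute your own keys from the console):

```shell
# Export Scaleway credentials so they stay out of version-controlled .tf files.
# The values below are placeholders, not real credentials.
export SCW_ACCESS_KEY="SCWXXXXXXXXXXXXXXXXX"
export SCW_SECRET_KEY="00000000-0000-0000-0000-000000000000"
export SCW_DEFAULT_ORGANIZATION_ID="00000000-0000-0000-0000-000000000000"
export SCW_DEFAULT_ZONE="fr-par-1"
export SCW_DEFAULT_REGION="fr-par"
echo "Scaleway credentials exported for zone $SCW_DEFAULT_ZONE"
```

With these set, the access_key, secret_key and organization_id lines can be dropped from the provider block entirely.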

resource "scaleway_instance_ip" "public_ip" {}

resource "scaleway_instance_volume" "data" {
  size_in_gb = 50
  type       = "l_ssd"
}

resource "scaleway_instance_server" "my-ubuntu-instance" {
  type  = "C1"
  image = "ubuntu_bionic"

  tags = ["devops_terraformC1", "MyUbuntuInstance"]

  ip_id = scaleway_instance_ip.public_ip.id

  additional_volume_ids = [scaleway_instance_volume.data.id]
}
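To print the instance's public IP after apply, you can add an output block reusing the resource names above (a sketch; the `address` attribute name is taken from the Scaleway provider docs, so verify it against your provider version):

```hcl
output "instance_public_ip" {
  value = scaleway_instance_ip.public_ip.address
}
```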

STEP 4 Run terraform init to initialize the working directory and download the Scaleway provider

STEP 5 Preview the actions Terraform will take using the command terraform plan

STEP 6 Run terraform apply.

Confirm the execution of the plan by typing yes when prompted

STEP 7 Go to the Instances section in your Scaleway Console. You can see that the instance has been created

Note: you can delete everything by running terraform destroy in your terminal

mobydock

devops.pm father

More Posts - Website

Follow Me:
TwitterFacebook

Trying the Jenkins plugin Blue Ocean

After installing the plugin via the plugin manager, launch it using the Blue Ocean button

Then let's "create our first pipeline"

First job: configure where the code is; GitHub is an easy way

You must create a file named Jenkinsfile in your repo

Jenkinsfile :

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}
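Declarative pipelines also support a post section that runs after the stages, which is handy for notifications. A minimal sketch extending the Jenkinsfile above:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
    }
    // post runs after all stages, with blocks keyed on the build result
    post {
        success {
            echo 'Build finished successfully'
        }
        failure {
            echo 'Build failed'
        }
    }
}
```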

First pipeline created: well done!


Kubernetes cluster on Raspberry Pi

MASTER NODE :


sudo kubeadm config images pull -v3

sudo kubeadm init --token-ttl=0

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


 Install the Weave Net network driver


kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Launch:
kubectl get nodes

NAME     STATUS   ROLES    AGE     VERSION
yourPC   Ready    master   2m57s   v1.16.2

On the workers, after installing kubeadm and kubectl (see my post),

launch your join command, which looks like:

kubeadm join 192.168.0.27:6443 --token 9e700f.7dc97f5e3a45c9e5 --discovery-token-ca-cert-hash sha256:95cbb9ee5536aa61ec0239d6edd8598af68758308d0a0425848ae1af28859bea


Kubernetes on Rpi !!

The cherry on the k8ke

Install Docker:

curl -sSL get.docker.com | sh && \
sudo usermod pi -aG docker && \
newgrp docker

Disable swap:

sudo dphys-swapfile swapoff && \
sudo dphys-swapfile uninstall && \
sudo update-rc.d dphys-swapfile remove

modify the file /boot/cmdline.txt by adding at the end :

cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

then reboot :
sudo reboot
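The cmdline.txt edit above can be scripted with sed. The sketch below works on a scratch copy so it is safe to try anywhere; on the Pi you would run the same sed command against /boot/cmdline.txt with sudo:

```shell
# Work on a scratch copy for the demo; on the Pi, target /boot/cmdline.txt with sudo.
demo=$(mktemp)
printf 'console=serial0,115200 root=/dev/mmcblk0p2 rootwait\n' > "$demo"
# cmdline.txt must remain a single line, so append to the end of that line.
sed -i '$ s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' "$demo"
cat "$demo"
```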

Create the file /etc/apt/sources.list.d/kubernetes.list containing:
deb http://apt.kubernetes.io/ kubernetes-xenial main
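This can be done in one command. The sketch below writes to a scratch file so it runs anywhere; on the Pi you would pipe into `sudo tee /etc/apt/sources.list.d/kubernetes.list` instead:

```shell
# Write the Kubernetes apt source line; LIST points at a scratch file for the demo.
LIST=$(mktemp)
echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' | tee "$LIST"
```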

Launch:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

then
sudo apt-get update


Install kubeadm; it will also install kubectl:

sudo apt-get install -qy kubeadm

ENJOY !!!!


Docker on Raspberry Pi Zero – October 2019

Device: Raspberry Pi 0 v1.3
OS: Raspbian Buster Lite (2019-09-26)
From a fresh image:

  1. wget -O /tmp/containerd.io_1.2.10-1_armhf.deb --content-disposition https://packagecloud.io/Hypriot/rpi/packages/raspbian/buster/containerd.io_1.2.10-1_armhf.deb/download.deb
  2. sudo apt install /tmp/containerd.io_1.2.10-1_armhf.deb
  3. sudo apt-get install curl git
  4. bash -c "$(curl -fsSL https://get.docker.com)"

ENJOY !!!!!!!

why not go further with KUBERNETES …


Starting DevOps with a simple Pipeline on AWS

Step 1 – Create an AWS S3 bucket:
 https://console.aws.amazon.com/s3/

Step 2 – Download the sample file from the dist directory on GitHub:
https://github.com/awslabs/aws-codepipeline-s3-aws-codedeploy_linux

Step 3: the file is named aws-codepipeline-s3-aws-codedeploy_linux.zip

Step 4: Upload this file to your S3 bucket
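The sample bundle is driven by an appspec.yml file that tells CodeDeploy what to copy and which lifecycle hooks to run. A minimal sketch of the format (the bundle's actual contents may differ; the paths and script names here are illustrative):

```yaml
version: 0.0
os: linux
files:
  - source: /index.html
    destination: /var/www/html/
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies
      timeout: 300
      runas: root
```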

Step 5: Create AWS EC2 Linux instances:
https://console.aws.amazon.com/ec2/

Click on Launch and use the free Amazon Machine Image (AMI):

Amazon Linux 2 AMI (HVM)

Step 6: create an IAM instance profile for your Amazon EC2 instances
https://docs.aws.amazon.com/fr_fr/codedeploy/latest/userguide/getting-started-create-iam-instance-profile.html

Step 7: when creating the EC2 instance, paste this user-data script:

#!/bin/bash
yum -y update
yum install -y ruby
yum install -y aws-cli
cd /home/ec2-user
aws s3 cp s3://aws-codedeploy-us-east-2/latest/install . --region us-east-2
chmod +x ./install
./install auto

Step 8 : Launch instance

Step 9: Verify the CodeDeploy agent is running:
sudo service codedeploy-agent status

Step 10: Create a CodeDeploy application

Open the console
https://console.aws.amazon.com/codedeploy.

Create a deployment group.

Step 11 : Create your first pipeline !!

Sign in to the AWS Management Console and open the CodePipeline console: http://console.aws.amazon.com/codesuite/codepipeline/home

create a pipeline and follow the instructions

Let's play: source and deploy stages first, then add a build step!

Congratulations!

You have successfully created a pipeline that retrieved this source application from an Amazon S3 bucket and deployed it to three Amazon EC2 instances using AWS CodeDeploy.


One of the best 2019 DevOps tools! Gradle

Gradle Build Tool – the fastest way to microservices!

Your DevOps tool stack will need a reliable build tool. Apache Ant and Maven dominated the automated build tools market for many years, but Gradle showed up on the scene in 2009, and its popularity has steadily grown since then. Gradle is an incredibly versatile tool which allows you to write your code in Java, C++, Python, or other languages. Gradle is also supported by popular IDEs such as Netbeans, Eclipse, and IntelliJ IDEA. If that doesn’t convince you, it might help to know that Google also chose it as the official build tool for Android Studio.

While Maven and Ant use XML for configuration, Gradle introduces a Groovy-based DSL for describing builds. In 2016, the Gradle team also released a Kotlin-based DSL, so now you can write your build scripts in Kotlin as well. This means that Gradle does have a learning curve, so it can help a lot if you have used Groovy, Kotlin or another JVM language before. Besides, Gradle uses Maven's repository format, so dependency management will be familiar if you have prior experience with Maven. You can also import your Ant builds into Gradle.

The best thing about Gradle is incremental builds, as they save a nice amount of compile time. According to Gradle’s performance measurements, it’s up to 100 times faster than Maven. This is in part because of incrementality, but also due to Gradle’s build cache and daemon. The build cache reuses task outputs, while the Gradle Daemon keeps build information hot in memory in-between builds.

All in all, Gradle allows faster shipping and comes with a lot of configuration possibilities.
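For a feel of the Groovy DSL mentioned above, a minimal build.gradle for a Java project might look like this (a sketch; the plugin and dependency choices are illustrative):

```groovy
// Apply the built-in Java plugin to get compile/test/jar tasks.
plugins {
    id 'java'
}

// Resolve dependencies from Maven Central, using Maven's repository format.
repositories {
    mavenCentral()
}

// Declare a test-only dependency in Maven group:artifact:version notation.
dependencies {
    testImplementation 'junit:junit:4.13.2'
}
```

Running `gradle build` with this file compiles the sources, runs the tests, and benefits from the incremental build and cache behavior described above.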

more : https://raygun.com/blog/best-devops-tools/


SRE and DevOps – Site Reliability Engineering

Both SRE and DevOps are methodologies addressing organizations' needs for production operation management. But the differences between the two doctrines are quite significant: while DevOps teams raise problems and dispatch them to the developers to solve, the SRE approach is to find problems and solve some of them themselves. While DevOps teams would usually choose the more conservative approach, leaving the production environment untouched unless absolutely necessary, SREs are more confident in their ability to maintain a stable production environment and push for rapid changes and software updates. Not unlike the DevOps team, SREs also thrive on a stable production environment, but one of the SRE team's goals is to improve performance and operational efficiency.


Shadow IT is back, with a vengeance

Since the rise of Docker, it’s not uncommon to hear the following story: our developers, instead of getting VMs from the IT department, get one giant big VM, install Docker on it, and now they don’t have to ask for VMs each time they need a new environment.

It's good for developers, because they can finally work quickly; it's bad for the IT department, because now they have lots of unknown resources lying around and it's a nightmare to manage and/or clean up afterwards.

Opportunity? Threat?

source from
https://jpetazzo.github.io/2017/10/31/devops-docker-empathy/


Raspberry Pi Docker Swarm and Portainer.io

Today we will try Portainer.io to monitor Docker containers created with Docker Swarm on multiple Pi Zero workers with an RPi3 leader.

To install Docker on the RPi:
curl -sSL https://get.docker.com | sh
then
sudo usermod -aG docker pi

To init the leader
docker swarm init

This will give you the command for the workers
docker swarm join \
--token SWMTKN-1-5awy2ej1d55mvgpq1obunnh6u2r8b0jjujel619es-7caoz16dxre2bkplp3sh \
xxx.xxx.xxx.xxx:2377

On the leader you can control your node :
docker node ls

————————–
PORTAINER.IO
————————–

To manage your swarm cluster, you need to install Portainer on the leader:
docker service create \
  --name portainer \
  --publish 9000:9000 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer \
  -H unix:///var/run/docker.sock

Then enjoy by connecting to :

http://IP_LEADER:9000
