Infra as Code – GITOPS – part 1 : TERRAFORM on scaleway provider



STEP 1 Install Terraform using the HashiCorp apt repository. Add the GPG key:

curl -fsSL | sudo apt-key add -

sudo apt-add-repository "deb [arch=amd64] $(lsb_release -cs) main"

sudo apt-get update && sudo apt-get install terraform

Verify it is installed by running:

$ terraform version

STEP 2 Create API credentials in the Scaleway admin interface

STEP 3 Create a project folder, then create a Terraform declarative configuration file

Add the following content to it to deploy a C1 instance running the Ubuntu Bionic base image in the fr-par-1 zone:

terraform {
  required_providers {
    scaleway = {
      source = "scaleway/scaleway"
    }
  }
  required_version = ">= 0.13"
}

provider "scaleway" {
  access_key      = "YOUR ACCESS KEY"
  secret_key      = "YOUR SECRET KEY"
  organization_id = "YOUR ORGANIZATION ID"
  zone            = "fr-par-1"
  region          = "fr-par"
}

resource "scaleway_instance_ip" "public_ip" {}

resource "scaleway_instance_volume" "data" {
  size_in_gb = 50
  type       = "l_ssd"
}

resource "scaleway_instance_server" "my-ubuntu-instance" {
  type  = "C1"
  image = "ubuntu_bionic"

  tags = ["devops_terraformC1", "MyUbuntuInstance"]

  ip_id = scaleway_instance_ip.public_ip.id

  additional_volume_ids = [scaleway_instance_volume.data.id]
}
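As an alternative to hardcoding credentials in the configuration file, the Scaleway provider can pick them up from environment variables when the matching arguments are omitted. A minimal sketch, assuming the standard SCW_* variable names supported by the provider:

```shell
# Assumption: the Scaleway Terraform provider reads SCW_* environment
# variables when access_key / secret_key / organization_id are not set
# in the provider block. Keeps secrets out of version control.
export SCW_ACCESS_KEY="YOUR ACCESS KEY"
export SCW_SECRET_KEY="YOUR SECRET KEY"
export SCW_DEFAULT_ORGANIZATION_ID="YOUR ORGANIZATION ID"
echo "Scaleway credentials exported for Terraform"
```

With these exported, the `provider "scaleway"` block only needs `zone` and `region`.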


STEP 4 Run terraform init to load the newly created configuration file into Terraform

STEP 5 Plan the execution of the tasks to be done by terraform using the command terraform plan

STEP 6 Run terraform apply.

Confirm the execution of the plan by typing yes when prompted

STEP 7 Go to the Instances section in your Scaleway Console. You can see that the instance has been created

! You can delete everything by running terraform destroy in your terminal

mobydock father

More Posts - Website

Follow Me:

Trying jenkins plugin Blue Ocean

After installing the plugin through the Jenkins plugin manager,

launch Blue Ocean using the Blue Ocean button.

Then let’s « create our first pipeline »

First job: configure where the code is… GitHub is an easy way

You must create a file named Jenkinsfile in your repo

Jenkinsfile :

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}
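Before committing a Jenkinsfile, it can be checked with Jenkins' built-in declarative linter endpoint. A sketch, assuming Jenkins is reachable at $JENKINS_URL (and that your instance does not require extra authentication for this call):

```shell
# Hypothetical helper: POST a Jenkinsfile to Jenkins' declarative-pipeline
# linter. $JENKINS_URL is an assumption; adjust to your instance.
JENKINS_URL="${JENKINS_URL:-http://localhost:8080}"

lint_jenkinsfile() {
    # "jenkinsfile=<file" makes curl send the file's contents as the field value
    curl -s -X POST -F "jenkinsfile=<$1" "$JENKINS_URL/pipeline-model-converter/validate"
}

# Usage: lint_jenkinsfile Jenkinsfile
```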

first pipeline created : well done !


kubernetes cluster on raspberry pi


sudo kubeadm config images pull -v3

sudo kubeadm init --token-ttl=0

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install the Weave Net network plugin:

kubectl apply -f "$(kubectl version | base64 | tr -d '\n')"

kubectl get nodes

yourPC Ready master 2m57s v1.16.2

On the workers, after installing kubeadm and kubectl (cf my post),

launch your join command, like:

kubeadm join --token 9e700f.7dc97f5e3a45c9e5 --discovery-token-ca-cert-hash sha256:95cbb9ee5536aa61ec0239d6edd8598af68758308d0a0425848ae1af28859bea
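If you lose the join command (or the token expired on a cluster initialized without --token-ttl=0), a fresh one can be printed on the master with `kubeadm token create --print-join-command`. The snippet below shows that command (as a comment, since it needs a running control plane) plus a small sketch of pulling the token back out of a join command for scripting, using the example token from above:

```shell
# On the master, reprint a valid join command:
#   sudo kubeadm token create --print-join-command

# Sketch: extract the token from a join command string.
JOIN_CMD="kubeadm join --token 9e700f.7dc97f5e3a45c9e5 --discovery-token-ca-cert-hash sha256:95cbb9ee5536aa61ec0239d6edd8598af68758308d0a0425848ae1af28859bea"
TOKEN=$(echo "$JOIN_CMD" | sed 's/.*--token \([^ ]*\).*/\1/')
echo "$TOKEN"
```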


Kubernetes on Rpi !!

The cherry on the k8ke

install docker :

curl -sSL | sh && \
sudo usermod pi -aG docker && \
newgrp docker

Disable and remove swap:

sudo dphys-swapfile swapoff && \
sudo dphys-swapfile uninstall && \
sudo update-rc.d dphys-swapfile remove

Modify the file /boot/cmdline.txt by adding at the end of the line:

cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
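Note that /boot/cmdline.txt must remain a single line, so the flags go at the end of that one line. A sketch of the edit, demonstrated on a sample copy in /tmp (on the Pi you would run the sed against /boot/cmdline.txt itself, as root):

```shell
# Sample /boot/cmdline.txt contents for demonstration only
printf 'console=serial0,115200 root=/dev/mmcblk0p2 rootwait\n' > /tmp/cmdline.txt

# Append the cgroup flags to the end of the (single) kernel command line
FLAGS="cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory"
sed -i "s/\$/ $FLAGS/" /tmp/cmdline.txt

cat /tmp/cmdline.txt
```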

then reboot :
sudo reboot

Create the file /etc/apt/sources.list.d/kubernetes.list containing the line:
deb kubernetes-xenial main

Then run:
curl -s | apt-key add -

sudo apt-get update

Install kubeadm (it will also install kubectl):

sudo apt-get install -qy kubeadm

ENJOY !!!!


DOCKER ON raspberry pi ZERO – October 2019

Device: Raspberry Pi 0 v1.3
OS: Raspbian Buster Lite (2019-09-26)
From a fresh image:

  1. wget -O /tmp/containerd.io_1.2.10-1_armhf.deb --content-disposition
  2. sudo apt install /tmp/containerd.io_1.2.10-1_armhf.deb
  3. sudo apt-get install curl git
  4. bash -c "$(curl -fsSL )"

ENJOY !!!!!!!

why not go further with KUBERNETES …


Starting DevOps with a simple Pipeline on AWS

Step 1 – Create an AWS S3 bucket :

Step 2 – Download the file from the dist repository on GitHub :

Step 3 : file name :

Step 4 : Upload this file to your S3 bucket

Step 5 : Create AWS EC2 Linux instances :

Click on Launch, and use the free Amazon Machine Image (AMI)

Amazon Linux 2 AMI (HVM)

Step 6 : create an IAM instance profile for your Amazon EC2 instances

Step 7 : when creating the EC2 instance, paste this user-data script:

#!/bin/bash
yum -y update
yum install -y ruby
yum install -y aws-cli
cd /home/ec2-user
aws s3 cp s3://aws-codedeploy-us-east-2/latest/install . --region us-east-2
chmod +x ./install
./install auto

Step 8 : Launch instance

Step 9 : Verify the CodeDeploy agent is running :
sudo service codedeploy-agent status
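Right after boot the agent may still be installing, so a check can come back empty the first time. A small hypothetical helper that polls the status for a while before giving up (assumes the `codedeploy-agent` service set up by the install script above):

```shell
# Hypothetical helper: poll the CodeDeploy agent status a few times.
# Assumes the agent was installed by the user-data script above.
wait_for_agent() {
    for i in 1 2 3 4 5; do
        if sudo service codedeploy-agent status 2>/dev/null | grep -q running; then
            echo "agent running"
            return 0
        fi
        sleep 5   # give the install/startup some time between checks
    done
    echo "agent not running"
    return 1
}

# Usage (on the EC2 instance): wait_for_agent
```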

Step 10 : Create a CodeDeploy application

Open the console

create a deployment group

Step 11 : Create your first pipeline !!

Connect to the AWS Management Console and open the CodePipeline console

create a pipeline and follow the instructions

Let’s play: source and deploy … then add a build step!


You have successfully created a pipeline that retrieved this source application from an Amazon S3 bucket and deployed it to three Amazon EC2 instances using AWS CodeDeploy.


One of the best 2019 DevOps tools! Gradle

Gradle Build Tool – the fastest way to microservices!

Your DevOps tool stack will need a reliable build tool. Apache Ant and Maven dominated the automated build tools market for many years, but Gradle showed up on the scene in 2009, and its popularity has steadily grown since then. Gradle is an incredibly versatile tool which allows you to write your code in Java, C++, Python, or other languages. Gradle is also supported by popular IDEs such as Netbeans, Eclipse, and IntelliJ IDEA. If that doesn’t convince you, it might help to know that Google also chose it as the official build tool for Android Studio.

While Maven and Ant use XML for configuration, Gradle introduces a Groovy-based DSL for describing builds. In 2016, the Gradle team also released a Kotlin-based DSL, so now you can write your build scripts in Kotlin as well. This means that Gradle does have a learning curve, so it can help a lot if you have used Groovy, Kotlin or another JVM language before. Besides, Gradle uses Maven’s repository format, so dependency management will be familiar if you have prior experience with Maven. You can also import your Ant builds into Gradle.

The best thing about Gradle is incremental builds, as they save a nice amount of compile time. According to Gradle’s performance measurements, it’s up to 100 times faster than Maven. This is in part because of incrementality, but also due to Gradle’s build cache and daemon. The build cache reuses task outputs, while the Gradle Daemon keeps build information hot in memory in-between builds.
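The daemon and build cache mentioned above are opt-in per project via gradle.properties. A minimal sketch (writing to /tmp purely for illustration — in a real project the file lives at the project root, and you would then see the speedup by running `./gradlew build` twice and watching tasks report UP-TO-DATE or FROM-CACHE):

```shell
# Enable the Gradle build cache and daemon for a project.
# /tmp is used here for demonstration; the real location is
# <project-root>/gradle.properties.
printf 'org.gradle.caching=true\norg.gradle.daemon=true\n' > /tmp/gradle.properties
cat /tmp/gradle.properties
```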

All in all, Gradle allows faster shipping and comes with a lot of configuration possibilities.

more :


SRE and DevOps – Site Reliability Engineering

Both SRE and DevOps are methodologies addressing organizations’ needs for production operation management. But the differences between the two doctrines are quite significant: while DevOps teams raise problems and dispatch them to Dev to solve, the SRE approach is to find problems and solve some of them themselves. While DevOps teams usually choose the more conservative approach, leaving the production environment untouched unless absolutely necessary, SREs are more confident in their ability to maintain a stable production environment and push for rapid changes and software updates. Not unlike the DevOps team, SREs also thrive on a stable production environment, but one of the SRE team’s goals is to improve performance and operational efficiency.


Shadow IT is back, with a vengeance

Since the rise of Docker, it’s not uncommon to hear the following story: our developers, instead of getting VMs from the IT department, get one giant big VM, install Docker on it, and now they don’t have to ask for VMs each time they need a new environment.

It’s good for developers, because they can finally work quickly; it’s bad for the IT department, because now they have lots of unknown resources lying around and it’s a nightmare to manage and/or clean up afterwards.

Opportunity ? threat ?

source from


Raspberry Pi, Docker Swarm and Portainer

Today we will try to monitor Docker containers created with Docker Swarm, on multiple Pi Zero workers with an RPi 3 leader

to install docker on rpi :
curl -sSL | sh
sudo usermod -aG docker pi

To init the leader
docker swarm init

This will give you the command for the workers
docker swarm join \
--token SWMTKN-1-5awy2ej1d55mvgpq1obunnh6u2r8b0jjujel619es-7caoz16dxre2bkplp3sh \

On the leader you can control your node :
docker node ls
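If you need the worker join command again later, the leader can reprint it with `docker swarm join-token worker`. And since swarm managers listen on TCP port 2377, a quick reachability check from a worker can save some debugging time (the leader IP below is a placeholder):

```shell
# On the leader, reprint the worker join command (needs a running swarm):
#   docker swarm join-token worker

# From a worker, check that the swarm management port (2377/tcp) on the
# leader is reachable. Placeholder address: replace with your RPi 3's IP.
LEADER_IP="192.168.1.10"
# nc -z "$LEADER_IP" 2377 && echo "leader reachable on 2377"
echo "checking $LEADER_IP"
```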


To manage your swarm cluster you can use Portainer; you need to install it on the leader:
docker service create \
  --name portainer \
  --publish 9000:9000 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer \
  -H unix:///var/run/docker.sock

Then enjoy by connecting to port 9000 on the leader:

