Build Your Own Kubernetes Cluster

Nihat Alim
17 min read · Oct 20, 2023


Motivation

I tried many cloud services and VPS/VDS providers to serve my applications to the world. In my experience, unless you run a commercial application, it gets expensive: who wants to pay a lot of money for hobby projects, especially ones that sit idle most of the time? If you want to build a project with a couple of microservices, your costs multiply with every microservice, and that is before you even think about where to store data, how to monitor your applications, and how to observe the whole system.

In early January I started a new job at Trendyol on the Indexing Team. It felt like an evolution for me, because it was my first encounter with Kubernetes. As I learned Kubernetes more deeply, I kept thinking about building my own cluster. After a little research and weighing the pros and cons, I decided to build one.

Literally everything comes with a cost. I needed to buy some hardware and a static IP address with a good internet connection, and I had to supply them with electricity 24/7. The hardware is a one-time cost, but the internet, static IP, and electricity are monthly costs. To calculate this honestly, I had to stay objective and not let my emotions cheat me, because building my own cluster sounded very exciting and I knew how much I wanted it.

Table of Contents

· Pros & Costs
· Design
· Prerequisites
Internet Connection
Static IP
Domain
Operating System Installation
Port Forwarding
· Infrastructure
Microk8s Setup
Cluster Installation
Add Nodes to Cluster
Enabling Community Addons
Enable Ingress
Install Argo CD
Certificate Manager
Image Registry Installation (Harbor)
Export KubeConfig to Access from Another Machine
Push Docker Image to Harbor Manually
Create Argo Deployments

Pros & Costs

Pros

  • 4x Intel Pentium Silver J5040 (4 cores, up to 3.2 GHz) | 16 GB RAM | 256 GB SSD
  • Renting 4 VPS/VDS servers with the same specifications would cost roughly $100 per month
  • The experience

Costs

  • (~$520) 4x Intel NUC barebone BOXNUC7PJYHN2
  • (~$210) 4x 16 GB 3200 MHz DDR4 non-ECC CL22 SODIMM (Buy Link)
  • (~$155) 4x 256 GB SSD
  • Domain
  • Good internet connection and static IP
  • Total hardware cost: ~$890

Design

Let me explain how my NUCs are connected to the router. I named each machine "nuc" plus a number so they are easy to address.

Figure 1: Machine connection model to the router

Some routers don't have many Ethernet ports, so I bought an Ethernet switch to connect my NUCs to the router. I also recommend placing the cluster away from where you live, because the servers' noise can be a little annoying, especially under load.

In Kubernetes, each NUC/server is called a node, and a cluster needs at least one node with the master (control-plane) role. In my cluster I split the nodes into one master and three workers.

Figure 2: Master and worker design

Prerequisites

Internet Connection

You need a good internet connection to serve your applications properly, because uploads and downloads will otherwise become the bottleneck of your cluster. I recommend at least 100 Mbps download and 20 Mbps upload.

Static IP

You have to request a static IP from your internet provider, because it is required to open your cluster to the world.

Domain

You also have to buy a domain, because we will use it later on.

Operating System Installation

Before installing Kubernetes on the NUCs, we have to install a Linux distribution. I prefer Ubuntu: I don't want to waste time fighting the operating system, and with Ubuntu most problems can be fixed with a quick search. I recommend choosing a distro you are already familiar with. You also have to install the openssh package so you can access the machines remotely over SSH, as shown below.
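
On Ubuntu this is a single apt command. A minimal sketch, assuming a stock Ubuntu Server installation:

nuc1@nuc1:~$ sudo apt update
nuc1@nuc1:~$ sudo apt install -y openssh-server
nuc1@nuc1:~$ sudo systemctl enable --now ssh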

Port Forwarding

If you want to reach your Kubernetes cluster, and in particular your NUCs, from outside, you have to set up port forwarding on your router. That means defining a dedicated external port for each NUC's SSH port. To reach applications inside Kubernetes directly, such as my-app.yourdomain.com, you also have to forward ports 80 and 443 to the Kubernetes master node. In the Design section I chose nuc1 as the master node.

Before configuring port forwarding, I strongly recommend configuring static IP reservations in the router's DHCP server. That way you can be sure which IP address is assigned to each NUC and your forwarding rules stay valid.

The following steps may be useful (a quick connectivity check follows the list):

  • Assign a local static IP to each NUC by MAC address
  • Forward external port 80 to the master node's (nuc1) port 80
  • Forward external port 443 to the master node's (nuc1) port 443
  • Pick a dedicated SSH port for each NUC, for example external port 10022 to nuc1's SSH port
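
Once the rules are in place, a quick check from outside your network confirms the forwarding works. A sketch, where 10022 is the example SSH port chosen above and your-static-ip stands for the address your provider assigned:

$ ssh -p 10022 nuc1@your-static-ip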

Infrastructure

Microk8s Setup

There are many distributions for building a Kubernetes cluster. I prefer MicroK8s because it is easy to manage. If you haven't installed MicroK8s yet, install it with the following command:

nuc1@nuc1:~$ sudo snap install microk8s --classic

After installation, check the MicroK8s status. You will see the following output on a fresh installation; let's follow the suggested commands.

nuc1@nuc1:~$ microk8s status

Insufficient permissions to access MicroK8s.
You can either try again with sudo or add the user nuc1 to the 'microk8s' group:

sudo usermod -a -G microk8s nuc1
sudo chown -R nuc1 ~/.kube
  • Add your user to the microk8s group
nuc1@nuc1:~$ sudo usermod -a -G microk8s nuc1
  • Create the .kube folder in your home directory
nuc1@nuc1:~$ mkdir .kube
  • Change the owner of the .kube folder
nuc1@nuc1:~$ sudo chown -R nuc1 ~/.kube
  • Reload your group membership so the change takes effect
nuc1@nuc1:~$ newgrp microk8s

Congratulations! You have a MicroK8s server and it is ready to be configured as a cluster node!

Cluster Installation

You can set up the Kubernetes cluster with just a few commands.

  • Check the MicroK8s status with the following command:
nuc1@nuc1:~$ microk8s status

microk8s is not running, try microk8s start
  • Start the MicroK8s server with the following command:
nuc1@nuc1:~$ microk8s start
  • Check the status again:
nuc1@nuc1:~$ microk8s status

microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
  • Your Kubernetes cluster is ready to run. Now we can check the nodes:
nuc1@nuc1:~$ microk8s kubectl get node

NAME STATUS ROLES AGE VERSION
nuc1 Ready <none> 94m v1.27.5

Add Nodes to Cluster

We are now free to add nodes to our cluster. I will add the nuc2, nuc3, and nuc4 machines. To do that, I run the following command on nuc1 (the master node):

nuc1@nuc1:~$ microk8s add-node

From the node you wish to join to this cluster, run the following:
microk8s join 192.168.1.105:25000/8f304acba7424f341e3c035f41cc401f/17dc88fac38e

Use the '--worker' flag to join a node as a worker not running the control plane, eg:
microk8s join 192.168.1.105:25000/8f304acba7424f341e3c035f41cc401f/17dc88fac38e --worker

If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join 192.168.1.105:25000/8f304acba7424f341e3c035f41cc401f/17dc88fac38e

MicroK8s generates a one-time token for joining the cluster, so I have to generate a new one for each machine. And if a machine doesn't have MicroK8s yet, you first have to install it there :)

  • Install MicroK8s on nuc2:
nuc2@nuc2:~$ sudo snap install microk8s --classic
[sudo] password for nuc2:
microk8s (1.27/stable) v1.27.5 from Canonical✓ installed
  • Let's join the cluster:
nuc2@nuc2:~$ microk8s join 192.168.1.105:25000/8f304acba7424f341e3c035f41cc401f/17dc88fac38e --worker

Contacting cluster at 192.168.1.105
Connection failed. The hostname (nuc2) of the joining node does not resolve to the IP "192.168.1.107". Refusing join (400).

Oops, we got an error. It means that nuc1 (the master) could not resolve the nuc2 hostname. We should add a record for nuc2 to /etc/hosts on the master node. I use vim for that, but feel free to add the record however you like. In the end, the /etc/hosts file should contain this record:

nuc1@nuc1:~$ cat /etc/hosts

127.0.0.1 localhost
127.0.1.1 nuc1

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.1.107 nuc2

I also have to add records for all the other machines to this file. Let's add the nuc3 and nuc4 records too:

nuc1@nuc1:~$ cat /etc/hosts

127.0.0.1 localhost
127.0.1.1 nuc1

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.1.107 nuc2
192.168.1.108 nuc3
192.168.1.111 nuc4
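
If you prefer not to edit the file by hand, the same records can be appended in one go. A sketch; adjust the addresses to the static leases you reserved on your router:

nuc1@nuc1:~$ echo "192.168.1.107 nuc2" | sudo tee -a /etc/hosts
nuc1@nuc1:~$ echo "192.168.1.108 nuc3" | sudo tee -a /etc/hosts
nuc1@nuc1:~$ echo "192.168.1.111 nuc4" | sudo tee -a /etc/hosts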

Now I am ready to add the nodes to the cluster. Before doing that, I generate a new token by repeating the previous step:

nuc1@nuc1:~$ microk8s add-node

From the node you wish to join to this cluster, run the following:
microk8s join 192.168.1.105:25000/0a6d0a832ac212d2481a8ca42c1f6cde/17dc88fac38e

Use the '--worker' flag to join a node as a worker not running the control plane, eg:
microk8s join 192.168.1.105:25000/0a6d0a832ac212d2481a8ca42c1f6cde/17dc88fac38e --worker

If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join 192.168.1.105:25000/0a6d0a832ac212d2481a8ca42c1f6cde/17dc88fac38e

On nuc2, I run this command:

microk8s join 192.168.1.105:25000/0a6d0a832ac212d2481a8ca42c1f6cde/17dc88fac38e --worker

Contacting cluster at 192.168.1.105

The node has joined the cluster and will appear in the nodes list in a few seconds.

This worker node gets automatically configured with the API server endpoints.
If the API servers are behind a loadbalancer please set the '--refresh-interval' to '0s' in:
/var/snap/microk8s/current/args/apiserver-proxy
and replace the API server endpoints with the one provided by the loadbalancer in:
/var/snap/microk8s/current/args/traefik/provider.yaml

This output means nuc2 has joined the cluster and will show up in the node list shortly. I can see nuc2 with the following command:

nuc1@nuc1:~$ microk8s kubectl get node

NAME STATUS ROLES AGE VERSION
nuc2 Ready <none> 109s v1.27.5
nuc1 Ready <none> 128m v1.27.5

Now you know how to add a node to the cluster. I add the nuc3 and nuc4 machines with the same commands, and then I see all the nodes like this:

nuc1@nuc1:~$ microk8s kubectl get node

NAME STATUS ROLES AGE VERSION
nuc2 Ready <none> 7m16s v1.27.5
nuc1 Ready <none> 133m v1.27.5
nuc3 Ready <none> 55s v1.27.5
nuc4 Ready <none> 15s v1.27.5

It looks perfect! We have a cluster now.

Enabling Community Addons

If you visit the MicroK8s addons page, you will see that some addons already exist and are just waiting to be enabled. Let's also enable the community addons with the following command:

nuc1@nuc1:~$ microk8s enable community

Infer repository core for addon community
Cloning into '/var/snap/microk8s/common/addons/community'...
done.
Community repository is now enabled

Enable Ingress

We will actively use ingress to reach our applications via our domain. Before using it, you should already have forwarded ports 80 and 443 to the master node.

Let's enable the ingress addon:

nuc1@nuc1:~$ microk8s enable ingress

Infer repository core for addon ingress
Enabling Ingress
ingressclass.networking.k8s.io/public created
ingressclass.networking.k8s.io/nginx created
namespace/ingress created
serviceaccount/nginx-ingress-microk8s-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-microk8s-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-microk8s-role created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-microk8s created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-microk8s created
configmap/nginx-load-balancer-microk8s-conf created
configmap/nginx-ingress-tcp-microk8s-conf created
configmap/nginx-ingress-udp-microk8s-conf created
daemonset.apps/nginx-ingress-microk8s-controller created
Ingress is enabled

Install Argo CD

Argo CD is useful for deploying applications, and we will use it for ours. Let's enable it:

nuc1@nuc1:~$ microk8s enable argocd

Infer repository community for addon argocd
Infer repository core for addon helm3
Addon core/helm3 is already enabled
Installing ArgoCD (Helm v4.6.3)
"argo" has been added to your repositories
Release "argo-cd" does not exist. Installing it now.
NAME: argo-cd
LAST DEPLOYED: Tue Oct 17 20:40:53 2023
NAMESPACE: argocd
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
In order to access the server UI you have the following options:

1. kubectl port-forward service/argo-cd-argocd-server -n argocd 8080:443

and then open the browser on http://localhost:8080 and accept the certificate

2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/ingress.md#option-1-ssl-passthrough
- Add the `--insecure` flag to `server.extraArgs` in the values file and terminate SSL at your ingress: https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/ingress.md#option-2-multiple-ingress-objects-and-hosts


After reaching the UI the first time you can login with username: admin and the random password generated during the installation. You can find the password by running:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

(You should delete the initial secret afterwards as suggested by the Getting Started Guide: https://github.com/argoproj/argo-cd/blob/master/docs/getting_started.md#4-login-using-the-cli)
ArgoCD is installed

You have successfully enabled Argo CD. For the first login you have to fetch the initial password with the following command:

nuc1@nuc1:~$ microk8s kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

Afterwards, let's create an ingress configuration so the Argo CD service can be reached from our domain. You can use this configuration file directly after changing the domain to your own. To keep the configuration files separated, I put all ingress files under an ingress folder.

nuc1@nuc1:~$ cat ingress/argo-ingress.yaml

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: argo-ingress
  namespace: argocd
  annotations:
    #cert-manager.io/cluster-issuer: lets-encrypt
    #nginx.ingress.kubernetes.io/ssl-redirect: "false"
    #nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
    - hosts:
        - argocd.yourdomain.com
      secretName: argo-ingress-tls
  rules:
    - host: argocd.yourdomain.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: argo-cd-argocd-server
                port:
                  number: 443

After creating this file, you are ready to apply it:

nuc1@nuc1:~$ microk8s kubectl apply -f ingress/argo-ingress.yaml

ingress.networking.k8s.io/argo-ingress created

You can check how the ingress looks with the following command:

nuc1@nuc1:~$ microk8s kubectl get ingress -A

NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
argocd argo-ingress public argocd.yourdomain.com 80, 443 4s

Now, let's go to argocd.yourdomain.com and proceed to the site:

Figure 3: Accept the certificate and continue to Argo CD

The login screen shows up; enter admin as the username and the initial password you obtained with the command above. After logging in, don't forget to update your password.

Figure 4: ArgoCD login screen

Congratulations! We have Argo CD installed 😍
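
If you prefer the command line over the UI, the first-login password change can also be done with the optional argocd CLI (this assumes the CLI is installed on your workstation; it is not part of the cluster setup above):

$ argocd login argocd.yourdomain.com --username admin
$ argocd account update-password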

Certificate Manager

MicroK8s can automatically provision Let's Encrypt SSL certificates for us via the cert-manager addon. I recommend visiting the cert-manager documentation if this walkthrough is not enough for you.

Let's enable cert-manager:

nuc1@nuc1:~$ microk8s enable cert-manager

Infer repository core for addon cert-manager
Enable DNS addon
Infer repository core for addon dns
Addon core/dns is already enabled
Enabling cert-manager
namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
configmap/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
Waiting for cert-manager to be ready.
...ready
Enabled cert-manager

===========================

Cert-manager is installed. As a next step, try creating a ClusterIssuer
for Let's Encrypt by creating the following resource:

$ microk8s kubectl apply -f - <<EOF
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: me@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
      - http01:
          ingress:
            class: public
EOF

Then, you can create an ingress to expose 'my-service:80' on 'https://my-service.example.com' with:

$ microk8s enable ingress
$ microk8s kubectl create ingress my-ingress --annotation cert-manager.io/cluster-issuer=letsencrypt --rule 'my-service.example.com/*=my-service:80,tls=my-service-tls'

According to the output, we have to create a new ClusterIssuer. I created it under a cert folder and will apply it to Kubernetes. It looks like this:

nuc1@nuc1:~$ cat cert/cluster-issuer.yaml

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: lets-encrypt
spec:
  acme:
    email: contact@yourdomain.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
      - http01:
          ingress:
            class: public

Let's apply our ClusterIssuer:

nuc1@nuc1:~$ microk8s kubectl apply -f cert/cluster-issuer.yaml

clusterissuer.cert-manager.io/lets-encrypt created

In Kubernetes it looks like this:

nuc1@nuc1:~$ microk8s kubectl get clusterissuer

NAME READY AGE
lets-encrypt True 28s

Afterwards, we are ready to activate the ClusterIssuer in our first ingress, the Argo CD one. Uncomment the cert-manager.io/cluster-issuer line in the ingress/argo-ingress.yaml file, so it looks like this:

nuc1@nuc1:~$ cat ingress/argo-ingress.yaml

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: argo-ingress
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: lets-encrypt
    #nginx.ingress.kubernetes.io/ssl-redirect: "false"
    #nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
    - hosts:
        - argocd.yourdomain.com
      secretName: argo-ingress-tls
  rules:
    - host: argocd.yourdomain.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: argo-cd-argocd-server
                port:
                  number: 443

Let's apply the changes with the following command:

nuc1@nuc1:~$ microk8s kubectl apply -f ingress/argo-ingress.yaml

ingress.networking.k8s.io/argo-ingress configured

After a few seconds, the ClusterIssuer creates a pod in the argocd namespace that fetches a new certificate from Let's Encrypt automatically. You can see it with the following command:

nuc1@nuc1:~$ microk8s kubectl get pod -n argocd

NAME READY STATUS RESTARTS AGE
argo-cd-argocd-redis-795df6745b-wp7lm 1/1 Running 0 120m
argo-cd-argocd-applicationset-controller-799f9dff99-w9wkp 1/1 Running 0 120m
argo-cd-argocd-notifications-controller-7654d89877-zt5gp 1/1 Running 0 120m
argo-cd-argocd-dex-server-65676fc47f-bvmdb 1/1 Running 0 120m
argo-cd-argocd-repo-server-78f476498c-qbsdw 1/1 Running 0 120m
argo-cd-argocd-application-controller-0 1/1 Running 0 120m
argo-cd-argocd-server-64d878878f-fzzf9 1/1 Running 0 120m
cm-acme-http-solver-q8xnd 1/1 Running 0 32s

The cm-acme-http-solver-q8xnd pod is created temporarily and will be removed after its job is done. Once it finishes, you will have an SSL certificate for your argocd.yourdomain.com address.
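
You can also verify the issued certificate through cert-manager's Certificate resource. The name below follows the secretName from the ingress; your output will differ:

nuc1@nuc1:~$ microk8s kubectl get certificate -n argocd
nuc1@nuc1:~$ microk8s kubectl describe certificate argo-ingress-tls -n argocd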

Congratulations! We can now automatically obtain SSL certificates for any website hosted in our cluster 😉

Image Registry Installation (Harbor)

A private image registry is useful when you don't want to depend on a public Docker registry. You can store and serve your Docker images from your own registry for CI/CD flows, and since it is private there are no rate limits like on docker.io.

We will install the Harbor image registry in our Kubernetes cluster using its Helm chart. So, let's start.
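
One preparatory step that is easy to miss: Helm has to know the Harbor chart repository before the install command below can resolve harbor/harbor. A short sketch, assuming the chart's usual location:

nuc1@nuc1:~$ microk8s helm repo add harbor https://helm.goharbor.io
nuc1@nuc1:~$ microk8s helm repo update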

  • Before installation, let's create the harbor namespace:
nuc1@nuc1:~$ microk8s kubectl create namespace harbor

namespace/harbor created
  • Install Harbor into the harbor namespace:
nuc1@nuc1:~$ microk8s helm install my-harbor --namespace harbor harbor/harbor --set expose.ingress.hosts.core=harbor.yourdomain.com --set expose.ingress.hosts.notary=notary.yourdomain.com --set persistence.enabled=false --set externalURL=https://harbor.yourdomain.com --set harborAdminPassword="yourpassword" --set expose.ingress.annotations."cert-manager\.io/cluster-issuer=lets-encrypt"

NAME: my-harbor
LAST DEPLOYED: Wed Oct 18 22:38:35 2023
NAMESPACE: harbor
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at https://harbor.yourdomain.com
For more details, please visit https://github.com/goharbor/harbor

The installation takes a while. You can follow its progress with:

nuc1@nuc1:~$ microk8s kubectl get pod -n harbor

NAME READY STATUS RESTARTS AGE
my-harbor-portal-58d8697657-jl8k6 1/1 Running 0 3m39s
my-harbor-redis-0 1/1 Running 0 3m39s
my-harbor-registry-695f9c9657-xklpv 2/2 Running 0 3m39s
my-harbor-database-0 1/1 Running 0 3m39s
my-harbor-trivy-0 1/1 Running 0 3m39s
my-harbor-core-696d96b86f-b2shz 1/1 Running 0 3m39s
my-harbor-jobservice-759b8bcb8b-6qgtk 1/1 Running 2 (3m28s ago) 3m39s

You should now be able to access Harbor at harbor.yourdomain.com. Go there and log in with the admin/yourpassword combination you passed as arguments during installation.

  • Let's create a new project named registry:
  • Then create a new user named git:
Figure 6: Create new user in Harbor
  • Go to the registry project you created earlier and add git as a member with the Project Admin role:
Figure 7: Create new member git in registry project

You can find the push commands under the Repositories > Push Command section.

Congratulations! We have an image registry server now. 😍

Export KubeConfig to Access from Another Machine

If you want to access the Kubernetes cluster from another machine, you can share your kubeconfig file. I will follow this documentation to show how to achieve that.

First, add your domain to the /var/snap/microk8s/current/certs/csr.conf.template file as follows:

[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
DNS.6 = yourdomain.com

After that, refresh the certificates with the following command:

nuc1@nuc1:~$ sudo microk8s refresh-certs --cert server.crt

The certificate has been updated and you can now access your cluster via the yourdomain.com domain. Let's export the kubeconfig so we can use it:

nuc1@nuc1:~$ microk8s config

You now have a kubeconfig and can access the cluster from any machine. Copy it into that machine's ~/.kube/config file to use it, for example as shown below.
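
A minimal sketch of doing that over the SSH port forwarding configured earlier, assuming port 10022 maps to nuc1 and that there is no existing local kubeconfig you want to keep:

$ ssh -p 10022 nuc1@yourdomain.com microk8s config > ~/.kube/config
$ kubectl get node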

Push Docker Image to Harbor Manually

We already know from the previous steps how to push an image to the registry via the push command.

Here is my push command in Harbor:

docker push harbor.yourdomain.com/registry/REPOSITORY[:TAG]

As you can see, the image tag has to start with harbor.yourdomain.com/registry/ and continue with the image name. I have a basic hello-world project on GitHub that I will build and push to this registry.

Let's build an image from the source code of the hello-world project:

docker build -t harbor.yourdomain.com/registry/go-hello-world:latest .

Log in to the image registry with the following command in a terminal:

$ docker login harbor.yourdomain.com
Username: git
Password: <- enter your password
Login Succeeded

After logging in, we push the image to our registry:

$ docker push harbor.yourdomain.com/registry/go-hello-world:latest

The push refers to repository [harbor.yourdomain.com/registry/go-hello-world]
a95840657740: Pushed
6f0f0f985a5b: Mounted from library/world
0ce80d229897: Pushed
45e5db89953b: Mounted from library/world
d761f6a5e00b: Mounted from library/world
92b03ba11b22: Mounted from library/world
a93e0ec13d9a: Pushed
e51777ae0bce: Pushed
2ef3351afa6d: Pushed
5cc3a4df1251: Pushed
2fa37f2ee66e: Pushed
latest: digest: sha256:fc52f610a03e55852cb828ba1d634423420319700bb3c805cf17e0d814540d66 size: 2631
Figure 8: First image in registry

We have our first image in the registry 🐥

Create Argo Deployments

We will create deployment files for our basic Go application and deploy it to our cluster. Any change to these deployment files is detected by Argo and applied to the cluster on demand.

  • Create deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-hello-world
  namespace: default
  labels:
    app: go-hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-hello-world
  template:
    metadata:
      labels:
        app: go-hello-world
    spec:
      imagePullSecrets:
        - name: git-secret
      containers:
        - name: go-hello-world-container
          resources:
            requests:
              memory: "100Mi"
              cpu: "100m"
            limits:
              memory: "521Mi"
              cpu: "500m"
          image: harbor.yourdomain.com/registry/go-hello-world:latest
          ports:
            - name: http
              containerPort: 8080
  • Create service
apiVersion: v1
kind: Service
metadata:
  name: go-hello-world-service
  namespace: default
spec:
  selector:
    app: go-hello-world
  ports:
    - name: http
      port: 8080
      targetPort: http
  • Create ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-hello-world-ingress
  namespace: default
  labels:
    app: go-hello-world
  annotations:
    cert-manager.io/cluster-issuer: lets-encrypt
spec:
  tls:
    - hosts:
        - go-hello-world.yourdomain.com
      secretName: go-hello-world-ingress-tls
  rules:
    - host: go-hello-world.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: go-hello-world-service
                port:
                  number: 8080

After creating these files, push them to a Git repository so your deployment files are stored there. There is one important part you should not skip: the deployment file references imagePullSecrets, but we have not defined a secret named git-secret yet. So let's create it:

nuc1@nuc1:~$ kubectl create secret docker-registry git-secret --docker-server=harbor.yourdomain.com --docker-username=git --docker-password=your-pass --docker-email=giveyourmailaddress@gmail.com

secret/git-secret created

This secret will be used to pull images from our private registry, so the cluster can authenticate with it.

Finally, let's open Argo CD and follow the instructions in this video:

https://www.youtube.com/watch?v=0S4uFYMkmBY
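
If you prefer a declarative setup over the UI walkthrough in the video, an Argo CD Application manifest pointing at the Git repository that holds the three files above achieves the same result. A sketch; the repoURL and path are placeholders for your own repository layout:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: go-hello-world
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/youruser/your-deployments.git  # placeholder repository
    targetRevision: main
    path: go-hello-world                                       # folder containing the YAML files above
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Apply it with microk8s kubectl apply -f, and Argo CD will keep the deployment, service, and ingress in sync with your repository automatically.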
