Part 5 - Tanzu Kubernetes Grid Cluster Setup¶
Installing Tanzu Mission Control Self-Managed Series¶
- Part 1 - Introduction and Environment Details
- Part 2 - Setting Up Harbor as an Image Registry
- Part 3 - Configuring Okta as Identity Provider
- Part 4 - Configuring Other Prerequisites
- Part 5 - Tanzu Kubernetes Grid Cluster Setup
- Part 6 - Installing Tanzu Mission Control Self-Managed
Prerequisites¶
As we are focusing on the TMC Self-Managed installation in this post, we are going with a few assumptions:
- Tanzu CLI v2.2 and relevant OVA are present for Kubernetes cluster creation
- Tanzu Kubernetes Grid Management Cluster is already created
- You have a load balancer provider already configured. In this environment we are using NSX ALB (Avi), but you can use any provider of your choice. MetalLB is another great option if you are trying this in a lab environment.
- Tanzu Kubernetes Grid workload cluster created using the prod plan with at least 4 worker nodes
- All the nodes that I used in the cluster have 8 GB RAM, 8 CPUs, and 60 GB of disk. Your requirements may vary.
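As a quick sanity check before proceeding, something like the following can confirm that both the management and workload clusters are up and running; the cluster names used in this series are mgmt-avi and wld, and yours may differ.
# List the management cluster and workload clusters with their status, plan, and worker counts
tanzu cluster list --include-management-cluster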
Add Harbor CA certificate to containerd configuration¶
A few things to note about the workload cluster configuration:
- We need containerd on the workload cluster to trust the Harbor instance that we deployed in Part 2 - Setting Up Harbor as an Image Registry so that we can pull the TMC Self-Managed package and images. We will cover the steps to do that here.
- Make sure you have the Harbor CA cert downloaded using the steps here
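A quick way to sanity-check the downloaded CA file before embedding it; the file name harbor-ca.crt is assumed here because it is the one used in the patch below.
# Print the subject, issuer, and expiry of the Harbor CA certificate
openssl x509 -in harbor-ca.crt -noout -subject -issuer -enddate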
Create a patch file¶
- As we are using a ClusterClass-based cluster, we will use the additionalImageRegistries variable to patch our Harbor CA certificate into the Cluster object
- As we are using a self-signed certificate, skipTlsVerify is set to true
export HARBOR_DOMAIN='harbor.debuggingmode.com'
export HARBOR_CA_CRT=$(cat harbor-ca.crt | base64 -w 0)
cat > harbor-ca-patch.yaml << EOF
- op: add
  path: "/spec/topology/variables/-"
  value:
    name: additionalImageRegistries
    value:
    - caCert: $HARBOR_CA_CRT
      host: $HARBOR_DOMAIN
      skipTlsVerify: true
EOF
Patch the workload cluster¶
Executing the command below will start recreating both the control plane and worker nodes one by one. Once the recreation is complete you should be able to pull images from the Harbor instance.
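A sketch of that patch command, assuming the workload cluster object is named wld, lives in the default namespace, and that kubectl is pointed at the management cluster context:
# Apply the JSON patch created above to the workload cluster's Cluster object
kubectl patch clusters.cluster.x-k8s.io wld --type json --patch-file harbor-ca-patch.yaml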
- Example of node rollout
kubectl get machines
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
wld-md-0-9wd6s-548cdf7585-sn8w5 wld Provisioning 4s v1.25.7+vmware.2
wld-md-0-9wd6s-bf4d4d997-htx29 wld wld-md-0-9wd6s-bf4d4d997-htx29 vsphere://4216567a-ab66-7da3-0c7a-0212b38bfe4c Running 14h v1.25.7+vmware.2
wld-md-0-9wd6s-bf4d4d997-nbxql wld wld-md-0-9wd6s-bf4d4d997-nbxql vsphere://4216e107-e15c-edeb-e1d8-4c02a218e916 Running 14h v1.25.7+vmware.2
wld-md-1-m5ckv-6544db6cf-8ffn8 wld Provisioning 3s v1.25.7+vmware.2
wld-md-1-m5ckv-8fdd9cb58-n52dx wld wld-md-1-m5ckv-8fdd9cb58-n52dx vsphere://4216d18f-6cde-f1f2-8cc8-c36a315b24b6 Running 14h v1.25.7+vmware.2
wld-md-2-2dj2b-54fcbd555b-4pkd5 wld Provisioning 3s v1.25.7+vmware.2
wld-md-2-2dj2b-c9b4bf87c-qz5fx wld wld-md-2-2dj2b-c9b4bf87c-qz5fx vsphere://4216ff08-3356-796c-c529-7c74fbcc1fd5 Running 14h v1.25.7+vmware.2
wld-slpcv-2mjgx wld wld-slpcv-2mjgx vsphere://42163dbb-5160-c6d1-781f-33e85e1f01fa Running 14h v1.25.7+vmware.2
wld-slpcv-hg92r wld Provisioning 1s v1.25.7+vmware.2
wld-slpcv-lnqsn wld wld-slpcv-lnqsn vsphere://42164daf-011f-1602-e170-4e29d4545719 Running 14h v1.25.7+vmware.2
wld-slpcv-sn6vd wld wld-slpcv-sn6vd vsphere://421612a2-a7a6-e3f6-1406-26fb9cf4dffe Running 14h v1.25.7+vmware.2
Verify Completion¶
Once the node rollout completes, verify that all nodes in the cluster are healthy.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
wld-md-0-9wd6s-548cdf7585-hc886 Ready <none> 9m45s v1.25.7+vmware.2
wld-md-0-9wd6s-548cdf7585-sn8w5 Ready <none> 11m v1.25.7+vmware.2
wld-md-1-m5ckv-6544db6cf-8ffn8 Ready <none> 12m v1.25.7+vmware.2
wld-md-2-2dj2b-54fcbd555b-4pkd5 Ready <none> 12m v1.25.7+vmware.2
wld-slpcv-69k6s Ready control-plane 6m48s v1.25.7+vmware.2
wld-slpcv-9mq8t Ready control-plane 2m35s v1.25.7+vmware.2
wld-slpcv-hg92r Ready control-plane 11m v1.25.7+vmware.2
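Optionally, from the management cluster context you can also confirm that the additionalImageRegistries variable made it onto the Cluster object; the cluster name wld is assumed below.
# Inspect the topology variables on the Cluster object; the Harbor entry should be listed
kubectl get clusters.cluster.x-k8s.io wld -o yaml | yq eval '.spec.topology.variables' -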
Add Harbor CA certificate to KappControllerConfig¶
Like containerd, the kapp-controller running on the workload cluster wld needs to trust our self-signed certificate. In clusters created using ClusterClass, this can be achieved by updating the KappControllerConfig object on the management cluster, which propagates the setting to the workload cluster's kapp-controller.
Patch file to update KappControllerConfig¶
- The certificate should be provided as a plain string rather than a base64-encoded value
- dangerousSkipTLSVerify is set to true as we are using a self-signed certificate. Based on how you generated your certs, this may not be needed.
- If you skip this step, you will encounter errors when adding the package repository for TMC Self-Managed
cat > kapp-controller-config-template.yaml << EOF
- op: add
  path: "/spec/kappController"
  value:
    config:
      caCerts:
      dangerousSkipTLSVerify: "true"
EOF
yq eval '.[].value.config.caCerts = "'"$(< harbor-ca.crt)"'"' kapp-controller-config-template.yaml > kapp-controller-config-patch.yaml
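Before applying the patch, it is worth confirming that yq embedded the certificate text into the generated file; a minimal check:
# The first few lines of caCerts should show the PEM header of the Harbor CA certificate
yq eval '.[0].value.config.caCerts' kapp-controller-config-patch.yaml | head -n 3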
Patch KappControllerConfig¶
Get the name of the object¶
# Switch context to management cluster
kubectl config use-context mgmt-avi-admin@mgmt-avi
kubectl get kappcontrollerconfigs.run.tanzu.vmware.com
NAME NAMESPACE GLOBALNAMESPACE SECRETNAME
wld-kapp-controller-package tkg-system tkg-system
Patch the Object¶
kubectl patch kappcontrollerconfigs.run.tanzu.vmware.com wld-kapp-controller-package --patch-file kapp-controller-config-patch.yaml --type json
Force Reconciliation¶
# Pause
kubectl patch pkgi wld-kapp-controller -p '{"spec":{"paused":true}}' --type=merge
# Unpause
kubectl patch pkgi wld-kapp-controller -p '{"spec":{"paused":false}}' --type=merge
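After unpausing, the PackageInstall should reconcile again and pick up the new configuration. A quick check, still on the management cluster; the namespace is assumed to be the default one holding the workload cluster objects, matching the unqualified patch commands above.
# STATUS should return to "Reconcile succeeded"
kubectl get pkgi wld-kapp-controller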
Make sure to switch the context back to the workload cluster for the next steps.
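If you do not already have the workload cluster context locally, a sketch of fetching and switching to it; the context name wld-admin@wld is an assumption based on the naming convention of the management cluster context.
# Fetch the admin kubeconfig for the workload cluster and switch to it
tanzu cluster kubeconfig get wld --admin
kubectl config use-context wld-admin@wld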
Install cert-manager package¶
TMC Self-Managed leverages cert-manager for its certificates. On a workload cluster, this may not be installed by default, so we need to install it before we can start the TMC Self-Managed installation. Switch the kubectl context back to the workload cluster before executing these steps.
Add tanzu-standard package repository¶
To install cert-manager, we first need to add the tanzu-standard repository to our cluster. You will need to change the --url if you are using an internal/custom image registry.
tanzu package repository add tanzu-standard --url projects.registry.vmware.com/tkg/packages/standard/repo:v2.2.0 --namespace tkg-system
Confirm that the repository has reconciled successfully.
tanzu package repository list -A
NAMESPACE NAME SOURCE STATUS
tkg-system tanzu-standard (imgpkg) projects.registry.vmware.com/tkg/packages/standard/repo:v2.2.0 Reconcile succeeded
Deploy cert-manager¶
- Multiple versions of cert-manager should be available now
tanzu package available list cert-manager.tanzu.vmware.com
NAME VERSION RELEASED-AT
cert-manager.tanzu.vmware.com 1.1.0+vmware.1-tkg.2 2020-11-24 13:00:00 -0500 EST
cert-manager.tanzu.vmware.com 1.1.0+vmware.2-tkg.1 2020-11-24 13:00:00 -0500 EST
cert-manager.tanzu.vmware.com 1.10.2+vmware.1-tkg.1 2023-01-11 07:00:00 -0500 EST
cert-manager.tanzu.vmware.com 1.5.3+vmware.2-tkg.1 2021-08-23 13:22:51 -0400 EDT
cert-manager.tanzu.vmware.com 1.5.3+vmware.4-tkg.1 2021-08-23 13:22:51 -0400 EDT
cert-manager.tanzu.vmware.com 1.5.3+vmware.7-tkg.1 2021-08-23 13:22:51 -0400 EDT
cert-manager.tanzu.vmware.com 1.5.3+vmware.7-tkg.3 2021-08-23 13:22:51 -0400 EDT
cert-manager.tanzu.vmware.com 1.7.2+vmware.1-tkg.1 2021-10-29 08:00:00 -0400 EDT
cert-manager.tanzu.vmware.com 1.7.2+vmware.3-tkg.1 2021-10-29 08:00:00 -0400 EDT
cert-manager.tanzu.vmware.com 1.7.2+vmware.3-tkg.3 2021-10-29 08:00:00 -0400 EDT
- We are going to install version 1.10.2+vmware.1-tkg.1
kubectl create ns cert-manager
tanzu package install cert-manager \
--package cert-manager.tanzu.vmware.com \
--version 1.10.2+vmware.1-tkg.1 \
--namespace cert-manager
- Verify installation
tanzu package installed list -n cert-manager
NAME PACKAGE-NAME PACKAGE-VERSION STATUS
cert-manager cert-manager.tanzu.vmware.com 1.10.2+vmware.1-tkg.1 Reconcile succeeded
Create ClusterIssuer¶
In this setup, we are going to create a ClusterIssuer. This will generate a self-signed CA certificate which we will use to sign the TMC Self-Managed certificates. You can create your own ClusterIssuer based on your certificate needs using the steps detailed in the cert-manager documentation.
cat << EOF > tmcsm-cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: tmcsm-issuer
spec:
  ca:
    secretName: tmcsm-issuer
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: tmcsm-issuer
  namespace: cert-manager
spec:
  isCA: true
  commonName: tmcsm
  secretName: tmcsm-issuer
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
    group: cert-manager.io
EOF
kubectl apply -f tmcsm-cluster-issuer.yaml
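Once applied, cert-manager should issue the CA certificate and both issuers should report as ready; a quick verification:
# Both ClusterIssuers should show READY=True, and the tmcsm-issuer Certificate should be Ready
kubectl get clusterissuer
kubectl get certificate -n cert-manager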
This marks the end of setting up our Kubernetes cluster for TMC Self-Managed installation.