Talos Linux

Upgrading Talos Linux

Follow the recommendation and upgrade node by node: start with the control-plane nodes, then the workers.

# tmux pane 1 - follow node versions
k get nodes -o wide -w

# tmux pane 2 - follow kernel log for a specific node
talosctl dmesg -f -n A.B.C.D

# tmux pane 3 - start upgrade 
task talos:upgrade-node HOSTNAME=A.B.C.D

Upgrade Talos Kubernetes

The process is safe to repeat if any fault occurs.

# tmux pane 1 - follow node versions
k get nodes -o wide -w

# tmux pane 2 - follow kernel log for a specific node
talosctl dmesg -f -n A.B.C.D

# tmux pane 3 - prepare upgrade - backup db
talosctl -n A.B.C.D etcd snapshot etcd.backup

# tmux pane 3 - dry run
talosctl -n A.B.C.D upgrade-k8s --to=1.32.2 --dry-run

# tmux pane 3 - start upgrade 
talosctl -n A.B.C.D upgrade-k8s --to=1.32.2

DNS

Use ExternalDNS with Unifi provider to make kubernetes services discoverable.


NOTE: Check this page on how to handle external-dns and Traefik ingress routes.

Instructions

  1. Update the external-dns Helm deployment
# From the Helm values:
# - add source=traefik-proxy
# - disable traefik-legacy so external-dns does not die looking up the old paths
# - not sure if this is needed, but added crd to the sources array
extraArgs:
  - --ignore-ingress-tls-spec
  - --traefik-disable-legacy
  - --source=traefik-proxy
policy: sync
sources: ["service", "ingress", "crd"]
  2. The ClusterRole needs to be edited
# should have entries for traefik.io...
- apiGroups: ["traefik.containo.us","traefik.io"]
  resources: ["ingressroutes", "ingressroutetcps", "ingressrouteudps"]
  verbs: ["get","watch","list"]
  3. Annotate all ingress routes, i.e. the target shall point to the Traefik service
annotations:
    external-dns.alpha.kubernetes.io/target: traefikqqq.${SECRET_CLUSTER_LOCAL_DOMAIN}

  4. The Traefik Helm release deployment needs to be annotated
service:
      annotations:
        external-dns.alpha.kubernetes.io/hostname: traefikqqq.${SECRET_CLUSTER_LOCAL_DOMAIN}
        external-dns.alpha.kubernetes.io/owner-id: main
        lbipam.cilium.io/ips: 172.16.32.99
  5. The Traefik hostname and IP were pre-configured in the router. SEEMS NOT NEEDED: external-dns creates the Traefik entry itself.
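Putting the annotation into context, a minimal IngressRoute sketch (the my-app name, entry point, and port are hypothetical; the target annotation is the one from the instructions above):

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: my-app  # hypothetical app
  annotations:
    # tells external-dns to create a record pointing at the Traefik hostname
    external-dns.alpha.kubernetes.io/target: traefikqqq.${SECRET_CLUSTER_LOCAL_DOMAIN}
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`my-app.${SECRET_CLUSTER_LOCAL_DOMAIN}`)
      kind: Rule
      services:
        - name: my-app  # hypothetical backing service
          port: 80
```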

Fault finding

# List external-dns resources
k -n network get all

# Check external-dns log
k -n network logs -f pods/external-dns-unifi-85678f4b86-cq7x7

# Check/Edit cluster role
k edit clusterroles.rbac.authorization.k8s.io external-dns-unifi

Util Tools

Short notes on different tools that can be used to improve setup, maintenance, or anything else.

- Run GitHub Actions workflows locally - act

# Install act tool
brew install act

# Make sure that Docker is installed

# Run a specific event (trigger on push)
act push

For more information, see how to use act to run GitHub workflows locally.

- Use mdBook to display some good info - mdbook

# Install mdbook tool
brew install mdbook

# To create a new mdBook project
mdbook init docs

# Build book locally
mdbook build

# Serve book locally
mdbook serve

For more information, see mdBook - a command line tool to create books with Markdown.

- De/encrypt sops files

The VS Code plugin is not working reliably in WSL.

Encrypting Secrets The GitOps Way With sops And age - Mircea Anton

Cloud native distributed block storage for Kubernetes - Longhorn

Inspiration

Getting started with Longhorn

Add S3 backup to Truenas Scale using Minio

How to create a certificate to use HTTPS

Setup Minio on Truenas Scale

  • choose certificate to use
  • create new dataset for Minio to use
  • In Minio create new bucket and access keys

Note: "backupTarget: s3://s3-longhorn-backup@eu-north-se2/"

  • s3-longhorn-backup = bucket in Minio
  • eu-north-se2 = server location in Minio
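Longhorn reads the S3 credentials and endpoint from a secret; a minimal sketch of longhorn.secret.yaml, assuming the key names Longhorn expects for S3 backup targets (the secret name and all values are placeholders, base64-encoded as in the step below):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-longhorn-backup-secret  # hypothetical name
  namespace: longhorn-system
data:
  AWS_ACCESS_KEY_ID: bWluaW9hZG1pbg==          # "minioadmin", base64
  AWS_SECRET_ACCESS_KEY: bWluaW9zZWNyZXQ=      # "miniosecret", base64
  AWS_ENDPOINTS: aHR0cHM6Ly9teXVybC5pbzo5MDAy  # "https://myurl.io:9002", base64
```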
# Base64 encode secret
echo -n "https://myurl.io:9002" | base64

# output from above is added to longhorn.secret.yaml

# Sops/Age encrypt the secret file into a proper sops YAML
encrypt longhorn.secret.yaml > longhorn.secret.sops.yaml
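The encode step can be sanity-checked with a quick local round trip (same placeholder URL as above):

```shell
# Encode the endpoint URL; -n keeps the trailing newline out of the value
encoded=$(echo -n "https://myurl.io:9002" | base64)
echo "$encoded"                    # aHR0cHM6Ly9teXVybC5pbzo5MDAy
# Decode to confirm the value survives the round trip
echo "$encoded" | base64 --decode  # https://myurl.io:9002
```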

Restore backup

# List pvc and connected volume/pv
k -n observability get pvc uptime-kuma-pvc

# Scale down deployment
k -n observability scale deployment uptime-kuma --replicas 0

# In UI - Volume Tab
# 1. wait for volume to be detached
# 2. copy name of PV volume
# 3. delete the volume
# In UI - Backup Tab
# 4. restore latest backup - use name of old volume
# In UI - Volume
# 5. Create PV/PVC (accept all predefined settings)

# Scale up deployment
k -n observability scale deployment uptime-kuma --replicas 1

# Now the backup volume will be attached to the new pod

MQTT behind Traefik

To secure your MQTT broker with TLS, you can place it behind a Traefik reverse proxy. This setup allows Traefik to manage TLS termination.

Since MQTT is plain TCP, use an IngressRouteTCP to handle the connection.

Traefik

The MQTT ports need to be enabled/exposed, i.e. configure the Traefik Helm values to expose port 8883 and also redirect the insecure port 1883 to the secure port 8883.

  mqtt:
    port: 1883
    protocol: TCP
    redirectTo:
      port: mqttsecure
  mqttsecure:
    port: 8883
    protocol: TCP
    expose:
      default: true
    exposedPort: 8883
    tls:
      enabled: true

Set up a password

# Ignore the initial password files, i.e. mosquitto_passwd_file and mqtt.secret.yaml
echo mosquitto_passwd_file >> .gitignore
echo mqtt.secret.yaml >> .gitignore

# First create the password according to the mosquitto rules (mosquitto_passwd ships with the mosquitto broker package)
mosquitto_passwd -c mosquitto_passwd_file USERNAME

# Base64 encode password
base64 mosquitto_passwd_file 

# Create the secret file and add the encoded password
cat mqtt.secret.yaml

data:
  mosquitto_passwd_file: |
    fjdJfdhjdhf

# Do the usual sops encryption
task utils:encode
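The file-encode step can be tried locally with a fake hash (a real entry comes from mosquitto_passwd; -w0 is the GNU coreutils flag that disables line wrapping so the value fits on one YAML line):

```shell
# Simulate a password file; the hash is fake, only the shape matters
printf 'mqttuser:$7$101$notarealhash\n' > mosquitto_passwd_file
# Encode the whole file for the secret's data field
base64 -w0 mosquitto_passwd_file
echo
# Round trip to confirm nothing was mangled
base64 -w0 mosquitto_passwd_file | base64 --decode
```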