hirano00o's blog

Technical notes and records of daily life

Kubernetes The Hard Way for Raspberry Pi 5: Through Deploying kube-scheduler

Building a home Kubernetes cluster (v1.29) on Raspberry Pi 5. This is the second of three parts, covering the deployment of etcd, kube-apiserver, kube-controller-manager, and kube-scheduler.

Last time, we got through the initial setup. hirano00o.hateblo.jp

The steps and scripts are based on CyberAgent's repository, partially adapted for v1.29.

The Hard Way steps are as follows.

No  Item                                                Target            Notes
1   Create certificates                                 rs01
2   Generate kubeconfigs                                rs01
3   Deploy etcd                                         rs01
4   Deploy kube-apiserver                               rs01
5   Deploy kube-controller-manager                      rs01
6   Deploy kube-scheduler                               rs01              This post ends here
7   Deploy kubelet                                      rs01, rs02, rs03  Covered in part 3 from here on
8   Deploy kube-proxy                                   rs01, rs02, rs03
9   Add routing                                         rs01, rs02, rs03
10  kubelet-to-kube-apiserver authentication and RBAC   rs01
11  Deploy CoreDNS                                      rs01

For reference, here are the subnets and hostnames again.

Name       Subnet
Home       10.105.136.0/22
Pod        10.0.0.0/16
ClusterIP  10.10.0.0/24

Hostname  IP address      Pod subnet   Login user  Notes
rs01      10.105.138.201  10.0.1.0/24  hirano00o   Master and worker
rs02      10.105.138.202  10.0.2.0/24  hirano00o   Worker
rs03      10.105.138.203  10.0.3.0/24  hirano00o   Worker

Creating the certificates

Communication between Kubernetes components uses TLS, so certificates need to be created.

mkdir -p ~/ws/k8s/ && cd $_
vi generate-cert.sh # see below for the file contents
chmod +x generate-cert.sh
./generate-cert.sh
# Hostname of Node1: rs01
# Hostname of Node2: rs02
# Hostname of Node3: rs03
# Addresses of Node1 (x.x.x.x[,x.x.x.x]): 10.105.138.201
# Addresses of Node2 (x.x.x.x[,x.x.x.x]): 10.105.138.202
# Addresses of Node3 (x.x.x.x[,x.x.x.x]): 10.105.138.203
# Address of Kubernetes ClusterIP (first address of ClusterIP subnet): 10.10.0.1
# City or Locality (L) of distinguished name: Chiba
# State or Province (ST) of distinguished name: Chiba
# Generate cert? [y/N]: y
# ...

cd cert/
ls -l
ls | wc -l
# 41
# Send the certificates to each node
scp ca.pem rs01.pem rs01-key.pem rs01:~/
scp ca.pem rs02.pem rs02-key.pem rs02:~/
scp ca.pem rs03.pem rs03-key.pem rs03:~/
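
As an optional sanity check of my own (not part of the original steps), the generated certificates can be inspected with openssl, assuming it is installed. The SAN list on kubernetes.pem and the O (group) field on the client certificates are what Kubernetes actually relies on.

openssl x509 -in kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"
# the SANs should include the hostnames and addresses entered above, plus 10.10.0.1
openssl x509 -in admin.pem -noout -subject
# the subject should contain O = system:masters and CN = admin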

generate-cert.sh is originally a file from CyberAgent's repository, with some modifications; the modified version is shown below.

Modified generate-cert.sh

#!/bin/bash

rm -rf cert && mkdir -p cert && cd cert
if [ $? != 0 ]; then
  exit
fi

echo -n "Hostname of Node1: "
read NODE1_HOSTNAME

echo -n "Hostname of Node2: "
read NODE2_HOSTNAME

echo -n "Hostname of Node3: "
read NODE3_HOSTNAME

echo -n "Addresses of Node1 (x.x.x.x[,x.x.x.x]): "
read NODE1_ADDRESS

echo -n "Addresses of Node2 (x.x.x.x[,x.x.x.x]): "
read NODE2_ADDRESS

echo -n "Addresses of Node3 (x.x.x.x[,x.x.x.x]): "
read NODE3_ADDRESS

echo -n "Address of Kubernetes ClusterIP (first address of ClusterIP subnet): "
read KUBERNETES_SVC_ADDRESS

echo -n "City or Locality (L) of distinguished name: "
read LOCALITY

echo -n "State or Province (ST) of distinguished name: "
read STATE

yN=""
while [ "$yN" != "y" ]
do
    echo -n "Generate cert? [y/N]: "
    read yN
    if [ "$yN" == "N" ]; then exit; fi
done

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "JP",
      "L": "${LOCALITY}",
      "O": "Kubernetes",
      "ST": "${STATE}"
    }
  ]
}
EOF

echo "---> Generate CA certificate"
cfssl gencert -initca ca-csr.json | cfssljson -bare ca


cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "JP",
      "L": "${LOCALITY}",
      "O": "system:masters",
      "ST": "${STATE}"
    }
  ]
}
EOF

echo "---> Generate certificate for admin user"
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin


for instance in ${NODE1_HOSTNAME} ${NODE2_HOSTNAME} ${NODE3_HOSTNAME}; do
cat > ${instance}-csr.json <<EOF
{
  "CN": "system:node:${instance}",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "JP",
      "L": "${LOCALITY}",
      "O": "system:nodes",
      "ST": "${STATE}"
    }
  ]
}
EOF
done

echo "---> Generate certificate for kubelet"
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${NODE1_HOSTNAME},${NODE1_ADDRESS} \
  -profile=kubernetes \
  ${NODE1_HOSTNAME}-csr.json | cfssljson -bare ${NODE1_HOSTNAME}
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${NODE2_HOSTNAME},${NODE2_ADDRESS} \
  -profile=kubernetes \
  ${NODE2_HOSTNAME}-csr.json | cfssljson -bare ${NODE2_HOSTNAME}
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${NODE3_HOSTNAME},${NODE3_ADDRESS} \
  -profile=kubernetes \
  ${NODE3_HOSTNAME}-csr.json | cfssljson -bare ${NODE3_HOSTNAME}


cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "JP",
      "L": "${LOCALITY}",
      "O": "system:kube-controller-manager",
      "ST": "${STATE}"
    }
  ]
}
EOF

echo "---> Generate certificate for kube-controller-manager"
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager


cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "JP",
      "L": "${LOCALITY}",
      "O": "system:node-proxier",
      "ST": "${STATE}"
    }
  ]
}
EOF

echo "---> Generate certificate for kube-proxy"
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy


cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "JP",
      "L": "${LOCALITY}",
      "O": "system:kube-scheduler",
      "ST": "${STATE}"
    }
  ]
}
EOF

echo "---> Generate certificate for kube-scheduler"
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler


KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "JP",
      "L": "${LOCALITY}",
      "O": "Kubernetes",
      "ST": "${STATE}"
    }
  ]
}
EOF

echo "---> Generate certificate for kube-api-server"
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${KUBERNETES_SVC_ADDRESS},${NODE1_ADDRESS},${NODE2_ADDRESS},${NODE3_ADDRESS},127.0.0.1,${KUBERNETES_HOSTNAMES} \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes



cat > service-account-csr.json <<EOF
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "JP",
      "L": "${LOCALITY}",
      "O": "Kubernetes",
      "ST": "${STATE}"
    }
  ]
}
EOF

echo "---> Generate certificate for generating token of ServiceAccount"
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account


echo "---> Complete to generate certificate"

Generating kubeconfigs

Generate the configuration files that kubelet, kube-proxy, kube-controller-manager, kube-scheduler, and kubectl use to communicate with kube-apiserver.

# Install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version
# Client Version: v1.29.2
# Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
# The connection to the server localhost:8080 was refused - did you specify the right host or port?
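# The "connection refused" message is expected at this point; no API server is running yet.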

# Generate the kubeconfigs
cd ~/ws/k8s/
vi generate-kubeconfig.sh # see below for the file contents
chmod +x generate-kubeconfig.sh
./generate-kubeconfig.sh
# Hostname of Node1: rs01
# Hostname of Node2: rs02
# Hostname of Node3: rs03
# Address of Master Node: 10.105.138.201
# Cluster Name: hirano00o-k8s
# Generate Kubeconfig? [y/N]: y
# ...

cd kubeconfig/
ls
ls | wc -l
# 7
# Send the kubeconfigs to each node
scp rs01.kubeconfig kube-proxy.kubeconfig rs01:~/
scp rs02.kubeconfig kube-proxy.kubeconfig rs02:~/
scp rs03.kubeconfig kube-proxy.kubeconfig rs03:~/
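
To double-check what was generated (an optional extra, not in the original procedure), kubectl itself can dump a kubeconfig; embedded certificates are shown as redacted.

kubectl config view --kubeconfig=admin.kubeconfig
kubectl config get-contexts --kubeconfig=admin.kubeconfig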

generate-kubeconfig.sh is also originally from CyberAgent's repository, with some modifications. The changes are as follows.

  • Read the cluster name from standard input
  • Ask for confirmation before actually generating
  • Delete the kubeconfig directory if it already exists

generate-kubeconfig.sh

#!/bin/bash

ls cert >/dev/null 2>&1
if [ $? != 0 ]; then
  echo "Please run in the same directory as cert"
  exit
fi

rm -rf kubeconfig && mkdir kubeconfig && cd kubeconfig
if [ $? != 0 ]; then
  exit
fi

CERT_DIR="../cert"

echo -n "Hostname of Node1: "
read NODE1_HOSTNAME

echo -n "Hostname of Node2: "
read NODE2_HOSTNAME

echo -n "Hostname of Node3: "
read NODE3_HOSTNAME

echo -n "Address of Master Node: "
read MASTER_ADDRESS

echo -n "Cluster Name: "
read CLUSTER_NAME

yN=""
while [ "$yN" != "y" ]
do
    echo -n "Generate Kubeconfig? [y/N]: "
    read yN
    if [ "$yN" == "N" ]; then exit; fi
done

echo "---> Generate kubelet kubeconfig"
for instance in ${NODE1_HOSTNAME} ${NODE2_HOSTNAME} ${NODE3_HOSTNAME}; do
  kubectl config set-cluster ${CLUSTER_NAME} \
    --certificate-authority=${CERT_DIR}/ca.pem \
    --embed-certs=true \
    --server=https://${MASTER_ADDRESS}:6443 \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${CERT_DIR}/${instance}.pem \
    --client-key=${CERT_DIR}/${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-context default \
    --cluster=${CLUSTER_NAME} \
    --user=system:node:${instance} \
    --kubeconfig=${instance}.kubeconfig

  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done


echo "---> Generate kube-proxy kubeconfig"
kubectl config set-cluster ${CLUSTER_NAME} \
  --certificate-authority=${CERT_DIR}/ca.pem \
  --embed-certs=true \
  --server=https://${MASTER_ADDRESS}:6443 \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials system:kube-proxy \
  --client-certificate=${CERT_DIR}/kube-proxy.pem \
  --client-key=${CERT_DIR}/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=${CLUSTER_NAME} \
  --user=system:kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig


echo "---> Generate kube-controller-manager kubeconfig"
kubectl config set-cluster ${CLUSTER_NAME} \
  --certificate-authority=${CERT_DIR}/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=${CERT_DIR}/kube-controller-manager.pem \
  --client-key=${CERT_DIR}/kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
  --cluster=${CLUSTER_NAME} \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig


echo "---> Generate kube-scheduler kubeconfig"
kubectl config set-cluster ${CLUSTER_NAME} \
  --certificate-authority=${CERT_DIR}/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=${CERT_DIR}/kube-scheduler.pem \
  --client-key=${CERT_DIR}/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context default \
  --cluster=${CLUSTER_NAME} \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig


echo "---> Generate admin user kubeconfig"
kubectl config set-cluster ${CLUSTER_NAME} \
  --certificate-authority=${CERT_DIR}/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=admin.kubeconfig

kubectl config set-credentials admin \
  --client-certificate=${CERT_DIR}/admin.pem \
  --client-key=${CERT_DIR}/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig

kubectl config set-context default \
  --cluster=${CLUSTER_NAME} \
  --user=admin \
  --kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig

echo "---> Complete to generate kubeconfig"

Deploying etcd

etcd is a distributed key-value store; essentially all of the data Kubernetes uses is managed in etcd.

The deployment flow is: download and install the binary, create the directories for the certificates and data, create the systemd unit file, verify that it starts, and generate the configuration file for the feature that encrypts data at rest.

cd ~/ws/k8s/
curl -LO https://github.com/etcd-io/etcd/releases/download/v3.5.12/etcd-v3.5.12-linux-arm64.tar.gz
tar zxf etcd-v3.5.12-linux-arm64.tar.gz
sudo mv etcd-v3.5.12-linux-arm64/etcd /usr/local/bin/
etcd --version
# etcd Version: 3.5.12
# Git SHA: e7b3bb6cc
# Go Version: go1.20.13
# Go OS/Arch: linux/arm64

sudo mv etcd-v3.5.12-linux-arm64/etcdctl /usr/local/bin/
etcdctl version
# etcdctl version: 3.5.12
# API version: 3.5

# Create directories for the certificates and data
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo chmod 700 /var/lib/etcd
cd ~/ws/k8s/
sudo cp cert/ca.pem cert/kubernetes-key.pem cert/kubernetes.pem /etc/etcd/
ls -l /etc/etcd/

# Create the unit file
export ETCD_NAME="rs01" # an arbitrary name for etcd
export INTERNAL_IP="10.105.138.201" # IP of the master node (rs01 here)
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-initial-token \\
  --initial-cluster ${ETCD_NAME}=https://${INTERNAL_IP}:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
Environment=ETCD_UNSUPPORTED_ARCH=arm64

[Install]
WantedBy=multi-user.target
EOF

ls -l /etc/systemd/system/etcd.service
sudo systemctl daemon-reload
sudo systemctl start etcd
sudo systemctl status etcd
# ● etcd.service - etcd
#      Loaded: loaded (/etc/systemd/system/etcd.service; disabled; preset: enabled)
#      Active: active (running) since Fri 2024-02-23 14:45:29 JST; 40s ago
# ...
sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
# 
# 60d821cf550d72ab, started, rs01, https://10.105.138.201:2380, https://10.105.138.201:2379, false

sudo systemctl enable etcd
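
Besides member list, etcdctl can also probe endpoint health. This is an optional extra check of my own, not in the original procedure.

sudo ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
# https://127.0.0.1:2379 is healthy: successfully committed proposal: ...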

# No Secrets are stored yet, so the identity provider is omitted. (If unencrypted Secrets were already in use, the identity provider would be required.)
# aescbc is weak; ideally switch to a stronger provider such as kms
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cat > encryption-config.yaml <<EOF
kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
EOF

sudo mkdir -p /etc/kubernetes/config/
sudo mkdir -p /var/lib/kubernetes/
sudo cp -ai cert/ca.pem cert/ca-key.pem \
cert/kubernetes-key.pem cert/kubernetes.pem \
cert/service-account-key.pem cert/service-account.pem \
encryption-config.yaml \
/var/lib/kubernetes/
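
As a sketch for later (it cannot be run yet, since kube-apiserver is not up): once kubectl works against the cluster, encryption at rest can be confirmed by creating a Secret and reading its raw value from etcd. The name test-secret is just an example; the stored value should begin with the k8s:enc:aescbc:v1:key1 prefix rather than plain text.

kubectl create secret generic test-secret --from-literal=mykey=mydata
sudo ETCDCTL_API=3 etcdctl get /registry/secrets/default/test-secret \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem | hexdump -C | head
# the value should contain k8s:enc:aescbc:v1:key1, not the literal mydata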

Deploying kube-apiserver

kube-apiserver is the front end of the control plane, which makes cluster-wide decisions such as scheduling. It is the API server through which users, cluster components, and external components communicate with each other.

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kube-apiserver"
chmod +x kube-apiserver
sudo mv kube-apiserver /usr/local/bin/
kube-apiserver --version
# Kubernetes v1.29.2

# Create the unit file
# The --kubelet-https=true flag has been removed, so it is dropped
# --service-account-issuer and --service-account-signing-key-file are needed for ServiceAccountIssuerDiscovery, so they are added
export INTERNAL_IP="10.105.138.201"
export CLUSTER_IP_NETWORK="10.10.0.0/24"
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://${INTERNAL_IP}:2379 \\
  --event-ttl=1h \\
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --runtime-config='api/all=true' \\
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-account-signing-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=${CLUSTER_IP_NETWORK} \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl start kube-apiserver
sudo systemctl status kube-apiserver
# ● kube-apiserver.service - Kubernetes API Server
#      Loaded: loaded (/etc/systemd/system/kube-apiserver.service; disabled; preset: enabled)
#      Active: active (running) since Fri 2024-02-23 15:53:57 JST; 2ms ago

sudo systemctl enable kube-apiserver
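
As an optional direct check of my own, the API server should now answer over TLS when presented with the admin client certificate generated earlier.

cd ~/ws/k8s/
curl --cacert cert/ca.pem \
  --cert cert/admin.pem --key cert/admin-key.pem \
  https://127.0.0.1:6443/version
# {"major": "1", "minor": "29", ...}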

Deploying kube-controller-manager

kube-controller-manager is part of the Kubernetes control plane and contains Kubernetes' core control loops. Each control loop watches some aspect of the cluster and makes changes as needed: for example, the node controller responds when a node goes down, and the replication controller keeps each Pod at its required replica count. kube-controller-manager manages these control loops in one place.

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kube-controller-manager"
chmod +x kube-controller-manager
sudo mv kube-controller-manager /usr/local/bin/
kube-controller-manager --version
# Kubernetes v1.29.2

sudo cp -ai kubeconfig/kube-controller-manager.kubeconfig /var/lib/kubernetes/

# Create the unit file
export POD_NETWORK="10.0.0.0/16"
export CLUSTER_IP_NETWORK="10.10.0.0/24"
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --bind-address=0.0.0.0 \\
  --cluster-cidr=${POD_NETWORK} \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=${CLUSTER_IP_NETWORK} \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl start kube-controller-manager
sudo systemctl status kube-controller-manager
# ● kube-controller-manager.service - Kubernetes Controller Manager
#      Loaded: loaded (/etc/systemd/system/kube-controller-manager.service; disabled; preset: enabled)
#      Active: active (running) since Fri 2024-02-23 16:10:09 JST; 672ms ago

sudo systemctl enable kube-controller-manager
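
kube-controller-manager serves /healthz on its secure port (10257 by default), so an optional probe of my own looks like this; -k is used because the serving certificate is self-signed.

curl -sk https://127.0.0.1:10257/healthz
# ok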

Deploying kube-scheduler

kube-scheduler decides which node in the cluster each Pod should run on.

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kube-scheduler"
chmod +x kube-scheduler
sudo mv kube-scheduler /usr/local/bin/
kube-scheduler --version
# Kubernetes v1.29.2

sudo cp -ai kubeconfig/kube-scheduler.kubeconfig /var/lib/kubernetes/

# Create the unit file
# v1alpha no longer exists, so it is changed to v1
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF

cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl start kube-scheduler
sudo systemctl status kube-scheduler
# ● kube-scheduler.service - Kubernetes Scheduler
#      Loaded: loaded (/etc/systemd/system/kube-scheduler.service; disabled; preset: enabled)
#      Active: active (running) since Fri 2024-02-23 16:23:41 JST; 204ms ago

sudo systemctl enable kube-scheduler
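
Like the controller manager, kube-scheduler exposes /healthz on its secure port (10259 by default); another optional probe.

curl -sk https://127.0.0.1:10259/healthz
# ok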

The KubeSchedulerConfiguration reference is at https://kubernetes.io/docs/reference/config-api/kube-scheduler-config.v1/#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration

Verifying everything so far

mkdir -p $HOME/.kube && cp -i kubeconfig/admin.kubeconfig $HOME/.kube/config
# All checks should be ok
kubectl get --raw='/readyz?verbose'
# [+]ping ok
# [+]log ok
# [+]etcd ok
# [+]etcd-readiness ok
# [+]informer-sync ok
# [+]poststarthook/start-kube-apiserver-admission-initializer ok
# [+]poststarthook/generic-apiserver-start-informers ok
# [+]poststarthook/priority-and-fairness-config-consumer ok
# [+]poststarthook/priority-and-fairness-filter ok
# [+]poststarthook/storage-object-count-tracker-hook ok
# [+]poststarthook/start-apiextensions-informers ok
# [+]poststarthook/start-apiextensions-controllers ok
# [+]poststarthook/crd-informer-synced ok
# [+]poststarthook/start-service-ip-repair-controllers ok
# [+]poststarthook/rbac/bootstrap-roles ok
# [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
# [+]poststarthook/priority-and-fairness-config-producer ok
# [+]poststarthook/start-system-namespaces-controller ok
# [+]poststarthook/bootstrap-controller ok
# [+]poststarthook/start-cluster-authentication-info-controller ok
# [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
# [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
# [+]poststarthook/start-legacy-token-tracking-controller ok
# [+]poststarthook/start-kube-aggregator-informers ok
# [+]poststarthook/apiservice-registration-controller ok
# [+]poststarthook/apiservice-status-available-controller ok
# [+]poststarthook/kube-apiserver-autoregistration ok
# [+]autoregister-completion ok
# [+]poststarthook/apiservice-openapi-controller ok
# [+]poststarthook/apiservice-openapiv3-controller ok
# [+]poststarthook/apiservice-discovery-controller ok
# [+]shutdown ok
# readyz check passed

# Even if everything looks ok at a glance, that does not guarantee each service is running correctly, so also check journalctl for errors
journalctl -u kube-apiserver -r -n 30 --no-pager
journalctl -u kube-controller-manager -r -n 30 --no-pager
journalctl -u kube-scheduler -r -n 30 --no-pager
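
As one more optional check, kubectl get componentstatuses is deprecated but still works as of v1.29, and it summarizes exactly the components deployed so far.

kubectl get componentstatuses
# NAME                 STATUS    MESSAGE   ERROR
# scheduler            Healthy   ok
# controller-manager   Healthy   ok
# etcd-0               Healthy   ok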

Closing

This time we got as far as deploying kube-scheduler. Thanks to the scripts the build itself is easy, but since understanding each component is also one of the goals, I want to make sure that sticks. In hindsight, the deployment breaking on removed flags turned out to be a good thing, as it forced me to read the options closely.