Compare commits

..

10 Commits

39 changed files with 2627 additions and 1073 deletions

2
.gitignore vendored
View File

@@ -1,3 +1,5 @@
build.env.work
cluster.env.work
.DS_Store
clitools/bin
packages/

236
README.md
View File

@@ -1,88 +1,196 @@
# monok8s
This is an Alpine-based Kubernetes image for Mono's Gateway Development Kit.

It gives you a ready-to-boot Kubernetes control-plane image so you can get your device running first, then learn and customize from there.

Project/device docs: <https://docs.mono.si/gateway-development-kit/getting-started>

## DISCLAIMER

* This is not your everyday Linux image! It is best suited for users who are already familiar with Kubernetes. First-timers should start with the default configuration, which gives you a ready-to-use cluster.
* USE AT YOUR OWN RISK. ChatGPT was leveraged heavily in building this project.

---

## What you get

The default image boots into a small Kubernetes control-plane environment with:

- Alpine Linux
- Kubernetes initialized through `kubeadm`
- a read-only root filesystem layout
- an A/B rootfs layout for safer OS upgrades
- a Kubernetes-style OS upgrade path through the `OSUpgrade` custom resource

You do **not** need to know Go or understand the internal build system to try the image.
---
## Before you start

You need:

- a Linux build machine or VM
- Docker
- `make`
- basic command-line comfort
If you are building on a fresh Debian machine, you can install the usual build dependencies with:
```bash
devtools/setup-build-host.sh
```
Or install the minimum packages yourself:
```bash
sudo apt-get update
sudo apt-get install -y docker.io make qemu-user-static binfmt-support
```
Make sure your user can run Docker, or use `sudo` where needed.
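A quick way to confirm the basics are in place before building (a sketch; it only checks that the tools resolve on `PATH`, not that Docker is usable by your user):

```shell
# Check that the required build tools are installed (sketch).
for tool in docker make; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```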
---
## Fast path: build an image
Download the project tarball, extract it, then run:
```bash
make release
```
When the build finishes, check the `out/` directory for the generated image artifacts.

That is the main path most users should try first.
---
## Flash the image
After building, flash the generated image to your device.
Start with one of these guides:
- [Flash over USB](docs/flashing-usb.md)
- [Flash over network / TFTP](docs/flashing-network.md)
USB flashing is usually the easiest path when you are setting up the device for the first time.
---
## First boot
The default configuration is intended to boot as a first-time Kubernetes control-plane node.
The default control-plane configuration looks like this:
```bash
make cluster-config \
MKS_HOSTNAME=monok8s-master \
MKS_CLUSTER_ROLE=control-plane \
MKS_INIT_CONTROL_PLANE=true \
MKS_MGMT_ADDRESS=10.0.0.10/24 \
MKS_APISERVER_ADVERTISE_ADDRESS=10.0.0.10
```
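Note that `MKS_MGMT_ADDRESS` is in CIDR notation (address plus prefix length), while the advertise address is the bare IP; the two should refer to the same interface address. A minimal consistency check, reusing the example values above (`mgmt_ip` is a hypothetical helper name, not part of the build system):

```shell
# Confirm the management CIDR and the advertise address agree (sketch).
MKS_MGMT_ADDRESS="10.0.0.10/24"
MKS_APISERVER_ADVERTISE_ADDRESS="10.0.0.10"
mgmt_ip=${MKS_MGMT_ADDRESS%/*}    # strip the /prefix suffix
if [ "$mgmt_ip" = "$MKS_APISERVER_ADVERTISE_ADDRESS" ]; then
  echo "management and advertise addresses agree: $mgmt_ip"
else
  echo "WARNING: $mgmt_ip != $MKS_APISERVER_ADVERTISE_ADDRESS"
fi
```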
If you are just trying the image for the first time, start with the default control-plane setup. Worker-node setup is still incomplete.
For all available configuration values, see:
- [configs/cluster.env.default](configs/cluster.env.default)
For a worker node:

```bash
make cluster-config \
MKS_HOSTNAME=monok8s-worker \
MKS_CLUSTER_ROLE=worker \
MKS_INIT_CONTROL_PLANE=no \
MKS_MGMT_ADDRESS=10.0.0.10/24 \
MKS_APISERVER_ADVERTISE_ADDRESS=10.0.0.10 \
MKS_API_SERVER_ENDPOINT=10.0.0.1:6443 \
MKS_CNI_PLUGIN=none \
MKS_BOOTSTRAP_TOKEN=abcd12.ef3456789abcdef0 \
MKS_DISCOVERY_TOKEN_CA_CERT_HASH=sha256:9f1c2b3a4d5e6f7890abc1234567890abcdef1234567890abcdef1234567890ab
```
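Bootstrap tokens must follow kubeadm's `<6 chars>.<16 chars>` format, lowercase letters and digits only. A quick format check before baking a value into the config, using the placeholder token from the example above (a sketch, not part of the build system):

```shell
# kubeadm bootstrap tokens look like "abcdef.0123456789abcdef":
# 6 characters, a dot, then 16 characters, all from [a-z0-9].
token="abcd12.ef3456789abcdef0"   # placeholder from the example, not a real secret
if printf '%s\n' "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token format OK"
else
  echo "token format INVALID"
fi
```

On a running control plane, `kubeadm token create` issues a real token in this format.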
---
## Getting shell access
For first-time setup, UART is the most direct option because it is already part of the flashing process.
After the device is running, the recommended path is:
- [Install an SSH pod](docs/installing-ssh-pod.md)
---
## Upgrading
monok8s includes a Kubernetes-style OS upgrade flow using the `OSUpgrade` custom resource.
See:
- [OTA upgrade guide](docs/ota.md)
The currently tested upgrade chain is:
- `1.33.3 -> 1.33.10`
- `1.33.10 -> 1.34.6`
- `1.34.6 -> 1.35.3`
Tested worker node upgrade chain:
- `1.33.3 -> 1.34.1`
- `1.33.1 -> 1.35.3`
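Based on the `OSUpgrade` CRD included in this change (fields `spec.desiredVersion` and `spec.flashProfile`), a minimal upgrade request might look like the following; the resource name and target version are illustrative:

```shell
# Write a minimal OSUpgrade resource; on a live cluster you would apply it.
cat <<'EOF' > osupgrade.yaml
apiVersion: monok8s.io/v1alpha1
kind: OSUpgrade
metadata:
  name: upgrade-to-1-35-3
spec:
  desiredVersion: "1.35.3"
  flashProfile: balanced
EOF
# On the device's cluster:
# kubectl apply -f osupgrade.yaml
echo "wrote osupgrade.yaml"
```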
---
## Current status
This project is usable for experimenting with a single control-plane device image, but it is still a development project.
Working today:
- initramfs boot flow
- Alpine boot
- Kubernetes control-plane bootstrap
- default bridge CNI
- control-plane OS upgrade path
Still in progress:
- Kubernetes worker-node support
- Cilium support
- VPP/DPAA networking experiments
---
## Common build issue
### `chroot: failed to run command '/bin/sh': Exec format error`
This usually means the build host cannot run ARM64 binaries.
On Debian, install ARM64 emulation support:
```bash
sudo apt-get install -y qemu-user-static binfmt-support
```
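The error means the kernel was asked to execute a binary for an architecture it has no handler for. ELF binaries record their target machine in the 2-byte `e_machine` field at offset 18; a quick way to inspect a binary's target (a sketch: 183 is AArch64, 62 is x86-64):

```shell
# Print the e_machine field of an ELF binary as an unsigned decimal.
bin=/bin/sh
m=$(od -An -j18 -N2 -t u2 "$bin" | tr -d ' ')
echo "$bin e_machine=$m (183 = AArch64, 62 = x86-64)"
```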
Then run the build again:
```bash
make release
```
---
## Notes
This is not a general-purpose Linux distribution. It is a device image for experimenting with Kubernetes on Mono's Gateway Development Kit.
The safest path is:
1. build the default image,
2. flash it,
3. boot the control-plane,
4. confirm Kubernetes is running,
5. customize only after the base image works.
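Step 4 ("confirm Kubernetes is running") usually comes down to checking that the node reports `Ready` in `kubectl get nodes`. A sketch of how a check script might parse that output, run here against sample text so it works without a cluster:

```shell
# Parse "kubectl get nodes"-style output and report Ready nodes.
sample="NAME             STATUS   ROLES           AGE   VERSION
monok8s-master   Ready    control-plane   5m    v1.33.3"
echo "$sample" | awk 'NR > 1 && $2 == "Ready" { print $1 " is Ready" }'
# → monok8s-master is Ready
```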

View File

@@ -2,9 +2,11 @@
set -euo pipefail
source /utils.sh
/preload-k8s-images.sh || exit 1
export CTL_BIN_LAYER=$( skopeo inspect docker-daemon:localhost/monok8s/node-control:dev | jq -r '.Layers[0] | sub("^sha256:"; "")' )
export CTL_BIN_LAYER=$( skopeo inspect docker-daemon:localhost/monok8s/node-control:$TAG | jq -r '.Layers[0] | sub("^sha256:"; "")' )
mkdir -p \
"$ROOTFS/dev" \
@@ -117,6 +119,8 @@ mkdir -p "$FAKE_DEV" "$MNT_ROOTFS_IMG" "$MNT_DATA"
echo "##################################################### Packaging RootFS $(du -sh "$ROOTFS" | awk '{print $1}')"
ensure_loop_ready
###############################################################################
# 1. Build reusable rootfs ext4 image once
###############################################################################

View File

@@ -38,4 +38,4 @@ if [ -n "$K8S_MINOR" ]; then
"$MIGRATION_STATE_DIR/k8s/$K8S_MINOR"
fi
/usr/local/bin/ctl init --env-file "$CONFIG_DIR/cluster.env" >>/var/log/monok8s/bootstrap.log 2>&1 &
/usr/lib/monok8s/lib/supervised-init.sh &

View File

@@ -0,0 +1,57 @@
#!/bin/sh
set -eu
CONFIG_DIR=/opt/monok8s/config
LOG=/var/log/monok8s/bootstrap.log
STATE_DIR=/run/monok8s
FAIL_COUNT_FILE="$STATE_DIR/bootstrap-fail-count"
LOCK_DIR="$STATE_DIR/supervised-init.lock"
# For debugging
HOLD_FILE="$CONFIG_DIR/bootstrap.hold"
mkdir -p "$STATE_DIR" /var/log/monok8s
if ! mkdir "$LOCK_DIR" 2>/dev/null; then
echo "[$(date -Is)] supervised-init already running" >> "$LOG"
exit 0
fi
trap 'rmdir "$LOCK_DIR"' EXIT INT TERM
fail_count=0
if [ -f "$FAIL_COUNT_FILE" ]; then
fail_count="$(cat "$FAIL_COUNT_FILE" 2>/dev/null || echo 0)"
case "$fail_count" in
''|*[!0-9]*) fail_count=0 ;;
esac
fi
while true; do
if [ -f "$HOLD_FILE" ]; then
echo "[$(date -Is)] bootstrap held by $HOLD_FILE" >> "$LOG"
sleep 300
continue
fi
echo "[$(date -Is)] starting ctl init" >> "$LOG"
if /usr/local/bin/ctl init --env-file "$CONFIG_DIR/cluster.env" >> "$LOG" 2>&1; then
echo "[$(date -Is)] ctl init succeeded" >> "$LOG"
rm -f "$FAIL_COUNT_FILE"
exit 0
fi
fail_count=$((fail_count + 1))
echo "$fail_count" > "$FAIL_COUNT_FILE"
echo "[$(date -Is)] ctl init failed, count=$fail_count" >> "$LOG"
case "$fail_count" in
1) sleep 10 ;;
2) sleep 30 ;;
3) sleep 60 ;;
4) sleep 120 ;;
*) sleep 300 ;;
esac
done

57
alpine/utils.sh Executable file
View File

@@ -0,0 +1,57 @@
#!/bin/bash
ensure_loop_ready() {
# The loop kernel module is host-side. This only works if the container
# has permission and modprobe exists; otherwise the host must load it.
if ! grep -qw loop /proc/modules 2>/dev/null; then
modprobe loop 2>/dev/null || true
fi
# /dev/loop-control: char device 10:237
if [ ! -e /dev/loop-control ]; then
echo "Creating missing /dev/loop-control" >&2
mknod /dev/loop-control c 10 237 || {
echo "ERROR: cannot create /dev/loop-control" >&2
echo "Run container with --privileged, or pass --device=/dev/loop-control and loop devices." >&2
exit 1
}
chmod 600 /dev/loop-control || true
fi
if [ ! -c /dev/loop-control ]; then
echo "ERROR: /dev/loop-control exists but is not a character device" >&2
ls -l /dev/loop-control >&2 || true
exit 1
fi
# Create a reasonable pool of loop block devices.
# loopN block devices are major 7, minor N.
for i in $(seq 0 31); do
if [ ! -e "/dev/loop$i" ]; then
echo "Creating missing /dev/loop$i" >&2
mknod "/dev/loop$i" b 7 "$i" || {
echo "ERROR: cannot create /dev/loop$i" >&2
echo "Run container with --privileged, or pre-create/pass loop devices." >&2
exit 1
}
chmod 660 "/dev/loop$i" || true
fi
if [ ! -b "/dev/loop$i" ]; then
echo "ERROR: /dev/loop$i exists but is not a block device" >&2
ls -l "/dev/loop$i" >&2 || true
exit 1
fi
done
# Smoke test: ask losetup for a free loop device.
if ! losetup -f >/dev/null 2>&1; then
echo "ERROR: losetup cannot find/use a loop device" >&2
echo "Debug info:" >&2
ls -l /dev/loop-control /dev/loop* >&2 || true
grep -w loop /proc/modules >&2 || true
echo >&2
echo "Docker likely needs --privileged, or at minimum CAP_SYS_ADMIN plus loop devices." >&2
exit 1
fi
}

View File

@@ -41,4 +41,9 @@ ALPINE_HOSTNAME=monok8s-hostname
BUILD_TAG=MONOK8S
# Optional apt cache
APT_PROXY=apt-cacher-ng.eco-system.svc.cluster.local:3142
# example: apt-cacher-ng.eco-system.svc.cluster.local:3142
APT_PROXY=
# remote image repository prefix to push to
# e.g. ghcr.io/monok8s
IMAGE_REPOSITORY=

View File

@@ -1,16 +1,41 @@
ARG BASE_IMAGE=localhost/monok8s/ctl-build-base:dev
FROM --platform=$BUILDPLATFORM ${BASE_IMAGE} AS build
ARG VERSION=dev
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
COPY . .
RUN test -f pkg/buildinfo/buildinfo_gen.go
RUN mkdir -p /out && \
GOOS=${TARGETOS} GOARCH=${TARGETARCH} CGO_ENABLED=0 \
go build -trimpath -ldflags="-s -w" \
-o /out/ctl ./cmd/ctl
FROM alpine:latest AS cacerts
FROM scratch
ARG VERSION
ARG TARGETOS
ARG TARGETARCH
ENV VERSION=${VERSION}
WORKDIR /
COPY bin/ctl-linux-aarch64-${VERSION} ./ctl
COPY out/fw_printenv ./
COPY out/fw_setenv ./
COPY --from=build /out/ctl /ctl
COPY --from=cacerts /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY out/uboot-tools/${TARGETOS}_${TARGETARCH}/fw_printenv /fw_printenv
COPY out/uboot-tools/${TARGETOS}_${TARGETARCH}/fw_setenv /fw_setenv
ENV PATH=/
ENTRYPOINT ["/ctl"]

View File

@@ -1,24 +0,0 @@
FROM golang:1.26-alpine AS build
ARG VERSION
ARG KUBE_VERSION
ARG GIT_REV=unknown
WORKDIR /src
RUN apk add --no-cache git build-base
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN test -f pkg/buildinfo/buildinfo_gen.go
RUN mkdir -p /out && \
GOOS=darwin GOARCH=arm64 CGO_ENABLED=0 \
go build -trimpath -ldflags="-s -w" \
-o /out/ctl-${VERSION} ./cmd/ctl
FROM scratch
COPY --from=build /out/ /

View File

@@ -1,20 +0,0 @@
ARG BASE_IMAGE=localhost/monok8s/ctl-build-base:dev
FROM ${BASE_IMAGE} AS build
ARG VERSION=dev
ARG TARGETOS=linux
ARG TARGETARCH=arm64
WORKDIR /src
COPY . .
RUN test -f pkg/buildinfo/buildinfo_gen.go
RUN mkdir -p /out && \
GOOS=${TARGETOS} GOARCH=${TARGETARCH} CGO_ENABLED=0 \
go build -trimpath -ldflags="-s -w" \
-o /out/ctl-linux-aarch64-${VERSION} ./cmd/ctl
FROM scratch
COPY --from=build /out/ /

View File

@@ -1,4 +1,6 @@
include ../build.env
-include ../build.env.work
export
BUILD_PLATFORM ?= linux/amd64
@@ -11,30 +13,29 @@ KUBE_VERSION ?= v1.33.3
GIT_REV := $(shell git rev-parse HEAD)
PACKAGES_DIR := packages
BIN_DIR := bin
OUT_DIR := out
PACKAGES_DIR := packages
OUT_DIR := out
UBOOT_TOOLS_OUT := $(OUT_DIR)/uboot-tools
UBOOT_TAR := $(PACKAGES_DIR)/uboot-$(UBOOT_VERSION).tar.gz
BUILDINFO_FILE := pkg/buildinfo/buildinfo_gen.go
CRD_PATHS := ./pkg/apis/...
ASSETS_PATH := ./pkg/assets
BUILDX_BUILDER := container-builder
LOCAL_REGISTRY := registry
LOCAL_REGISTRY_PORT := 5000
CTL_BUILD_BASE_IMAGE := localhost:5000/monok8s/ctl-build-base:$(VERSION)
CTL_BINARY := ctl-linux-aarch64-$(VERSION)
CTL_BUILD_BASE_REPO := localhost:5000/monok8s/ctl-build-base
CTL_IMAGE_REPO := localhost:5000/monok8s/node-control
CTL_BUILD_BASE_IMAGE := $(CTL_BUILD_BASE_REPO):$(VERSION)
CTL_IMAGE := $(CTL_IMAGE_REPO):$(VERSION)
DOWNLOAD_PACKAGES_STAMP := $(PACKAGES_DIR)/.download-packages.stamp
$(PACKAGES_DIR):
mkdir -p $@
$(BIN_DIR):
mkdir -p $@
$(OUT_DIR):
mkdir -p $@
@@ -88,11 +89,14 @@ $(DOWNLOAD_PACKAGES_STAMP): docker/download-packages.Dockerfile makefile | $(PAC
@touch $@
uboot-tools: $(DOWNLOAD_PACKAGES_STAMP)
docker buildx build --platform linux/arm64 \
rm -rf "$(UBOOT_TOOLS_OUT)"
mkdir -p "$(UBOOT_TOOLS_OUT)"
docker buildx build \
--platform linux/amd64,linux/arm64 \
-f docker/uboot-tools.Dockerfile \
--build-arg UBOOT_VERSION=$(UBOOT_VERSION) \
--build-arg UBOOT_TAR=$(UBOOT_TAR) \
--output type=local,dest=./$(OUT_DIR) .
--output type=local,dest=./$(UBOOT_TOOLS_OUT),platform-split=true .
ctl-build-base: ensure-buildx ensure-registry
docker buildx build \
@@ -101,16 +105,6 @@ ctl-build-base: ensure-buildx ensure-registry
-t $(CTL_BUILD_BASE_IMAGE) \
--output type=image,push=true,registry.insecure=true .
build-bin: .buildinfo ctl-build-base | $(BIN_DIR)
docker buildx build \
--platform $(BUILD_PLATFORM) \
-f docker/ctl-builder.Dockerfile \
--build-arg BASE_IMAGE=$(CTL_BUILD_BASE_IMAGE) \
--build-arg VERSION=$(VERSION) \
--build-arg TARGETOS=linux \
--build-arg TARGETARCH=arm64 \
--output type=local,dest=./$(BIN_DIR) .
build-crds: ctl-build-base | $(OUT_DIR)
mkdir -p "$(OUT_DIR)/crds"
docker buildx build \
@@ -118,35 +112,47 @@ build-crds: ctl-build-base | $(OUT_DIR)
-f docker/crdgen.Dockerfile \
--build-arg BASE_IMAGE=$(CTL_BUILD_BASE_IMAGE) \
--output type=local,dest=./$(OUT_DIR)/crds .
rm -rf "$(ASSETS_PATH)/crds"
mkdir -p "$(ASSETS_PATH)/crds"
cp -R "$(OUT_DIR)/crds/." "$(ASSETS_PATH)/crds/"
build-agent: build uboot-tools
build-agent: .buildinfo build-crds uboot-tools
docker buildx build \
--platform linux/amd64,linux/arm64 \
-f docker/ctl-agent.Dockerfile \
--build-arg BASE_IMAGE=$(CTL_BUILD_BASE_IMAGE) \
--build-arg VERSION=$(VERSION) \
-t $(CTL_IMAGE) \
--output type=image,push=true,registry.insecure=true .
build-local: .buildinfo build-crds uboot-tools
docker buildx build \
--platform linux/arm64 \
-f docker/ctl-agent.Dockerfile \
--build-arg BASE_IMAGE=$(CTL_BUILD_BASE_IMAGE) \
--build-arg VERSION=$(VERSION) \
--load \
-t localhost/monok8s/node-control:$(VERSION) .
build-local: .buildinfo | $(BIN_DIR)
push-agent: .buildinfo build-crds uboot-tools
test -n "$(IMAGE_REPOSITORY)"
docker buildx build \
-f docker/ctl-builder-local.Dockerfile \
--platform linux/amd64,linux/arm64 \
-f docker/ctl-agent.Dockerfile \
--build-arg BASE_IMAGE=$(CTL_BUILD_BASE_IMAGE) \
--build-arg VERSION=$(VERSION) \
--build-arg KUBE_VERSION=$(KUBE_VERSION) \
--build-arg GIT_REV=$(GIT_REV) \
--output type=local,dest=./$(BIN_DIR) .
-t $(IMAGE_REPOSITORY)/node-control:$(VERSION) \
--push .
run-agent:
docker run --rm \
-v "$$(pwd)/out:/work/out" \
localhost/monok8s/node-control:$(VERSION) \
$(CTL_IMAGE) \
agent --env-file /work/out/cluster.env
build: build-bin build-crds
clean:
-docker image rm localhost/monok8s/node-control:$(VERSION) >/dev/null 2>&1 || true
rm -rf \
$(BIN_DIR) \
$(OUT_DIR)/crds \
$(BUILDINFO_FILE)
@@ -158,7 +164,6 @@ dockerclean:
- docker rmi \
localhost/monok8s/ctl-build-base:$(VERSION) \
localhost/monok8s/node-control:$(VERSION) \
localhost/monok8s/ctl-builder:$(VERSION) \
localhost/monok8s/crdgen:$(VERSION) \
2>/dev/null || true
@@ -169,10 +174,10 @@ dockerclean:
pkgclean:
rm -rf $(PACKAGES_DIR)
all: build build-agent build-local
all: build-agent build-local
.PHONY: \
all clean dockerclean \
.buildinfo ensure-buildx ensure-registry \
build build-bin build-crds build-local build-agent \
uboot-tools run-agent
build-crds build-local build-agent build-agent-local push-agent \
uboot-tools run-agent run-agent-local

View File

@@ -0,0 +1,178 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.20.1
name: monoksconfigs.monok8s.io
spec:
group: monok8s.io
names:
kind: MonoKSConfig
listKind: MonoKSConfigList
plural: monoksconfigs
singular: monoksconfig
scope: Namespaced
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
properties:
allowSchedulingOnControlPlane:
type: boolean
apiServerAdvertiseAddress:
type: string
apiServerEndpoint:
type: string
bootstrapToken:
type: string
clusterDomain:
type: string
clusterName:
type: string
clusterRole:
type: string
cniPlugin:
type: string
containerRuntimeEndpoint:
type: string
controlPlaneCertKey:
type: string
discoveryTokenCACertHash:
type: string
enableNodeControl:
type: boolean
initControlPlane:
type: boolean
kubeProxyNodePortAddresses:
items:
type: string
type: array
kubernetesVersion:
type: string
network:
properties:
dnsNameservers:
items:
type: string
type: array
dnsSearchDomains:
items:
type: string
type: array
hostname:
type: string
managementCIDR:
type: string
managementGateway:
type: string
managementIface:
type: string
type: object
nodeLabels:
additionalProperties:
type: string
type: object
nodeName:
type: string
podSubnet:
type: string
serviceSubnet:
type: string
skipImageCheck:
type: boolean
subjectAltNames:
items:
type: string
type: array
type: object
status:
properties:
appliedSteps:
items:
type: string
type: array
conditions:
items:
description: Condition contains details for one aspect of the current
state of this API Resource.
properties:
lastTransitionTime:
description: |-
lastTransitionTime is the last time the condition transitioned from one status to another.
This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable.
format: date-time
type: string
message:
description: |-
message is a human readable message indicating details about the transition.
This may be an empty string.
maxLength: 32768
type: string
observedGeneration:
description: |-
observedGeneration represents the .metadata.generation that the condition was set based upon.
For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date
with respect to the current state of the instance.
format: int64
minimum: 0
type: integer
reason:
description: |-
reason contains a programmatic identifier indicating the reason for the condition's last transition.
Producers of specific condition types may define expected values and meanings for this field,
and whether the values are considered a guaranteed API.
The value should be a CamelCase string.
This field may not be empty.
maxLength: 1024
minLength: 1
pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
type: string
status:
description: status of the condition, one of True, False, Unknown.
enum:
- "True"
- "False"
- Unknown
type: string
type:
description: type of condition in CamelCase or in foo.example.com/CamelCase.
maxLength: 316
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
type: string
required:
- lastTransitionTime
- message
- reason
- status
- type
type: object
type: array
observedGeneration:
format: int64
type: integer
phase:
type: string
type: object
type: object
served: true
storage: true

View File

@@ -0,0 +1,124 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.20.1
name: osupgradeprogresses.monok8s.io
spec:
group: monok8s.io
names:
kind: OSUpgradeProgress
listKind: OSUpgradeProgressList
plural: osupgradeprogresses
shortNames:
- osup
singular: osupgradeprogress
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.nodeName
name: Node
type: string
- jsonPath: .spec.sourceRef.name
name: Source
type: string
- jsonPath: .status.currentVersion
name: Current
type: string
- jsonPath: .status.targetVersion
name: Target
type: string
- jsonPath: .status.phase
name: Phase
type: string
name: v1alpha1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: Specification of the desired behavior of the OSUpgradeProgress.
properties:
nodeName:
type: string
retryNonce:
description: |-
RetryNonce triggers a retry when its value changes.
Users can update this field (for example, set it to the current time)
to request a retry of a failed OS upgrade.
type: string
sourceRef:
properties:
name:
type: string
namespace:
type: string
type: object
type: object
status:
description: Most recently observed status of the OSUpgradeProgress.
properties:
completedAt:
format: date-time
type: string
currentFrom:
type: string
currentStep:
format: int32
type: integer
currentTo:
type: string
currentVersion:
type: string
failureReason:
type: string
inactivePartition:
type: string
lastUpdatedAt:
format: date-time
type: string
message:
type: string
observedRetryNonce:
description: |-
ObservedRetryNonce records the last retryNonce value the agent accepted.
When spec.retryNonce is changed by the user and differs from this value,
the agent may retry a failed upgrade.
type: string
phase:
type: string
plannedPath:
items:
type: string
type: array
retryCount:
format: int32
type: integer
startedAt:
format: date-time
type: string
targetVersion:
type: string
type: object
type: object
served: true
storage: true
subresources:
status: {}

View File

@@ -0,0 +1,202 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.20.1
name: osupgrades.monok8s.io
spec:
group: monok8s.io
names:
kind: OSUpgrade
listKind: OSUpgradeList
plural: osupgrades
shortNames:
- osu
singular: osupgrade
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.desiredVersion
name: Desired
type: string
- jsonPath: .status.resolvedVersion
name: Resolved
type: string
- jsonPath: .status.phase
name: Phase
type: string
name: v1alpha1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: Specification of the desired behavior of the OSUpgrade.
properties:
catalog:
properties:
configMapRef:
type: string
inline:
type: string
url:
type: string
type: object
desiredVersion:
minLength: 1
type: string
flashProfile:
default: balanced
description: |-
Profiles (TODO)
safe - api-server can be responsive most of the time
balanced - api-server can sometimes be unresponsive
fast - disable throttling. Good for worker node.
enum:
- fast
- balanced
- safe
type: string
nodeSelector:
description: |-
A label selector is a label query over a set of resources. The result of matchLabels and
matchExpressions are ANDed. An empty label selector matches all objects. A null
label selector matches no objects.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements.
The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector applies
to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
type: object
status:
description: Most recently observed status of the OSUpgrade.
properties:
conditions:
items:
description: Condition contains details for one aspect of the current
state of this API Resource.
properties:
lastTransitionTime:
description: |-
lastTransitionTime is the last time the condition transitioned from one status to another.
This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable.
format: date-time
type: string
message:
description: |-
message is a human readable message indicating details about the transition.
This may be an empty string.
maxLength: 32768
type: string
observedGeneration:
description: |-
observedGeneration represents the .metadata.generation that the condition was set based upon.
For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date
with respect to the current state of the instance.
format: int64
minimum: 0
type: integer
reason:
description: |-
reason contains a programmatic identifier indicating the reason for the condition's last transition.
Producers of specific condition types may define expected values and meanings for this field,
and whether the values are considered a guaranteed API.
The value should be a CamelCase string.
This field may not be empty.
maxLength: 1024
minLength: 1
pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
type: string
status:
description: status of the condition, one of True, False, Unknown.
enum:
- "True"
- "False"
- Unknown
type: string
type:
description: type of condition in CamelCase or in foo.example.com/CamelCase.
maxLength: 316
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
type: string
required:
- lastTransitionTime
- message
- reason
- status
- type
type: object
type: array
message:
type: string
observedGeneration:
format: int64
type: integer
phase:
type: string
reason:
type: string
resolvedVersion:
type: string
type: object
type: object
served: true
storage: true
subresources:
status: {}

View File

@@ -0,0 +1,6 @@
package assets
import "embed"
//go:embed crds/*.yaml
var CRDs embed.FS

View File

@@ -0,0 +1,49 @@
package assets
import (
"fmt"
"io"
"path/filepath"
"sort"
)
func PrintCRDs(out io.Writer) error {
entries, err := CRDs.ReadDir("crds")
if err != nil {
return err
}
names := make([]string, 0, len(entries))
for _, entry := range entries {
if entry.IsDir() {
continue
}
if filepath.Ext(entry.Name()) != ".yaml" {
continue
}
names = append(names, entry.Name())
}
sort.Strings(names)
for _, name := range names {
b, err := CRDs.ReadFile("crds/" + name)
if err != nil {
return err
}
if _, err := fmt.Fprintln(out, "---"); err != nil {
return err
}
if _, err := out.Write(b); err != nil {
return err
}
if len(b) == 0 || b[len(b)-1] != '\n' {
if _, err := fmt.Fprintln(out); err != nil {
return err
}
}
}
return nil
}

View File

@@ -70,7 +70,7 @@ func NewRunner(cfg *monov1alpha1.MonoKSConfig) *Runner {
{
RegKey: "EngageControlGate",
Name: "Engage the control gate",
Desc: "Prevents agent polling resources prematurely",
Desc: "Prevents agent watching resources prematurely",
},
{
RegKey: "StartCRIO",
@@ -112,6 +112,11 @@ func NewRunner(cfg *monov1alpha1.MonoKSConfig) *Runner {
Name: "Wait for existing cluster",
Desc: "Block until control plane is reachable when joining or reconciling an existing cluster",
},
{
RegKey: "CheckForVersionSkew",
Name: "Check for version skew",
Desc: "Validate wether version satisfy the requirements againts current cluster if any",
},
{
RegKey: "ReconcileControlPlane",
Name: "Reconcile control plane",
@@ -122,11 +127,6 @@ func NewRunner(cfg *monov1alpha1.MonoKSConfig) *Runner {
Name: "Reconcile worker node",
Desc: "Reconcile the worker node",
},
{
RegKey: "CheckForVersionSkew",
Name: "Check for version skew",
Desc: "Validate wether version satisfy the requirements againts current cluster if any",
},
{
RegKey: "RunKubeadmUpgradeApply",
Name: "Run kubeadm upgrade apply",
@@ -165,7 +165,7 @@ func NewRunner(cfg *monov1alpha1.MonoKSConfig) *Runner {
{
RegKey: "ReleaseControlGate",
Name: "Release the control gate",
Desc: "Allow agent to start polling resources",
Desc: "Allow agent to start watching resources",
},
},
}

View File

@@ -6,7 +6,9 @@ import (
"github.com/spf13/cobra"
"k8s.io/cli-runtime/pkg/genericclioptions"
"os"
"strings"
assets "example.com/monok8s/pkg/assets"
render "example.com/monok8s/pkg/render"
)
@@ -42,13 +44,20 @@ func NewCmdCreate(flags *genericclioptions.ConfigFlags) *cobra.Command {
return err
},
},
&cobra.Command{
Use: "crds",
Short: "Print the bundled CRDs",
RunE: func(cmd *cobra.Command, _ []string) error {
return assets.PrintCRDs(cmd.OutOrStdout())
},
},
)
var authorizedKeysPath string
sshdcmd := cobra.Command{
Use: "sshd",
Short: "Print sshd deployment template",
Short: "Print sshd deployments template",
RunE: func(cmd *cobra.Command, _ []string) error {
ns, _, err := flags.ToRawKubeConfigLoader().Namespace()
if err != nil {
@@ -77,8 +86,12 @@ func NewCmdCreate(flags *genericclioptions.ConfigFlags) *cobra.Command {
cconf := render.ControllerConf{}
controllercmd := cobra.Command{
Use: "controller",
Short: "Print controller deployment template",
Short: "Print controller deployments template",
RunE: func(cmd *cobra.Command, _ []string) error {
if len(cconf.ImagePullSecrets) > 0 && strings.TrimSpace(cconf.Image) == "" {
return fmt.Errorf("--image-pull-secret requires --image")
}
ns, _, err := flags.ToRawKubeConfigLoader().Namespace()
if err != nil {
return err
@@ -102,9 +115,56 @@ func NewCmdCreate(flags *genericclioptions.ConfigFlags) *cobra.Command {
"",
"Controller image, including optional registry and tag",
)
controllercmd.Flags().StringSliceVar(
&cconf.ImagePullSecrets,
"image-pull-secret",
nil,
"Image pull secret name for the controller image; may be specified multiple times or as a comma-separated list",
)
cmd.AddCommand(&controllercmd)
aconf := render.AgentConf{}
agentcmd := cobra.Command{
Use: "agent",
Short: "Print agent daemonsets template",
RunE: func(cmd *cobra.Command, _ []string) error {
if len(aconf.ImagePullSecrets) > 0 && strings.TrimSpace(aconf.Image) == "" {
return fmt.Errorf("--image-pull-secret requires --image")
}
ns, _, err := flags.ToRawKubeConfigLoader().Namespace()
if err != nil {
return err
}
aconf.Namespace = ns
out, err := render.RenderAgentDaemonSets(aconf)
if err != nil {
return err
}
_, err = fmt.Fprint(cmd.OutOrStdout(), out)
return err
},
}
agentcmd.Flags().StringVar(
&aconf.Image,
"image",
"",
"Agent image, including optional registry and tag",
)
agentcmd.Flags().StringSliceVar(
&aconf.ImagePullSecrets,
"image-pull-secret",
nil,
"Image pull secret name for the agent image; may be specified multiple times or as a comma-separated list",
)
cmd.AddCommand(&agentcmd)
return cmd
}

View File

@@ -272,11 +272,17 @@ func listTargetNodeNames(
})
if osu.Spec.NodeSelector != nil {
sel, err := metav1.LabelSelectorAsSelector(osu.Spec.NodeSelector)
userSelector, err := metav1.LabelSelectorAsSelector(osu.Spec.NodeSelector)
if err != nil {
return nil, fmt.Errorf("invalid nodeSelector: %w", err)
}
selector = sel
reqs, selectable := userSelector.Requirements()
if !selectable {
selector = labels.Nothing()
} else {
selector = selector.Add(reqs...)
}
}
list, err := clients.Kubernetes.CoreV1().

View File

@@ -1,76 +0,0 @@
package crds
import (
monov1alpha1 "example.com/monok8s/pkg/apis/monok8s/v1alpha1"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func Definitions() []*apiextensionsv1.CustomResourceDefinition {
return []*apiextensionsv1.CustomResourceDefinition{
monoKSConfigCRD(),
osUpgradeCRD(),
}
}
func monoKSConfigCRD() *apiextensionsv1.CustomResourceDefinition {
return &apiextensionsv1.CustomResourceDefinition{
ObjectMeta: metav1.ObjectMeta{
Name: monov1alpha1.MonoKSConfigCRD,
},
Spec: apiextensionsv1.CustomResourceDefinitionSpec{
Group: monov1alpha1.Group,
Scope: apiextensionsv1.NamespaceScoped,
Names: apiextensionsv1.CustomResourceDefinitionNames{
Plural: "monoksconfigs",
Singular: "monoksconfig",
Kind: "MonoKSConfig",
ShortNames: []string{"mkscfg"},
},
Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
Name: "v1alpha1",
Served: true,
Storage: true,
Schema: &apiextensionsv1.CustomResourceValidation{OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
Type: "object",
Properties: map[string]apiextensionsv1.JSONSchemaProps{
"spec": {Type: "object", XPreserveUnknownFields: boolPtr(true)},
"status": {Type: "object", XPreserveUnknownFields: boolPtr(true)},
},
}},
}},
},
}
}
func osUpgradeCRD() *apiextensionsv1.CustomResourceDefinition {
return &apiextensionsv1.CustomResourceDefinition{
ObjectMeta: metav1.ObjectMeta{
Name: monov1alpha1.OSUpgradeCRD,
},
Spec: apiextensionsv1.CustomResourceDefinitionSpec{
Group: monov1alpha1.Group,
Scope: apiextensionsv1.NamespaceScoped,
Names: apiextensionsv1.CustomResourceDefinitionNames{
Plural: "osupgrades",
Singular: "osupgrade",
Kind: "OSUpgrade",
ShortNames: []string{"osup"},
},
Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
Name: "v1alpha1",
Served: true,
Storage: true,
Schema: &apiextensionsv1.CustomResourceValidation{OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
Type: "object",
Properties: map[string]apiextensionsv1.JSONSchemaProps{
"spec": {Type: "object", XPreserveUnknownFields: boolPtr(true)},
"status": {Type: "object", XPreserveUnknownFields: boolPtr(true)},
},
}},
}},
},
}
}
func boolPtr(v bool) *bool { return &v }

View File

@@ -3,37 +3,27 @@ package node
import (
"context"
"fmt"
"reflect"
"strings"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/klog/v2"
monov1alpha1 "example.com/monok8s/pkg/apis/monok8s/v1alpha1"
"example.com/monok8s/pkg/kube"
"example.com/monok8s/pkg/render"
templates "example.com/monok8s/pkg/templates"
)
const (
controlAgentImage = "localhost/monok8s/node-control:dev"
kubeconfig = "/etc/kubernetes/admin.conf"
)
const kubeconfig = "/etc/kubernetes/admin.conf"
func ApplyNodeControlDaemonSetResources(ctx context.Context, n *NodeContext) error {
// Only the control-plane should bootstrap this DaemonSet definition.
// And only when the feature is enabled.
if strings.TrimSpace(n.Config.Spec.ClusterRole) != "control-plane" || !n.Config.Spec.EnableNodeControl {
klog.InfoS("skipped for", "clusterRole", n.Config.Spec.ClusterRole, "enableNodeAgent", n.Config.Spec.EnableNodeControl)
klog.InfoS("skipped for",
"clusterRole", n.Config.Spec.ClusterRole,
"enableNodeControl", n.Config.Spec.EnableNodeControl,
)
return nil
}
err := ApplyCRDs(ctx, n)
if err != nil {
if err := ApplyCRDs(ctx, n); err != nil {
return err
}
@@ -47,363 +37,13 @@ func ApplyNodeControlDaemonSetResources(ctx context.Context, n *NodeContext) err
return fmt.Errorf("build kube clients from %s: %w", kubeconfig, err)
}
labels := map[string]string{
"app.kubernetes.io/name": monov1alpha1.NodeAgentName,
"app.kubernetes.io/component": "agent",
"app.kubernetes.io/part-of": "monok8s",
"app.kubernetes.io/managed-by": monov1alpha1.NodeControlName,
conf := render.AgentConf{
Namespace: namespace,
}
kubeClient := clients.Kubernetes
if err := ensureNamespace(ctx, kubeClient, namespace, labels); err != nil {
return fmt.Errorf("ensure namespace %q: %w", namespace, err)
}
if err := applyNodeAgentServiceAccount(ctx, kubeClient, namespace, labels); err != nil {
return fmt.Errorf("apply serviceaccount: %w", err)
}
if err := applyNodeAgentClusterRole(ctx, kubeClient, labels); err != nil {
return fmt.Errorf("apply clusterrole: %w", err)
}
if err := applyNodeAgentClusterRoleBinding(ctx, kubeClient, namespace, labels); err != nil {
return fmt.Errorf("apply clusterrolebinding: %w", err)
}
if err := applyNodeAgentDaemonSet(ctx, kubeClient, namespace, labels); err != nil {
return fmt.Errorf("apply daemonset: %w", err)
if err := render.ApplyAgentDaemonSets(ctx, clients.Kubernetes, conf); err != nil {
return fmt.Errorf("apply node agent daemonset resources: %w", err)
}
return nil
}
func ensureNamespace(
ctx context.Context,
kubeClient kubernetes.Interface,
namespace string,
labels map[string]string,
) error {
_, err := kubeClient.CoreV1().Namespaces().Get(ctx, namespace, metav1.GetOptions{})
if err == nil {
return nil
}
if !apierrors.IsNotFound(err) {
return fmt.Errorf("get namespace: %w", err)
}
ns := &corev1.Namespace{
ObjectMeta: metav1.ObjectMeta{
Name: namespace,
Labels: copyStringMap(labels),
},
}
_, err = kubeClient.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
if err != nil && !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("create namespace: %w", err)
}
return nil
}
func copyStringMap(in map[string]string) map[string]string {
if len(in) == 0 {
return nil
}
out := make(map[string]string, len(in))
for k, v := range in {
out[k] = v
}
return out
}
func applyNodeAgentServiceAccount(ctx context.Context, kubeClient kubernetes.Interface, namespace string, labels map[string]string) error {
want := &corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: monov1alpha1.NodeAgentName,
Namespace: namespace,
Labels: labels,
},
}
existing, err := kubeClient.CoreV1().ServiceAccounts(namespace).Get(ctx, monov1alpha1.NodeAgentName, metav1.GetOptions{})
if apierrors.IsNotFound(err) {
_, err = kubeClient.CoreV1().ServiceAccounts(namespace).Create(ctx, want, metav1.CreateOptions{})
return err
}
if err != nil {
return err
}
changed := false
if !reflect.DeepEqual(existing.Labels, want.Labels) {
existing.Labels = want.Labels
changed = true
}
if !changed {
return nil
}
_, err = kubeClient.CoreV1().ServiceAccounts(namespace).Update(ctx, existing, metav1.UpdateOptions{})
return err
}
func applyNodeAgentClusterRole(ctx context.Context, kubeClient kubernetes.Interface, labels map[string]string) error {
wantRules := []rbacv1.PolicyRule{
{
APIGroups: []string{monov1alpha1.Group},
Resources: []string{"osupgrades"},
Verbs: []string{"get"},
},
{
APIGroups: []string{monov1alpha1.Group},
Resources: []string{"osupgradeprogresses"},
Verbs: []string{"get", "list", "watch", "create", "patch", "update"},
},
{
APIGroups: []string{monov1alpha1.Group},
Resources: []string{"osupgradeprogresses/status"},
Verbs: []string{"get", "list", "watch", "create", "patch", "update"},
},
{
APIGroups: []string{""},
Resources: []string{"nodes"},
Verbs: []string{"get", "list", "watch"},
},
}
want := &rbacv1.ClusterRole{
ObjectMeta: metav1.ObjectMeta{
Name: monov1alpha1.NodeAgentName,
Labels: labels,
},
Rules: wantRules,
}
existing, err := kubeClient.RbacV1().ClusterRoles().Get(ctx, monov1alpha1.NodeAgentName, metav1.GetOptions{})
if apierrors.IsNotFound(err) {
_, err = kubeClient.RbacV1().ClusterRoles().Create(ctx, want, metav1.CreateOptions{})
return err
}
if err != nil {
return err
}
changed := false
if !reflect.DeepEqual(existing.Labels, want.Labels) {
existing.Labels = want.Labels
changed = true
}
if !reflect.DeepEqual(existing.Rules, want.Rules) {
existing.Rules = want.Rules
changed = true
}
if !changed {
return nil
}
_, err = kubeClient.RbacV1().ClusterRoles().Update(ctx, existing, metav1.UpdateOptions{})
return err
}
func applyNodeAgentClusterRoleBinding(ctx context.Context, kubeClient kubernetes.Interface, namespace string, labels map[string]string) error {
wantRoleRef := rbacv1.RoleRef{
APIGroup: rbacv1.GroupName,
Kind: "ClusterRole",
Name: monov1alpha1.NodeAgentName,
}
wantSubjects := []rbacv1.Subject{
{
Kind: "ServiceAccount",
Name: monov1alpha1.NodeAgentName,
Namespace: namespace,
},
}
want := &rbacv1.ClusterRoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: monov1alpha1.NodeAgentName,
Labels: labels,
},
RoleRef: wantRoleRef,
Subjects: wantSubjects,
}
existing, err := kubeClient.RbacV1().ClusterRoleBindings().Get(ctx, monov1alpha1.NodeAgentName, metav1.GetOptions{})
if apierrors.IsNotFound(err) {
_, err = kubeClient.RbacV1().ClusterRoleBindings().Create(ctx, want, metav1.CreateOptions{})
return err
}
if err != nil {
return err
}
// roleRef is immutable. If it differs, fail loudly instead of pretending we can patch it.
if !reflect.DeepEqual(existing.RoleRef, want.RoleRef) {
return fmt.Errorf("existing ClusterRoleBinding %q has different roleRef and must be recreated", monov1alpha1.NodeAgentName)
}
changed := false
if !reflect.DeepEqual(existing.Labels, want.Labels) {
existing.Labels = want.Labels
changed = true
}
if !reflect.DeepEqual(existing.Subjects, want.Subjects) {
existing.Subjects = want.Subjects
changed = true
}
if !changed {
return nil
}
_, err = kubeClient.RbacV1().ClusterRoleBindings().Update(ctx, existing, metav1.UpdateOptions{})
return err
}
func applyNodeAgentDaemonSet(ctx context.Context, kubeClient kubernetes.Interface, namespace string, labels map[string]string) error {
privileged := true
dsLabels := monov1alpha1.NodeAgentLabels()
want := &appsv1.DaemonSet{
ObjectMeta: metav1.ObjectMeta{
Name: monov1alpha1.NodeAgentName,
Namespace: namespace,
Labels: labels,
},
Spec: appsv1.DaemonSetSpec{
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"app.kubernetes.io/name": monov1alpha1.NodeAgentName,
},
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: dsLabels,
},
Spec: corev1.PodSpec{
ServiceAccountName: monov1alpha1.NodeAgentName,
HostNetwork: true,
HostPID: true,
DNSPolicy: corev1.DNSClusterFirstWithHostNet,
NodeSelector: map[string]string{
monov1alpha1.NodeControlKey: "true",
},
Tolerations: []corev1.Toleration{
{Operator: corev1.TolerationOpExists},
},
Containers: []corev1.Container{
{
Name: "agent",
Image: controlAgentImage,
ImagePullPolicy: corev1.PullNever,
Args: []string{"agent", "--env-file", "$(CLUSTER_ENV_FILE)"},
Env: []corev1.EnvVar{
{
Name: "NODE_NAME",
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
APIVersion: "v1",
FieldPath: "spec.nodeName",
},
},
},
{
Name: "CLUSTER_ENV_FILE",
Value: "/host/opt/monok8s/config/cluster.env",
},
{
Name: "FW_ENV_CONFIG_FILE",
Value: "/host/etc/fw_env.config",
},
},
SecurityContext: &corev1.SecurityContext{
Privileged: &privileged,
},
VolumeMounts: []corev1.VolumeMount{
{
Name: "host-dev",
MountPath: "/dev",
},
{
Name: "host-etc",
MountPath: "/host/etc",
ReadOnly: true,
},
{
Name: "host-config",
MountPath: "/host/opt/monok8s/config",
ReadOnly: true,
},
},
},
},
Volumes: []corev1.Volume{
{
Name: "host-dev",
VolumeSource: corev1.VolumeSource{
HostPath: &corev1.HostPathVolumeSource{
Path: "/dev",
Type: hostPathType(corev1.HostPathDirectory),
},
},
},
{
Name: "host-etc",
VolumeSource: corev1.VolumeSource{
HostPath: &corev1.HostPathVolumeSource{
Path: "/etc",
Type: hostPathType(corev1.HostPathDirectory),
},
},
},
{
Name: "host-config",
VolumeSource: corev1.VolumeSource{
HostPath: &corev1.HostPathVolumeSource{
Path: "/opt/monok8s/config",
Type: hostPathType(corev1.HostPathDirectory),
},
},
},
},
},
},
},
}
existing, err := kubeClient.AppsV1().DaemonSets(namespace).Get(ctx, monov1alpha1.NodeAgentName, metav1.GetOptions{})
if apierrors.IsNotFound(err) {
_, err = kubeClient.AppsV1().DaemonSets(namespace).Create(ctx, want, metav1.CreateOptions{})
return err
}
if err != nil {
return err
}
changed := false
if !reflect.DeepEqual(existing.Labels, want.Labels) {
existing.Labels = want.Labels
changed = true
}
if !reflect.DeepEqual(existing.Spec, want.Spec) {
existing.Spec = want.Spec
changed = true
}
if !changed {
return nil
}
_, err = kubeClient.AppsV1().DaemonSets(namespace).Update(ctx, existing, metav1.UpdateOptions{})
return err
}
func hostPathType(t corev1.HostPathType) *corev1.HostPathType {
return &t
}
func mountPropagationMode(m corev1.MountPropagationMode) *corev1.MountPropagationMode {
return &m
}

View File

@@ -12,9 +12,6 @@ import (
"time"
"gopkg.in/yaml.v3"
"k8s.io/client-go/discovery"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/klog/v2"
monov1alpha1 "example.com/monok8s/pkg/apis/monok8s/v1alpha1"
@@ -27,6 +24,16 @@ const (
tmpKubeadmInitConf = "/tmp/kubeadm-init.yaml"
)
func chooseVersionKubeconfig(state *LocalClusterState) string {
if state.HasAdminKubeconfig {
return adminKubeconfigPath
}
if state.HasKubeletKubeconfig {
return kubeletKubeconfigPath
}
return ""
}
func DetectLocalClusterState(ctx context.Context, nctx *NodeContext) error {
_ = ctx
@@ -259,110 +266,6 @@ func waitForAPIViaKubeconfig(ctx context.Context, kubeconfigPath string, timeout
}
}
func getServerVersion(ctx context.Context, kubeconfigPath string) (string, error) {
restCfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
if err != nil {
return "", fmt.Errorf("build kubeconfig %s: %w", kubeconfigPath, err)
}
// Keep this short. This is a probe, not a long-running client.
restCfg.Timeout = 5 * time.Second
clientset, err := kubernetes.NewForConfig(restCfg)
if err != nil {
return "", fmt.Errorf("create clientset: %w", err)
}
disc := clientset.Discovery()
return discoverServerVersion(ctx, disc)
}
func discoverServerVersion(ctx context.Context, disc discovery.DiscoveryInterface) (string, error) {
info, err := disc.ServerVersion()
if err != nil {
return "", err
}
if info == nil || strings.TrimSpace(info.GitVersion) == "" {
return "", errors.New("server version is empty")
}
return normalizeKubeVersion(info.GitVersion), nil
}
type kubeVersion struct {
Major int
Minor int
Patch int
}
func parseKubeVersion(s string) (kubeVersion, error) {
s = strings.TrimSpace(s)
s = strings.TrimPrefix(s, "v")
var v kubeVersion
n, err := fmt.Sscanf(s, "%d.%d.%d", &v.Major, &v.Minor, &v.Patch)
// Accepts "1.29" or "1.29.3"
if err != nil || n < 2 {
return kubeVersion{}, fmt.Errorf("invalid kubernetes version %q", s)
}
return v, nil
}
// Control-plane: keep this strict.
// Accept same version, or a one-minor step where the node binary is newer than the current cluster.
// That covers normal control-plane upgrade flow but blocks nonsense.
func isSupportedControlPlaneSkew(clusterVersion, nodeVersion string) bool {
cv, err := parseKubeVersion(clusterVersion)
if err != nil {
return false
}
nv, err := parseKubeVersion(nodeVersion)
if err != nil {
return false
}
if cv.Major != nv.Major {
return false
}
if cv.Minor == nv.Minor {
return true
}
if nv.Minor == cv.Minor+1 {
return true
}
return false
}
// Worker: kubelet generally must not be newer than the apiserver.
// Older kubelets are allowed within supported skew range.
// Your requirement says unsupported worker skew should still proceed, so this
// only classifies support status and must NOT be used to block this function.
func isSupportedWorkerSkew(clusterVersion, nodeVersion string) bool {
cv, err := parseKubeVersion(clusterVersion)
if err != nil {
return false
}
nv, err := parseKubeVersion(nodeVersion)
if err != nil {
return false
}
if cv.Major != nv.Major {
return false
}
// kubelet newer than apiserver => unsupported
if nv.Minor > cv.Minor {
return false
}
// kubelet up to 3 minors older than apiserver => supported
if cv.Minor-nv.Minor <= 3 {
return true
}
return false
}
func ValidateRequiredImagesPresent(ctx context.Context, n *NodeContext) error {
if n.Config.Spec.SkipImageCheck {
klog.Infof("skipping image check (skipImageCheck=true)")
@@ -419,31 +322,6 @@ func checkImagePresent(ctx context.Context, n *NodeContext, image string) error
return nil
}
func chooseVersionKubeconfig(state *LocalClusterState) string {
if state.HasAdminKubeconfig {
return adminKubeconfigPath
}
if state.HasKubeletKubeconfig {
return kubeletKubeconfigPath
}
return ""
}
func versionEq(a, b string) bool {
return normalizeKubeVersion(a) == normalizeKubeVersion(b)
}
func normalizeKubeVersion(v string) string {
v = strings.TrimSpace(v)
if v == "" {
return ""
}
if !strings.HasPrefix(v, "v") {
v = "v" + v
}
return v
}
func buildNodeRegistration(spec monov1alpha1.MonoKSConfigSpec) NodeRegistrationOptions {
nodeName := strings.TrimSpace(spec.NodeName)
criSocket := strings.TrimSpace(spec.ContainerRuntimeEndpoint)
@@ -781,11 +659,6 @@ func RunKubeadmJoin(ctx context.Context, nctx *NodeContext) error {
return nil
}
func RunKubeadmUpgradeNode(context.Context, *NodeContext) error {
klog.Info("run_kubeadm_upgrade_node: TODO implement kubeadm upgrade node")
return nil
}
func ReconcileControlPlane(ctx context.Context, nctx *NodeContext) error {
if nctx.BootstrapState == nil {
return errors.New("BootstrapState is nil, call ClassifyBootstrapAction() first")

View File

@@ -0,0 +1,108 @@
package node
import (
"context"
"errors"
"fmt"
"os"
"strings"
"k8s.io/klog/v2"
"example.com/monok8s/pkg/system"
)
const kubeadmUpgradeNodeHostnameBugFixedIn = "v1.35.0"
// COMPAT(kubeadm-upgrade-node-hostname)
// Affects: Kubernetes/kubeadm < v1.35.0
// Upstream: kubernetes/kubeadm#3244, kubernetes/kubernetes#134319
// RemoveWhen: minimum supported Kubernetes version >= v1.35.0
//
// Affected kubeadm versions can derive the target Node name for
// `kubeadm upgrade node` from the local OS hostname instead of the existing
// kubeadm NodeRegistration / kubelet --hostname-override state.
func needsKubeadmUpgradeNodeHostnameWorkaround(kubeadmVersion string) bool {
lt, err := versionLt(kubeadmVersion, kubeadmUpgradeNodeHostnameBugFixedIn)
if err != nil {
klog.Warningf(
"could not parse kubeadm version %q; enabling kubeadm upgrade node hostname workaround: %v",
kubeadmVersion,
err,
)
return true
}
return lt
}
// runWithTemporaryHostname works around kubernetes/kubeadm#3244, fixed by
// kubernetes/kubernetes#134319 in Kubernetes v1.35.0.
//
// Affected kubeadm versions can derive the target Node name for
// `kubeadm upgrade node` from the local OS hostname instead of the existing
// kubeadm NodeRegistration / kubelet --hostname-override state. That breaks
// valid setups where the machine hostname differs from the Kubernetes Node
// name: kubeadm may authenticate as one node but try to get/patch another Node,
// and the Node authorizer correctly rejects it.
//
// Keep this workaround scoped to affected kubeadm versions only. Set the
// temporary hostname to the Kubernetes Node name, run kubeadm, then restore the
// configured machine hostname immediately afterward.
func runWithTemporaryHostname(ctx context.Context, nctx *NodeContext, fn func(context.Context) error) error {
if nctx == nil {
return errors.New("node context is nil")
}
temporaryHostname := strings.TrimSpace(nctx.Config.Spec.NodeName)
if temporaryHostname == "" {
return errors.New("temporary hostname is required")
}
originalHostname, err := os.Hostname()
if err != nil {
return fmt.Errorf("get current hostname: %w", err)
}
if originalHostname == temporaryHostname {
return fn(ctx)
}
restoreHostname := strings.TrimSpace(nctx.Config.Spec.Network.Hostname)
if restoreHostname == "" {
restoreHostname = originalHostname
}
klog.Warningf(
"temporarily changing hostname for kubeadm upgrade node: current=%q temporary=%q restore=%q",
originalHostname,
temporaryHostname,
restoreHostname,
)
if err := system.SetHostname(temporaryHostname); err != nil {
return fmt.Errorf("set temporary hostname to %q: %w", temporaryHostname, err)
}
defer func() {
if err := system.SetHostname(restoreHostname); err != nil {
klog.Errorf("failed to restore hostname to %q: %v", restoreHostname, err)
}
}()
return fn(ctx)
}
// COMPAT(kubeadm-upgrade-node-hostname)
// RemoveWhen: minimum supported Kubernetes version >= v1.35.0
func runKubeadmUpgradeNodeWithCompat(
ctx context.Context,
nctx *NodeContext,
kubeadmVersion string,
fn func(context.Context) error,
) error {
if needsKubeadmUpgradeNodeHostnameWorkaround(kubeadmVersion) {
return runWithTemporaryHostname(ctx, nctx, fn)
}
return fn(ctx)
}
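The version gate can be exercised standalone. The minimal `versionLt` below re-implements just enough of the project's comparison helper to show the fail-open behavior (an unparseable version enables the workaround); names here are illustrative stand-ins:

```go
package main

import (
	"fmt"
	"strings"
)

// versionLt reports whether a < b for "vMAJOR.MINOR.PATCH" strings.
// Simplified stand-in for the project's versionLt helper.
func versionLt(a, b string) (bool, error) {
	parse := func(s string) (maj, min, pat int, err error) {
		s = strings.TrimPrefix(strings.TrimSpace(s), "v")
		n, _ := fmt.Sscanf(s, "%d.%d.%d", &maj, &min, &pat)
		if n < 2 {
			return 0, 0, 0, fmt.Errorf("invalid version %q", s)
		}
		return maj, min, pat, nil
	}
	am, an, ap, err := parse(a)
	if err != nil {
		return false, err
	}
	bm, bn, bp, err := parse(b)
	if err != nil {
		return false, err
	}
	if am != bm {
		return am < bm, nil
	}
	if an != bn {
		return an < bn, nil
	}
	return ap < bp, nil
}

// needsWorkaround mirrors needsKubeadmUpgradeNodeHostnameWorkaround:
// versions below the fix threshold, or unknown versions, enable it.
func needsWorkaround(kubeadmVersion string) bool {
	lt, err := versionLt(kubeadmVersion, "v1.35.0")
	if err != nil {
		return true // fail open: apply the workaround when in doubt
	}
	return lt
}

func main() {
	fmt.Println(needsWorkaround("v1.34.6")) // true: affected version
	fmt.Println(needsWorkaround("v1.35.0")) // false: fix shipped
	fmt.Println(needsWorkaround("garbage")) // true: fail open
}
```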

View File

@@ -257,3 +257,102 @@ func describeHealthCheckFailure(ctx context.Context, kubeClient kubernetes.Inter
return nil
}
func RunKubeadmUpgradeNode(ctx context.Context, nctx *NodeContext) error {
if nctx == nil {
return errors.New("node context is nil")
}
if nctx.Config == nil {
return errors.New("node config is nil")
}
if nctx.LocalClusterState == nil {
return errors.New("LocalClusterState is nil. Please run earlier steps first")
}
if nctx.BootstrapState == nil {
return errors.New("BootstrapState is nil. Please run earlier steps first")
}
switch nctx.BootstrapState.Action {
case BootstrapActionUpgradeWorker:
// continue
default:
klog.V(4).Infof("RunKubeadmUpgradeNode skipped for action %q", nctx.BootstrapState.Action)
return nil
}
wantVersion := normalizeKubeVersion(strings.TrimSpace(nctx.Config.Spec.KubernetesVersion))
if wantVersion == "" {
return errors.New("spec.kubernetesVersion is required")
}
kubeconfigPath := chooseVersionKubeconfig(nctx.LocalClusterState)
if kubeconfigPath == "" {
return errors.New("no kubeconfig available for detecting cluster version")
}
clusterVersion := strings.TrimSpace(nctx.BootstrapState.DetectedClusterVersion)
if clusterVersion == "" {
var err error
clusterVersion, err = getServerVersion(ctx, kubeconfigPath)
if err != nil {
if nctx.BootstrapState.UnsupportedWorkerVersionSkew {
klog.Warningf(
"cluster version unavailable but worker skew was marked unsupported/permissive, continuing: reason=%s",
nctx.BootstrapState.VersionSkewReason,
)
} else {
return fmt.Errorf("get cluster version via %s: %w", kubeconfigPath, err)
}
}
}
if clusterVersion != "" && !isSupportedWorkerSkew(clusterVersion, wantVersion) {
klog.Warningf(
"unsupported worker version skew detected, continuing anyway: cluster=%s node=%s",
clusterVersion,
wantVersion,
)
}
klog.Infof(
"running kubeadm upgrade node: role=%s clusterVersion=%s nodeVersion=%s kubeconfig=%s",
strings.TrimSpace(nctx.Config.Spec.ClusterRole),
clusterVersion,
wantVersion,
kubeconfigPath,
)
args := []string{
"upgrade",
"node",
"--kubeconfig",
kubeconfigPath,
}
runKubeadm := func(ctx context.Context) error {
_, err := nctx.SystemRunner.RunWithOptions(
ctx,
"kubeadm",
args,
system.RunOptions{
Timeout: 10 * time.Minute,
OnStdoutLine: func(line string) {
klog.Infof("[kubeadm] %s", line)
},
OnStderrLine: func(line string) {
klog.Infof("[kubeadm] %s", line)
},
},
)
return err
}
// COMPAT(kubeadm-upgrade-node-hostname)
// RemoveWhen: minimum supported Kubernetes version >= v1.35.0
// Replace this wrapper with direct runKubeadm(ctx).
if err := runKubeadmUpgradeNodeWithCompat(ctx, nctx, wantVersion, runKubeadm); err != nil {
return fmt.Errorf("run kubeadm upgrade node: %w", err)
}
return nil
}

View File

@@ -8,9 +8,18 @@ import (
"strings"
"time"
"k8s.io/client-go/discovery"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/klog/v2"
)
type kubeVersion struct {
Major int
Minor int
Patch int
}
func ValidateNodeIPAndAPIServerReachability(ctx context.Context, nct *NodeContext) error {
requireLocalIP := func(wantedIP string) error {
wantedIP = strings.TrimSpace(wantedIP)
@@ -189,3 +198,136 @@ func CheckForVersionSkew(ctx context.Context, nctx *NodeContext) error {
return nil
}
func versionEq(a, b string) bool {
return normalizeKubeVersion(a) == normalizeKubeVersion(b)
}
func versionLt(a, b string) (bool, error) {
av, err := parseKubeVersion(a)
if err != nil {
return false, err
}
bv, err := parseKubeVersion(b)
if err != nil {
return false, err
}
if av.Major != bv.Major {
return av.Major < bv.Major, nil
}
if av.Minor != bv.Minor {
return av.Minor < bv.Minor, nil
}
return av.Patch < bv.Patch, nil
}
func normalizeKubeVersion(v string) string {
v = strings.TrimSpace(v)
if v == "" {
return ""
}
if !strings.HasPrefix(v, "v") {
v = "v" + v
}
return v
}
func parseKubeVersion(s string) (kubeVersion, error) {
s = strings.TrimSpace(s)
s = strings.TrimPrefix(s, "v")
var v kubeVersion
n, _ := fmt.Sscanf(s, "%d.%d.%d", &v.Major, &v.Minor, &v.Patch)
// Accepts "1.29" (patch defaults to 0) or "1.29.3". Sscanf reports a
// non-nil error for the two-part form, so only the scanned count matters.
if n < 2 {
return kubeVersion{}, fmt.Errorf("invalid kubernetes version %q", s)
}
return v, nil
}
// Control-plane: keep this strict.
// Accept same version, or a one-minor step where the node binary is newer than the current cluster.
// That covers normal control-plane upgrade flow but blocks nonsense.
func isSupportedControlPlaneSkew(clusterVersion, nodeVersion string) bool {
cv, err := parseKubeVersion(clusterVersion)
if err != nil {
return false
}
nv, err := parseKubeVersion(nodeVersion)
if err != nil {
return false
}
if cv.Major != nv.Major {
return false
}
if cv.Minor == nv.Minor {
return true
}
if nv.Minor == cv.Minor+1 {
return true
}
return false
}
// Worker: kubelet generally must not be newer than the apiserver.
// Older kubelets are allowed within the supported skew range.
// By design, an unsupported worker skew should still proceed, so this helper
// only classifies support status and must NOT be used to block the upgrade.
func isSupportedWorkerSkew(clusterVersion, nodeVersion string) bool {
cv, err := parseKubeVersion(clusterVersion)
if err != nil {
return false
}
nv, err := parseKubeVersion(nodeVersion)
if err != nil {
return false
}
if cv.Major != nv.Major {
return false
}
// kubelet newer than apiserver => unsupported
if nv.Minor > cv.Minor {
return false
}
// kubelet up to 3 minors older than apiserver => supported
if cv.Minor-nv.Minor <= 3 {
return true
}
return false
}
func getServerVersion(ctx context.Context, kubeconfigPath string) (string, error) {
restCfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
if err != nil {
return "", fmt.Errorf("build kubeconfig %s: %w", kubeconfigPath, err)
}
// Keep this short. This is a probe, not a long-running client.
restCfg.Timeout = 5 * time.Second
clientset, err := kubernetes.NewForConfig(restCfg)
if err != nil {
return "", fmt.Errorf("create clientset: %w", err)
}
disc := clientset.Discovery()
return discoverServerVersion(ctx, disc)
}
func discoverServerVersion(ctx context.Context, disc discovery.DiscoveryInterface) (string, error) {
info, err := disc.ServerVersion()
if err != nil {
return "", err
}
if info == nil || strings.TrimSpace(info.GitVersion) == "" {
return "", errors.New("server version is empty")
}
return normalizeKubeVersion(info.GitVersion), nil
}
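The two skew policies above can be demonstrated with the upgrade chain from this image (1.33 -> 1.34 -> 1.35). A self-contained sketch with an inline parser so it runs standalone; only major/minor matter for skew:

```go
package main

import (
	"fmt"
	"strings"
)

type ver struct{ major, minor int }

// parse extracts "MAJOR.MINOR" from a "vX.Y[.Z]" string.
func parse(s string) (ver, bool) {
	var v ver
	s = strings.TrimPrefix(strings.TrimSpace(s), "v")
	n, _ := fmt.Sscanf(s, "%d.%d", &v.major, &v.minor)
	return v, n == 2
}

// controlPlaneSkewOK: same minor, or the node exactly one minor newer
// than the cluster (the normal control-plane upgrade step).
func controlPlaneSkewOK(cluster, node string) bool {
	cv, ok1 := parse(cluster)
	nv, ok2 := parse(node)
	if !ok1 || !ok2 || cv.major != nv.major {
		return false
	}
	return nv.minor == cv.minor || nv.minor == cv.minor+1
}

// workerSkewOK: kubelet never newer than the apiserver, and at most
// three minors older.
func workerSkewOK(cluster, node string) bool {
	cv, ok1 := parse(cluster)
	nv, ok2 := parse(node)
	if !ok1 || !ok2 || cv.major != nv.major {
		return false
	}
	return nv.minor <= cv.minor && cv.minor-nv.minor <= 3
}

func main() {
	fmt.Println(controlPlaneSkewOK("v1.33.10", "v1.34.6")) // true: one-minor upgrade
	fmt.Println(controlPlaneSkewOK("v1.33.10", "v1.35.3")) // false: two-minor jump
	fmt.Println(workerSkewOK("v1.34.6", "v1.31.0"))        // true: 3 minors older
	fmt.Println(workerSkewOK("v1.34.6", "v1.35.0"))        // false: kubelet newer
}
```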

View File

@@ -0,0 +1,284 @@
package render
import (
"fmt"
"strings"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
monov1alpha1 "example.com/monok8s/pkg/apis/monok8s/v1alpha1"
buildinfo "example.com/monok8s/pkg/buildinfo"
)
type AgentConf struct {
Namespace string
Image string
ImagePullSecrets []string
Labels map[string]string
}
func RenderAgentDaemonSets(conf AgentConf) (string, error) {
objs, err := buildAgentDaemonSetObjects(conf)
if err != nil {
return "", err
}
return renderObjects(objs)
}
func buildAgentDaemonSetObjects(conf AgentConf) ([]runtime.Object, error) {
if strings.TrimSpace(conf.Namespace) == "" {
return nil, fmt.Errorf("namespace is required")
}
conf.Labels = map[string]string{
"app.kubernetes.io/name": monov1alpha1.NodeAgentName,
"app.kubernetes.io/component": "agent",
"app.kubernetes.io/part-of": "monok8s",
"app.kubernetes.io/managed-by": monov1alpha1.NodeControlName,
}
return []runtime.Object{
buildAgentServiceAccount(conf),
buildAgentClusterRole(conf),
buildAgentClusterRoleBinding(conf),
buildAgentDaemonSet(conf),
}, nil
}
func buildAgentNamespace(conf AgentConf) *corev1.Namespace {
return &corev1.Namespace{
TypeMeta: metav1.TypeMeta{
APIVersion: "v1",
Kind: "Namespace",
},
ObjectMeta: metav1.ObjectMeta{
Name: conf.Namespace,
Labels: copyStringMap(conf.Labels),
},
}
}
func buildAgentServiceAccount(conf AgentConf) *corev1.ServiceAccount {
return &corev1.ServiceAccount{
TypeMeta: metav1.TypeMeta{
APIVersion: "v1",
Kind: "ServiceAccount",
},
ObjectMeta: metav1.ObjectMeta{
Name: monov1alpha1.NodeAgentName,
Namespace: conf.Namespace,
Labels: copyStringMap(conf.Labels),
},
}
}
func buildAgentClusterRole(conf AgentConf) *rbacv1.ClusterRole {
wantRules := []rbacv1.PolicyRule{
{
APIGroups: []string{monov1alpha1.Group},
Resources: []string{"osupgrades"},
Verbs: []string{"get"},
},
{
APIGroups: []string{monov1alpha1.Group},
Resources: []string{"osupgradeprogresses"},
Verbs: []string{"get", "list", "watch", "create", "patch", "update"},
},
{
APIGroups: []string{monov1alpha1.Group},
Resources: []string{"osupgradeprogresses/status"},
Verbs: []string{"get", "list", "watch", "create", "patch", "update"},
},
{
APIGroups: []string{""},
Resources: []string{"nodes"},
Verbs: []string{"get", "list", "watch"},
},
}
return &rbacv1.ClusterRole{
TypeMeta: metav1.TypeMeta{
APIVersion: "rbac.authorization.k8s.io/v1",
Kind: "ClusterRole",
},
ObjectMeta: metav1.ObjectMeta{
Name: monov1alpha1.NodeAgentName,
Labels: copyStringMap(conf.Labels),
},
Rules: wantRules,
}
}
func buildAgentClusterRoleBinding(conf AgentConf) *rbacv1.ClusterRoleBinding {
return &rbacv1.ClusterRoleBinding{
TypeMeta: metav1.TypeMeta{
APIVersion: "rbac.authorization.k8s.io/v1",
Kind: "ClusterRoleBinding",
},
ObjectMeta: metav1.ObjectMeta{
Name: monov1alpha1.NodeAgentName,
Labels: copyStringMap(conf.Labels),
},
RoleRef: rbacv1.RoleRef{
APIGroup: rbacv1.GroupName,
Kind: "ClusterRole",
Name: monov1alpha1.NodeAgentName,
},
Subjects: []rbacv1.Subject{
{
Kind: "ServiceAccount",
Name: monov1alpha1.NodeAgentName,
Namespace: conf.Namespace,
},
},
}
}
func buildAgentDaemonSet(conf AgentConf) *appsv1.DaemonSet {
privileged := true
dsLabels := monov1alpha1.NodeAgentLabels()
image, pullPolicy := agentImage(conf)
return &appsv1.DaemonSet{
TypeMeta: metav1.TypeMeta{
APIVersion: "apps/v1",
Kind: "DaemonSet",
},
ObjectMeta: metav1.ObjectMeta{
Name: monov1alpha1.NodeAgentName,
Namespace: conf.Namespace,
Labels: copyStringMap(conf.Labels),
},
Spec: appsv1.DaemonSetSpec{
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"app.kubernetes.io/name": monov1alpha1.NodeAgentName,
},
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: dsLabels,
},
Spec: corev1.PodSpec{
ServiceAccountName: monov1alpha1.NodeAgentName,
HostNetwork: true,
HostPID: true,
DNSPolicy: corev1.DNSClusterFirstWithHostNet,
ImagePullSecrets: imagePullSecrets(conf.ImagePullSecrets),
NodeSelector: map[string]string{
monov1alpha1.NodeControlKey: "true",
},
Tolerations: []corev1.Toleration{
{Operator: corev1.TolerationOpExists},
},
Containers: []corev1.Container{
{
Name: "agent",
Image: image,
ImagePullPolicy: pullPolicy,
Args: []string{"agent", "--env-file", "$(CLUSTER_ENV_FILE)"},
Env: []corev1.EnvVar{
{
Name: "NODE_NAME",
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
APIVersion: "v1",
FieldPath: "spec.nodeName",
},
},
},
{
Name: "CLUSTER_ENV_FILE",
Value: "/host/opt/monok8s/config/cluster.env",
},
{
Name: "FW_ENV_CONFIG_FILE",
Value: "/host/etc/fw_env.config",
},
},
SecurityContext: &corev1.SecurityContext{
Privileged: &privileged,
},
VolumeMounts: []corev1.VolumeMount{
{
Name: "host-dev",
MountPath: "/dev",
},
{
Name: "host-etc",
MountPath: "/host/etc",
ReadOnly: true,
},
{
Name: "host-config",
MountPath: "/host/opt/monok8s/config",
ReadOnly: true,
},
},
},
},
Volumes: []corev1.Volume{
{
Name: "host-dev",
VolumeSource: corev1.VolumeSource{
HostPath: &corev1.HostPathVolumeSource{
Path: "/dev",
Type: hostPathType(corev1.HostPathDirectory),
},
},
},
{
Name: "host-etc",
VolumeSource: corev1.VolumeSource{
HostPath: &corev1.HostPathVolumeSource{
Path: "/etc",
Type: hostPathType(corev1.HostPathDirectory),
},
},
},
{
Name: "host-config",
VolumeSource: corev1.VolumeSource{
HostPath: &corev1.HostPathVolumeSource{
Path: "/opt/monok8s/config",
Type: hostPathType(corev1.HostPathDirectory),
},
},
},
},
},
},
},
}
}
func agentImage(conf AgentConf) (string, corev1.PullPolicy) {
if conf.Image != "" {
return conf.Image, corev1.PullIfNotPresent
}
return fmt.Sprintf("localhost/monok8s/node-control:%s", buildinfo.Version), corev1.PullNever
}
func copyStringMap(in map[string]string) map[string]string {
if len(in) == 0 {
return nil
}
out := make(map[string]string, len(in))
for k, v := range in {
out[k] = v
}
return out
}
func hostPathType(t corev1.HostPathType) *corev1.HostPathType {
return &t
}
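The two tiny helpers above exist for mundane Go reasons: maps are reference types, so labels must be copied before being attached to several objects, and Kubernetes API fields like `HostPathVolumeSource.Type` are pointers, so a constant has to be laundered through a variable. A dependency-free sketch of both (names like `copyStringMapSketch` and `ptrOf` are illustrative, not part of the package):

```go
package main

import "fmt"

// copyStringMapSketch mirrors copyStringMap above: it returns nil for empty
// input so object labels stay unset rather than becoming an empty map.
func copyStringMapSketch(in map[string]string) map[string]string {
	if len(in) == 0 {
		return nil
	}
	out := make(map[string]string, len(in))
	for k, v := range in {
		out[k] = v
	}
	return out
}

// ptrOf mirrors hostPathType: taking the address of a parameter is the
// idiomatic way to build the *T the Kubernetes API types expect.
func ptrOf[T any](t T) *T { return &t }

func main() {
	src := map[string]string{"app.kubernetes.io/name": "node-agent"}
	dst := copyStringMapSketch(src)
	dst["mutated"] = "true"
	fmt.Println(len(src), len(dst)) // the copy does not alias the source
	fmt.Println(copyStringMapSketch(nil) == nil)
	fmt.Println(*ptrOf("Directory"))
}
```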


@@ -0,0 +1,203 @@
package render
import (
"context"
"fmt"
"reflect"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes"
)
func ApplyAgentDaemonSets(ctx context.Context, kubeClient kubernetes.Interface, conf AgentConf) error {
objs, err := buildAgentDaemonSetObjects(conf)
if err != nil {
return err
}
if err := applyAgentNamespace(ctx, kubeClient, buildAgentNamespace(conf)); err != nil {
return fmt.Errorf("apply namespace: %w", err)
}
for _, obj := range objs {
if err := applyAgentObject(ctx, kubeClient, obj); err != nil {
return err
}
}
return nil
}
func applyAgentObject(ctx context.Context, kubeClient kubernetes.Interface, obj runtime.Object) error {
switch want := obj.(type) {
case *corev1.ServiceAccount:
return applyAgentServiceAccount(ctx, kubeClient, want)
case *rbacv1.ClusterRole:
return applyAgentClusterRole(ctx, kubeClient, want)
case *rbacv1.ClusterRoleBinding:
return applyAgentClusterRoleBinding(ctx, kubeClient, want)
case *appsv1.DaemonSet:
return applyAgentDaemonSet(ctx, kubeClient, want)
default:
return fmt.Errorf("unsupported agent object type %T", obj)
}
}
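The per-type helpers that `applyAgentObject` dispatches to all share one shape: get, create on not-found, otherwise compare and update only if something changed. A toy sketch of that pattern against an in-memory store (`store` and `errNotFound` are stand-ins for the API server and `apierrors.IsNotFound`, not real client-go):

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for apierrors.IsNotFound in this sketch.
var errNotFound = errors.New("not found")

// store is a toy object store keyed by name.
type store struct{ objs map[string]string }

func (s *store) get(name string) (string, error) {
	v, ok := s.objs[name]
	if !ok {
		return "", errNotFound
	}
	return v, nil
}

// createOrUpdate mirrors the apply* helpers: create on not-found, otherwise
// update only when the stored object actually differs.
func createOrUpdate(s *store, name, want string) (action string, err error) {
	existing, err := s.get(name)
	if errors.Is(err, errNotFound) {
		s.objs[name] = want
		return "created", nil
	}
	if err != nil {
		return "", err
	}
	if existing == want {
		return "unchanged", nil
	}
	s.objs[name] = want
	return "updated", nil
}

func main() {
	s := &store{objs: map[string]string{}}
	a1, _ := createOrUpdate(s, "cr", "v1")
	a2, _ := createOrUpdate(s, "cr", "v1")
	a3, _ := createOrUpdate(s, "cr", "v2")
	fmt.Println(a1, a2, a3) // created unchanged updated
}
```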
func applyAgentNamespace(ctx context.Context, kubeClient kubernetes.Interface, want *corev1.Namespace) error {
existing, err := kubeClient.CoreV1().Namespaces().Get(ctx, want.Name, metav1.GetOptions{})
if apierrors.IsNotFound(err) {
_, err = kubeClient.CoreV1().Namespaces().Create(ctx, want, metav1.CreateOptions{})
return err
}
if err != nil {
return err
}
labels, changed := mergeStringMapsInto(existing.Labels, want.Labels)
if !changed {
return nil
}
existing.Labels = labels
_, err = kubeClient.CoreV1().Namespaces().Update(ctx, existing, metav1.UpdateOptions{})
return err
}
func applyAgentServiceAccount(ctx context.Context, kubeClient kubernetes.Interface, want *corev1.ServiceAccount) error {
existing, err := kubeClient.CoreV1().ServiceAccounts(want.Namespace).Get(ctx, want.Name, metav1.GetOptions{})
if apierrors.IsNotFound(err) {
_, err = kubeClient.CoreV1().ServiceAccounts(want.Namespace).Create(ctx, want, metav1.CreateOptions{})
return err
}
if err != nil {
return err
}
changed := false
if !reflect.DeepEqual(existing.Labels, want.Labels) {
existing.Labels = want.Labels
changed = true
}
if !changed {
return nil
}
_, err = kubeClient.CoreV1().ServiceAccounts(want.Namespace).Update(ctx, existing, metav1.UpdateOptions{})
return err
}
func applyAgentClusterRole(ctx context.Context, kubeClient kubernetes.Interface, want *rbacv1.ClusterRole) error {
existing, err := kubeClient.RbacV1().ClusterRoles().Get(ctx, want.Name, metav1.GetOptions{})
if apierrors.IsNotFound(err) {
_, err = kubeClient.RbacV1().ClusterRoles().Create(ctx, want, metav1.CreateOptions{})
return err
}
if err != nil {
return err
}
changed := false
if !reflect.DeepEqual(existing.Labels, want.Labels) {
existing.Labels = want.Labels
changed = true
}
if !reflect.DeepEqual(existing.Rules, want.Rules) {
existing.Rules = want.Rules
changed = true
}
if !changed {
return nil
}
_, err = kubeClient.RbacV1().ClusterRoles().Update(ctx, existing, metav1.UpdateOptions{})
return err
}
func applyAgentClusterRoleBinding(ctx context.Context, kubeClient kubernetes.Interface, want *rbacv1.ClusterRoleBinding) error {
existing, err := kubeClient.RbacV1().ClusterRoleBindings().Get(ctx, want.Name, metav1.GetOptions{})
if apierrors.IsNotFound(err) {
_, err = kubeClient.RbacV1().ClusterRoleBindings().Create(ctx, want, metav1.CreateOptions{})
return err
}
if err != nil {
return err
}
// roleRef is immutable. If it differs, fail loudly instead of pretending we can patch it.
if !reflect.DeepEqual(existing.RoleRef, want.RoleRef) {
return fmt.Errorf("existing ClusterRoleBinding %q has different roleRef and must be recreated", want.Name)
}
changed := false
if !reflect.DeepEqual(existing.Labels, want.Labels) {
existing.Labels = want.Labels
changed = true
}
if !reflect.DeepEqual(existing.Subjects, want.Subjects) {
existing.Subjects = want.Subjects
changed = true
}
if !changed {
return nil
}
_, err = kubeClient.RbacV1().ClusterRoleBindings().Update(ctx, existing, metav1.UpdateOptions{})
return err
}
func applyAgentDaemonSet(ctx context.Context, kubeClient kubernetes.Interface, want *appsv1.DaemonSet) error {
existing, err := kubeClient.AppsV1().DaemonSets(want.Namespace).Get(ctx, want.Name, metav1.GetOptions{})
if apierrors.IsNotFound(err) {
_, err = kubeClient.AppsV1().DaemonSets(want.Namespace).Create(ctx, want, metav1.CreateOptions{})
return err
}
if err != nil {
return err
}
changed := false
if !reflect.DeepEqual(existing.Labels, want.Labels) {
existing.Labels = want.Labels
changed = true
}
if !reflect.DeepEqual(existing.Spec, want.Spec) {
existing.Spec = want.Spec
changed = true
}
if !changed {
return nil
}
_, err = kubeClient.AppsV1().DaemonSets(want.Namespace).Update(ctx, existing, metav1.UpdateOptions{})
return err
}
func mergeStringMapsInto(dst map[string]string, src map[string]string) (map[string]string, bool) {
if len(src) == 0 {
return dst, false
}
changed := false
if dst == nil {
dst = map[string]string{}
changed = true
}
for k, v := range src {
if dst[k] != v {
dst[k] = v
changed = true
}
}
return dst, changed
}
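`mergeStringMapsInto` deliberately merges rather than replaces, so labels already present on a shared object like the namespace survive an apply. A standalone sketch of the same semantics:

```go
package main

import "fmt"

// mergeInto reproduces mergeStringMapsInto's semantics: copy src keys into
// dst, report whether anything changed, and never delete dst-only keys.
func mergeInto(dst, src map[string]string) (map[string]string, bool) {
	if len(src) == 0 {
		return dst, false
	}
	changed := false
	if dst == nil {
		dst = map[string]string{}
		changed = true
	}
	for k, v := range src {
		if dst[k] != v {
			dst[k] = v
			changed = true
		}
	}
	return dst, changed
}

func main() {
	existing := map[string]string{"team": "platform"}
	want := map[string]string{"app.kubernetes.io/part-of": "monok8s"}

	merged, changed := mergeInto(existing, want)
	fmt.Println(changed, merged["team"]) // unrelated labels survive the merge

	_, changed = mergeInto(merged, want) // second pass is a no-op
	fmt.Println(changed)
}
```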


@@ -1,7 +1,6 @@
package render
import (
"bytes"
"fmt"
appsv1 "k8s.io/api/apps/v1"
@@ -9,7 +8,6 @@ import (
rbacv1 "k8s.io/api/rbac/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/serializer/json"
"k8s.io/apimachinery/pkg/util/intstr"
monov1alpha1 "example.com/monok8s/pkg/apis/monok8s/v1alpha1"
@@ -17,9 +15,10 @@ import (
)
type ControllerConf struct {
Namespace string
Image string
Labels map[string]string
Namespace string
Image string
ImagePullSecrets []string
Labels map[string]string
}
func RenderControllerDeployments(conf ControllerConf) (string, error) {
@@ -41,27 +40,7 @@ func RenderControllerDeployments(conf ControllerConf) (string, error) {
buildControllerDeployment(conf),
}
s := runtime.NewScheme()
_ = corev1.AddToScheme(s)
_ = rbacv1.AddToScheme(s)
_ = appsv1.AddToScheme(s)
serializer := json.NewYAMLSerializer(json.DefaultMetaFactory, s, s)
var buf bytes.Buffer
for i, obj := range objs {
if i > 0 {
if _, err := fmt.Fprintln(&buf, "---"); err != nil {
return "", err
}
}
if err := serializer.Encode(obj, &buf); err != nil {
return "", err
}
}
return buf.String(), nil
return renderObjects(objs)
}
func buildControllerServiceAccount(conf ControllerConf) *corev1.ServiceAccount {
@@ -191,6 +170,7 @@ func buildControllerDeployment(conf ControllerConf) *appsv1.Deployment {
},
Spec: corev1.PodSpec{
ServiceAccountName: monov1alpha1.ControllerName,
ImagePullSecrets: imagePullSecrets(conf.ImagePullSecrets),
Containers: []corev1.Container{
{
Name: "controller",


@@ -0,0 +1,74 @@
package render
import (
"bytes"
"fmt"
"strings"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/runtime"
"sigs.k8s.io/yaml"
)
func renderObjects(objs []runtime.Object) (string, error) {
var buf bytes.Buffer
for i, obj := range objs {
if i > 0 {
if _, err := fmt.Fprintln(&buf, "---"); err != nil {
return "", err
}
}
b, err := renderObjectYAML(obj)
if err != nil {
return "", err
}
if _, err := buf.Write(b); err != nil {
return "", err
}
}
return buf.String(), nil
}
func renderObjectYAML(obj runtime.Object) ([]byte, error) {
b, err := yaml.Marshal(obj)
if err != nil {
return nil, err
}
var m map[string]any
if err := yaml.Unmarshal(b, &m); err != nil {
return nil, err
}
delete(m, "status")
return yaml.Marshal(m)
}
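`renderObjectYAML` round-trips through a generic map purely to drop the `status` stanza that typed objects always serialize. The same trick, sketched with stdlib `encoding/json` instead of `sigs.k8s.io/yaml` to keep it dependency-free:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// stripStatus marshals the object, reparses it into a generic map, deletes
// the always-emitted "status" key, and re-serializes.
func stripStatus(obj any) (string, error) {
	b, err := json.Marshal(obj)
	if err != nil {
		return "", err
	}
	var m map[string]any
	if err := json.Unmarshal(b, &m); err != nil {
		return "", err
	}
	delete(m, "status")
	out, err := json.Marshal(m)
	return string(out), err
}

func main() {
	// A stand-in for a typed API object whose zero-valued Status would
	// otherwise serialize as an empty stanza.
	obj := map[string]any{
		"kind":   "DaemonSet",
		"spec":   map[string]any{"minReadySeconds": 0},
		"status": map[string]any{},
	}
	s, err := stripStatus(obj)
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
}
```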
func imagePullSecrets(names []string) []corev1.LocalObjectReference {
if len(names) == 0 {
return nil
}
refs := make([]corev1.LocalObjectReference, 0, len(names))
for _, name := range names {
name = strings.TrimSpace(name)
if name == "" {
continue
}
refs = append(refs, corev1.LocalObjectReference{
Name: name,
})
}
if len(refs) == 0 {
return nil
}
return refs
}
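`imagePullSecrets` returns nil rather than an empty slice so the field is omitted from rendered manifests entirely. A sketch of the same filtering with a local stand-in type (`localRef` is illustrative, not the real `corev1.LocalObjectReference`):

```go
package main

import (
	"fmt"
	"strings"
)

// localRef stands in for corev1.LocalObjectReference in this sketch.
type localRef struct{ Name string }

// pullSecretRefs mirrors imagePullSecrets: trim whitespace, drop empties,
// and return nil (not an empty slice) so the field is omitted.
func pullSecretRefs(names []string) []localRef {
	if len(names) == 0 {
		return nil
	}
	refs := make([]localRef, 0, len(names))
	for _, name := range names {
		name = strings.TrimSpace(name)
		if name == "" {
			continue
		}
		refs = append(refs, localRef{Name: name})
	}
	if len(refs) == 0 {
		return nil
	}
	return refs
}

func main() {
	fmt.Println(pullSecretRefs([]string{" regcred ", "", "  "}))
	fmt.Println(pullSecretRefs([]string{"", " "}) == nil)
}
```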


@@ -1,16 +1,11 @@
package render
import (
"bytes"
"fmt"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/serializer/json"
"k8s.io/apimachinery/pkg/util/intstr"
monov1alpha1 "example.com/monok8s/pkg/apis/monok8s/v1alpha1"
@@ -39,27 +34,7 @@ func RenderSSHDDeployments(namespace, authKeys string) (string, error) {
buildSSHDDeployment(vals, namespace, labels),
}
s := runtime.NewScheme()
_ = corev1.AddToScheme(s)
_ = rbacv1.AddToScheme(s)
_ = appsv1.AddToScheme(s)
serializer := json.NewYAMLSerializer(json.DefaultMetaFactory, s, s)
var buf bytes.Buffer
for i, obj := range objs {
if i > 0 {
if _, err := fmt.Fprintln(&buf, "---"); err != nil {
return "", err
}
}
if err := serializer.Encode(obj, &buf); err != nil {
return "", err
}
}
return buf.String(), nil
return renderObjects(objs)
}
func buildSSHDConfigMap(

devtools/create-join-token.sh Executable file

@@ -0,0 +1,196 @@
#!/bin/sh
set -eu
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
ROOT_DIR="$(realpath "$SCRIPT_DIR/..")"
LIB_DIR="$ROOT_DIR/scripts"
CLUSTER_ENV_WORK="${CLUSTER_ENV_WORK:-$ROOT_DIR/configs/cluster.env.work}"
KUBECTL="${KUBECTL:-kubectl}"
TTL_HOURS="${TTL_HOURS:-24}"
WAIT_SECONDS="${WAIT_SECONDS:-30}"
need() {
command -v "$1" >/dev/null 2>&1 || {
echo "missing required command: $1" >&2
exit 1
}
}
rfc3339_after_hours() {
hours="$1"
# GNU date
if date -u -d "+${hours} hours" '+%Y-%m-%dT%H:%M:%SZ' >/dev/null 2>&1; then
date -u -d "+${hours} hours" '+%Y-%m-%dT%H:%M:%SZ'
return
fi
# BSD/macOS date
if date -u -v+"${hours}"H '+%Y-%m-%dT%H:%M:%SZ' >/dev/null 2>&1; then
date -u -v+"${hours}"H '+%Y-%m-%dT%H:%M:%SZ'
return
fi
echo "cannot compute expiration time with this date(1). Set EXPIRATION manually." >&2
exit 1
}
decode_base64_to_file() {
input="$1"
output="$2"
if printf '%s' "$input" | base64 -d >"$output" 2>/dev/null; then
return
fi
if printf '%s' "$input" | base64 -D >"$output" 2>/dev/null; then
return
fi
if printf '%s' "$input" | openssl base64 -d -A >"$output" 2>/dev/null; then
return
fi
echo "failed to decode certificate-authority-data" >&2
exit 1
}
need "$KUBECTL"
need openssl
need awk
need sed
TOKEN_ID="${TOKEN_ID:-$(openssl rand -hex 3)}"
TOKEN_SECRET="${TOKEN_SECRET:-$(openssl rand -hex 8)}"
TOKEN="${TOKEN_ID}.${TOKEN_SECRET}"
SECRET_NAME="bootstrap-token-${TOKEN_ID}"
if [ "${TTL_HOURS}" = "0" ]; then
EXPIRATION=""
else
EXPIRATION="${EXPIRATION:-$(rfc3339_after_hours "$TTL_HOURS")}"
fi
echo "Creating bootstrap token Secret: ${SECRET_NAME}" >&2
{
cat <<EOF
apiVersion: v1
kind: Secret
metadata:
name: ${SECRET_NAME}
namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
description: "Join token created with kubectl"
token-id: "${TOKEN_ID}"
token-secret: "${TOKEN_SECRET}"
usage-bootstrap-authentication: "true"
usage-bootstrap-signing: "true"
auth-extra-groups: "system:bootstrappers:kubeadm:default-node-token"
EOF
if [ -n "$EXPIRATION" ]; then
printf ' expiration: "%s"\n' "$EXPIRATION"
fi
} | "$KUBECTL" apply -f -
TMPDIR="$(mktemp -d)"
trap 'rm -rf "$TMPDIR"' EXIT INT TERM
CA_FILE="$TMPDIR/ca.crt"
CA_DATA="$("$KUBECTL" config view --raw --minify --flatten -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')"
if [ -z "$CA_DATA" ]; then
echo "could not find certificate-authority-data in current kubeconfig" >&2
echo "token was created, but cannot print a safe kubeadm join command" >&2
echo "token: ${TOKEN}"
exit 0
fi
decode_base64_to_file "$CA_DATA" "$CA_FILE"
CA_HASH="$(
openssl x509 -in "$CA_FILE" -pubkey -noout |
openssl pkey -pubin -outform der 2>/dev/null |
openssl dgst -sha256 -hex |
awk '{print $2}'
)"
SERVER="$("$KUBECTL" config view --raw --minify -o jsonpath='{.clusters[0].cluster.server}')"
JOIN_ENDPOINT="$(printf '%s\n' "$SERVER" | sed -E 's#^https?://##')"
echo "Waiting for cluster-info signature for token ${TOKEN_ID}..." >&2
i=0
signed="false"
while [ "$i" -lt "$WAIT_SECONDS" ]; do
template="{{ index .data \"jws-kubeconfig-${TOKEN_ID}\" }}"
sig="$("$KUBECTL" -n kube-public get configmap cluster-info -o "go-template=${template}" 2>/dev/null || true)"
if [ -n "$sig" ]; then
signed="true"
break
fi
i=$((i + 1))
sleep 1
done
echo
echo "Token:"
echo " ${TOKEN}"
if [ -n "$EXPIRATION" ]; then
echo
echo "Expires:"
echo " ${EXPIRATION}"
fi
echo
echo "Join command:"
echo " kubeadm join ${JOIN_ENDPOINT} --token ${TOKEN} --discovery-token-ca-cert-hash sha256:${CA_HASH}"
TMP_ENV="$(mktemp)"
trap 'rm -f "$TMP_ENV"; rm -rf "$TMPDIR"' EXIT INT TERM
cat >"$TMP_ENV" <<EOF
MKS_API_SERVER_ENDPOINT=${JOIN_ENDPOINT}
MKS_BOOTSTRAP_TOKEN=${TOKEN}
MKS_DISCOVERY_TOKEN_CA_CERT_HASH=sha256:${CA_HASH}
EOF
echo
echo "cluster-config:"
cat "$TMP_ENV"
if [ ! -x "$LIB_DIR/merge-env.sh" ]; then
echo "merge-env.sh not found or not executable: $LIB_DIR/merge-env.sh" >&2
exit 1
fi
"$LIB_DIR/merge-env.sh" "$TMP_ENV" "$CLUSTER_ENV_WORK"
echo
echo "Merged into:"
echo " $CLUSTER_ENV_WORK"
echo
echo "Try:"
cat <<EOF
make cluster-config \\
MKS_HOSTNAME=monok8s-worker \\
MKS_CLUSTER_ROLE=worker \\
MKS_INIT_CONTROL_PLANE=no \\
MKS_MGMT_ADDRESS=10.0.0.10/24 \\
MKS_APISERVER_ADVERTISE_ADDRESS=10.0.0.10 \\
MKS_CNI_PLUGIN=none
EOF
if [ "$signed" != "true" ]; then
echo >&2
echo "warning: cluster-info was not signed within ${WAIT_SECONDS}s." >&2
echo "If kubeadm join fails discovery, check that kube-controller-manager enables bootstrapsigner." >&2
fi

devtools/setup-bulid-host.sh Executable file

@@ -0,0 +1,73 @@
#!/usr/bin/env bash
set -euo pipefail
if [ "$(id -u)" -ne 0 ]; then
echo "Run as root, e.g. sudo $0" >&2
exit 1
fi
. /etc/os-release
if [ "${ID:-}" != "debian" ]; then
echo "This script is intended for Debian. Detected ID=${ID:-unknown}" >&2
exit 1
fi
echo "==> Removing conflicting Docker packages, if present"
apt-get remove -y \
docker.io \
docker-compose \
docker-doc \
podman-docker \
containerd \
runc || true
echo "==> Installing minimal repo setup tools"
apt-get update
apt-get install -y --no-install-recommends \
ca-certificates \
curl
echo "==> Adding Docker official APT repo"
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg \
-o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc
cat > /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/debian
Suites: ${VERSION_CODENAME}
Components: stable
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/docker.asc
EOF
echo "==> Installing build/test packages"
apt-get update
apt-get install -y --no-install-recommends \
docker-ce \
docker-buildx-plugin \
qemu-user-static \
binfmt-support \
make
echo "==> Enabling Docker"
systemctl enable --now docker
echo "==> Registering binfmt handlers"
systemctl restart binfmt-support || true
echo "==> Docker version"
docker --version
echo "==> Buildx version"
docker buildx version || true
echo "==> Done"
echo
echo "Optional: allow your normal user to run docker without sudo:"
echo " sudo usermod -aG docker \$USER"
echo "Then log out and back in."


@@ -23,6 +23,7 @@ WORKDIR /build/nxplinux
COPY kernel-extra.config /tmp/kernel-extra.config
COPY kernel-build/dts/*.dts ./arch/arm64/boot/dts/freescale/
COPY kernel-build/ensure-kconfig.sh /build/
RUN grep -q "^dtb-\\\$(CONFIG_ARCH_LAYERSCAPE) += ${DEVICE_TREE_TARGET}.dtb$" \
arch/arm64/boot/dts/freescale/Makefile \
@@ -33,7 +34,7 @@ RUN grep -q "^dtb-\\\$(CONFIG_ARCH_LAYERSCAPE) += ${DEVICE_TREE_TARGET}.dtb$" \
RUN make ARCH="${ARCH}" CROSS_COMPILE="${CROSS_COMPILE}" defconfig lsdk.config \
&& ./scripts/kconfig/merge_config.sh -m .config /tmp/kernel-extra.config \
&& make ARCH="${ARCH}" CROSS_COMPILE="${CROSS_COMPILE}" olddefconfig \
&& grep '^CONFIG_NF_TABLES=' .config \
&& /build/ensure-kconfig.sh .config /tmp/kernel-extra.config \
&& make ARCH="${ARCH}" CROSS_COMPILE="${CROSS_COMPILE}" -j"$(nproc)"
# artifact collection

docs/cilium.md Normal file

@@ -0,0 +1,16 @@
# Worker node

Per-node Cilium configuration for Mono gateway workers, pinning the datapath to `eth1`:
```yaml
apiVersion: cilium.io/v2
kind: CiliumNodeConfig
metadata:
namespace: kube-system
name: monok8s-worker
spec:
nodeSelector:
matchLabels:
node.kubernetes.io/instance-type: mono-gateway
defaults:
devices: "eth1"
direct-routing-device: "eth1"
```


@@ -22,6 +22,9 @@ mount_retry() {
i=0
while :; do
# BusyBox mount just needs a normal -o option string here.
# The important bit is that overlayfs itself requires lowerdir/upperdir/workdir,
# and workdir must live on the same filesystem as upperdir.
if mount -o "$opts" -t "$fstype" "$dev" "$target"; then
return 0
fi
@@ -32,6 +35,30 @@ mount_retry() {
done
}
mount_data_overlay() {
dir="$1"
case "$dir" in
/*) ;;
*) panic "overlay dir must be absolute: $dir" ;;
esac
lower="/newroot$dir"
state="/newroot/data${dir}-overlay"
upper="$state/upper"
work="$state/work"
[ -d "$lower" ] || mkdir -p "$lower"
mkdir -p "$upper" "$work"
log "Mounting overlay for $dir"
mount_or_panic -t overlay overlay \
-o "lowerdir=$lower,upperdir=$upper,workdir=$work" \
"$lower"
}
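`mount_data_overlay` derives all three overlay paths from a single absolute directory: the read-only lower layer lives in the rootfs, while the writable upper/work pair sits on the data partition (workdir must share a filesystem with upperdir). A sketch of just that derivation, as pure string building with no mounting:

```go
package main

import (
	"fmt"
	"strings"
)

// overlayOpts mirrors mount_data_overlay's path derivation for a dir like
// /etc: lowerdir under /newroot, upperdir/workdir side by side under
// /newroot/data/<dir>-overlay.
func overlayOpts(dir string) (string, error) {
	if !strings.HasPrefix(dir, "/") {
		return "", fmt.Errorf("overlay dir must be absolute: %s", dir)
	}
	lower := "/newroot" + dir
	state := "/newroot/data" + dir + "-overlay"
	return fmt.Sprintf("lowerdir=%s,upperdir=%s/upper,workdir=%s/work", lower, state, state), nil
}

func main() {
	opts, err := overlayOpts("/etc")
	if err != nil {
		panic(err)
	}
	fmt.Println(opts)
}
```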
wait_for_path() {
path="$1"
i=0
@@ -207,6 +234,16 @@ mount_or_panic -t proc proc /proc
mount_or_panic -t sysfs sysfs /sys
mount_or_panic -t tmpfs tmpfs /run
mkdir -p /sys/fs/bpf
if ! mountpoint -q /sys/fs/bpf; then
mount_or_panic -t bpf bpffs /sys/fs/bpf
fi
mount_or_panic --make-rshared /sys
mount_or_panic --make-rshared /run
mount_or_panic --make-shared /sys/fs/bpf
echo 1 > /proc/sys/kernel/printk
mkdir -p /dev/pts
@@ -264,17 +301,11 @@ mount_retry "$ROOT_DEV" /newroot ext4 ro
mount_retry "$DATA_DEV" /newroot/data ext4 rw
mkdir -p /newroot/data/var
mkdir -p /newroot/data/etc-overlay/upper
mkdir -p /newroot/data/etc-overlay/work
mount_or_panic --bind /newroot/data/var /newroot/var
mount_or_panic --make-rshared /newroot/var
# BusyBox mount just needs a normal -o option string here.
# The important bit is that overlayfs itself requires lowerdir/upperdir/workdir,
# and workdir must live on the same filesystem as upperdir.
mount_or_panic -t overlay overlay \
-o "lowerdir=/newroot/etc,upperdir=/newroot/data/etc-overlay/upper,workdir=/newroot/data/etc-overlay/work" \
/newroot/etc
mount_data_overlay /etc
mount_data_overlay /opt/cni
if [ "$BOOT_PART" = "A" ]; then
ALT_PART="$(find_sibling_part_on_same_disk "$ROOT_DEV" rootfsB || true)"

kernel-build/ensure-kconfig.sh Executable file

@@ -0,0 +1,127 @@
#!/bin/sh
set -eu
CONFIG_FILE="${1:-}"
EXPECTED_FILE="${2:-}"
if [ -z "$CONFIG_FILE" ] || [ -z "$EXPECTED_FILE" ]; then
echo "usage: $0 <resolved-.config> <expected-fragment.config>" >&2
exit 2
fi
if [ ! -f "$CONFIG_FILE" ]; then
echo "error: config file not found: $CONFIG_FILE" >&2
exit 2
fi
if [ ! -f "$EXPECTED_FILE" ]; then
echo "error: expected config fragment not found: $EXPECTED_FILE" >&2
exit 2
fi
failed=0
normalize_expected_line() {
line="$1"
case "$line" in
CONFIG_*=y|CONFIG_*=m)
echo "$line"
;;
CONFIG_*=n)
sym="${line%%=*}"
echo "# $sym is not set"
;;
"# CONFIG_"*" is not set")
echo "$line"
;;
CONFIG_*=*)
echo "$line"
;;
*)
return 1
;;
esac
}
is_disabled_expected() {
expected="$1"
case "$expected" in
"# CONFIG_"*" is not set")
return 0
;;
*)
return 1
;;
esac
}
symbol_from_expected() {
expected="$1"
case "$expected" in
CONFIG_*=*)
echo "${expected%%=*}"
;;
"# CONFIG_"*" is not set")
printf '%s\n' "$expected" | sed 's/^# \(CONFIG_[^ ]*\) is not set$/\1/'
;;
*)
return 1
;;
esac
}
check_expected_line() {
expected="$1"
sym="$(symbol_from_expected "$expected")"
actual="$(grep -E "^${sym}=|^# ${sym} is not set$" "$CONFIG_FILE" || true)"
if [ "$actual" = "$expected" ]; then
return 0
fi
# For disabled symbols, absence from the final .config is acceptable.
# Some Kconfig symbols do not exist on this arch/tree, and missing still means "not enabled".
if is_disabled_expected "$expected" && [ -z "$actual" ]; then
return 0
fi
echo "kconfig mismatch: $sym" >&2
echo " expected: $expected" >&2
if [ -n "$actual" ]; then
echo " actual: $actual" >&2
else
echo " actual: <missing>" >&2
fi
failed=1
}
while IFS= read -r raw || [ -n "$raw" ]; do
# Strip leading/trailing whitespace.
line="$(printf '%s\n' "$raw" | sed 's/^[[:space:]]*//; s/[[:space:]]*$//')"
# Ignore blanks.
[ -z "$line" ] && continue
# Ignore normal comments, but keep '# CONFIG_FOO is not set'.
case "$line" in
"# CONFIG_"*" is not set") ;;
"#"*) continue ;;
esac
expected="$(normalize_expected_line "$line" || true)"
[ -z "${expected:-}" ] && continue
check_expected_line "$expected"
done < "$EXPECTED_FILE"
if [ "$failed" -ne 0 ]; then
echo "error: resolved kernel config does not satisfy $EXPECTED_FILE" >&2
exit 1
fi
echo "kernel config satisfies $EXPECTED_FILE"
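Two rules in this script are worth calling out: a `CONFIG_FOO=n` line in a fragment is normalized to the `# CONFIG_FOO is not set` form that Kconfig actually writes, and a symbol missing entirely from the resolved `.config` still satisfies a disabled expectation. A sketch of both rules:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeExpected mirrors normalize_expected_line: Kconfig never writes
// "CONFIG_FOO=n" into .config, so =n fragments are rewritten to the
// "# CONFIG_FOO is not set" comment form before comparison.
func normalizeExpected(line string) string {
	if strings.HasPrefix(line, "CONFIG_") && strings.HasSuffix(line, "=n") {
		sym := strings.TrimSuffix(line, "=n")
		return "# " + sym + " is not set"
	}
	return line
}

// satisfied mirrors check_expected_line's special case: for a disabled
// symbol, absence from the resolved .config still counts as disabled.
func satisfied(expected, actual string) bool {
	if actual == expected {
		return true
	}
	disabled := strings.HasPrefix(expected, "# CONFIG_") && strings.HasSuffix(expected, " is not set")
	return disabled && actual == ""
}

func main() {
	fmt.Println(normalizeExpected("CONFIG_IPV6_SIT=n"))
	fmt.Println(satisfied("# CONFIG_IPV6_SIT is not set", "")) // missing symbol is fine
	fmt.Println(satisfied("CONFIG_NF_TABLES=y", "CONFIG_NF_TABLES=m"))
}
```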


@@ -3,81 +3,39 @@
###############################################################################
CONFIG_HWMON=y
# Hardware monitoring framework. Needed so sensor drivers can expose temps/fans.
CONFIG_I2C=y
# Core I2C subsystem. Required by your RTC/fan controller drivers.
CONFIG_SENSORS_EMC2305=y
# EMC2305 fan controller driver. Built-in so fan control is available early.
CONFIG_RTC_DRV_PCF2127=y
# RTC driver for PCF2127. Built-in so timekeeping is available early.
###############################################################################
# Namespaces
# These are fundamental container primitives. Keep these built-in.
###############################################################################
CONFIG_NAMESPACES=y
# Master switch for Linux namespaces.
CONFIG_UTS_NS=y
# Isolates hostname/domainname per container.
CONFIG_IPC_NS=y
# Isolates SysV IPC and POSIX message queues between containers.
CONFIG_PID_NS=y
# Gives containers their own PID tree (so processes inside see their own PID 1).
CONFIG_NET_NS=y
# Gives containers their own network stack, interfaces, routing, etc.
CONFIG_USER_NS=y
# User namespaces. Useful for modern container behavior and future flexibility.
# Not every setup strictly needs this on day one, but I would enable it.
###############################################################################
# Cgroups / resource control
# Required for kubelet/CRI-O to manage resource isolation.
###############################################################################
CONFIG_CGROUPS=y
# Master switch for cgroups.
CONFIG_CGROUP_BPF=y
# Allows BPF programs to be attached to cgroups. Not required for first boot,
# but modern systems increasingly expect this.
CONFIG_CGROUP_NET_CLASSID=y
CONFIG_CGROUP_FREEZER=y
# Allows freezing/thawing process groups. Useful for container lifecycle control.
CONFIG_CGROUP_PIDS=y
# Limits number of processes in a cgroup.
CONFIG_CGROUP_DEVICE=y
# Controls device access from containers.
CONFIG_CPUSETS=y
# CPU affinity partitioning by cgroup.
CONFIG_MEMCG=y
# Memory cgroup support. Critical for container memory accounting/limits.
CONFIG_BLK_CGROUP=y
# Block IO control/accounting for cgroups.
CONFIG_CGROUP_SCHED=y
# Scheduler integration for cgroups.
CONFIG_FAIR_GROUP_SCHED=y
# Fair scheduler group support for cgroups.
CONFIG_CFS_BANDWIDTH=y
# CPU quota/limit support. Important for kubelet resource enforcement.
###############################################################################
@@ -85,23 +43,20 @@ CONFIG_CFS_BANDWIDTH=y
###############################################################################
CONFIG_KEYS=y
# Kernel key retention service. Commonly relied on by container/userland tooling.
CONFIG_TMPFS=y
# Tmpfs support. Containers and runtimes rely on this heavily.
CONFIG_TMPFS_XATTR=y
# Extended attributes on tmpfs. Useful for container runtime behavior.
CONFIG_TMPFS_POSIX_ACL=y
# POSIX ACLs on tmpfs. Good compatibility feature for userland.
CONFIG_OVERLAY_FS=y
# Overlay filesystem. This is the big one for container image/layer storage.
# Built-in here; a module would also work, since CRI-O only needs it after boot.
CONFIG_FS_POSIX_ACL=y
# General POSIX ACL support. Good to have for overlay/tmpfs behavior.
CONFIG_FHANDLE=y
CONFIG_AUTOFS_FS=y
CONFIG_PROC_FS=y
CONFIG_SYSFS=y
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_BLK_DEV_LOOP=y
###############################################################################
@@ -109,171 +64,144 @@ CONFIG_FS_POSIX_ACL=y
###############################################################################
CONFIG_INET=y
# IPv4 stack.
CONFIG_IPV6=y
# IPv6 stack. You may be tempted to disable it, but Kubernetes/container stacks
# increasingly assume it exists. Keep it on unless you have a hard reason not to.
CONFIG_UNIX=y
# Unix domain sockets. Containers and runtimes absolutely rely on this.
CONFIG_TUN=m
# TUN/TAP device support. Commonly used by networking tools/VPN/CNI-related flows.
# Module is fine.
CONFIG_DUMMY=m
# Dummy network interface. Sometimes useful for CNI/network setups and testing.
CONFIG_VETH=y
CONFIG_BRIDGE=y
CONFIG_BRIDGE_NETFILTER=y
CONFIG_VXLAN=y
# Enables IPv4/IPv6 policy routing and multiple routing tables.
# Required by CNIs such as Cilium for ip-rule based routing.
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IPV6_MULTIPLE_TABLES=y
###############################################################################
# Netfilter / packet filtering / NAT
# This is where container networking gets messy. Better to enable a sane baseline.
# Netfilter base
###############################################################################
CONFIG_NETFILTER=y
# Netfilter core framework. This is a bool symbol, so it must be built in.
CONFIG_NETFILTER_ADVANCED=y
# Exposes more advanced netfilter options and modules.
CONFIG_NETFILTER_NETLINK=y
CONFIG_NETFILTER_XTABLES=y
# Linux 6.17+ gates legacy iptables/xtables support behind these options.
# Without these, IP_NF_* / IP6_NF_* options may silently fall back to =m
# or disappear after olddefconfig.
CONFIG_NETFILTER_XTABLES_LEGACY=y
CONFIG_NF_CONNTRACK=y
# Connection tracking. Critical for NAT, Kubernetes service traffic, and many CNIs.
CONFIG_NF_NAT=y
# NAT framework. Required for masquerading and pod egress in many setups.
CONFIG_NF_TABLES=y
# nftables framework. Modern Linux packet filtering backend.
CONFIG_NFT_CT=y
# nftables conntrack expressions.
CONFIG_NFT_COUNTER=y
# nftables packet/byte counters
CONFIG_NFT_CHAIN_NAT=y
# nftables NAT chain support.
CONFIG_NFT_MASQ=y
# nftables masquerade support. Often needed for pod egress NAT.
CONFIG_NFT_REDIR=y
# nftables redirect target.
CONFIG_NFT_NAT=y
# nftables NAT support.
CONFIG_NF_NAT_IPV4=y
# IPv4 NAT helper support. Some kernels still expose this separately.
CONFIG_NF_NAT_IPV6=y
# IPv6 NAT helper support.
CONFIG_NF_CT_NETLINK=y
# userspace netlink access to the conntrack table; kube-proxy uses this for conntrack listing/cleanup
CONFIG_NF_CT_NETLINK_TIMEOUT=y
# userspace netlink support for conntrack timeout objects
CONFIG_NF_CT_NETLINK_HELPER=y
# userspace netlink support for conntrack helper objects
CONFIG_IP_NF_IPTABLES=y
# iptables compatibility for IPv4. Still useful because lots of CNI/plugin code
# still expects iptables even on nft-backed systems.
CONFIG_IP_NF_NAT=y
# IPv4 NAT support for iptables compatibility.
CONFIG_IP6_NF_IPTABLES=y
# ip6tables compatibility.
CONFIG_IP6_NF_FILTER=y
# IPv6 "filter" table (same as above but for IPv6)
CONFIG_NF_REJECT_IPV4=y
# core IPv4 reject logic used by netfilter/iptables/nftables
CONFIG_NF_REJECT_IPV6=y
# Do not re-add these stale / absent symbols for this NXP 6.18 tree:
#
# CONFIG_NF_NAT_IPV4
# CONFIG_NF_NAT_IPV6
# CONFIG_NFT_CHAIN_NAT
# CONFIG_NFT_COUNTER
# CONFIG_NETFILTER_XT_TARGET_REJECT
#
# Use the currently valid symbols instead:
#
# CONFIG_NF_NAT
# CONFIG_IP_NF_NAT
# CONFIG_IP6_NF_NAT
# CONFIG_IP_NF_TARGET_REJECT
# CONFIG_IP6_NF_TARGET_REJECT
# CONFIG_NFT_REJECT
# CONFIG_NFT_REJECT_INET
#
# Also avoid enabling these unless there is a real need:
#
# CONFIG_NF_CT_NETLINK_TIMEOUT
# CONFIG_NF_CT_NETLINK_HELPER
#
# They exist in this tree, but pull in extra dependencies and are not required
# for basic Kubernetes/Cilium bring-up.
###############################################################################
# nftables backend
###############################################################################
CONFIG_NF_TABLES=y
CONFIG_NF_TABLES_INET=y
CONFIG_NFT_CT=y
CONFIG_NFT_MASQ=y
CONFIG_NFT_REDIR=y
CONFIG_NFT_NAT=y
CONFIG_NFT_REJECT=y
# nftables equivalent of REJECT (needed for nf_tables backend compatibility)
CONFIG_NFT_REJECT_INET=y
###############################################################################
# nftables FIB expression
#
# Required by CNI hostport nftables rules such as:
# fib daddr type local goto hostports
###############################################################################
CONFIG_NFT_FIB=y
CONFIG_NFT_FIB_INET=y
CONFIG_NFT_FIB_IPV4=y
CONFIG_NFT_FIB_IPV6=y
###############################################################################
# IPv4 iptables compatibility
###############################################################################
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_IPTABLES_LEGACY=y
CONFIG_IP_NF_FILTER=y
# IPv4 "filter" table (INPUT/FORWARD/OUTPUT chains for iptables)
CONFIG_IP_NF_MANGLE=y
CONFIG_IP_NF_NAT=y
CONFIG_IP_NF_TARGET_REJECT=y
# IPv4-specific REJECT target for legacy iptables
###############################################################################
# IPv6 iptables compatibility
###############################################################################
CONFIG_IP6_NF_IPTABLES=y
CONFIG_IP6_NF_IPTABLES_LEGACY=y
CONFIG_IP6_NF_FILTER=y
CONFIG_IP6_NF_MANGLE=y
CONFIG_IP6_NF_NAT=y
CONFIG_IP6_NF_TARGET_REJECT=y
# IPv6-specific REJECT target for legacy iptables
CONFIG_IP_SET=m
# IP sets. Useful for some network policies / firewalling toolchains.
CONFIG_NETFILTER_NETLINK_ACCT=y
# netfilter accounting subsystem used for nfacct-based kube-proxy metrics
CONFIG_NETFILTER_XT_MATCH_NFACCT=y
# iptables nfacct match that hooks rules into the netfilter accounting subsystem
###############################################################################
# xtables matches / targets
###############################################################################
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=y
# xtables match for address types (e.g. --dst-type LOCAL); kube-proxy's
# masquerade/nodeport rules rely on it.
CONFIG_NETFILTER_XT_MATCH_COMMENT=y
# Allows comments in iptables rules. Not critical, but harmless and useful.
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
# xtables conntrack matching.
CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
# iptables "statistic" match used for probabilistic packet matching / load balancing
CONFIG_NETFILTER_XT_MATCH_MARK=y
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=y
# Match multiple ports in one rule.
CONFIG_NETFILTER_XT_MATCH_SOCKET=y
CONFIG_NETFILTER_XT_MATCH_TCPMSS=y
# Useful for TCP MSS clamping in some network paths.
CONFIG_NETFILTER_XT_TARGET_MASQUERADE=y
# iptables MASQUERADE target. Very commonly needed for pod outbound NAT.
CONFIG_NETFILTER_XT_TARGET_REDIRECT=y
# REDIRECT target: NAT a packet to the local host. Used by some hostport/proxy setups.
CONFIG_NETFILTER_XT_TARGET_MARK=y
# Packet marking support. Useful for advanced networking/routing rules.
CONFIG_NETFILTER_XT_TARGET_CT=y
# Connection tracking target for xtables.
CONFIG_NETFILTER_XT_TARGET_TPROXY=y
# Transparent proxying target. Not required for initial CRI-O bring-up;
# only useful once a transparent/L7 proxy is in play.
###############################################################################
# Bridge / container interface plumbing
###############################################################################
CONFIG_VETH=y
# Virtual Ethernet pairs. This is how container interfaces are commonly connected
# to the host/network namespace.
CONFIG_BRIDGE=y
# Ethernet bridge support. Needed by bridge-based CNIs.
CONFIG_BRIDGE_NETFILTER=y
# Allows bridged traffic to pass through netfilter/iptables/nftables hooks.
# Important for Kubernetes networking behavior.
# Optional / version-dependent:
# Some kernels expose additional ebtables/bridge netfilter pieces separately.
# Keep this if your kernel has it, but don't panic if it doesn't.
CONFIG_BRIDGE_NF_EBTABLES=y
# Bridge filtering via ebtables compatibility. Sometimes useful, not always critical.
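CONFIG_BRIDGE_NETFILTER only provides the hooks; the usual Kubernetes companion step is switching them on via sysctl. A sketch of the standard drop-in — written to the current directory here for safety; on a real node it belongs under /etc/sysctl.d/:

```shell
# Standard Kubernetes bridge/forwarding sysctls that pair with
# CONFIG_BRIDGE_NETFILTER; the demo writes to ./ instead of /etc/sysctl.d/.
cat > 99-kubernetes-bridge.conf <<'EOF'
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
cat 99-kubernetes-bridge.conf
```

Note that the net.bridge.* keys only exist once the br_netfilter functionality is loaded/built in, which is exactly what the config options above provide.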
###############################################################################
@@ -281,24 +209,10 @@ CONFIG_BRIDGE_NF_EBTABLES=y
###############################################################################
CONFIG_SECCOMP=y
# Secure computing mode. Lets runtimes restrict syscall surface.
CONFIG_SECCOMP_FILTER=y
# BPF-based seccomp filters. This is the useful seccomp mode for containers.
# AppArmor / SELinux are optional depending on distro/security model.
# Alpine often won't use AppArmor by default; that's fine for first bring-up.
# If your kernel tree has these and you care later:
# CONFIG_SECURITY=y
# CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
# enables Security Module (LSM) hooks for network operations. CoreDNS needs this
CONFIG_SECURITY_PATH=y
# Recommended for container isolation
CONFIG_SECURITY_NETWORK_XFRM=y
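A quick way to see whether the running kernel was built with CONFIG_SECCOMP/CONFIG_SECCOMP_FILTER is the per-process status file (assumes a Linux /proc; the field reads 0 = disabled, 1 = strict, 2 = filter):

```shell
# Probe seccomp support via /proc/self/status; prints a fallback line
# on kernels or procfs layouts that lack the field.
grep -E '^Seccomp:' /proc/self/status || echo "Seccomp: field not present"
```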
@@ -307,59 +221,28 @@ CONFIG_SECURITY_NETWORK_XFRM=y
###############################################################################
CONFIG_POSIX_MQUEUE=y
# POSIX message queues. Containers/apps sometimes rely on this.
CONFIG_EPOLL=y
# Event polling. Usually already enabled; standard modern userspace feature.
CONFIG_SIGNALFD=y
# File-descriptor-based signal delivery. Common Linux userspace feature.
CONFIG_TIMERFD=y
# File-descriptor timers. Common Linux userspace feature.
CONFIG_EVENTFD=y
# Event notification file descriptors. Common Linux userspace feature.
CONFIG_MEMFD_CREATE=y
# Anonymous memory-backed file creation. Widely used by modern software.
CONFIG_FHANDLE=y
# File handle support. Useful for container/runtime operations.
###############################################################################
# Disable unused platform/virtualization pieces
###############################################################################
CONFIG_DMIID=n
# Optional on embedded boards; usually not needed unless your tree selects it.
###############################################################################
# Storage / block / other practical container bits
###############################################################################
CONFIG_BLK_DEV_LOOP=y
# Loop devices. Often useful for image/layer tooling or debugging.
# Could be =m too, but built-in is harmless and often convenient.
CONFIG_AUTOFS_FS=y
# Automount filesystem support. Not strictly required for CRI-O, but harmless.
CONFIG_PROC_FS=y
# /proc support. Essential.
CONFIG_SYSFS=y
# /sys support. Essential.
CONFIG_DEVTMPFS=y
# Kernel-managed /dev population support.
CONFIG_DEVTMPFS_MOUNT=y
# Automatically mount devtmpfs. Very practical on small/custom systems.
### Disable XEN because it breaks our build and we don't need it
CONFIG_XEN=n
CONFIG_XEN_DOM0=n
CONFIG_VHOST_XEN=n
###############################################################################
# Disk IO diagnostics
###############################################################################
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
CONFIG_TASKSTATS=y
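With the three options above enabled, per-task IO counters become visible under /proc; a quick smoke test (assumes a Linux /proc):

```shell
# CONFIG_TASK_IO_ACCOUNTING exposes per-process byte counters in
# /proc/<pid>/io; prints a fallback when the kernel lacks the option.
cat /proc/self/io 2>/dev/null || echo "io accounting not available"
```

Tools like iotop and `iostat`-style delay accounting consumers build on these same counters.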


@@ -1,4 +1,5 @@
include build.env
-include build.env.work
export
TAG ?= dev
@@ -26,9 +27,8 @@ CLITOOLS_BIN := bin/ctl-linux-$(ARCH)-$(TAG)
CONFIGS_DIR := configs
SCRIPTS_DIR := scripts
CLUSTER_ENV_DEFAULT := $(CONFIGS_DIR)/cluster.env.default
CLUSTER_ENV_WORK := $(CONFIGS_DIR)/cluster.env.work
CLUSTER_ENV := $(OUT_DIR)/cluster.env
NODE_ENV_DEFAULT := configs/node.env.default
NODE_ENV := $(OUT_DIR)/node.env
BOARD_ITB := $(OUT_DIR)/board.itb
INITRAMFS := $(OUT_DIR)/initramfs.cpio.gz
@@ -147,7 +147,11 @@ $(BUILD_BASE_STAMP): $(BUILD_BASE_DEPS) | $(OUT_DIR)
--build-arg APT_PROXY=$(APT_PROXY) \
--build-arg TAG=$(TAG) \
-t $(DOCKER_IMAGE_ROOT)/build-base:$(TAG) .
@iid=$$(docker image inspect \
--format '{{.Id}}' \
$(DOCKER_IMAGE_ROOT)/build-base:$(TAG) \
| cut -d':' -f2 \
| cut -c -8); \
docker tag monok8s/build-base:$(TAG) monok8s/build-base:$$iid; \
touch $@
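The ID-shortening pipeline in the recipe above can be seen in isolation. `docker image inspect --format '{{.Id}}'` prints `sha256:<64 hex chars>`; the sample ID below is made up:

```shell
# What the recipe's pipeline does to an inspect result: strip the
# "sha256:" prefix, then keep the first 8 hex characters.
echo 'sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08' \
    | cut -d':' -f2 | cut -c -8
# → 9f86d081
```

Using `--format` drops the jq dependency while producing the same short tag.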
@@ -177,10 +181,14 @@ $(INITRAMFS): $(INITRAMFS_DEPS) $(DOWNLOAD_PACKAGES_STAMP) | $(OUT_DIR)
test -f $@
$(CLITOOLS_BIN): $(CLITOOLS_SRCS)
$(MAKE) -C clitools build-agent
$(MAKE) -C clitools build-local VERSION="$(TAG)"
vpp: $(BUILD_BASE_STAMP) $(VPP_TAR) $(DPDK_TAR) $(FMLIB_TAR) $(FMC_TAR) $(NXP_TAR)
@build_base_tag=$$(docker image inspect \
--format '{{.Id}}' \
$(DOCKER_IMAGE_ROOT)/build-base:$(TAG) \
| cut -d':' -f2 \
| cut -c -8); \
@mkdir -p $(OUT_DIR)/vpp
docker build \
-f docker/vpp.Dockerfile \
@@ -219,7 +227,11 @@ $(BOARD_ITB): $(ITB_DEPS) | $(OUT_DIR)
test -f $@
$(RELEASE_IMAGE): $(RELEASE_DEPS) $(DOWNLOAD_PACKAGES_STAMP) | $(OUT_DIR)
@build_base_tag=$$(docker image inspect \
--format '{{.Id}}' \
$(DOCKER_IMAGE_ROOT)/build-base:$(TAG) \
| cut -d':' -f2 \
| cut -c -8); \
docker build \
-f docker/alpine.Dockerfile \
--no-cache \
@@ -264,8 +276,15 @@ $(RELEASE_IMAGE): $(RELEASE_DEPS) $(DOWNLOAD_PACKAGES_STAMP) | $(OUT_DIR)
# ---- config targets ------------------------------------------------------------
cluster-config: $(CLUSTER_ENV_DEFAULT) $(CLUSTER_ENV_WORK) $(SCRIPTS_DIR)/merge-env.sh | $(OUT_DIR)
@rm -f $(CLUSTER_ENV)
sh $(SCRIPTS_DIR)/merge-env.sh $(CLUSTER_ENV_DEFAULT) $(CLUSTER_ENV)
@if [ -f "$(CLUSTER_ENV_WORK)" ]; then \
echo "Merging $(CLUSTER_ENV_WORK) into $(CLUSTER_ENV)"; \
sh $(SCRIPTS_DIR)/merge-env.sh $(CLUSTER_ENV_WORK) $(CLUSTER_ENV); \
else \
echo "No $(CLUSTER_ENV_WORK), using defaults only"; \
fi
cluster-defconfig: $(CLUSTER_ENV_DEFAULT) | $(OUT_DIR)
cp $(CLUSTER_ENV_DEFAULT) $(CLUSTER_ENV)
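The conditional-overlay pattern in `cluster-config` can be exercised on its own. In this sketch the filenames and the `MKS_NODE_NAME` key are made up, and `cat >>` stands in for the real merge script:

```shell
# Mirror of the cluster-config flow: always apply defaults, then the
# work overlay only if it exists (cat >> stands in for merge-env.sh).
printf 'MKS_NODE_NAME=gw0\n' > demo.default.env
rm -f demo.work.env demo.out.env
cp demo.default.env demo.out.env
if [ -f demo.work.env ]; then
    echo "Merging demo.work.env into demo.out.env"
    cat demo.work.env >> demo.out.env
else
    echo "No demo.work.env, using defaults only"
fi
# → No demo.work.env, using defaults only
```

Because the work file is optional, a fresh checkout builds with defaults and a developer's local overrides never need to be committed.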


@@ -6,6 +6,25 @@ OUTPUT="${2:?output file required}"
mkdir -p "$(dirname "$OUTPUT")"
TMP="$(mktemp)"
BASE_CREATED=0
if [ -f "$OUTPUT" ]; then
BASE="$OUTPUT"
else
BASE="$(mktemp)"
BASE_CREATED=1
: > "$BASE"
fi
cleanup() {
rm -f "$TMP"
if [ "$BASE_CREATED" = "1" ]; then
rm -f "$BASE"
fi
}
trap cleanup EXIT INT TERM
awk '
function trim(s) {
sub(/^[[:space:]]+/, "", s)
@@ -13,33 +32,76 @@ function trim(s) {
return s
}
function parse_key(line, eq, key) {
eq = index(line, "=")
if (eq == 0) {
return ""
}
key = trim(substr(line, 1, eq - 1))
if (key !~ /^[A-Za-z_][A-Za-z0-9_]*$/) {
return ""
}
return key
}
function merged_line(key, line) {
if (key ~ /^MKS_/ && key in ENVIRON) {
return key "=" ENVIRON[key]
}
return line
}
# First file: INPUT
phase == 1 {
line = $0
key = parse_key(line)
if (key != "") {
incoming[key] = merged_line(key, line)
if (!(key in input_seen)) {
input_order[++input_count] = key
input_seen[key] = 1
}
}
next
}
# Second file: existing OUTPUT / BASE
phase == 2 {
line = $0
key = parse_key(line)
if (key != "" && key in incoming) {
print incoming[key]
written[key] = 1
next
}
print line
if (key != "") {
written[key] = 1
}
next
}
END {
for (i = 1; i <= input_count; i++) {
key = input_order[i]
if (!(key in written)) {
print incoming[key]
written[key] = 1
}
}
}
' phase=1 "$INPUT" phase=2 "$BASE" > "$TMP"
mv "$TMP" "$OUTPUT"
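The two-pass structure hinges on an awk feature that is easy to miss: `var=value` arguments placed between filenames are evaluated when awk reaches them, so `phase` changes value between the two input files. A stripped-down illustration — the files and keys are made up, and unlike the full script there is no END pass appending keys missing from the base:

```shell
# Minimal demo of awk's per-file variable assignments, the mechanism
# behind the script's phase=1/phase=2 passes.
printf 'A=1\nB=2\n' > demo.incoming
printf 'B=9\nC=3\n' > demo.base
awk -F'=' '
phase == 1 { incoming[$1] = $0; next }   # remember lines from the first file
phase == 2 {                             # rewrite the second file in place order
    if ($1 in incoming) print incoming[$1]
    else print $0
}
' phase=1 demo.incoming phase=2 demo.base
# → B=2
# → C=3
```

This keeps the base file's ordering and comments while letting the incoming file win on key conflicts, which is exactly the merge semantics the Makefile relies on.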