# Reproducible Docker Build System for a Tarball-Supplied Upstream Project
## Executive summary
The right way to containerize a tarball-supplied upstream build is to make the build **closed over the build context**: put the upstream tarball and every third-party source archive in `packages/`, pass only **paths** as build arguments, verify checksums inside the build, extract archives explicitly, and remove every build-time `git clone`, `wget`, or other network fetch from the Dockerfile and the upstream Makefile. For the final output, use a **multi-stage build** and export only artifacts, not the entire toolchain image. Docker's official docs support exactly this pattern: build arguments to parameterize builds, multi-stage builds to shrink outputs, `.dockerignore` to keep the context tight, and the local exporter to write build artifacts directly to the host filesystem.
For the supplied ASK archive specifically, direct inspection of the uploaded tarballs shows a C/C++ and kernel-module build for NXP Layerscape hardware, a Debian-oriented cross-build setup, hardcoded network fetches for `fmlib`, `fmc`, `libnfnetlink`, and `libnetfilter_conntrack`, and a separate kernel tree requirement through `KDIR`. The ASK tarball also contains a `.git/` directory, so version generation must be normalized if you want reproducible tarball-based results. The supplied kernel archive expands as `linux-lf-6.12.49-2.2.0/`. Those facts drive the concrete implementation below.
The Alpine conversion is practical, but only if you treat musl seriously. Alpine uses musl, not glibc, and Alpine's own docs warn that musl does not implement most glibc locale behavior. The musl docs also call out common failure modes: glibc-specific `#ifdef`s, GNU `getopt` expectations, iconv assumptions, small default thread stacks, and different dynamic-loader behavior. For simple glibc-runtime gaps, Alpine recommends `gcompat`; for harder runtime compatibility issues, Alpine explicitly points you to containers or chroots running a glibc distribution instead of pretending musl is a drop-in replacement.
## What the supplied tarballs imply
Direct inspection of the supplied ASK tarball shows all of the following.
The top-level Makefile includes `build/toolchain.mk` and `build/sources.mk`, sets `HOST := aarch64-linux-gnu`, defaults `KDIR` to `$(HOME)/Mono/linux`, and fetches upstream dependencies in the build itself. `fmlib` and `fmc` are cloned at tag `lf-6.12.49-2.2.0`; `libnfnetlink-1.0.2` and `libnetfilter_conntrack-1.1.0` are downloaded as tarballs and then patched. That is reproducibility-hostile because the build is not closed over the supplied source archive.
The provided ASK source already contains one musl-aware clue: `cmm/src/cmm.c` gates `execinfo.h`, `backtrace()`, and `backtrace_symbols()` behind `__GLIBC__` checks. That means a pure musl build is plausible, but it does **not** prove the whole userspace is glibc-independent.
The ASK build is not just userspace. It also builds out-of-tree kernel modules (`cdx`, `fci`, `auto_bridge`), and those require a matching kernel source tree. So the build system needs a **second tarball input** for full module builds. If that kernel tarball is absent, the build should degrade cleanly to `BUILD_TARGET=userspace`.
There are **no Dockerfiles** in the supplied archive, so the Alpine conversion below is not a line-by-line rewrite of existing container files. It is a concrete replacement build design based on the actual source tree that was provided.
### Assumptions where the upstream is unspecified
The concrete code below is tailored to the supplied ASK archive. Where the user's request is broader than the archive, these are the assumptions:
- The upstream source enters the build as `packages/ASK.tar.gz`.
- Every third-party upstream source the build needs is also vendored into `packages/`.
- Kernel module builds require a matching kernel source tarball, exposed as `KERNEL_TAR`.
- If the project were not ASK but some other tarball-based upstream, the same pattern would still apply: replace in-build network fetches with tarball extraction; keep a closed build context; use a builder stage plus a minimal final stage.
- If the language/build system were unspecified, a reasonable default is:
- C/C++/make/cmake/autotools: Alpine builder with `build-base` and the needed `-dev` packages.
- Go: Alpine builder with Go toolchain, then copy the compiled binary into a minimal runtime stage.
- Python: Alpine builder with Python and build dependencies, then wheel install into a slim runtime stage.
- Node: Alpine builder with Node toolchain, then copy built assets or production-only install into runtime.
- No CPU/arch was specified in the prompt, but the supplied ASK sources are arm64-oriented, so the examples default to `linux/arm64`.
That base-image strategy follows Docker's guidance to use trusted minimal bases and multi-stage builds; the specific image family for Go, Python, or Node is a practical recommendation layered on top of that principle.
## Recommended design
The design should be blunt and boring.
Put `ASK.tar.gz`, the kernel tarball, and every dependency tarball under `packages/`. Add a root `.dockerignore` that excludes everything else. Replace the upstream Makefile's `git clone` and `wget` logic with extraction from paths passed as build args. Verify `SHA256SUMS` in the builder stage. Remove `.git` after extraction or override the version-generation path so the build does not depend on VCS metadata. Set `SOURCE_DATE_EPOCH`, `LC_ALL=C`, and `TZ=UTC` so timestamps and locale-sensitive outputs stop drifting. Pin the base image by digest in CI, because Docker's docs are explicit that image tags are mutable; if you still permit BuildKit-managed remote source resolution, Docker also documents `EXPERIMENTAL_BUILDKIT_SOURCE_POLICY` for reproducible builds with pinned dependencies.
For ASK, the best Alpine port is **native Alpine build per target platform** using `docker buildx build --platform=linux/arm64`, not a Debian-style `crossbuild-essential-arm64` clone. Debian has an official cross meta-package for that workflow; Alpine stable does not give you the same one-shot cross package story, so Buildx plus platform selection is the cleaner solution here. Docker's buildx docs explicitly layer platform selection onto the whole Dockerfile, and Debian's own package page makes clear what `crossbuild-essential-arm64` actually is: a convenience list of cross-build essentials for Debian, not a universal pattern you must reproduce on Alpine.
```mermaid
flowchart TD
A[packages/ASK.tar.gz] --> B[builder stage]
A2[packages/linux.tar.gz] --> B
A3[vendored dependency tarballs] --> B
B --> C[verify SHA256SUMS]
C --> D[extract ASK tarball]
D --> E[replace upstream fetch logic with tarball extraction]
E --> F[build patched third-party deps]
F --> G[build ASK userspace]
G --> H{kernel tarball present?}
H -->|yes| I[build kernel modules]
H -->|no| J[skip modules and build userspace only]
I --> K[stage dist artifacts]
J --> K
K --> L[scratch artifacts stage]
L --> M[buildx local exporter writes out/ask]
```
## Concrete implementation
### Root `.dockerignore`
```dockerignore
**
!Makefile
!docker/**
!packages/**
!scripts/**
```
### Host vendoring script
**`scripts/vendor-sources.sh`**
```sh
#!/usr/bin/env sh
set -eu
PACKAGES_DIR="${1:-packages}"
ASK_SRC="${ASK_SRC:-/absolute/path/to/ASK.tar.gz}"
KERNEL_SRC="${KERNEL_SRC:-/absolute/path/to/lf-6.12.49-2.2.0.tar.gz}"
NXP_TAG="lf-6.12.49-2.2.0"
mkdir -p "${PACKAGES_DIR}"
install -m 0644 "${ASK_SRC}" "${PACKAGES_DIR}/ASK.tar.gz"
install -m 0644 "${KERNEL_SRC}" "${PACKAGES_DIR}/linux.tar.gz"
curl -L --fail -o "${PACKAGES_DIR}/fmlib-${NXP_TAG}.tar.gz" \
"https://github.com/nxp-qoriq/fmlib/archive/refs/tags/${NXP_TAG}.tar.gz"
curl -L --fail -o "${PACKAGES_DIR}/fmc-${NXP_TAG}.tar.gz" \
"https://github.com/nxp-qoriq/fmc/archive/refs/tags/${NXP_TAG}.tar.gz"
curl -L --fail -o "${PACKAGES_DIR}/libnfnetlink-1.0.2.tar.bz2" \
"https://www.netfilter.org/projects/libnfnetlink/files/libnfnetlink-1.0.2.tar.bz2"
curl -L --fail -o "${PACKAGES_DIR}/libnetfilter_conntrack-1.1.0.tar.xz" \
"https://www.netfilter.org/projects/libnetfilter_conntrack/files/libnetfilter_conntrack-1.1.0.tar.xz"
curl -L --fail -o "${PACKAGES_DIR}/libcli-1.10.7.tar.gz" \
"https://github.com/dparrish/libcli/archive/refs/tags/V1.10.7.tar.gz"
(
cd "${PACKAGES_DIR}"
find . -maxdepth 1 -type f \
\( -name '*.tar.gz' -o -name '*.tar.xz' -o -name '*.tar.bz2' \) \
-print0 | sort -z | xargs -0 sha256sum > SHA256SUMS
)
```
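The `SHA256SUMS` manifest the script produces can also be checked on the host before any `docker buildx build`, which fails fast on a corrupted vendor step. A minimal sketch with a throwaway stand-in tarball (the `/tmp/demo-packages` path and payload are illustrative; the real input is `packages/SHA256SUMS`):

```shell
# Sketch: host-side verification of a vendored checksum manifest.
# A stand-in payload substitutes for the real ASK.tar.gz here.
set -eu
demo=/tmp/demo-packages
mkdir -p "${demo}"
printf 'stand-in payload' > "${demo}/ASK.tar.gz"
# Generate the manifest the same way vendor-sources.sh does...
( cd "${demo}" && sha256sum ASK.tar.gz > SHA256SUMS )
# ...then verify it; sha256sum -c exits non-zero on any mismatch.
( cd "${demo}" && sha256sum -c SHA256SUMS )
```

Running the same `sha256sum -c SHA256SUMS` inside the builder stage (as the Dockerfile below does) then proves the context and the host agree on every byte.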
### Host-side extraction and normalization snippet
```sh
mkdir -p packages _inspect/ASK
cp /path/to/ASK.tar.gz packages/ASK.tar.gz
cp /path/to/lf-6.12.49-2.2.0.tar.gz packages/linux.tar.gz
tar -xf packages/ASK.tar.gz --strip-components=1 -C _inspect/ASK
export SOURCE_DATE_EPOCH=1704067200
tar --sort=name \
--mtime="@${SOURCE_DATE_EPOCH}" \
--owner=0 --group=0 --numeric-owner \
-czf packages/ASK.normalized.tar.gz \
-C _inspect ASK
```
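It is worth proving to yourself that the normalization flags actually work: repacking the same tree twice must produce byte-identical archives. A sketch using a toy tree under `/tmp/repro` (an illustrative path), with plain uncompressed tar to keep compressor metadata out of the comparison:

```shell
# Sketch: two runs of the normalized tar invocation must be byte-identical.
set -eu
export SOURCE_DATE_EPOCH=1704067200
mkdir -p /tmp/repro/src
printf 'hello\n' > /tmp/repro/src/a.txt
for n in 1 2; do
  tar --sort=name \
      --mtime="@${SOURCE_DATE_EPOCH}" \
      --owner=0 --group=0 --numeric-owner \
      -cf "/tmp/repro/out${n}.tar" -C /tmp/repro src
done
# cmp exits non-zero if the two archives differ anywhere.
cmp /tmp/repro/out1.tar /tmp/repro/out2.tar && echo reproducible
```

If this check ever fails, the usual culprits are a missing `--sort=name`, filesystem-dependent ordering, or live timestamps leaking past `--mtime`.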
### Root Makefile
**`Makefile`**
```make
.RECIPEPREFIX := >
DOCKER_PLATFORM ?= linux/arm64
ALPINE_VERSION ?= 3.22
BUILD_TARGET ?= dist
OUT_DIR ?= out/ask
IMAGE ?= ask-build:local
ASK_TAR ?= packages/ASK.tar.gz
KERNEL_TAR ?= packages/linux.tar.gz
FMLIB_TAR ?= packages/fmlib-lf-6.12.49-2.2.0.tar.gz
FMC_TAR ?= packages/fmc-lf-6.12.49-2.2.0.tar.gz
LIBNFNETLINK_TAR ?= packages/libnfnetlink-1.0.2.tar.bz2
LIBNFCT_TAR ?= packages/libnetfilter_conntrack-1.1.0.tar.xz
LIBCLI_TAR ?= packages/libcli-1.10.7.tar.gz
SOURCE_DATE_EPOCH ?= 1704067200
JOBS ?= 0
USERSPACE_CFLAGS ?=
USERSPACE_LDFLAGS ?=
.PHONY: ASK ASK_IMAGE
ASK:
> docker buildx build \
> --platform="$(DOCKER_PLATFORM)" \
> --file docker/ask.Dockerfile \
> --build-arg "ALPINE_VERSION=$(ALPINE_VERSION)" \
> --build-arg "ASK_TAR=$(ASK_TAR)" \
> --build-arg "KERNEL_TAR=$(KERNEL_TAR)" \
> --build-arg "FMLIB_TAR=$(FMLIB_TAR)" \
> --build-arg "FMC_TAR=$(FMC_TAR)" \
> --build-arg "LIBNFNETLINK_TAR=$(LIBNFNETLINK_TAR)" \
> --build-arg "LIBNFCT_TAR=$(LIBNFCT_TAR)" \
> --build-arg "LIBCLI_TAR=$(LIBCLI_TAR)" \
> --build-arg "BUILD_TARGET=$(BUILD_TARGET)" \
> --build-arg "SOURCE_DATE_EPOCH=$(SOURCE_DATE_EPOCH)" \
> --build-arg "JOBS=$(JOBS)" \
> --build-arg "USERSPACE_CFLAGS=$(USERSPACE_CFLAGS)" \
> --build-arg "USERSPACE_LDFLAGS=$(USERSPACE_LDFLAGS)" \
> --target artifacts \
> --output "type=local,dest=$(OUT_DIR)" \
> .
ASK_IMAGE:
> docker buildx build \
> --platform="$(DOCKER_PLATFORM)" \
> --file docker/ask.Dockerfile \
> --build-arg "ALPINE_VERSION=$(ALPINE_VERSION)" \
> --build-arg "ASK_TAR=$(ASK_TAR)" \
> --build-arg "KERNEL_TAR=$(KERNEL_TAR)" \
> --build-arg "FMLIB_TAR=$(FMLIB_TAR)" \
> --build-arg "FMC_TAR=$(FMC_TAR)" \
> --build-arg "LIBNFNETLINK_TAR=$(LIBNFNETLINK_TAR)" \
> --build-arg "LIBNFCT_TAR=$(LIBNFCT_TAR)" \
> --build-arg "LIBCLI_TAR=$(LIBCLI_TAR)" \
> --build-arg "BUILD_TARGET=$(BUILD_TARGET)" \
> --build-arg "SOURCE_DATE_EPOCH=$(SOURCE_DATE_EPOCH)" \
> --build-arg "JOBS=$(JOBS)" \
> --build-arg "USERSPACE_CFLAGS=$(USERSPACE_CFLAGS)" \
> --build-arg "USERSPACE_LDFLAGS=$(USERSPACE_LDFLAGS)" \
> --load \
> --tag "$(IMAGE)" \
> .
# Examples:
# make ASK
# make ASK BUILD_TARGET=userspace
# make ASK DOCKER_PLATFORM=linux/amd64 BUILD_TARGET=userspace
# make ASK USERSPACE_LDFLAGS="-static-libgcc -static-libstdc++"
# make ASK ASK_TAR=packages/ASK.tar.gz KERNEL_TAR=packages/linux.tar.gz
```
The prompt's example command is syntactically incomplete. The correct `buildx` form needs a Dockerfile specified with `--file`, one `--build-arg KEY=VALUE` flag per argument, and a final positional build context such as `.`. For artifact builds, `--output type=local,dest=...` is the right default; `--load` is only for a single-platform image result.
### Upstream override files
**`docker/overrides/toolchain.mk`**
```make
CROSS_COMPILE ?=
ARCH ?= arm64
PLATFORM ?= LS1043A
CC ?= $(if $(CROSS_COMPILE),$(CROSS_COMPILE)gcc,gcc)
CXX ?= $(if $(CROSS_COMPILE),$(CROSS_COMPILE)g++,g++)
AR ?= $(if $(CROSS_COMPILE),$(CROSS_COMPILE)ar,ar)
STRIP ?= $(if $(CROSS_COMPILE),$(CROSS_COMPILE)strip,strip)
KDIR ?= /opt/kernel
```
**`docker/overrides/Makefile`**
```make
.RECIPEPREFIX := >
include build/toolchain.mk
include build/sources.mk
DEFCONFIG := $(CURDIR)/config/kernel/defconfig
DIST := $(CURDIR)/dist
SRCDIR := $(CURDIR)/sources
PATCHES := $(CURDIR)/patches
HOST ?= $(shell $(CC) -dumpmachine 2>/dev/null || echo aarch64-alpine-linux-musl)
FMLIB_DIR := $(SRCDIR)/fmlib
FMC_DIR := $(SRCDIR)/fmc/source
LIBFCI_DIR := $(CURDIR)/fci/lib
SYSROOT := $(SRCDIR)/sysroot
ABM_DIR := $(CURDIR)/auto_bridge
KBUILD_ARGS := CROSS_COMPILE=$(CROSS_COMPILE) ARCH=$(ARCH)
CDX_ARGS := $(KBUILD_ARGS) KERNELDIR=$(KDIR) PLATFORM=$(PLATFORM)
FCI_ARGS := $(KBUILD_ARGS) KERNEL_SOURCE=$(KDIR) BOARD_ARCH=$(ARCH) \
KBUILD_EXTRA_SYMBOLS=$(CURDIR)/cdx/Module.symvers
ABM_ARGS := $(KBUILD_ARGS) KERNEL_SOURCE=$(KDIR) PLATFORM=$(PLATFORM)
ASK_TAR ?=
FMLIB_TAR ?= /vendor/packages/fmlib-$(NXP_TAG).tar.gz
FMC_TAR ?= /vendor/packages/fmc-$(NXP_TAG).tar.gz
LIBNFNETLINK_TAR ?= /vendor/packages/libnfnetlink-$(LIBNFNETLINK_VER).tar.bz2
LIBNFCT_TAR ?= /vendor/packages/libnetfilter_conntrack-$(LIBNFCT_VER).tar.xz
S := $(SRCDIR)/.stamps
$(shell mkdir -p $(S))
JOBS ?= 1
USERSPACE_CFLAGS ?=
USERSPACE_LDFLAGS ?=
.PHONY: all setup sources modules userspace kernel dist clean clean-all help \
cdx fci auto_bridge fmc cmm dpa_app
all: modules userspace
setup:
> @echo "Container build: setup target intentionally disabled."
sources: $(S)/fmlib $(S)/fmc $(S)/libfci $(S)/libnfnetlink $(S)/libnfct
$(S)/fmlib:
> @echo "==> fmlib: extract + patch + build"
> rm -rf $(FMLIB_DIR)
> mkdir -p $(FMLIB_DIR)
> tar -xf "$(FMLIB_TAR)" --strip-components=1 -C $(FMLIB_DIR)
> cd $(FMLIB_DIR) && patch -p1 -i "$(PATCHES)/fmlib/01-mono-ask-extensions.patch"
> $(MAKE) -C $(FMLIB_DIR) CROSS_COMPILE=$(CROSS_COMPILE) KERNEL_SRC=$(KDIR) libfm-arm.a
> ln -sf libfm-arm.a $(FMLIB_DIR)/libfm.a
> touch $@
$(S)/fmc: $(S)/fmlib
> @echo "==> fmc: extract + patch + build"
> rm -rf $(SRCDIR)/fmc
> mkdir -p $(SRCDIR)/fmc
> tar -xf "$(FMC_TAR)" --strip-components=1 -C $(SRCDIR)/fmc
> cd $(SRCDIR)/fmc && patch -p1 -i "$(PATCHES)/fmc/01-mono-ask-extensions.patch"
> $(MAKE) -C $(FMC_DIR) \
> CC=$(CC) CXX=$(CXX) AR=$(AR) \
> MACHINE=ls1046 \
> FMD_USPACE_HEADER_PATH=$(FMLIB_DIR)/include/fmd \
> FMD_USPACE_LIB_PATH=$(FMLIB_DIR) \
> LIBXML2_HEADER_PATH=/usr/include/libxml2 \
> TCLAP_HEADER_PATH=/usr/include
> touch $@
$(S)/libfci:
> @echo "==> libfci: build"
> $(MAKE) -C $(LIBFCI_DIR) CC=$(CC) AR=$(AR)
> touch $@
$(S)/libnfnetlink:
> @echo "==> libnfnetlink: extract + patch + build"
> rm -rf $(SRCDIR)/libnfnetlink-$(LIBNFNETLINK_VER)
> mkdir -p $(SRCDIR) $(SYSROOT)
> tar -xf "$(LIBNFNETLINK_TAR)" -C $(SRCDIR)
> cd $(SRCDIR)/libnfnetlink-$(LIBNFNETLINK_VER) && \
> patch -p1 -i "$(PATCHES)/libnfnetlink/01-nxp-ask-nonblocking-heap-buffer.patch" && \
> ./configure --host=$(HOST) --prefix=$(SYSROOT) --enable-static --disable-shared && \
> $(MAKE) -j$(JOBS) && \
> $(MAKE) install
> touch $@
$(S)/libnfct: $(S)/libnfnetlink
> @echo "==> libnetfilter_conntrack: extract + patch + build"
> rm -rf $(SRCDIR)/libnetfilter_conntrack-$(LIBNFCT_VER)
> mkdir -p $(SRCDIR) $(SYSROOT)
> tar -xf "$(LIBNFCT_TAR)" -C $(SRCDIR)
> cd $(SRCDIR)/libnetfilter_conntrack-$(LIBNFCT_VER) && \
> patch -p1 -i "$(PATCHES)/libnetfilter-conntrack/01-nxp-ask-comcerto-fp-extensions.patch" && \
> PKG_CONFIG_PATH=$(SYSROOT)/lib/pkgconfig \
> ./configure --host=$(HOST) --prefix=$(SYSROOT) --enable-static --disable-shared \
> CFLAGS="-I$(SYSROOT)/include" LDFLAGS="-L$(SYSROOT)/lib" && \
> $(MAKE) -j$(JOBS) && \
> $(MAKE) install
> touch $@
modules: cdx fci auto_bridge
cdx:
> $(MAKE) -C cdx $(CDX_ARGS) modules
fci: cdx
> $(MAKE) -C fci $(FCI_ARGS) modules
auto_bridge:
> $(MAKE) -C auto_bridge $(ABM_ARGS)
userspace: fmc cmm dpa_app
fmc: $(S)/fmc
> @true
cmm: $(S)/libfci $(S)/libnfct
> $(MAKE) -C cmm CC=$(CC) \
> LIBFCI_DIR=$(LIBFCI_DIR) \
> ABM_DIR=$(ABM_DIR) \
> SYSROOT=$(SYSROOT) \
> CFLAGS="$(USERSPACE_CFLAGS) -I/usr/local/include" \
> LDFLAGS="$(USERSPACE_LDFLAGS) -L/usr/local/lib"
dpa_app: $(S)/fmc
> $(MAKE) -C dpa_app CC=$(CC) \
> CFLAGS="-DDPAA_DEBUG_ENABLE -DNCSW_LINUX $(USERSPACE_CFLAGS) \
> -I/usr/local/include \
> -I$(FMC_DIR) -I$(CURDIR)/cdx \
> -I$(FMLIB_DIR)/include/fmd \
> -I$(FMLIB_DIR)/include/fmd/Peripherals \
> -I$(FMLIB_DIR)/include/fmd/integrations" \
> LDFLAGS="-L/usr/local/lib -lpthread -lcli \
> -L$(FMC_DIR) -lfmc \
> -L$(FMLIB_DIR) -lfm \
> -lstdc++ -lxml2 -lm $(USERSPACE_LDFLAGS)"
kernel:
> cp $(DEFCONFIG) $(KDIR)/.config
> $(MAKE) -C $(KDIR) $(KBUILD_ARGS) olddefconfig
> $(MAKE) -C $(KDIR) $(KBUILD_ARGS) -j$(JOBS) Image modules
dist: all
> mkdir -p $(DIST)
> cp cdx/cdx.ko $(DIST)/
> cp fci/fci.ko $(DIST)/
> cp auto_bridge/auto_bridge.ko $(DIST)/
> cp $(FMC_DIR)/fmc $(DIST)/
> cp cmm/src/cmm $(DIST)/
> cp dpa_app/dpa_app $(DIST)/
> @echo "Artifacts staged in $(DIST)/"
clean:
> -$(MAKE) -C cdx $(CDX_ARGS) clean
> -$(MAKE) -C fci $(FCI_ARGS) clean
> -$(MAKE) -C auto_bridge $(ABM_ARGS) clean
> -$(MAKE) -C $(LIBFCI_DIR) clean
> -$(MAKE) -C cmm clean
> -$(MAKE) -C dpa_app clean
> rm -f $(S)/*
> rm -rf $(DIST)
clean-all: clean
> rm -rf $(SRCDIR)
help:
> @echo "make - build everything from vendored tarballs"
> @echo "make userspace - build userspace only"
> @echo "make modules - build out-of-tree kernel modules"
> @echo "make kernel - build kernel Image + in-tree modules"
> @echo "make dist - stage artifacts into dist/"
> @echo "make clean - clean local build artifacts"
> @echo "make clean-all - clean everything including extracted sources"
```
### Alpine multi-stage Dockerfile
**`docker/ask.Dockerfile`**
```dockerfile
# syntax=docker/dockerfile:1.7
#
# In CI, prefer pinning the base image by digest, for example:
# FROM alpine:3.22@sha256:<digest> AS base-build
ARG ALPINE_VERSION=3.22
FROM alpine:${ALPINE_VERSION} AS base-build
ARG ALPINE_VERSION
WORKDIR /work
RUN set -eux; \
printf '%s\n' \
"https://dl-cdn.alpinelinux.org/alpine/v${ALPINE_VERSION}/main" \
"https://dl-cdn.alpinelinux.org/alpine/v${ALPINE_VERSION}/community" \
> /etc/apk/repositories; \
apk add --no-cache \
bash \
bc \
bison \
build-base \
bzip2 \
coreutils \
file \
findutils \
flex \
gawk \
libmnl-dev \
libpcap-dev \
libxml2-dev \
linux-headers \
openssl-dev \
patch \
perl \
pkgconf \
python3 \
tar \
tclap-dev \
xz \
zlib-dev
COPY packages/ /vendor/packages/
COPY docker/overrides/ /docker-overrides/
RUN set -eux; \
if [ -f /vendor/packages/SHA256SUMS ]; then \
cd /vendor/packages && sha256sum -c SHA256SUMS; \
fi
FROM base-build AS libcli-builder
ARG LIBCLI_TAR=packages/libcli-1.10.7.tar.gz
RUN set -eux; \
mkdir -p /tmp/libcli; \
tar -xf "/vendor/${LIBCLI_TAR}" --strip-components=1 -C /tmp/libcli; \
make -C /tmp/libcli; \
make -C /tmp/libcli install; \
rm -rf /tmp/libcli
FROM base-build AS builder
ARG ASK_TAR=packages/ASK.tar.gz
ARG KERNEL_TAR=packages/linux.tar.gz
ARG FMLIB_TAR=packages/fmlib-lf-6.12.49-2.2.0.tar.gz
ARG FMC_TAR=packages/fmc-lf-6.12.49-2.2.0.tar.gz
ARG LIBNFNETLINK_TAR=packages/libnfnetlink-1.0.2.tar.bz2
ARG LIBNFCT_TAR=packages/libnetfilter_conntrack-1.1.0.tar.xz
ARG BUILD_TARGET=dist
ARG SOURCE_DATE_EPOCH=1704067200
ARG JOBS=0
ARG USERSPACE_CFLAGS=
ARG USERSPACE_LDFLAGS=
COPY --from=libcli-builder /usr/local/ /usr/local/
ENV LC_ALL=C
ENV TZ=UTC
ENV SOURCE_DATE_EPOCH=${SOURCE_DATE_EPOCH}
RUN set -eux; \
test -f "/vendor/${ASK_TAR}"; \
mkdir -p /work/src; \
tar -xf "/vendor/${ASK_TAR}" -C /work/src; \
test -d /work/src/ASK; \
rm -rf /work/src/ASK/.git; \
install -m 0644 /docker-overrides/Makefile /work/src/ASK/Makefile; \
install -m 0644 /docker-overrides/toolchain.mk /work/src/ASK/build/toolchain.mk; \
if [ ! -f /work/src/ASK/cmm/src/version.h ]; then \
printf '/* Auto-generated */\n#ifndef VERSION_H\n#define VERSION_H\n#define CMM_VERSION "%s"\n#endif\n' \
"tarball" > /work/src/ASK/cmm/src/version.h; \
fi
RUN set -eux; \
case "${BUILD_TARGET}" in \
all|modules|kernel|dist) \
test -f "/vendor/${KERNEL_TAR}" || { echo "KERNEL_TAR is required for BUILD_TARGET=${BUILD_TARGET}"; exit 2; }; \
mkdir -p /opt/kernel; \
tar -xf "/vendor/${KERNEL_TAR}" --strip-components=1 -C /opt/kernel; \
;; \
*) : ;; \
esac
RUN set -eux; \
if [ "${JOBS}" = "0" ]; then JOBS="$(getconf _NPROCESSORS_ONLN)"; fi; \
make -C /work/src/ASK \
FMLIB_TAR="/vendor/${FMLIB_TAR}" \
FMC_TAR="/vendor/${FMC_TAR}" \
LIBNFNETLINK_TAR="/vendor/${LIBNFNETLINK_TAR}" \
LIBNFCT_TAR="/vendor/${LIBNFCT_TAR}" \
KDIR=/opt/kernel \
JOBS="${JOBS}" \
USERSPACE_CFLAGS="${USERSPACE_CFLAGS}" \
USERSPACE_LDFLAGS="${USERSPACE_LDFLAGS}" \
"${BUILD_TARGET}"
RUN set -eux; \
mkdir -p /out; \
if [ -d /work/src/ASK/dist ]; then \
cp -a /work/src/ASK/dist/. /out/; \
else \
[ -f /work/src/ASK/cmm/src/cmm ] && cp /work/src/ASK/cmm/src/cmm /out/ || true; \
[ -f /work/src/ASK/dpa_app/dpa_app ] && cp /work/src/ASK/dpa_app/dpa_app /out/ || true; \
[ -f /work/src/ASK/sources/fmc/source/fmc ] && cp /work/src/ASK/sources/fmc/source/fmc /out/ || true; \
[ -f /work/src/ASK/cdx/cdx.ko ] && cp /work/src/ASK/cdx/cdx.ko /out/ || true; \
[ -f /work/src/ASK/fci/fci.ko ] && cp /work/src/ASK/fci/fci.ko /out/ || true; \
[ -f /work/src/ASK/auto_bridge/auto_bridge.ko ] && cp /work/src/ASK/auto_bridge/auto_bridge.ko /out/ || true; \
fi
FROM scratch AS artifacts
COPY --from=builder /out/ /
```
### Example invocations
```sh
# Full ASK build, including modules, exporting files into out/ask/
make ASK
# Userspace-only build when the kernel tarball is unavailable
make ASK BUILD_TARGET=userspace
# Single-platform image load instead of local artifact export
make ASK_IMAGE BUILD_TARGET=userspace IMAGE=ask-build:dev
# Alpine userspace with selective static GCC/C++ runtime linkage
make ASK BUILD_TARGET=userspace \
USERSPACE_LDFLAGS="-static-libgcc -static-libstdc++"
```
Sources and notes for the implementation above: `.dockerignore` behavior, Dockerfile-specific ignore precedence, build-context minimization, multi-stage builds, `buildx build` arguments, `--output type=local`, `--load`, `--platform`, and `ADD`/`COPY` semantics all come from Docker's official docs. The recommendation to use `COPY` for ordinary context files and reserve `ADD` for special cases is also straight from Docker's best-practices page. The Alpine repository split between `main`, `community`, and `testing` is official Alpine guidance, which matters here because `tclap-dev` lives in `community`. The `libcli` helper stage follows the upstream libcli README, which documents `make` and `make install` into `/usr/local/lib`.
## Debian-to-Alpine conversion guide
The important conversion is not just `apt` to `apk`. It is **glibc-centric Debian cross-build assumptions to musl-centric Alpine native-per-target builds**.
On Debian Trixie, `build-essential` and `crossbuild-essential-arm64` are official convenience packages. `build-essential` pulls in the default GCC, G++, libc development headers, and `make`. `crossbuild-essential-arm64` is Debian's official one-shot cross-build convenience package for arm64. Alpine's nearest equivalent for native builds is `build-base`, which explicitly depends on `gcc`, `g++`, `make`, and `libc-dev`; but Alpine does **not** provide an equivalent stable one-package story matching Debian's arm64 cross meta-package. That is why the report recommends `docker buildx build --platform=...` rather than rebuilding Debian's cross-toolchain pattern inside Alpine.
Alpine repository selection matters. Alpine's official repositories are `main`, `community`, and `testing`; stable branches normally use `main` and `community`, while `testing` is edge-only and unsupported as a stable dependency source. That is directly relevant here because `tclap-dev` is in Alpine `community`, while the visible `libcli` package in Alpine's official package index is in edge/testing rather than a normal stable `-dev` package flow. For a reproducible stable build, vendoring `libcli` as a tarball is cleaner than dragging edge/testing into a stable builder image.
### Debian and Alpine package mapping table
| Capability | Debian Trixie install command | Alpine 3.22 install command | Practical note |
|---|---|---|---|
| meta build toolchain | `apt-get update && apt-get install -y build-essential` | `apk add --no-cache build-base` | Rough native-build equivalents |
| arm64 cross meta-package | `apt-get install -y crossbuild-essential-arm64` | no stable one-package equivalent | Prefer `buildx --platform=linux/arm64` for Alpine |
| GCC C compiler | `apt-get install -y gcc` | `apk add --no-cache gcc` | Alpine `build-base` already brings it in |
| GCC C++ compiler | `apt-get install -y g++` | `apk add --no-cache g++` | Alpine `build-base` already brings it in |
| make | `apt-get install -y make` | `apk add --no-cache make` | Alpine `build-base` already brings it in |
| CMake | `apt-get install -y cmake` | `apk add --no-cache cmake` | Same user-facing package name |
| pkg-config tooling | `apt-get install -y pkgconf` | `apk add --no-cache pkgconf` | `pkgconf` is the practical package on both sides |
| OpenSSL headers/libs | `apt-get install -y libssl-dev` | `apk add --no-cache openssl-dev` | Direct development-package equivalents |
| zlib headers/libs | `apt-get install -y zlib1g-dev` | `apk add --no-cache zlib-dev` | Direct development-package equivalents |
| C++ runtime | `apt-get install -y libstdc++6` | `apk add --no-cache libstdc++` | Runtime package names differ |
| libc development headers | `apt-get install -y libc6-dev` | `apk add --no-cache musl-dev` | Alpine uses musl, not glibc |
| TCLAP headers | `apt-get install -y libtclap-dev` | `apk add --no-cache tclap-dev` | Alpine package is in `community` |
| libcli development files | `apt-get install -y libcli-dev` | vendor the libcli tarball and build it | Alpine stable does not give a clean `libcli-dev` replacement path |
Sources and notes for the table: Debian package naming and dependency roles come from the official Debian package pages for `build-essential`, `crossbuild-essential-arm64`, `pkgconf`, `libssl-dev`, `zlib1g-dev`, `libc6-dev`, `libcli-dev`, `g++`, and the Trixie package index entry showing `libtclap-dev`. Alpine naming and repository placement come from the official Alpine package database pages for `build-base`, `gcc`, `g++`, `make`, `cmake`, `pkgconf`, `openssl-dev`, `zlib-dev`, `libstdc++`, `musl-dev`, `tclap-dev`, `libpcap-dev`, `libmnl-dev`, `libxml2-dev`, and the official `libcli` package page in edge/testing.
## Passing tarballs safely and reproducibly
There are two different problems here, and they should not be confused.
If the tarball is **not secret**, the correct pattern is to pass a **path selector** as a build arg and copy the tarball into the build context. Docker's docs explicitly show `--build-arg` as the right mechanism for build-time parameterization, and the Dockerfile reference explicitly documents that local tar archives added with `ADD` are decompressed and extracted automatically. For normal reproducible CI, the cleaner pattern is still `COPY packages/ ...` plus explicit `tar -xf`, because it gives you better validation and less magic.
If the tarball **is secret or confidential**, do **not** rely on `ARG`. Docker's docs warn that build args and environment variables are inappropriate for secrets and point you to secret mounts instead. That is the secure answer.
### Non-secret build-arg pattern
```dockerfile
ARG ASK_TAR=packages/ASK.tar.gz
COPY packages/ /vendor/packages/
RUN test -f "/vendor/${ASK_TAR}" && mkdir -p /src && tar -xf "/vendor/${ASK_TAR}" -C /src
```
```make
ASK:
> docker buildx build \
> --file docker/ask.Dockerfile \
> --build-arg "ASK_TAR=$(ASK_TAR)" \
> .
```
### Short `ADD` pattern for a local tar archive
```dockerfile
ARG ASK_TAR=packages/ASK.tar.gz
ADD ${ASK_TAR} /src/
```
### Secret-mount pattern for confidential tarballs
```dockerfile
# syntax=docker/dockerfile:1.7
RUN --mount=type=secret,id=ask,target=/run/secrets/ASK.tar.gz \
mkdir -p /src && \
tar -xf /run/secrets/ASK.tar.gz -C /src
```
```sh
docker buildx build \
--secret id=ask,src=packages/ASK.tar.gz \
--file docker/ask.Dockerfile \
.
```
### Reproducibility controls worth enabling
Treat these as the minimum serious set:
- Pin the base image by digest in CI.
- Verify `SHA256SUMS` for every vendored tarball.
- Set `SOURCE_DATE_EPOCH`.
- Set `LC_ALL=C` and `TZ=UTC`.
- Remove `.git` after extraction or replace VCS-derived version generation.
- If any remote BuildKit source resolution remains, consider Dockers documented source policy feature.
The reason is simple: Docker tags are mutable, and `SOURCE_DATE_EPOCH` was created precisely so build systems can share a stable timestamp rather than leaking wall-clock time into artifacts.
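One way to keep `SOURCE_DATE_EPOCH` stable without hardcoding it is to derive it from the vendored input itself rather than from the wall clock, so rebuilding the same inputs always yields the same timestamp. A sketch assuming GNU `touch` and `stat` (the `/tmp/sde` path and the stand-in tarball are illustrative):

```shell
# Sketch: derive SOURCE_DATE_EPOCH from the input tarball's mtime,
# so the timestamp is a function of the inputs, not of build time.
set -eu
mkdir -p /tmp/sde
printf 'x' > /tmp/sde/ASK.tar.gz                       # stand-in tarball
touch -d '2024-01-01 00:00:00 UTC' /tmp/sde/ASK.tar.gz  # fixed mtime for the demo
SOURCE_DATE_EPOCH="$(stat -c %Y /tmp/sde/ASK.tar.gz)"   # GNU stat: mtime in epoch seconds
export SOURCE_DATE_EPOCH
echo "${SOURCE_DATE_EPOCH}"   # 1704067200, matching the Makefile default above
```

The same value then flows into `--build-arg SOURCE_DATE_EPOCH=...` and the normalized `tar --mtime` invocation, keeping every timestamp in the pipeline consistent.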
## Musl troubleshooting and decision rules
Most Alpine ports do not fail because of package names. They fail because the code assumes glibc semantics.
Alpine's own musl page says Alpine uses musl as its C standard library and that musl does not implement most locale features that glibc implements. The musl FAQ then gets more specific: common breakage comes from glibc-specific assumptions or wrong `#ifdef`s, GNU `getopt` behavior, iconv BOM and UCS2 assumptions, `off_t` width assumptions, and expectations that `pthread_create` gives you big glibc-sized stacks by default. That is not theory. Those are the actual usual failure modes.
For this ASK source tree, one obvious example is already handled: `cmm.c` only includes `execinfo.h` and uses `backtrace()` on glibc. That is the right pattern. If other source files still assume glibc-only headers or symbols, patch them the same way: guard them with `#if defined(__GLIBC__)`, provide a musl-safe fallback, or compile out the diagnostic-only path.
Threading and loader behavior are the next big traps. The musl docs say the default thread stack is much smaller than glibc's and can be increased explicitly with `pthread_attr_setstacksize`, or, since newer musl, via `-Wl,-z,stack-size=N`. The musl functional-differences page also says `dlclose()` semantics differ materially: under musl, constructors run only once and destructors run on exit, not on each unload/reload cycle as many glibc-focused programs implicitly assume. If the upstream relies on library unloading or reinitialization as a runtime feature, that is not an Alpine packaging bug; it is a portability bug in the application.
Some glibc gaps are small enough for compatibility shims. Alpine's official software-management page says that for simpler binaries you can install `gcompat`, which provides glibc-compatible APIs on musl systems. On aarch64, Alpine's package contents show that `gcompat` provides the glibc-style loader name and a `libc.so.6` compatibility path. That makes `gcompat` a legitimate option when the problem is a narrow runtime ABI expectation rather than deep glibc dependence.
When `gcompat` is not enough, stop wasting time. Alpine's own docs explicitly recommend containers or chroots for running glibc programs. If the upstream depends on glibc-only locale behavior, NSS/plugins, loader behavior, or opaque vendor binaries built for glibc, the pragmatic answer is a **glibc builder or runtime stage** for that component, even if the rest of your pipeline uses Alpine.
Alpine-specific shell and utility differences also matter. Alpine is built around musl and BusyBox, and Alpine's docs warn that BusyBox tools tend to implement only standard options and often lack GNU-specific extensions. If the upstream scripts assume GNU `sed`, GNU `find`, `bash`, or other non-POSIX behavior, install the needed packages explicitly or patch the scripts. The Dockerfile above does exactly that by installing `bash`, `coreutils`, `findutils`, `gawk`, and related tools.
One more gotcha: musl does not implement `utmp`; Alpine's docs say those functions are stubbed. If the upstream or its tests use `wall`, `who`, `w`, or similar libc-backed session accounting behavior, expect that to differ on Alpine.
### Practical fix list
Use these fixes in this order:
- **Wrong `#ifdef`s or missing glibc headers**
Patch to feature checks or `__GLIBC__` guards. ASK already does this for `execinfo.h`/`backtrace()` in `cmm.c`.
- **Tiny musl thread stacks**
Patch the app to call `pthread_attr_setstacksize`, or pass a linker stack-size hint with `-Wl,-z,stack-size=N` when appropriate.
- **Userspace portability while keeping dynamic linking**
Prefer normal musl dynamic linking first.
- **GCC runtime portability only**
Try `-static-libgcc -static-libstdc++` first. This is often enough when the program is otherwise musl-clean but you want to reduce deployment friction around the GCC runtimes.
- **Simple glibc ABI gaps at runtime**
Try `apk add gcompat`.
- **Deep glibc dependencies**
Use a glibc runtime image or Alpine-documented container/chroot strategy.
- **Full static linking**
Use only if every dependency is available static and you have checked licensing and target runtime needs. Do not default to `-static` blindly.
```mermaid
flowchart TD
A[Need Alpine runtime?] -->|No| B[Use Alpine builder only and export artifacts]
A -->|Yes| C{Does the binary run correctly on musl?}
C -->|Yes| D[Use Alpine runtime stage]
C -->|Minor glibc ABI loader gap| E[Try gcompat]
C -->|glibc-specific behavior or binary blob| F[Use glibc runtime image or chroot/container]
```
## Open questions and limitations
The supplied archive did not contain any original Dockerfiles, so the Alpine design above is a concrete replacement architecture, not a textual translation of preexisting container files.
The full module build depends on a matching kernel source tarball. The uploaded kernel archive makes that possible here, but in a generic tarball-only scenario you must state that dependency explicitly rather than silently assuming a host kernel tree exists.
The final deployment environment for ASK userspace binaries was not specified. That matters. If the target root filesystem is glibc-based and you want drop-in userspace binaries, musl-built artifacts may not be the right end state even if the build itself succeeds.
The implementation above is intentionally strict: no in-build `git clone`, no `wget`, no unpinned hidden source fetches, no dependence on host package managers inside the source tree, and no ambiguity about whether the tarball or the network is the source of truth. That is the reproducible answer.
# Reproducible Docker Build System for a Tarball-Supplied Upstream Project
## Executive summary
The right design is to treat the upstream project tarball as an input artifact, not something the build fetches for itself. That means: vendor every required source tarball into the build context, pin the container base image by digest, verify checksums before extraction, remove `.git` metadata after unpacking, set `SOURCE_DATE_EPOCH`, and export build artifacts with `docker buildx build --output type=local` rather than baking them into an opaque image layer. For container construction, the correct pattern is **“copy a vendored packages directory, select the tarball path with `ARG`, extract in `RUN`, build in a throwaway stage, and export only the outputs.”** Docker explicitly documents that multi-stage builds reduce final image size, that build arguments parameterize the Dockerfile but are not appropriate for secrets, that `.dockerignore` should keep the context small, and that `buildx` supports local artifact export and build secrets. Docker also recommends digest pinning for deterministic base-image selection.
For Alpine conversion, the key issue is not package-manager syntax; it is the libc boundary. Alpine uses musl, not glibc, and that changes package names, runtime behavior, and sometimes build logic. Alpine's own documentation is blunt that compiling codebases may be harder on Alpine because of musl, that `build-base` is the standard compiler meta-package, that stable reproducible builds should use stable repositories rather than edge/testing, and that `gcompat` is only a compatibility layer for some glibc-built binaries. musl's own documentation highlights the load-bearing differences: smaller default thread stacks, limited symbol-versioning support, `dlclose()` as a no-op, and common bugs caused by glibc-specific `#ifdef` logic.
Inspection of the supplied files changes the recommendation in one important way. The uploaded ASK source tarball is not self-contained: its upstream `Makefile` still clones `fmlib` and `fmc`, and downloads `libnfnetlink` and `libnetfilter_conntrack`. So a **reproducible/offline** ASK build requires vendoring **all** of those dependency tarballs as inputs alongside `ASK.tar.gz`, and overriding the upstream fetch logic to extract vendored tarballs instead of using `git clone` or `wget`. The supplied board build files also show a current kernel build flow that already merges `kernel-extra.config` into an NXP 6.12 kernel tree and checks the resolved `.config`. That is the right shape; one fragment line is wrong for this kernel. The reported error occurs because `CONFIG_NETFILTER_XTABLES_LEGACY=y` is required by the uploaded fragment, but that symbol does **not** exist in the uploaded `lf-6.12.49-2.2.0` kernel tree. In that tree, the relevant legacy iptables symbols are `CONFIG_IP_NF_IPTABLES_LEGACY` and `CONFIG_IP6_NF_IPTABLES_LEGACY`, and both are already present in the supplied fragment. Comparing the supplied fragment against the uploaded kernel tree shows that `CONFIG_NETFILTER_XTABLES_LEGACY` is the only missing symbol. The fix is to drop or version-gate that single line, not to weaken the checker globally.
## Findings from the supplied files
The supplied board-specific build files target an NXP Layerscape-oriented arm64 build with `NXP_VERSION=lf-6.12.49-2.2.0`, `ARCH=arm64`, `CROSS_COMPILE=aarch64-linux-gnu-`, and `DEVICE_TREE_TARGET=mono-gateway-dk-sdk`. The current root build uses a Debian Trixie-based builder image, downloads many artifacts into `packages/`, builds the kernel in a container, and exports artifacts locally with `--output type=local`. That overall shape is sound; the main weaknesses are network fetches during build and glibc-specific base images.
The supplied ASK tarball is a C/C++/Make-based project. Its upstream `Makefile` builds kernel modules and userspace components, but its “sources” phase still clones `fmlib` and `fmc` and downloads `libnfnetlink` and `libnetfilter_conntrack`. That means `ASK.tar.gz` alone is **not** enough for a fully reproducible build. You must vendor those dependency tarballs too, or the build will still depend on the network.
The uploaded ASK tree also contains a `.git/` directory inside the tarball. That is unusual for release tarballs and bad for reproducibility unless you intentionally need VCS-derived version stamping. In this tree, `cmm` already falls back to `"unknown"` when `git describe` is unavailable, but `cdx` only generates `version.h` if `.git` exists. That means a tarball-only build should remove `.git` for determinism **and** generate a stable placeholder `cdx/version.h` before building.
The supplied `cmm/src/cmm.c` already guards `execinfo.h` / `backtrace()` under `#if defined(__GLIBC__)`, which is exactly the right direction for musl portability: glibc-only code stays behind glibc checks, and non-glibc systems take the portable path. That is a good sign for Alpine migration.
### Kernel mismatch diagnosis
The specific kernel error you reported:
```text
kconfig mismatch: CONFIG_NETFILTER_XTABLES_LEGACY
expected: CONFIG_NETFILTER_XTABLES_LEGACY=y
actual: <missing>
error: resolved kernel config does not satisfy /tmp/kernel-extra.config
```
is not a generic dependency failure. It is a **fragment/tree mismatch**.
The supplied `kernel-extra.config` contains all three of these settings:
```text
CONFIG_NETFILTER_XTABLES_LEGACY=y
CONFIG_IP_NF_IPTABLES_LEGACY=y
CONFIG_IP6_NF_IPTABLES_LEGACY=y
```
In the uploaded NXP 6.12 kernel tree, `CONFIG_IP_NF_IPTABLES_LEGACY` and `CONFIG_IP6_NF_IPTABLES_LEGACY` exist, but `CONFIG_NETFILTER_XTABLES_LEGACY` does not. That newer top-level gate belongs to later kernels; it is not valid for this tree. So the direct fix is:
```diff
-CONFIG_NETFILTER_XTABLES_LEGACY=y
```
and nothing more. If you want one fragment to span multiple kernel lines, you must filter unsupported symbols before `olddefconfig`, or enable Kconfig's unknown-symbol warnings and treat them as fatal. The kernel docs explicitly provide `KCONFIG_WARN_UNKNOWN_SYMBOLS` and `KCONFIG_WERROR` for this purpose, and also recommend `make listnewconfig` / `scripts/diffconfig` to inspect config drift.
## Reproducible build design
The build should be split into four distinct responsibilities.
First, a **host vendoring step** collects all source archives into `packages/` and writes `SHA256SUMS`. That step is allowed to touch the network if your policy permits; the container build should not. If the tarballs are private or sensitive, pass them as BuildKit secrets instead of regular build args. Docker's docs are explicit: build args are not for secrets, because they may appear in image history and provenance metadata, while `RUN --mount=type=secret` makes the file available only for that instruction.
Second, the container build should copy the vendored package directory into the builder stage, verify checksums, and extract only what the build needs. This is why the build arg should select a **path inside the already-copied vendor directory**, not try to trigger host-side fetching logic. In practice, that means `ARG ASK_TAR=packages/ASK.tar.gz`, then `tar -xf "/vendor/${ASK_TAR}" ...` inside `RUN`.
Third, for module builds, a matching `KERNEL_TAR` must be supplied and configured inside the build. The Linux kernel documentation is clear that external modules need a kernel tree with the right configuration and headers, and that `modules_prepare` is not enough when `CONFIG_MODVERSIONS` matters because it does not produce `Module.symvers`. If you are building out-of-tree modules against a board kernel, the safe rule is: **if the project builds kernel modules, require a matching kernel source tarball and either run a full kernel build or at least a preparation step appropriate to your versioning model.**
Fourth, export artifacts directly to the host filesystem using `--output type=local,dest=...`. Docker documents this as a first-class `buildx` output mode, and it is the cleanest way to make build results explicit and inspectable.
```mermaid
flowchart LR
A[Vendor tarballs on host\nASK + dependency tarballs + optional KERNEL_TAR] --> B[Generate SHA256SUMS]
B --> C[docker buildx build]
C --> D[base-build stage\napk add toolchain]
D --> E[Verify SHA256SUMS]
E --> F[Extract ASK tarball]
F --> G[Replace fetch logic with vendored tarball extraction]
C --> H{KERNEL_TAR supplied?}
H -- yes --> I[Extract kernel tree]
I --> J[Filter kernel fragment\nmerge_config.sh\nolddefconfig]
J --> K[Build kernel or modules_prepare]
K --> L[Build ASK kernel modules]
H -- no --> M[Build userspace only]
G --> M
M --> N[Collect dist/]
L --> N
N --> O[artifacts stage]
O --> P[--output type=local]
O --> Q[ASK_IMAGE minimal artifact image]
```
## Concrete implementation
### Assumptions
Where the prompt does not specify details, these are the assumptions I am making rather than inventing facts:
- The supplied ASK project is a C/C++/Make project because that is what the uploaded tarball contains.
- The generic design below is still valid for CMake, Autotools, Go, Python, and Node projects, but the builder stage package set and build commands should be adjusted accordingly.
- The examples default to `linux/arm64` because your uploaded board build files target arm64.
- Tarballs are assumed to have a single top-level directory, which is normal for release archives and `git archive` outputs.
- GNU tar semantics are assumed for the normalization snippet.
- `KERNEL_TAR` is optional for userspace-only builds and mandatory for kernel-module builds.
- Base-image digests are shown as placeholders because you asked for the pattern, not a locked digest for one exact tag.
### Makefile
The example in your prompt needs one correction: the valid flag is `--build-arg`, not `--args`, and the Dockerfile is specified with `-f/--file`. This Makefile gives you a tarball-driven `ASK` target and an `ASK_IMAGE` target that loads a minimal artifact image locally.
```make
.RECIPEPREFIX := >
DOCKER_BUILDKIT ?= 1
export DOCKER_BUILDKIT
PLATFORM ?= linux/arm64
ALPINE_VERSION ?= 3.22
ASK_NAME ?= ask
ASK_OUT ?= out/ask
ASK_IMAGE_NAME ?= local/$(ASK_NAME):dev
# Required vendored inputs
ASK_TAR ?= packages/ASK.tar.gz
# Optional vendored inputs for offline/reproducible dependency builds
FMLIB_TAR ?= packages/fmlib-lf-6.12.49-2.2.0.tar.gz
FMC_TAR ?= packages/fmc-lf-6.12.49-2.2.0.tar.gz
LIBNFNETLINK_TAR?= packages/libnfnetlink-1.0.2.tar.bz2
LIBNFCT_TAR ?= packages/libnetfilter_conntrack-1.1.0.tar.xz
LIBCLI_TAR ?= packages/libcli-1.10.7.tar.gz
# Required only if you build kernel modules
KERNEL_TAR ?=
KERNEL_EXTRA ?= docker/kernel-extra.config
ARCH ?= arm64
CROSS_COMPILE ?=
BUILD_TARGET ?= dist
SOURCE_DATE_EPOCH ?= 1715731200
PROGRESS ?= plain
COMMON_BUILD_ARGS = \
--build-arg ALPINE_VERSION=$(ALPINE_VERSION) \
--build-arg ASK_TAR=$(ASK_TAR) \
--build-arg FMLIB_TAR=$(FMLIB_TAR) \
--build-arg FMC_TAR=$(FMC_TAR) \
--build-arg LIBNFNETLINK_TAR=$(LIBNFNETLINK_TAR) \
--build-arg LIBNFCT_TAR=$(LIBNFCT_TAR) \
--build-arg LIBCLI_TAR=$(LIBCLI_TAR) \
--build-arg KERNEL_TAR=$(KERNEL_TAR) \
--build-arg ARCH=$(ARCH) \
--build-arg CROSS_COMPILE=$(CROSS_COMPILE) \
--build-arg BUILD_TARGET=$(BUILD_TARGET) \
--build-arg SOURCE_DATE_EPOCH=$(SOURCE_DATE_EPOCH)
.PHONY: ASK ASK_IMAGE clean
ASK:
> mkdir -p $(ASK_OUT)
> docker buildx build \
> --platform $(PLATFORM) \
> --progress=$(PROGRESS) \
> -f docker/ask.Dockerfile \
> $(COMMON_BUILD_ARGS) \
> --target artifacts \
> --output type=local,dest=$(ASK_OUT) \
> .
ASK_IMAGE:
> docker buildx build \
> --platform $(PLATFORM) \
> --progress=$(PROGRESS) \
> -f docker/ask.Dockerfile \
> $(COMMON_BUILD_ARGS) \
> --target runtime \
> --load \
> -t $(ASK_IMAGE_NAME) \
> .
clean:
> rm -rf $(ASK_OUT)
```
Example invocations:
```bash
# Userspace-only artifact export
make ASK ASK_TAR=packages/ASK.tar.gz BUILD_TARGET=userspace
# Full artifact export, including module build against a matching kernel tree
make ASK \
ASK_TAR=packages/ASK.tar.gz \
KERNEL_TAR=packages/lf-6.12.49-2.2.0.tar.gz \
BUILD_TARGET=dist \
ARCH=arm64 \
CROSS_COMPILE=
# Load a minimal artifact image into the local Docker image store
make ASK_IMAGE ASK_TAR=packages/ASK.tar.gz
# Override SOURCE_DATE_EPOCH for a release
make ASK SOURCE_DATE_EPOCH=1735689600
```
### `docker/ask.Dockerfile`
This Dockerfile does four things that matter: it verifies vendored source archives, extracts the selected tarball based on a build arg, replaces fetch logic with vendored extraction through an override Makefile, and supports optional kernel-tree injection for module builds.
```dockerfile
# syntax=docker/dockerfile:1.7
ARG ALPINE_VERSION=3.22
FROM alpine:${ALPINE_VERSION} AS base-build
ARG SOURCE_DATE_EPOCH
ENV SOURCE_DATE_EPOCH=${SOURCE_DATE_EPOCH}
ENV LC_ALL=C.UTF-8
ENV TZ=UTC
# Replace the tag with a digest in CI/release builds:
# FROM alpine:${ALPINE_VERSION}@sha256:<digest> AS base-build
RUN apk add --no-cache \
bash bc bison build-base cpio coreutils diffutils file findutils flex \
gawk git jq kmod libelf-dev libmnl-dev libpcap-dev libxml2-dev \
linux-headers openssl-dev patch perl pkgconf python3 rsync tar tclap-dev \
xz zlib-dev
WORKDIR /work
# Copy vendored packages once; select the specific tarball path with ARG later.
COPY packages/ /vendor/packages/
COPY docker/overrides/ /docker-overrides/
COPY scripts/filter-kconfig-fragment.sh /usr/local/bin/filter-kconfig-fragment.sh
COPY docker/kernel-extra.config /tmp/kernel-extra.config
RUN chmod +x /usr/local/bin/filter-kconfig-fragment.sh && \
if [ -f /vendor/packages/SHA256SUMS ]; then \
cd /vendor/packages && sha256sum -c SHA256SUMS; \
fi
FROM base-build AS builder
ARG ASK_TAR=packages/ASK.tar.gz
ARG FMLIB_TAR=packages/fmlib-lf-6.12.49-2.2.0.tar.gz
ARG FMC_TAR=packages/fmc-lf-6.12.49-2.2.0.tar.gz
ARG LIBNFNETLINK_TAR=packages/libnfnetlink-1.0.2.tar.bz2
ARG LIBNFCT_TAR=packages/libnetfilter_conntrack-1.1.0.tar.xz
ARG LIBCLI_TAR=packages/libcli-1.10.7.tar.gz
ARG KERNEL_TAR=
ARG ARCH=arm64
ARG CROSS_COMPILE=
ARG BUILD_TARGET=dist
ARG SOURCE_DATE_EPOCH
ENV SOURCE_DATE_EPOCH=${SOURCE_DATE_EPOCH}
ENV KBUILD_BUILD_TIMESTAMP=@${SOURCE_DATE_EPOCH}
ENV KBUILD_BUILD_USER=repro
ENV KBUILD_BUILD_HOST=repro-host
# Optional vendored libcli because Alpine stable does not have a dependable stable libcli-dev path.
RUN if [ -n "${LIBCLI_TAR}" ] && [ -f "/vendor/${LIBCLI_TAR}" ]; then \
mkdir -p /tmp/libcli-src /usr/local && \
tar -xf "/vendor/${LIBCLI_TAR}" --strip-components=1 -C /tmp/libcli-src && \
make -C /tmp/libcli-src && \
make -C /tmp/libcli-src PREFIX=/usr/local install; \
fi
# Extract ASK from the provided tarball and replace upstream fetch logic.
RUN mkdir -p /work/src/ASK && \
tar -xf "/vendor/${ASK_TAR}" --strip-components=1 -C /work/src/ASK && \
rm -rf /work/src/ASK/.git && \
install -m 0644 /docker-overrides/Makefile /work/src/ASK/Makefile && \
install -m 0644 /docker-overrides/toolchain.mk /work/src/ASK/build/toolchain.mk && \
mkdir -p /work/src/ASK/cdx && \
printf '%s\n' \
'/* Auto-generated for tarball builds */' \
'#ifndef CDX_VERSION_H' \
'#define CDX_VERSION_H' \
'#define CDX_GIT_VERSION "tarball"' \
'#endif' \
> /work/src/ASK/cdx/version.h
# Optional matching kernel tree, required for module builds.
RUN if [ -n "${KERNEL_TAR}" ]; then \
mkdir -p /opt/kernel && \
tar -xf "/vendor/${KERNEL_TAR}" --strip-components=1 -C /opt/kernel && \
/usr/local/bin/filter-kconfig-fragment.sh /opt/kernel /tmp/kernel-extra.config > /tmp/kernel-extra.effective.config && \
make -C /opt/kernel ARCH="${ARCH}" CROSS_COMPILE="${CROSS_COMPILE}" defconfig && \
/opt/kernel/scripts/kconfig/merge_config.sh -m /opt/kernel/.config /tmp/kernel-extra.effective.config && \
KCONFIG_WARN_UNKNOWN_SYMBOLS=1 KCONFIG_WERROR=1 \
make -C /opt/kernel ARCH="${ARCH}" CROSS_COMPILE="${CROSS_COMPILE}" olddefconfig && \
make -C /opt/kernel ARCH="${ARCH}" CROSS_COMPILE="${CROSS_COMPILE}" -j"$(nproc)" modules_prepare; \
fi
WORKDIR /work/src/ASK
ENV KDIR=/opt/kernel
ENV TAR_DIR=/vendor/packages
ENV FMLIB_TAR=/vendor/${FMLIB_TAR}
ENV FMC_TAR=/vendor/${FMC_TAR}
ENV LIBNFNETLINK_TAR=/vendor/${LIBNFNETLINK_TAR}
ENV LIBNFCT_TAR=/vendor/${LIBNFCT_TAR}
RUN case "${BUILD_TARGET}" in \
userspace) make userspace ;; \
modules) test -n "${KERNEL_TAR}" && make modules ;; \
dist) test -n "${KERNEL_TAR}" && make dist ;; \
*) echo "unsupported BUILD_TARGET=${BUILD_TARGET}" >&2; exit 2 ;; \
esac && \
mkdir -p /out && \
cp -a dist/. /out/
FROM scratch AS artifacts
COPY --from=builder /out/ /
# Minimal artifact image, not a runtime service image.
FROM scratch AS runtime
COPY --from=builder /out/ /opt/ask/
```
### `docker/overrides/toolchain.mk`
This keeps the upstream layout but makes the toolchain configurable and points the kernel directory at the injected kernel tree.
```make
# Toolchain override for reproducible container builds
CROSS_COMPILE ?=
ARCH ?= arm64
PLATFORM ?= LS1043A
CC := $(if $(CROSS_COMPILE),$(CROSS_COMPILE)gcc,gcc)
CXX := $(if $(CROSS_COMPILE),$(CROSS_COMPILE)g++,g++)
AR := $(if $(CROSS_COMPILE),$(CROSS_COMPILE)ar,ar)
STRIP := $(if $(CROSS_COMPILE),$(CROSS_COMPILE)strip,strip)
# Injected by Dockerfile when building modules
KDIR ?= /opt/kernel
```
### `docker/overrides/Makefile`
This is the critical part. It preserves the upstream build shape but replaces network fetches with vendored tarball extraction. The intent is simple: **no `git clone`, no `wget`, no `curl` inside the build.**
```make
include build/toolchain.mk
include build/sources.mk
DEFCONFIG := $(KDIR)/.config
DIST := $(CURDIR)/dist
SRCDIR := $(CURDIR)/sources
PATCHES := $(CURDIR)/patches
# Derive the configure --host triplet by stripping the trailing dash
HOST := $(CROSS_COMPILE:%-=%)
SYSROOT := $(SRCDIR)/sysroot
FMLIB_DIR := $(SRCDIR)/fmlib
FMC_DIR := $(SRCDIR)/fmc/source
LIBFCI_DIR := $(CURDIR)/fci/lib
ABM_DIR := $(CURDIR)/auto_bridge
S := $(SRCDIR)/.stamps
$(shell mkdir -p $(S))
FMLIB_TAR ?= /vendor/packages/fmlib-$(NXP_TAG).tar.gz
FMC_TAR ?= /vendor/packages/fmc-$(NXP_TAG).tar.gz
LIBNFNETLINK_TAR ?= /vendor/packages/libnfnetlink-$(LIBNFNETLINK_VER).tar.bz2
LIBNFCT_TAR ?= /vendor/packages/libnetfilter_conntrack-$(LIBNFCT_VER).tar.xz
KBUILD_ARGS := CROSS_COMPILE=$(CROSS_COMPILE) ARCH=$(ARCH)
CDX_ARGS := $(KBUILD_ARGS) KERNELDIR=$(KDIR) PLATFORM=$(PLATFORM)
FCI_ARGS := $(KBUILD_ARGS) KERNEL_SOURCE=$(KDIR) BOARD_ARCH=$(ARCH) \
KBUILD_EXTRA_SYMBOLS=$(CURDIR)/cdx/Module.symvers
ABM_ARGS := $(KBUILD_ARGS) KERNEL_SOURCE=$(KDIR) PLATFORM=$(PLATFORM)
.PHONY: all sources modules userspace dist clean clean-all
all: modules userspace
sources: $(S)/fmlib $(S)/fmc $(S)/libfci $(S)/libnfnetlink $(S)/libnfct
$(S)/fmlib:
test -f "$(FMLIB_TAR)"
rm -rf "$(FMLIB_DIR)"
mkdir -p "$(FMLIB_DIR)"
tar -xf "$(FMLIB_TAR)" --strip-components=1 -C "$(FMLIB_DIR)"
cd "$(FMLIB_DIR)" && patch -p1 < "$(PATCHES)/fmlib/01-mono-ask-extensions.patch"
$(MAKE) -C "$(FMLIB_DIR)" CROSS_COMPILE="$(CROSS_COMPILE)" KERNEL_SRC="$(KDIR)" libfm-arm.a
ln -sf libfm-arm.a "$(FMLIB_DIR)/libfm.a"
touch $@
$(S)/fmc: $(S)/fmlib
test -f "$(FMC_TAR)"
rm -rf "$(SRCDIR)/fmc"
mkdir -p "$(SRCDIR)/fmc"
tar -xf "$(FMC_TAR)" --strip-components=1 -C "$(SRCDIR)/fmc"
cd "$(SRCDIR)/fmc" && patch -p1 < "$(PATCHES)/fmc/01-mono-ask-extensions.patch"
$(MAKE) -C "$(FMC_DIR)" \
CC="$(CC)" CXX="$(CXX)" AR="$(AR)" \
MACHINE=ls1046 \
FMD_USPACE_HEADER_PATH="$(FMLIB_DIR)/include/fmd" \
FMD_USPACE_LIB_PATH="$(FMLIB_DIR)" \
LIBXML2_HEADER_PATH=/usr/include/libxml2 \
TCLAP_HEADER_PATH=/usr/include
touch $@
$(S)/libfci:
$(MAKE) -C "$(LIBFCI_DIR)" CC="$(CC)" AR="$(AR)"
touch $@
$(S)/libnfnetlink:
test -f "$(LIBNFNETLINK_TAR)"
rm -rf "$(SRCDIR)/libnfnetlink-$(LIBNFNETLINK_VER)"
mkdir -p "$(SRCDIR)/libnfnetlink-$(LIBNFNETLINK_VER)"
tar -xf "$(LIBNFNETLINK_TAR)" --strip-components=1 -C "$(SRCDIR)/libnfnetlink-$(LIBNFNETLINK_VER)"
cd "$(SRCDIR)/libnfnetlink-$(LIBNFNETLINK_VER)" && \
patch -p1 < "$(PATCHES)/libnfnetlink/01-nxp-ask-nonblocking-heap-buffer.patch" && \
./configure --host="$(HOST)" --prefix="$(SYSROOT)" --enable-static --disable-shared -q && \
$(MAKE) -j$$(nproc) -s && $(MAKE) install -s
touch $@
$(S)/libnfct: $(S)/libnfnetlink
test -f "$(LIBNFCT_TAR)"
rm -rf "$(SRCDIR)/libnetfilter_conntrack-$(LIBNFCT_VER)"
mkdir -p "$(SRCDIR)/libnetfilter_conntrack-$(LIBNFCT_VER)"
tar -xf "$(LIBNFCT_TAR)" --strip-components=1 -C "$(SRCDIR)/libnetfilter_conntrack-$(LIBNFCT_VER)"
cd "$(SRCDIR)/libnetfilter_conntrack-$(LIBNFCT_VER)" && \
patch -p1 < "$(PATCHES)/libnetfilter-conntrack/01-nxp-ask-comcerto-fp-extensions.patch" && \
PKG_CONFIG_PATH="$(SYSROOT)/lib/pkgconfig" \
./configure --host="$(HOST)" --prefix="$(SYSROOT)" --enable-static --disable-shared -q \
CFLAGS="-I$(SYSROOT)/include" LDFLAGS="-L$(SYSROOT)/lib" && \
$(MAKE) -j$$(nproc) -s && $(MAKE) install -s
touch $@
modules: sources
test -d "$(KDIR)"
$(MAKE) -C cdx $(CDX_ARGS)
$(MAKE) -C fci $(FCI_ARGS)
$(MAKE) -C auto_bridge $(ABM_ARGS)
userspace: sources
$(MAKE) -C "$(FMC_DIR)" \
CC="$(CC)" CXX="$(CXX)" AR="$(AR)" \
MACHINE=ls1046 \
FMD_USPACE_HEADER_PATH="$(FMLIB_DIR)/include/fmd" \
FMD_USPACE_LIB_PATH="$(FMLIB_DIR)" \
LIBXML2_HEADER_PATH=/usr/include/libxml2 \
TCLAP_HEADER_PATH=/usr/include
$(MAKE) -C cmm \
CC="$(CC)" \
PKG_CONFIG=pkg-config \
PKG_CONFIG_PATH="$(SYSROOT)/lib/pkgconfig:/usr/lib/pkgconfig:/usr/share/pkgconfig" \
LDFLAGS="-L$(SYSROOT)/lib -L/usr/local/lib" \
CFLAGS="-I$(SYSROOT)/include -I/usr/local/include"
$(MAKE) -C dpa_app \
CC="$(CC)" \
CFLAGS="-I$(FMLIB_DIR)/include -I$(FMC_DIR)/libfci_cli/src" \
LDFLAGS="-lpthread -lcli -L/usr/local/lib -L$(FMC_DIR) -lfmc -L$(FMLIB_DIR) -lfm -lstdc++ -lxml2 -lm"
dist: modules userspace
mkdir -p "$(DIST)"
cp cdx/cdx.ko "$(DIST)/"
cp fci/fci.ko "$(DIST)/"
cp auto_bridge/auto_bridge.ko "$(DIST)/"
cp "$(FMC_DIR)/fmc" "$(DIST)/"
cp cmm/src/cmm "$(DIST)/"
cp dpa_app/dpa_app "$(DIST)/"
clean:
-$(MAKE) -C cdx $(CDX_ARGS) clean
-$(MAKE) -C fci $(FCI_ARGS) clean
-$(MAKE) -C auto_bridge $(ABM_ARGS) clean
-$(MAKE) -C "$(LIBFCI_DIR)" clean
-$(MAKE) -C cmm clean
-$(MAKE) -C dpa_app clean
rm -rf "$(DIST)"
rm -f "$(S)"/*
clean-all: clean
rm -rf "$(SRCDIR)"
```
### `.dockerignore`
Keep the context brutally small. That improves cache behavior and reduces the chance that unrelated files perturb reproducibility.
```dockerignore
**
!docker/**
!scripts/**
!packages/**
!Makefile
```
### Host vendoring script
This is the clean place to fetch and lock dependency tarballs. It can pull from upstream or from your own artifact mirror; the container build stays offline either way.
```bash
#!/usr/bin/env bash
set -euo pipefail
mkdir -p packages
# Required
cp "${ASK_TAR_SRC:?set ASK_TAR_SRC}" packages/ASK.tar.gz
# Optional but required for full offline ASK builds. Guarded with `if` so
# `set -e` does not abort the script when an optional *_SRC variable is unset
# (a bare `[ -n ... ] && cp ...` list fails under errexit when the test fails).
if [ -n "${FMLIB_TAR_SRC:-}" ]; then cp "${FMLIB_TAR_SRC}" packages/fmlib-lf-6.12.49-2.2.0.tar.gz; fi
if [ -n "${FMC_TAR_SRC:-}" ]; then cp "${FMC_TAR_SRC}" packages/fmc-lf-6.12.49-2.2.0.tar.gz; fi
if [ -n "${LIBNFNETLINK_TAR_SRC:-}" ]; then cp "${LIBNFNETLINK_TAR_SRC}" packages/libnfnetlink-1.0.2.tar.bz2; fi
if [ -n "${LIBNFCT_TAR_SRC:-}" ]; then cp "${LIBNFCT_TAR_SRC}" packages/libnetfilter_conntrack-1.1.0.tar.xz; fi
if [ -n "${LIBCLI_TAR_SRC:-}" ]; then cp "${LIBCLI_TAR_SRC}" packages/libcli-1.10.7.tar.gz; fi
if [ -n "${KERNEL_TAR_SRC:-}" ]; then cp "${KERNEL_TAR_SRC}" packages/lf-6.12.49-2.2.0.tar.gz; fi
(
  cd packages
  rm -f SHA256SUMS
  # nullglob keeps unmatched patterns from reaching sha256sum as literal
  # filenames, which would fail under `set -o pipefail`.
  shopt -s nullglob
  files=(*.tar.gz *.tar.xz *.tar.bz2)
  [ "${#files[@]}" -gt 0 ] || { echo "no tarballs in packages/" >&2; exit 1; }
  sha256sum "${files[@]}" | sort -k2 > SHA256SUMS
)
```
### Tarball normalization snippet
Use this on the host if you need to strip VCS noise and normalize timestamps before vendoring.
```bash
#!/usr/bin/env bash
set -euo pipefail
: "${SOURCE_DATE_EPOCH:?set SOURCE_DATE_EPOCH}"
: "${IN_TAR:?set IN_TAR}"
: "${OUT_TAR:?set OUT_TAR}"
tmp="$(mktemp -d)"
trap 'rm -rf "$tmp"' EXIT
mkdir -p "$tmp/src"
tar -xf "$IN_TAR" -C "$tmp/src"
root="$(find "$tmp/src" -mindepth 1 -maxdepth 1 -type d | head -n1)"
rm -rf "$root/.git"
find "$root" -exec touch -h -d "@${SOURCE_DATE_EPOCH}" {} +
tar --sort=name \
--mtime="@${SOURCE_DATE_EPOCH}" \
--owner=0 --group=0 --numeric-owner \
-czf "$OUT_TAR" -C "$tmp/src" "$(basename "$root")"
```
### Kernel fragment filtering and concrete fix for `CONFIG_NETFILTER_XTABLES_LEGACY`
This script solves the exact mismatch you reported without weakening verification for valid symbols.
```bash
#!/usr/bin/env bash
set -euo pipefail
KERNEL_DIR="${1:?kernel dir required}"
FRAGMENT="${2:?fragment required}"
symbol_exists() {
  local sym="${1#CONFIG_}"
  # Search every Kconfig file in the tree. A hardcoded directory list would
  # silently miss symbols defined under e.g. block/, crypto/, or io_uring/.
  grep -rqsE "^[[:space:]]*(menu)?config[[:space:]]+${sym}\b" \
    --include='Kconfig*' "$KERNEL_DIR"
}
while IFS= read -r line || [ -n "$line" ]; do
stripped="$(printf '%s\n' "$line" | sed 's/^[[:space:]]*//; s/[[:space:]]*$//')"
[ -z "$stripped" ] && { printf '\n'; continue; }
case "$stripped" in
\#\ CONFIG_*" is not set")
sym="$(printf '%s' "$stripped" | sed 's/^# \(CONFIG_[^ ]*\) is not set$/\1/')"
symbol_exists "$sym" && printf '%s\n' "$stripped"
;;
CONFIG_*=*)
sym="${stripped%%=*}"
symbol_exists "$sym" && printf '%s\n' "$stripped"
;;
\#*)
printf '%s\n' "$stripped"
;;
*)
printf '%s\n' "$stripped"
;;
esac
done < "$FRAGMENT"
```
For your exact fragment, the direct patch is just this:
```diff
--- a/docker/kernel-extra.config
+++ b/docker/kernel-extra.config
@@
-CONFIG_NETFILTER_XTABLES_LEGACY=y
```
If you want to inject and verify a corrected kernel config inside the Docker build before module compilation, use this sequence:
```dockerfile
RUN mkdir -p /opt/kernel && \
tar -xf "/vendor/${KERNEL_TAR}" --strip-components=1 -C /opt/kernel && \
/usr/local/bin/filter-kconfig-fragment.sh /opt/kernel /tmp/kernel-extra.config > /tmp/kernel-extra.effective.config && \
make -C /opt/kernel ARCH="${ARCH}" CROSS_COMPILE="${CROSS_COMPILE}" defconfig && \
/opt/kernel/scripts/kconfig/merge_config.sh -m /opt/kernel/.config /tmp/kernel-extra.effective.config && \
KCONFIG_WARN_UNKNOWN_SYMBOLS=1 KCONFIG_WERROR=1 \
make -C /opt/kernel ARCH="${ARCH}" CROSS_COMPILE="${CROSS_COMPILE}" olddefconfig && \
grep -E '^CONFIG_(NETFILTER_XTABLES|IP_NF_IPTABLES|IP_NF_IPTABLES_LEGACY|IP6_NF_IPTABLES|IP6_NF_IPTABLES_LEGACY)=' /opt/kernel/.config
```
If you prefer imperative fixes over fragment files, `scripts/config` is fine too:
```bash
cd /opt/kernel
scripts/config --file .config \
-e NETFILTER \
-e NETFILTER_ADVANCED \
-e NETFILTER_XTABLES \
-e IP_NF_IPTABLES \
-e IP_NF_IPTABLES_LEGACY \
-e IP_NF_FILTER \
-e IP_NF_MANGLE \
-e IP_NF_NAT \
-e IP6_NF_IPTABLES \
-e IP6_NF_IPTABLES_LEGACY \
-e IP6_NF_FILTER \
-e IP6_NF_MANGLE \
-e IP6_NF_NAT
# Only enable if the symbol actually exists in this tree.
grep -RqsE '^[[:space:]]*(menu)?config[[:space:]]+NETFILTER_XTABLES_LEGACY\b' . && \
scripts/config --file .config -e NETFILTER_XTABLES_LEGACY || true
KCONFIG_WARN_UNKNOWN_SYMBOLS=1 KCONFIG_WERROR=1 \
make ARCH=arm64 CROSS_COMPILE="${CROSS_COMPILE}" olddefconfig
```
### Build-arg versus BuildKit secret mount
For ordinary source tarballs, build args are fine:
```bash
docker buildx build \
-f docker/ask.Dockerfile \
--build-arg ASK_TAR=packages/ASK.tar.gz \
--target artifacts \
--output type=local,dest=out/ask \
.
```
For **private** source archives or credentials, do not use build args. Use a secret mount:
```bash
docker buildx build \
-f docker/ask.Dockerfile \
--secret id=ask_tar,src=packages/ASK.tar.gz \
--target artifacts \
--output type=local,dest=out/ask \
.
```
```dockerfile
RUN --mount=type=secret,id=ask_tar,target=/run/secrets/ASK.tar.gz \
mkdir -p /work/src/ASK && \
tar -xf /run/secrets/ASK.tar.gz --strip-components=1 -C /work/src/ASK
```
Notes on the code above: Docker's documented behavior is that build args are for Dockerfile parameterization, may persist in image metadata/history, and are therefore not appropriate for secrets; build secrets are mounted temporarily for the duration of a single `RUN`; multi-stage builds are the recommended way to shrink outputs; `.dockerignore` is the standard way to keep context small; and `buildx` supports `type=local` output directly to the client filesystem. Docker also recommends digest pinning for deterministic base-image selection.
Kernel-side notes on the config flow above: the kernel docs provide `KCONFIG_WARN_UNKNOWN_SYMBOLS`, `KCONFIG_WERROR`, `KCONFIG_ALLCONFIG`, `listnewconfig`, and `scripts/diffconfig` for config hygiene; they also show `merge_config.sh` in use; and they document that external modules need a prepared kernel tree, with `modules_prepare` not being sufficient for `Module.symvers` when module versioning is involved. Reproducible kernel builds should also set `KBUILD_BUILD_TIMESTAMP`, `KBUILD_BUILD_USER`, `KBUILD_BUILD_HOST`, and `SOURCE_DATE_EPOCH` where external code embeds timestamps. citeturn21view0turn8search1turn22search0turn22search4turn9view1turn11search0
## Debian Trixie to Alpine conversion
The conversion rule is simple: translate package names, then re-evaluate libc assumptions. Do **not** assume a one-line mechanical translation is enough. Alpine stable gives you `build-base`, `gcc`, `g++`, `musl-dev`, `pkgconf`, `openssl-dev`, `zlib-dev`, `cmake`, and `tclap-dev` through its normal repositories, but repository choice matters: Alpine explicitly warns that `testing` is edge-only and should not be used for deterministic production-like container builds. That matters for `libcli`, where the official package index shows a `libcli` package in edge/testing, but not a stable `libcli-dev` path for the v3.22 line. In other words: for `libcli`, vendoring the source tarball is the safe answer.
| Tool / library | Debian Trixie package(s) | Debian install command | Alpine package(s) | Alpine install command | Practical note |
|---|---|---|---|---|---|
| gcc | `gcc` | `apt-get update && apt-get install -y gcc` | `gcc` | `apk add --no-cache gcc` | Same compiler family, different libc target by default |
| g++ | `g++` | `apt-get update && apt-get install -y g++` | `g++` | `apk add --no-cache g++` | Same caveat as gcc |
| make | `make` | `apt-get update && apt-get install -y make` | `make` | `apk add --no-cache make` | Straight mapping |
| cmake | `cmake` | `apt-get update && apt-get install -y cmake` | `cmake` | `apk add --no-cache cmake` | Straight mapping |
| pkg-config / pkgconf | `pkgconf` or `pkg-config` | `apt-get update && apt-get install -y pkgconf` | `pkgconf` | `apk add --no-cache pkgconf` | Prefer `pkgconf` on both |
| OpenSSL development | `libssl-dev` | `apt-get update && apt-get install -y libssl-dev` | `openssl-dev` | `apk add --no-cache openssl-dev` | Use `openssl` separately if you need the CLI |
| zlib development | `zlib1g-dev` | `apt-get update && apt-get install -y zlib1g-dev` | `zlib-dev` | `apk add --no-cache zlib-dev` | Straight mapping |
| libstdc++ runtime | `libstdc++6` | `apt-get update && apt-get install -y libstdc++6` | `libstdc++` | `apk add --no-cache libstdc++` | Headers/dev pieces still come from `g++` / toolchain |
| libc development files | `libc6-dev` | `apt-get update && apt-get install -y libc6-dev` | `musl-dev` | `apk add --no-cache musl-dev` | This is the real ABI divide |
| build meta-package | `build-essential` | `apt-get update && apt-get install -y build-essential` | `build-base` | `apk add --no-cache build-base` | Debian and Alpine standard meta-packages |
| arm64 cross meta-package | `crossbuild-essential-arm64` | `apt-get update && apt-get install -y crossbuild-essential-arm64` | **no direct equivalent** | `apk add --no-cache build-base` plus Buildx/QEMU, or install a dedicated cross toolchain | Prefer Buildx `--platform` unless you truly need a host cross toolchain |
| TCLAP development | `libtclap-dev` | `apt-get update && apt-get install -y libtclap-dev` | `tclap-dev` | `apk add --no-cache tclap-dev` | Alpine package is in `community` |
| libcli development | `libcli-dev` | `apt-get update && apt-get install -y libcli-dev` | **no stable equivalent surfaced for v3.22** | **vendor and build from tarball** | edge/testing shows `libcli`, not a dependable stable `libcli-dev` path |
Sources and notes for the table: Debian's official package pages document `build-essential`, `crossbuild-essential-arm64`, and the default `g++` metapackage; the Debian package index and source pages also show `libtclap-dev` and `libcli-dev`. Alpine's official package pages and wiki document `build-base`, `gcc`, `g++`, `cmake`, `musl-dev`, `libstdc++`, `openssl-dev`, `zlib-dev`, `tclap-dev`, and repository support rules. Alpine's repository documentation explicitly says `testing` is edge-only and not appropriate for deterministic production-like use, which is why vendoring `libcli` is the better answer than leaning on edge/testing in a stable builder.
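Collapsed into one builder install step, the Alpine column of the table looks like the sketch below (the tag is an example; pin an exact digest in practice, and vendor `libcli` from a source tarball as the table indicates):
```dockerfile
# Alpine builder stage; package set taken from the mapping table above.
FROM alpine:3.22 AS builder   # pin as alpine:3.22@sha256:<digest> in practice
RUN apk add --no-cache \
      build-base gcc g++ make cmake pkgconf \
      musl-dev openssl-dev zlib-dev libstdc++ tclap-dev
# libcli has no dependable stable package here: build it from a vendored tarball.
```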
## Musl compatibility and troubleshooting
The musl migration problem is mostly about identifying which failures are **toolchain/configuration** issues and which are **libc/ABI** issues.
At the toolchain level, Alpine's own documentation says `build-base` is the standard compiler meta-package and warns that ordinary Alpine systems use BusyBox, which means some GNU utility behavior is absent or reduced; if your build scripts assume full GNU userland, add `bash`, `coreutils`, `findutils`, `diffutils`, `gawk`, and similar packages explicitly instead of assuming Debian defaults.
At the libc level, musl's documentation highlights the differences that actually break software. The big four are: smaller default thread stacks, limited symbol-versioning support compared with glibc's versioned symbol model, `dlclose()` being a no-op, and code taking the wrong path because the project wrote glibc-specific preprocessor logic backwards. The musl FAQ explicitly calls out `getopt` assumptions, `iconv` assumptions, `off_t` width assumptions, and bad `#ifdef` logic as common failure modes, and recommends making the preprocessor logic check for `__GLIBC__` correctly rather than defaulting to glibc behavior.
For the supplied ASK sources specifically, the good news is that one of the classic glibc-only pain points is already handled. `cmm/src/cmm.c` wraps `execinfo.h` and `backtrace()` behind `__GLIBC__` guards. Keep that pattern. Do not “fix” it by forcing musl through the glibc path.
A practical troubleshooting order is:
- If the failure is missing headers or package names, fix package mappings first.
- If the failure is undefined references or runtime loader errors, inspect whether the code or binary assumes glibc versioned symbols.
- If the failure is random crashes in worker threads, suspect musls smaller default thread stack and either reduce stack usage or set an explicit stack size.
- If the failure is plugin-reload behavior, remember that musl treats `dlclose()` differently.
- If the failure is from a third-party binary blob, try `gcompat` only for simple runtime compatibility. Do not assume it is a full glibc replacement.
Recommended fixes, in descending order of cleanliness:
- **Patch the source** so it is libc-agnostic.
- **Adjust link flags** for more self-contained outputs where appropriate, such as `-static-libgcc` and, if acceptable for your deployment model, `-static-libstdc++`.
- **Use `gcompat`** only for relatively simple glibc-linked runtime binaries on Alpine, not as a blanket cure-all. Alpine documents it as a compatibility layer, not a full glibc runtime.
- **Use a glibc builder stage** for the one component that genuinely cannot be ported, while keeping the rest of the system on Alpine.
- **Keep a glibc runtime image** for that component if you hit hard symbol-versioning or loader constraints that `gcompat` cannot cover.
```mermaid
flowchart TD
A[Component fails on Alpine/musl] --> B{Package/toolchain issue?}
B -- yes --> C[Fix apk package names\nadd GNU userland tools\nrebuild]
C --> D{Now links and runs?}
D -- yes --> E[Stay on Alpine]
D -- no --> F{Source has glibc-only ifdefs or APIs?}
B -- no --> F
F -- yes --> G[Patch source\nuse __GLIBC__ guards\nfix flags]
G --> H{Still failing?}
H -- no --> E
H -- yes --> I{Simple glibc runtime dependency only?}
F -- no --> I
I -- yes --> J[Try gcompat]
J --> K{Reliable enough?}
K -- yes --> E
K -- no --> L[Use glibc builder or runtime for that component]
I -- no --> L
L --> M{Kernel module / ABI-coupled piece?}
M -- yes --> N[Require matching KERNEL_TAR\nand matching kernel config]
M -- no --> O[Split build:\nAlpine for musl-safe parts,\nglibc stage for offender]
```
## Sources and notes
This report prioritizes official documentation. Docker guidance comes from the official Docker docs on best practices, multi-stage builds, build variables, build secrets, build context, `.dockerignore`, and `buildx` outputs and secret handling. Alpine guidance comes from the Alpine wiki and package index for musl, repositories, software management, BusyBox, `build-base`, `tclap-dev`, `gcompat`, and related packages. musl/glibc compatibility notes come from the musl wiki's FAQ and functional-differences pages. Kernel configuration and reproducibility guidance comes from the official kernel docs on configuration targets, Kbuild environment variables, external modules, and reproducible builds.
Open questions and limitations: I did not pin one exact final base-image digest because you did not specify one exact image tag policy; the code uses placeholders for digest pinning. I also did not assume one exact ASK dependency inventory beyond what is visible in the supplied tarball: the report covers the dependencies the uploaded ASK `Makefile` demonstrably fetches (`fmlib`, `fmc`, `libnfnetlink`, `libnetfilter_conntrack`, and practically `libcli`). If your unrevealed local patches or downstream packaging add more source fetches, those need the same vendoring treatment.
# `filter-kconfig-fragment.sh` for Reproducible Kernel-Config Filtering
## Executive summary
I did **not** find a file named `filter-kconfig-fragment.sh` in the uploaded project files or in the supplied ASK/kernel tarballs. What **is** present is the standard kernel-side machinery you would use around such a filter: the Linux kernel's own `scripts/kconfig/merge_config.sh` for fragment merging, `scripts/config` for imperative `.config` edits, and documented Kconfig controls such as `KCONFIG_WARN_UNKNOWN_SYMBOLS`, `KCONFIG_WERROR`, and `KCONFIG_ALLCONFIG`. The kernel tree itself also uses `merge_config.sh` followed by `olddefconfig` when it builds from config fragments.
A small pre-filter is still useful, because neither `merge_config.sh` nor `scripts/config` is meant to silently rewrite a fragment for a **different kernel generation**. `merge_config.sh` merges and warns; `scripts/config` edits `.config`; Kconfig enforces symbol existence and dependency rules. When a fragment contains a symbol that simply does not exist in the target tree, such as `CONFIG_NETFILTER_XTABLES_LEGACY` in your reported 6.12-based tree, a pre-filter prevents a needless failure by dropping only the absent symbol while preserving ordering, comments, and all valid settings. That is the narrow job of the replacement script below.
The replacement script in this report is POSIX `sh`, offline-safe, requires only standard Unix tools (`find`, `grep`, `sed`, `mktemp`, `cat`), validates inputs, checks for `scripts/kconfig/merge_config.sh`, preserves comments and ordering, emits only symbols that exist in the target Kconfig tree, and exits non-zero if nothing usable remains after filtering. It is designed specifically for Docker builder stages and reproducible kernel-config flows.
## What I found and what to use instead
The Linux kernel already ships the two main utilities you should treat as authoritative.
`merge_config.sh` is the standard fragment-merging tool in the kernel tree. Its own header says it “takes a list of config fragment values, and merges them one by one,” and that it warns about overridden values and symbols that did not make it into the final `.config` because of dependencies or symbol removal. The kernel build system also invokes `scripts/kconfig/merge_config.sh -m $(KCONFIG_CONFIG) ...` and then runs `olddefconfig` for fragment-based config targets.
`scripts/config` is the standard imperative editor for `.config`. Its built-in usage text documents `--enable`, `--disable`, `--module`, `--set-str`, `--set-val`, `--undefine`, `--state`, and `--file`, and also states that it does **not** validate `.config` immediately; that validation happens at the next `make` step. The kernel docs also contain concrete examples of `./scripts/config -e ...` and `-m ...` being used to enable required options.
Kconfig itself exposes the exact controls you want for drift detection: `KCONFIG_WARN_UNKNOWN_SYMBOLS` warns on unrecognized config symbols, `KCONFIG_WERROR` turns those warnings into errors, and `KCONFIG_ALLCONFIG` provides a documented “mini-config” mechanism for all*config targets. The same docs also recommend `make listnewconfig` to surface newly introduced symbols.
For external module builds, the kernel docs also matter here: `modules_prepare` can prepare a tree for external modules, but it does **not** produce `Module.symvers` if `CONFIG_MODVERSIONS` matters, so a full kernel build is still required in that case. That is the main reason a strict builder pipeline should validate kernel config and preparation before invoking out-of-tree module builds.
## Minimal portable replacement script
Assumptions for this script:
- `KERNEL_DIR` is a Linux kernel source tree root.
- The tree contains `scripts/kconfig/merge_config.sh`.
- Config symbols are defined in `Kconfig*` files somewhere under `KERNEL_DIR`.
- The fragment format is standard Kconfig fragment syntax: `CONFIG_FOO=...` and `# CONFIG_FOO is not set`.
- The builder environment has POSIX `sh`, `find`, `grep`, `sed`, `mktemp`, `cat`, and `rm`.
**`scripts/filter-kconfig-fragment.sh`**
```sh
#!/bin/sh
# filter-kconfig-fragment.sh
#
# Purpose:
# Emit an "effective" Kconfig fragment that preserves comments and ordering
# but drops CONFIG symbols that do not exist in the target kernel tree.
#
# Usage:
# filter-kconfig-fragment.sh KERNEL_DIR FRAGMENT_FILE > effective.config
#
# Exit codes:
# 2 invalid usage / missing required files
# 3 no Kconfig files were found in KERNEL_DIR
# 4 fragment had no surviving CONFIG lines after filtering
set -eu
usage() {
echo "usage: $0 KERNEL_DIR FRAGMENT_FILE" >&2
exit 2
}
[ "$#" -eq 2 ] || usage
KERNEL_DIR=$1
FRAGMENT_FILE=$2
MERGE_SCRIPT=$KERNEL_DIR/scripts/kconfig/merge_config.sh
[ -d "$KERNEL_DIR" ] || {
echo "error: KERNEL_DIR is not a directory: $KERNEL_DIR" >&2
exit 2
}
[ -r "$FRAGMENT_FILE" ] || {
echo "error: FRAGMENT_FILE is not readable: $FRAGMENT_FILE" >&2
exit 2
}
[ -s "$FRAGMENT_FILE" ] || {
echo "error: FRAGMENT_FILE is empty: $FRAGMENT_FILE" >&2
exit 2
}
[ -r "$MERGE_SCRIPT" ] || {
echo "error: required kernel utility missing: $MERGE_SCRIPT" >&2
exit 2
}
TMP_KCONFIGS=$(mktemp)
TMP_OUT=$(mktemp)
trap 'rm -f "$TMP_KCONFIGS" "$TMP_OUT"' EXIT HUP INT TERM
# Index all Kconfig files once. Searching the whole tree is simpler and more
# robust than hardcoding a small directory subset.
find "$KERNEL_DIR" -type f -name 'Kconfig*' | sort > "$TMP_KCONFIGS"
[ -s "$TMP_KCONFIGS" ] || {
echo "error: no Kconfig files found under $KERNEL_DIR" >&2
exit 3
}
symbol_exists() {
# Accept either CONFIG_FOO or FOO.
sym=$1
case "$sym" in
CONFIG_*) sym=${sym#CONFIG_} ;;
esac
# We treat both "config FOO" and "menuconfig FOO" as symbol definitions.
# GNU grep and busybox grep both support -E.
while IFS= read -r kf; do
if grep -Eq "^[[:space:]]*(menu)?config[[:space:]]+$sym([[:space:]]|\$)" "$kf"; then
return 0
fi
done < "$TMP_KCONFIGS"
return 1
}
kept_symbols=0
while IFS= read -r line || [ -n "$line" ]; do
# Preserve blank lines exactly.
if [ -z "$line" ]; then
printf '\n' >> "$TMP_OUT"
continue
fi
case "$line" in
'# 'CONFIG_*' is not set')
# Example: "# CONFIG_FOO is not set"
sym=$(printf '%s\n' "$line" | sed -n 's/^# \(CONFIG_[A-Za-z0-9_][A-Za-z0-9_]*\) is not set$/\1/p')
if [ -n "$sym" ] && symbol_exists "$sym"; then
printf '%s\n' "$line" >> "$TMP_OUT"
kept_symbols=$((kept_symbols + 1))
fi
;;
CONFIG_*=*)
# Example: "CONFIG_FOO=y" or "CONFIG_BAR=\"abc\""
sym=${line%%=*}
if symbol_exists "$sym"; then
printf '%s\n' "$line" >> "$TMP_OUT"
kept_symbols=$((kept_symbols + 1))
fi
;;
'#'*)
# Preserve comments and headings.
printf '%s\n' "$line" >> "$TMP_OUT"
;;
*)
# Preserve any other non-empty line verbatim; fragments normally
# shouldn't contain these, but preserving them is safer than rewriting.
printf '%s\n' "$line" >> "$TMP_OUT"
;;
esac
done < "$FRAGMENT_FILE"
if [ "$kept_symbols" -eq 0 ]; then
echo "error: no CONFIG entries remain after filtering $FRAGMENT_FILE" >&2
exit 4
fi
cat "$TMP_OUT"
```
This script is intentionally narrow. It does **not** try to solve dependency resolution, force values, or replace `merge_config.sh`. It only answers one question: “Does this symbol exist anywhere in the target kernel's Kconfig tree?” That is the right preflight step when a fragment spans multiple kernel generations and you need a portable Docker-safe filter before Kconfig proper enforces dependencies.
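A quick offline smoke test of that existence check, using a synthetic tree (every path and symbol here is made up for the demonstration):
```sh
# Synthetic Kconfig tree; mirrors the grep-based existence check in the filter.
tmp=$(mktemp -d)
mkdir -p "$tmp/net/netfilter"
cat > "$tmp/net/netfilter/Kconfig" <<'EOF'
menuconfig NETFILTER
	bool "Network packet filtering framework (Netfilter)"

config NETFILTER_XTABLES
	tristate "Netfilter Xtables support"
EOF
for s in NETFILTER NETFILTER_XTABLES NETFILTER_XTABLES_LEGACY; do
  if grep -rqsE "^[[:space:]]*(menu)?config[[:space:]]+$s([[:space:]]|\$)" "$tmp"; then
    echo "$s: present"
  else
    echo "$s: absent"
  fi
done
rm -rf "$tmp"
```
For this tree the first two symbols report `present` and `NETFILTER_XTABLES_LEGACY` reports `absent`, which is exactly the stale line the filter would drop.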
## Integration examples
### Dockerfile usage
```dockerfile
COPY scripts/filter-kconfig-fragment.sh /usr/local/bin/filter-kconfig-fragment.sh
COPY docker/kernel-extra.config /tmp/kernel-extra.config
RUN chmod +x /usr/local/bin/filter-kconfig-fragment.sh && \
test -d /opt/kernel && \
/usr/local/bin/filter-kconfig-fragment.sh /opt/kernel /tmp/kernel-extra.config \
> /tmp/kernel-extra.effective.config && \
/opt/kernel/scripts/kconfig/merge_config.sh -m /opt/kernel/.config /tmp/kernel-extra.effective.config && \
KCONFIG_WARN_UNKNOWN_SYMBOLS=1 KCONFIG_WERROR=1 \
make -C /opt/kernel olddefconfig && \
make -C /opt/kernel modules_prepare
```
If `CONFIG_MODVERSIONS=y` or your external modules depend on `Module.symvers`, replace the final `modules_prepare` with a full kernel build step for the relevant targets. The kernel docs explicitly warn that `modules_prepare` does not produce `Module.symvers`.
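A hedged sketch of that replacement (the target list, paths, and `CROSS_COMPILE` value are assumptions to adapt to your tree):
```dockerfile
# Full kernel build so Module.symvers exists when CONFIG_MODVERSIONS=y.
RUN make -C /opt/kernel ARCH=arm64 CROSS_COMPILE="${CROSS_COMPILE}" \
        -j"$(nproc)" Image modules && \
    test -s /opt/kernel/Module.symvers
```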
### Makefile usage
```make
KERNEL_DIR ?= /opt/kernel
KERNEL_FRAGMENT ?= docker/kernel-extra.config
KERNEL_EFFECTIVE?= build/kernel-extra.effective.config
KCONFIG_FILTER ?= scripts/filter-kconfig-fragment.sh
$(KERNEL_EFFECTIVE): $(KERNEL_FRAGMENT) $(KCONFIG_FILTER)
mkdir -p $(dir $@)
$(KCONFIG_FILTER) $(KERNEL_DIR) $(KERNEL_FRAGMENT) > $@
kernel-merge: $(KERNEL_EFFECTIVE)
$(KERNEL_DIR)/scripts/kconfig/merge_config.sh -m $(KERNEL_DIR)/.config $(KERNEL_EFFECTIVE)
KCONFIG_WARN_UNKNOWN_SYMBOLS=1 KCONFIG_WERROR=1 $(MAKE) -C $(KERNEL_DIR) olddefconfig
kernel-prepare: kernel-merge
$(MAKE) -C $(KERNEL_DIR) modules_prepare
```
### Why the filter is needed
This filter exists to handle **kernel-version mismatches in fragments**, not to bypass Kconfig. A fragment can be perfectly valid for kernel line A and partially invalid for kernel line B because symbols were renamed, split, removed, or introduced later. Your concrete example, `CONFIG_NETFILTER_XTABLES_LEGACY`, is exactly that class of problem: the symbol may be meaningful in one kernel generation and absent in another. Filtering it out before `merge_config.sh` and `olddefconfig` prevents a false failure from one stale line while still allowing Kconfig to validate all remaining, real symbols and dependencies. Kconfig's own docs explain that symbols live in a dependency-aware tree and that visibility and legality depend on those relationships, which is why the filter should check **existence only** and leave dependency resolution to Kconfig itself.
```mermaid
flowchart LR
A[kernel-extra.config] --> B[filter-kconfig-fragment.sh]
B --> C[kernel-extra.effective.config]
C --> D[merge_config.sh -m .config effective.fragment]
D --> E[make olddefconfig]
E --> F[make modules_prepare or full kernel build]
```
## Quick checks and validation commands
Use these checks before or after wiring the script into Docker:
### Check whether a symbol exists in the target tree
```sh
grep -Rnw --include='Kconfig*' \
-E '^[[:space:]]*(menu)?config[[:space:]]+NETFILTER_XTABLES_LEGACY([[:space:]]|$)' \
"$KERNEL_DIR" || echo "missing"
```
### Show which lines survive filtering
```sh
scripts/filter-kconfig-fragment.sh "$KERNEL_DIR" docker/kernel-extra.config \
| grep -E '^(CONFIG_|# CONFIG_)'
```
### Confirm the filtered fragment is non-empty
```sh
scripts/filter-kconfig-fragment.sh "$KERNEL_DIR" docker/kernel-extra.config \
> /tmp/kernel-extra.effective.config && \
test -s /tmp/kernel-extra.effective.config
```
### Validate after merge and `olddefconfig`
```sh
"$KERNEL_DIR"/scripts/kconfig/merge_config.sh -m "$KERNEL_DIR"/.config /tmp/kernel-extra.effective.config
KCONFIG_WARN_UNKNOWN_SYMBOLS=1 KCONFIG_WERROR=1 \
make -C "$KERNEL_DIR" olddefconfig
```
### Inspect config drift
```sh
make -C "$KERNEL_DIR" listnewconfig
```
The kernel docs explicitly recommend `listnewconfig` when older configs are reused against newer kernels, and they document `KCONFIG_WARN_UNKNOWN_SYMBOLS` / `KCONFIG_WERROR` for exactly the “unknown symbol in config input” problem this workflow is guarding against.
## Sources and notes
I did not find `filter-kconfig-fragment.sh` in the provided project files, so the replacement above is intentionally conservative and built around **official kernel interfaces** instead of inventing a parallel config system. The authoritative equivalents are: the kernel's own `scripts/kconfig/merge_config.sh`, whose header states that it merges config fragments and warns about overridden or unresolved values; `scripts/config`, whose built-in usage documents `--enable`, `--disable`, `--module`, `--file`, and related options; the Kconfig documentation for `KCONFIG_WARN_UNKNOWN_SYMBOLS`, `KCONFIG_WERROR`, `KCONFIG_ALLCONFIG`, and `listnewconfig`; the Kconfig language docs for symbol definitions and dependency semantics; and the external-modules docs for `modules_prepare` and the `Module.symvers` caveat. The reproducible-builds kernel docs are also relevant when the surrounding Docker pipeline sets `SOURCE_DATE_EPOCH`, `KBUILD_BUILD_TIMESTAMP`, `KBUILD_BUILD_USER`, and `KBUILD_BUILD_HOST`.
# Cross-Compiling ASK for Arm64 in Docker Without Full Emulation
## Executive summary
The best default for ASK is **not** QEMU and **not** an Alpine-first builder. It is a native-host Docker build stage pinned to `--platform=$BUILDPLATFORM`, using a real arm64 GNU/Linux cross toolchain plus a matching target sysroot and kernel tree. Docker explicitly recommends cross-compilation with multi-stage builds as one of the three multi-platform strategies, and it explicitly warns that QEMU emulation is much slower for compilation-heavy workloads. The key pattern is: keep the build container native to the builder host, and let the compiler target arm64.
For a Linux arm64/glibc target, the cleanest path is a Debian-based builder with `crossbuild-essential-arm64`, `gcc-aarch64-linux-gnu`, `g++-aarch64-linux-gnu`, `libc6-dev-arm64-cross`, and `linux-libc-dev-arm64-cross`. Debian's package metadata makes the intent explicit: `crossbuild-essential-arm64` is the cross-build meta-package, `gcc-aarch64-linux-gnu` is the default arm64 GNU C cross-compiler, and `libc6-dev-arm64-cross` provides the arm64 glibc development headers and objects for cross-compiling. That stack is the shortest route to reproducible arm64 binaries without emulation.
For kernel modules, a matching kernel source tree is not optional. The kernel docs are explicit: external modules need the kernel's configuration and headers, `modules_prepare` exists for that purpose, and `modules_prepare` does **not** generate `Module.symvers` when `CONFIG_MODVERSIONS` is in play. If your module ABI depends on symbol versioning, you need a full kernel build of the matching tree/config to produce `Module.symvers`.
For ASK specifically, the practical recommendation is:
- Use a **Debian cross-toolchain builder** for both userspace and module builds.
- Use a **glibc target sysroot** if the runtime target is Debian/Ubuntu or another glibc-based rootfs.
- Only use **musl-cross** or Alpine when the deployment target is actually musl-based.
- Use **Clang/LLVM** only if ASK and its module path are known to be Clang-clean.
- Treat **crosstool-ng** and custom toolchain builds as a last-mile optimization for pinned enterprise toolchains, not the day-one setup.
## Assumptions and the recommended path
Because some project details remain unspecified, I am making these assumptions instead of inventing facts:
- **ASK is a C/C++ project with both userspace code and out-of-tree kernel modules.**
- **The arm64 target runtime is more likely glibc than musl**, because the current build direction and requested Debian-cross path point that way.
- **The source enters the build as tarballs**, not Git checkouts.
- **A matching kernel source tarball is available** as `KERNEL_TAR` when module builds are required.
- **Third-party target libraries are either vendored as source tarballs or staged into a target sysroot** before ASK is compiled.
- **The build host may be amd64 or arm64**, but the goal is to avoid full target emulation either way.
Under those assumptions, the recommended path is:
- Use `docker buildx build --platform=linux/arm64` for target metadata and artifact export.
- Pin every `FROM` that actually runs build commands to `--platform=$BUILDPLATFORM`, so those stages run natively on the builder and do **not** invoke emulation.
- Install the Debian arm64 cross toolchain in the builder.
- Stage a target sysroot for arm64 glibc.
- Build vendored target libraries into that sysroot.
- Build ASK userspace with `CC/CXX` pointing at the cross compiler and `--sysroot` or an equivalent staged sysroot layout.
- Build ASK modules with `ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- KDIR=/opt/kernel`.
- Export artifacts with `--output type=local`.
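The module step in the list above can be sketched as a Makefile fragment (the module directory and `KDIR` default are placeholders):
```make
# Out-of-tree module build against the prepared kernel tree.
KDIR ?= /opt/kernel
modules:
	$(MAKE) -C $(KDIR) M=$(CURDIR) ARCH=arm64 \
		CROSS_COMPILE=aarch64-linux-gnu- modules
```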
```mermaid
flowchart LR
A[ASK.tar.gz + dependency tarballs + optional KERNEL_TAR] --> B[buildx build request --platform=linux/arm64]
B --> C[build stages pinned to --platform=$BUILDPLATFORM]
C --> D[install Debian arm64 cross toolchain]
D --> E[stage arm64 glibc sysroot]
E --> F[build vendored target libs into sysroot]
F --> G[extract ASK tarball]
G --> H[build userspace with aarch64-linux-gnu-gcc/g++]
E --> I[extract matching kernel tree]
I --> J[merge config fragments and olddefconfig]
J --> K[modules_prepare or full kernel build]
K --> L[build external modules]
H --> M[collect artifacts]
L --> M
M --> N[scratch artifacts stage]
N --> O[--output type=local]
```
## Toolchain choices without emulation
The main choices are not equal.
### Debian cross packages
This is the best default for ASK if the target runtime is arm64 GNU/Linux with glibc. Debian already publishes a coherent arm64 GNU cross stack: `crossbuild-essential-arm64`, `gcc-aarch64-linux-gnu`, `g++-aarch64-linux-gnu`, `libc6-dev-arm64-cross`, `linux-libc-dev-arm64-cross`, and the matching arm64 `libstdc++` development package. That gives you a native-host compilation environment that directly targets arm64 GNU/Linux without needing emulation or a self-built compiler.
### Arm GNU Toolchain and Linaro-delivered releases
This is the right fallback when you need a prebuilt, distro-independent GNU cross compiler outside Debian's cadence, or you want the exact Arm-distributed toolchain family. Linaro's current downloads page explicitly points users to the Arm Developer site for the official prebuilt GNU cross-toolchain releases for AArch64 and A-profile Arm targets. That makes Arm GNU Toolchain a legitimate alternative to Debian packages, especially when you want one pinned tarball instead of distro package resolution.
### Clang and LLVM cross-targeting
Clang is viable when you want one compiler binary that can emit code for many targets, and the official Clang cross-compilation docs explain the exact model: use `-target` and, where needed, `--sysroot`, plus explicit include/library paths. For the kernel specifically, the official kernel LLVM docs say arm64 is supported, `make LLVM=1 ARCH=arm64` is the standard form, and if you use only LLVM tools then `CROSS_COMPILE` becomes unnecessary; if you mix GNU binutils with LLVM, you set those utilities explicitly. This is a strong option, but only if ASK's userspace and module code are already known to behave under Clang.
### crosstool-ng
`crosstool-ng` is a toolchain generator, not a fast path. Its official site describes it as a versatile cross-toolchain generator with a menuconfig-style interface. That is useful when you need a custom-pinned compiler, libc, binutils, and threading model combination, but it is slower to bootstrap and heavier to maintain than using Debians built packages. Use it when you need that control, not because you think you are being clever.
### musl-cross and Alpine-based targeting
Use this only when the **target** is musl-based. Alpine's own docs say Alpine uses musl, and musl does not implement most glibc locale behavior. Alpine also documents `gcompat` as a compatibility layer for simpler glibc programs, not as a universal solution. musl's own site links to `musl-cross-make` as the automated cross toolchain builder, and the `musl-cross-make` project describes itself as a simple, relocatable way to produce musl-targeting cross compilers. That is good for a musl target. It is the wrong default for a glibc target.
### Decision table
| Option | Best when | Strengths | Weaknesses | Verdict for ASK |
|---|---|---|---|---|
| Debian cross packages | Target is arm64 GNU/Linux with glibc | Fast setup, distro-integrated sysroot, easiest CI | Tied to Debian package cadence | **Best default** |
| Arm GNU Toolchain | You want a pinned prebuilt toolchain tarball | Portable, explicit toolchain versioning | You still need a matching sysroot | Strong alternative |
| Clang/LLVM + GNU sysroot | Codebase is Clang-clean and you want one compiler binary | Good cross model, kernel arm64 support | Tool/library path tuning can be fussier | Good if already validated |
| crosstool-ng | You need a custom compiler/libc/binutils combo | Maximum control | Slow bootstrap, more maintenance | Use only if necessary |
| musl-cross / Alpine | Target runtime is musl | Small runtimes, relocatable toolchains possible | glibc mismatch risk | Use only for musl targets |
| QEMU emulation | You need runtime smoke tests or no cross path exists | Easy conceptually | Slow for compilation | Avoid for primary builds |
The table above is grounded in Docker's documented multi-platform strategies, Debian's cross-package metadata, Clang's cross-compilation docs, the kernel's LLVM docs, `crosstool-ng`'s own documentation, and Alpine/musl documentation. Docker explicitly states that QEMU is usually much slower for compilation-heavy workloads and recommends cross-compilation or native multi-node builders when possible.
### Debian and Alpine comparison for arm64-targeted Linux builds
| Need | Debian / glibc path | Alpine / musl path | Bottom line |
|---|---|---|---|
| Native build meta-package | `build-essential` | `build-base` | Straight equivalents for native builds |
| Arm64 Linux cross meta-package | `crossbuild-essential-arm64` | no close stable equivalent | Debian is much better here |
| Arm64 Linux GNU C compiler | `gcc-aarch64-linux-gnu` | no obvious stable `aarch64-linux-gnu-gcc` package surfaced | Debian wins for glibc Linux targets |
| Arm64 Linux GNU C++ compiler | `g++-aarch64-linux-gnu` | no obvious stable `aarch64-linux-gnu-g++` package surfaced | Debian wins |
| glibc target sysroot headers/libs | `libc6-dev-arm64-cross`, `linux-libc-dev-arm64-cross`, `libstdc++-14-dev-arm64-cross` | Alpine is musl-first, not glibc-first | Use Debian for glibc sysroots |
| musl target compiler | extra work on Debian or `musl-cross-make` | Alpine naturally targets musl | Alpine or musl-cross only if target is musl |
| glibc compatibility on musl | n/a | `gcompat` for simpler cases | Useful only as a runtime compatibility hack |
This comparison comes directly from Debian package pages and Alpine's own package/wiki materials. Alpine clearly documents `build-base` as the standard build meta-package and documents musl as the system libc. The Alpine package index examples that do surface clear cross packages in this space are `*-none-elf` embedded toolchains, which are not the same thing as a glibc/Linux arm64 cross stack.
## Concrete Docker implementation
The primary design below has two Dockerfiles.
- `docker/arm64-sysroot.Dockerfile` builds and caches a reusable arm64 target sysroot.
- `docker/ask.Dockerfile` uses that sysroot, the Debian arm64 cross compiler, and an optional matching kernel tree to build ASK userspace and modules without emulation.
The key trick is that both Dockerfiles pin real build stages to `FROM --platform=$BUILDPLATFORM ...`. That means the build steps run natively on the builder host even when the overall build request targets `linux/arm64`. Docker's own docs explicitly describe this pattern and the automatic `BUILDPLATFORM` / `TARGET*` build args that make it work.
### Root Makefile
```make
.RECIPEPREFIX := >
DOCKER_BUILDX ?= docker buildx build
TARGET_PLATFORM ?= linux/arm64
DEBIAN_SUITE ?= trixie
ASK_TAR ?= packages/ASK.tar.gz
KERNEL_TAR ?=
FMLIB_TAR ?= packages/fmlib-lf-6.12.49-2.2.0.tar.gz
FMC_TAR ?= packages/fmc-lf-6.12.49-2.2.0.tar.gz
LIBNFNETLINK_TAR ?= packages/libnfnetlink-1.0.2.tar.bz2
LIBNFCT_TAR ?= packages/libnetfilter_conntrack-1.1.0.tar.xz
LIBCLI_TAR ?= packages/libcli-1.10.7.tar.gz
ASK_OUT ?= out/ask
ASK_IMAGE_NAME ?= local/ask:dev
SYSROOT_IMAGE ?= local/ask-arm64-sysroot:dev
TARGET_ARCH ?= arm64
TARGET_TRIPLE ?= aarch64-linux-gnu
BUILD_TARGET ?= dist
KERNEL_FULL_BUILD ?= 0
SOURCE_DATE_EPOCH ?= 1714521600
COMMON_ARGS = \
  --build-arg DEBIAN_SUITE=$(DEBIAN_SUITE) \
  --build-arg TARGET_TRIPLE=$(TARGET_TRIPLE) \
  --build-arg SOURCE_DATE_EPOCH=$(SOURCE_DATE_EPOCH)
.PHONY: ASK_SYSROOT ASK ASK_IMAGE
ASK_SYSROOT:
> $(DOCKER_BUILDX) \
> --platform $(TARGET_PLATFORM) \
> -f docker/arm64-sysroot.Dockerfile \
> $(COMMON_ARGS) \
> --build-arg FMLIB_TAR=$(FMLIB_TAR) \
> --build-arg FMC_TAR=$(FMC_TAR) \
> --build-arg LIBNFNETLINK_TAR=$(LIBNFNETLINK_TAR) \
> --build-arg LIBNFCT_TAR=$(LIBNFCT_TAR) \
> --build-arg LIBCLI_TAR=$(LIBCLI_TAR) \
> --load \
> -t $(SYSROOT_IMAGE) \
> .
ASK: ASK_SYSROOT
> mkdir -p $(ASK_OUT)
> $(DOCKER_BUILDX) \
> --platform $(TARGET_PLATFORM) \
> -f docker/ask.Dockerfile \
> $(COMMON_ARGS) \
> --build-arg SYSROOT_IMAGE=$(SYSROOT_IMAGE) \
> --build-arg ASK_TAR=$(ASK_TAR) \
> --build-arg KERNEL_TAR=$(KERNEL_TAR) \
> --build-arg BUILD_TARGET=$(BUILD_TARGET) \
> --build-arg KERNEL_FULL_BUILD=$(KERNEL_FULL_BUILD) \
> --target artifacts \
> --output type=local,dest=$(ASK_OUT) \
> .
ASK_IMAGE: ASK_SYSROOT
> $(DOCKER_BUILDX) \
> --platform $(TARGET_PLATFORM) \
> -f docker/ask.Dockerfile \
> $(COMMON_ARGS) \
> --build-arg SYSROOT_IMAGE=$(SYSROOT_IMAGE) \
> --build-arg ASK_TAR=$(ASK_TAR) \
> --build-arg KERNEL_TAR=$(KERNEL_TAR) \
> --build-arg BUILD_TARGET=$(BUILD_TARGET) \
> --build-arg KERNEL_FULL_BUILD=$(KERNEL_FULL_BUILD) \
> --target runtime \
> --load \
> -t $(ASK_IMAGE_NAME) \
> .
```
Example invocations:
```bash
# Userspace-only build
make ASK ASK_TAR=packages/ASK.tar.gz BUILD_TARGET=userspace
# Full build with matching kernel tree and full kernel build to get Module.symvers
make ASK \
ASK_TAR=packages/ASK.tar.gz \
KERNEL_TAR=packages/lf-6.12.49-2.2.0.tar.gz \
BUILD_TARGET=dist \
KERNEL_FULL_BUILD=1
# Minimal artifact image instead of local export
make ASK_IMAGE ASK_TAR=packages/ASK.tar.gz BUILD_TARGET=userspace
```
### Helper sysroot Dockerfile
```dockerfile
# syntax=docker/dockerfile:1.7
ARG DEBIAN_SUITE=trixie
FROM --platform=$BUILDPLATFORM debian:${DEBIAN_SUITE}-slim AS sysroot-build
ARG TARGET_TRIPLE=aarch64-linux-gnu
ARG SOURCE_DATE_EPOCH
ARG FMLIB_TAR=
ARG FMC_TAR=
ARG LIBNFNETLINK_TAR=
ARG LIBNFCT_TAR=
ARG LIBCLI_TAR=
ENV DEBIAN_FRONTEND=noninteractive
ENV LC_ALL=C.UTF-8
ENV TZ=UTC
ENV SOURCE_DATE_EPOCH=${SOURCE_DATE_EPOCH}
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
build-essential \
crossbuild-essential-arm64 \
gcc-aarch64-linux-gnu \
g++-aarch64-linux-gnu \
binutils-aarch64-linux-gnu \
libc6-dev-arm64-cross \
linux-libc-dev-arm64-cross \
libstdc++-14-dev-arm64-cross \
autoconf automake libtool pkgconf make patch perl python3 rsync xz-utils bzip2 file \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /work
COPY packages/ /vendor/packages/
# Build a clean target sysroot rooted at /opt/sysroot.
RUN mkdir -p /opt/sysroot/usr && rsync -a /usr/aarch64-linux-gnu/ /opt/sysroot/usr/
# Optional: cross-build vendored target libraries into the sysroot.
RUN if [ -n "${LIBNFNETLINK_TAR}" ] && [ -f "/vendor/${LIBNFNETLINK_TAR}" ]; then \
mkdir -p /tmp/libnfnetlink && \
tar -xf "/vendor/${LIBNFNETLINK_TAR}" --strip-components=1 -C /tmp/libnfnetlink && \
cd /tmp/libnfnetlink && \
./configure --host=${TARGET_TRIPLE} --prefix=/usr --libdir=/usr/lib/aarch64-linux-gnu && \
make -j"$(nproc)" && \
make DESTDIR=/opt/sysroot install; \
fi
RUN if [ -n "${LIBNFCT_TAR}" ] && [ -f "/vendor/${LIBNFCT_TAR}" ]; then \
mkdir -p /tmp/libnfct && \
tar -xf "/vendor/${LIBNFCT_TAR}" --strip-components=1 -C /tmp/libnfct && \
cd /tmp/libnfct && \
PKG_CONFIG_SYSROOT_DIR=/opt/sysroot \
PKG_CONFIG_LIBDIR=/opt/sysroot/usr/lib/aarch64-linux-gnu/pkgconfig:/opt/sysroot/usr/share/pkgconfig \
./configure --host=${TARGET_TRIPLE} --prefix=/usr --libdir=/usr/lib/aarch64-linux-gnu && \
make -j"$(nproc)" && \
make DESTDIR=/opt/sysroot install; \
fi
RUN if [ -n "${LIBCLI_TAR}" ] && [ -f "/vendor/${LIBCLI_TAR}" ]; then \
mkdir -p /tmp/libcli && \
tar -xf "/vendor/${LIBCLI_TAR}" --strip-components=1 -C /tmp/libcli && \
make -C /tmp/libcli \
CC="${TARGET_TRIPLE}-gcc --sysroot=/opt/sysroot" \
AR="${TARGET_TRIPLE}-ar" && \
make -C /tmp/libcli PREFIX=/usr DESTDIR=/opt/sysroot install; \
fi
FROM scratch AS sysroot
COPY --from=sysroot-build /opt/sysroot/ /
```
### Main ASK Dockerfile
```dockerfile
# syntax=docker/dockerfile:1.7
ARG DEBIAN_SUITE=trixie
ARG SYSROOT_IMAGE=local/ask-arm64-sysroot:dev
FROM ${SYSROOT_IMAGE} AS sysroot
FROM --platform=$BUILDPLATFORM debian:${DEBIAN_SUITE}-slim AS build
ARG TARGETPLATFORM
ARG TARGETOS
ARG TARGETARCH
ARG TARGET_TRIPLE=aarch64-linux-gnu
ARG ASK_TAR=packages/ASK.tar.gz
ARG KERNEL_TAR=
ARG BUILD_TARGET=dist
ARG KERNEL_FULL_BUILD=0
ARG SOURCE_DATE_EPOCH
ENV DEBIAN_FRONTEND=noninteractive
ENV LC_ALL=C.UTF-8
ENV TZ=UTC
ENV SOURCE_DATE_EPOCH=${SOURCE_DATE_EPOCH}
ENV KBUILD_BUILD_TIMESTAMP=@${SOURCE_DATE_EPOCH}
ENV KBUILD_BUILD_USER=repro
ENV KBUILD_BUILD_HOST=repro-host
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
build-essential \
crossbuild-essential-arm64 \
gcc-aarch64-linux-gnu \
g++-aarch64-linux-gnu \
binutils-aarch64-linux-gnu \
libc6-dev-arm64-cross \
linux-libc-dev-arm64-cross \
libstdc++-14-dev-arm64-cross \
bc bison cpio file flex kmod libelf-dev make openssl patch perl pkgconf python3 rsync xz-utils \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /work
COPY --from=sysroot / /opt/sysroot/
COPY packages/ /vendor/packages/
COPY scripts/filter-kconfig-fragment.sh /usr/local/bin/filter-kconfig-fragment.sh
COPY docker/kernel-extra.config /tmp/kernel-extra.config
COPY docker/overrides/ /docker-overrides/
RUN chmod +x /usr/local/bin/filter-kconfig-fragment.sh
# Extract ASK tarball and inject cross-build overrides.
RUN mkdir -p /src/ASK && \
tar -xf "/vendor/${ASK_TAR}" --strip-components=1 -C /src/ASK && \
rm -rf /src/ASK/.git && \
install -m 0644 /docker-overrides/Makefile /src/ASK/Makefile && \
install -m 0644 /docker-overrides/toolchain.mk /src/ASK/build/toolchain.mk
# Optional matching kernel tree for out-of-tree module builds.
RUN if [ -n "${KERNEL_TAR}" ]; then \
mkdir -p /opt/kernel && \
tar -xf "/vendor/${KERNEL_TAR}" --strip-components=1 -C /opt/kernel && \
make -C /opt/kernel ARCH=arm64 CROSS_COMPILE=${TARGET_TRIPLE}- defconfig && \
/usr/local/bin/filter-kconfig-fragment.sh /opt/kernel /tmp/kernel-extra.config > /tmp/kernel-extra.effective.config && \
/opt/kernel/scripts/kconfig/merge_config.sh -m /opt/kernel/.config /tmp/kernel-extra.effective.config && \
KCONFIG_WARN_UNKNOWN_SYMBOLS=1 KCONFIG_WERROR=1 \
make -C /opt/kernel ARCH=arm64 CROSS_COMPILE=${TARGET_TRIPLE}- olddefconfig && \
if [ "${KERNEL_FULL_BUILD}" = "1" ]; then \
make -C /opt/kernel ARCH=arm64 CROSS_COMPILE=${TARGET_TRIPLE}- -j"$(nproc)" Image modules dtbs; \
else \
make -C /opt/kernel ARCH=arm64 CROSS_COMPILE=${TARGET_TRIPLE}- -j"$(nproc)" modules_prepare; \
fi; \
fi
WORKDIR /src/ASK
RUN export SYSROOT=/opt/sysroot; \
export CROSS_COMPILE=${TARGET_TRIPLE}-; \
export ARCH=arm64; \
export CC="${TARGET_TRIPLE}-gcc --sysroot=${SYSROOT}"; \
export CXX="${TARGET_TRIPLE}-g++ --sysroot=${SYSROOT}"; \
export AR="${TARGET_TRIPLE}-ar"; \
export STRIP="${TARGET_TRIPLE}-strip"; \
export PKG_CONFIG=pkg-config; \
export PKG_CONFIG_SYSROOT_DIR="${SYSROOT}"; \
export PKG_CONFIG_LIBDIR="${SYSROOT}/usr/lib/aarch64-linux-gnu/pkgconfig:${SYSROOT}/usr/share/pkgconfig"; \
case "${BUILD_TARGET}" in \
userspace) make userspace ;; \
    modules) test -n "${KERNEL_TAR}" || { echo "BUILD_TARGET=modules requires KERNEL_TAR" >&2; exit 2; }; make KDIR=/opt/kernel modules ;; \
    dist) test -n "${KERNEL_TAR}" || { echo "BUILD_TARGET=dist requires KERNEL_TAR" >&2; exit 2; }; make KDIR=/opt/kernel dist ;; \
*) echo "unsupported BUILD_TARGET=${BUILD_TARGET}" >&2; exit 2 ;; \
esac && \
mkdir -p /out && cp -a dist/. /out/
FROM scratch AS artifacts
COPY --from=build /out/ /
FROM scratch AS runtime
COPY --from=build /out/ /opt/ask/
```
### Cross-aware Makefile override excerpt
If ASKs upstream Makefile already has tarball-only source handling from your earlier reproducible-build work, the critical cross additions are these flags and environment variables:
```make
ARCH ?= arm64
TARGET_TRIPLE ?= aarch64-linux-gnu
CROSS_COMPILE ?= $(TARGET_TRIPLE)-
SYSROOT ?= /opt/sysroot
CC ?= $(TARGET_TRIPLE)-gcc --sysroot=$(SYSROOT)
CXX ?= $(TARGET_TRIPLE)-g++ --sysroot=$(SYSROOT)
AR ?= $(TARGET_TRIPLE)-ar
STRIP ?= $(TARGET_TRIPLE)-strip
PKG_CONFIG ?= pkg-config
export PKG_CONFIG_SYSROOT_DIR := $(SYSROOT)
export PKG_CONFIG_LIBDIR := \
$(SYSROOT)/usr/lib/aarch64-linux-gnu/pkgconfig:$(SYSROOT)/usr/share/pkgconfig
KBUILD_ARGS := ARCH=$(ARCH) CROSS_COMPILE=$(CROSS_COMPILE)
userspace:
$(MAKE) -C cmm CC="$(CC)" CXX="$(CXX)"
$(MAKE) -C dpa_app CC="$(CC)" CXX="$(CXX)"
modules:
test -d "$(KDIR)"
$(MAKE) -C cdx KERNELDIR="$(KDIR)" $(KBUILD_ARGS)
$(MAKE) -C fci KERNEL_SOURCE="$(KDIR)" $(KBUILD_ARGS)
$(MAKE) -C auto_bridge KERNEL_SOURCE="$(KDIR)" $(KBUILD_ARGS)
```
### Optional Arm GNU Toolchain stage
If you prefer a prebuilt Arm-distributed toolchain tarball over Debian packages, swap in a stage like this and keep the rest of the sysroot/kernel logic the same:
```dockerfile
FROM --platform=$BUILDPLATFORM debian:trixie-slim AS armgnu
ARG ARM_GNU_TARBALL=packages/arm-gnu-toolchain-aarch64-linux-gnu.tar.xz
COPY packages/ /vendor/packages/
RUN mkdir -p /opt/toolchain && \
tar -xf "/vendor/${ARM_GNU_TARBALL}" --strip-components=1 -C /opt/toolchain
ENV PATH=/opt/toolchain/bin:${PATH}
```
The Docker pieces above rely on Docker's documented multi-stage builds, automatic platform args, native-stage `--platform=$BUILDPLATFORM` pattern, local artifact exporter, and build-arg semantics. For private tarballs or credentials, Docker explicitly says to use secret mounts rather than `ARG` or `ENV`.
## Kernel modules, sysroots, and config handling
The kernel side is where most “cross-compilation” guides turn to mush. The correct model is sharper than that.
If ASK builds only **userspace**, you need:
- a target compiler,
- a target libc/sysroot,
- target `.pc` metadata or explicit include/library paths.
If ASK builds **external kernel modules**, you additionally need:
- the **exact target kernel source tree** or a prepared build tree,
- the **exact target `.config`** after fragment merging,
- generated headers under `include/generated`,
- and, when `CONFIG_MODVERSIONS=y`, a matching `Module.symvers` from a **full kernel build**, not merely `modules_prepare`.
That distinction matters because `linux-libc-dev-arm64-cross` is for **userspace development headers**. Debian's own package metadata says those are Linux kernel headers for cross-compiling development, not a substitute for the actual configured kernel build tree you need for external modules. So: use Debian cross libc/sysroot packages for userspace, and use `KERNEL_TAR` for modules.
### Minimal sysroot extraction patterns
If you build the sysroot from Debian cross packages:
```bash
mkdir -p /opt/sysroot/usr
rsync -a /usr/aarch64-linux-gnu/ /opt/sysroot/usr/
```
If you receive a prebuilt sysroot tarball instead:
```bash
mkdir -p /opt/sysroot
tar -xf packages/arm64-glibc-sysroot.tar.xz -C /opt/sysroot
```
If you receive a matching kernel tree tarball:
```bash
mkdir -p /opt/kernel
tar -xf packages/lf-6.12.49-2.2.0.tar.gz --strip-components=1 -C /opt/kernel
```
### Kernel config merge and mismatch handling
Use this sequence every time:
```bash
make -C /opt/kernel ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
filter-kconfig-fragment.sh /opt/kernel docker/kernel-extra.config \
> /tmp/kernel-extra.effective.config
/opt/kernel/scripts/kconfig/merge_config.sh -m \
/opt/kernel/.config \
/tmp/kernel-extra.effective.config
KCONFIG_WARN_UNKNOWN_SYMBOLS=1 KCONFIG_WERROR=1 \
make -C /opt/kernel ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- olddefconfig
```
Why the filter exists is simple: config fragments drift across kernel lines. `merge_config.sh` is the right merge tool, but it does not magically make a symbol exist in a tree that no longer defines it. Filtering before merge prevents stale fragment entries from poisoning the build.
The kernel docs explicitly document `KCONFIG_WARN_UNKNOWN_SYMBOLS` and `KCONFIG_WERROR`, and the kernel tree ships `merge_config.sh` explicitly for fragment merging.
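The referenced filter is simple enough to sketch. The version below is a minimal illustration written as a shell function (the real `scripts/filter-kconfig-fragment.sh` may differ); argument order matches the invocation above: kernel tree first, fragment second.

```shell
# Emit only those fragment lines whose CONFIG_ symbol is defined by some
# Kconfig file under the given kernel tree; drop unknown symbols with a note.
filter_kconfig_fragment() {
    ksrc="$1"; frag="$2"
    while IFS= read -r line; do
        # Extract the symbol from "CONFIG_FOO=..." or "# CONFIG_FOO is not set".
        sym=$(printf '%s\n' "$line" | sed -n \
            -e 's/^CONFIG_\([A-Za-z0-9_]*\)=.*/\1/p' \
            -e 's/^# CONFIG_\([A-Za-z0-9_]*\) is not set$/\1/p')
        if [ -z "$sym" ]; then
            printf '%s\n' "$line"    # comments and blank lines pass through
        elif grep -rqsE "^[[:space:]]*(menu)?config[[:space:]]+${sym}([[:space:]]|\$)" \
                --include='Kconfig*' "$ksrc"; then
            printf '%s\n' "$line"
        else
            printf 'filter: dropping unknown CONFIG_%s\n' "$sym" >&2
        fi
    done < "$frag"
}
```

The function only decides symbol *existence*; dependency resolution stays where it belongs, in the subsequent `olddefconfig` pass.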
### Auto-gating missing symbols with `scripts/config`
For features that are optional or kernel-version-dependent, gate them before `olddefconfig`:
```bash
cd /opt/kernel
if grep -RqsE '^[[:space:]]*(menu)?config[[:space:]]+NETFILTER_XTABLES_LEGACY([[:space:]]|$)' .; then
scripts/config --file .config -e NETFILTER_XTABLES_LEGACY
fi
if grep -RqsE '^[[:space:]]*(menu)?config[[:space:]]+IP_NF_IPTABLES_LEGACY([[:space:]]|$)' .; then
scripts/config --file .config -e IP_NF_IPTABLES_LEGACY
fi
KCONFIG_WARN_UNKNOWN_SYMBOLS=1 KCONFIG_WERROR=1 \
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- olddefconfig
```
This is the right cure for errors of the form “fragment expects `CONFIG_X=y` but the resolved config says `<missing>`”: first check whether the symbol exists in the target tree, then enable it only when it exists, then let Kconfig resolve dependencies. `scripts/config` is the official in-tree command-line `.config` manipulator.
### When to stop at `modules_prepare` and when to do a full kernel build
Use `modules_prepare` when:
- you only need generated headers and basic preparation,
- and `CONFIG_MODVERSIONS` is not required for module ABI matching.
Do a full kernel build when:
- `CONFIG_MODVERSIONS=y`,
- you need `Module.symvers`,
- or you want the strongest ABI match signal against the actual shipping board kernel.
A practical decision rule:
```text
If you need only compile-time headers: modules_prepare may be enough.
If you need symbol versioning correctness: build the kernel fully.
```
That rule is not opinion; it is straight from the kernel's external-modules documentation.
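The rule can also be enforced mechanically from the resolved config rather than decided by hand; a small sketch (the `needs_full_build` helper name is mine, not from ASK):

```shell
# Succeed when the resolved kernel config enables CONFIG_MODVERSIONS, i.e.
# when an out-of-tree module build needs Module.symvers from a full build.
needs_full_build() {
    grep -q '^CONFIG_MODVERSIONS=y' "$1/.config"
}

# Example gate in a build script:
# if needs_full_build /opt/kernel; then
#     make -C /opt/kernel ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- Image modules
# else
#     make -C /opt/kernel ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- modules_prepare
# fi
```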
## Verification, CI, and trade-offs
The verification story should be mechanical, not aspirational.
To verify arm64 userspace binaries:
```bash
file out/ask/cmm
readelf -h out/ask/cmm | grep 'Machine:'
readelf -l out/ask/cmm | grep 'Requesting program interpreter'
readelf -d out/ask/cmm | grep NEEDED
```
You want:
- `file` to report an AArch64 ELF,
- `readelf -h` to report `Machine: AArch64`,
- the ELF interpreter to match the target runtime's loader,
- and `NEEDED` entries to resolve against the target sysroot or rootfs, not your host.
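For a CI gate, the machine check can also be asserted at the byte level instead of grepping `readelf`'s human-oriented output. A minimal sketch (helper names are mine; it assumes a little-endian build host, since `od` decodes in host byte order):

```shell
# Print the ELF e_machine field: a little-endian u16 at byte offset 18.
elf_machine() {
    od -An -j18 -N2 -tu2 "$1" | tr -d ' '
}

# Fail loudly unless FILE reports EM_AARCH64 (value 183).
assert_aarch64() {
    [ "$(elf_machine "$1")" = "183" ] || {
        echo "not an AArch64 ELF: $1" >&2
        return 1
    }
}

# Typical CI use:
# assert_aarch64 out/ask/cmm
```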
To verify kernel modules:
```bash
file out/ask/cdx.ko
readelf -h out/ask/cdx.ko | grep 'Machine:'
modinfo -F vermagic out/ask/cdx.ko
readelf -S out/ask/cdx.ko | grep __versions || true
```
You want:
- `Machine: AArch64`,
- `vermagic` matching the target kernel release/build flags,
- and, when `CONFIG_MODVERSIONS=y`, version sections consistent with the kernel build products.
For stricter ABI checks, compare the module against the exact `Module.symvers` and shipping kernel release, not a hand-wavy “same major version” guess.
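That exact-match comparison is easy to make mechanical. A sketch (the `vermagic_matches` helper is illustrative, not from ASK; a built kernel tree exposes its release string in `include/config/kernel.release`):

```shell
# Succeed only when the module's vermagic starts with the exact kernel
# release string; vermagic appends flags such as "SMP preempt mod_unload
# aarch64" after the release, so prefix matching on a word boundary is right.
vermagic_matches() {
    case "$1" in
        "$2"|"$2 "*) return 0 ;;
        *)           return 1 ;;
    esac
}

# Typical use against real build products:
# vermagic_matches "$(modinfo -F vermagic out/ask/cdx.ko)" \
#                  "$(cat /opt/kernel/include/config/kernel.release)"
```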
A light smoke test under QEMU is acceptable **after** build if you want one, but it should be optional and narrow. The primary build should remain non-emulated. Docker's own docs explicitly recommend cross-compilation or native multi-node builders over QEMU where possible because QEMU is slower for compute-heavy work.
### CI guidance that actually matters
Use these practices:
- **Pin builder base images by digest.**
- **Keep toolchain and sysroot in separate reusable stages or images.**
- **Verify `SHA256SUMS` for every vendored tarball before extraction.**
- **Set `SOURCE_DATE_EPOCH`, `KBUILD_BUILD_TIMESTAMP`, `KBUILD_BUILD_USER`, and `KBUILD_BUILD_HOST`.**
- **Remove `.git` from tarball-extracted sources unless VCS metadata is a deliberate build input.**
- **Use BuildKit cache mounts for `apt` and, if applicable, compiler caches.**
- **Use `--output type=local` for artifacts rather than hiding everything inside an image layer.**
Those recommendations are directly aligned with Docker's best-practices guidance and the kernel's reproducible-build guidance. The kernel docs are explicit that timestamps, user, and host leakage must be overridden for reproducible output, and Docker's docs explicitly recommend multi-stage builds and local exporters for clean build outputs.
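The checksum bullet is best enforced by one helper shared by every extraction step, so no archive can be unpacked unverified. A sketch (the helper name is mine; it assumes GNU `sha256sum` and a `SHA256SUMS` file sitting next to the archives):

```shell
# Refuse to extract TARBALL unless its checksum matches the SHA256SUMS entry
# in the same directory; then unpack with the top-level directory stripped.
verify_and_extract() {
    tarball="$1"; dest="$2"
    dir=$(dirname "$tarball"); name=$(basename "$tarball")
    # Select this archive's line (with or without a "./" prefix) and verify it.
    ( cd "$dir" && grep -E " (\./)?${name}\$" SHA256SUMS | sha256sum -c - ) \
        || { echo "checksum mismatch or missing entry: $name" >&2; return 1; }
    mkdir -p "$dest"
    tar -xf "$tarball" --strip-components=1 -C "$dest"
}
```

An empty `grep` result fails safely too: `sha256sum -c` rejects input with no checksum lines.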
### Trade-offs
If you want the blunt version:
- **Debian cross packages** are the best speed-to-value option for ASK.
- **Arm GNU Toolchain** is best when you want a pinned vendor-distributed compiler tarball.
- **Clang/LLVM** is attractive if you already know the project and module path build cleanly with it.
- **crosstool-ng** is for teams that truly need custom toolchains and are willing to own them.
- **musl-cross** only makes sense when the target runtime is musl.
- **QEMU** is a fallback or a spot-check tool, not the backbone of a serious CI build.
```mermaid
flowchart TD
A[Need arm64 ASK artifacts in Docker] --> B{Target runtime glibc?}
B -- yes --> C{Need fastest reliable setup?}
C -- yes --> D[Debian cross packages + glibc sysroot]
C -- no --> E{Need custom pinned toolchain?}
E -- yes --> F[Arm GNU Toolchain or crosstool-ng]
E -- no --> D
B -- no --> G{Target runtime musl?}
G -- yes --> H[musl-cross or Alpine/musl sysroot]
G -- no --> I[Clarify runtime first]
D --> J{Kernel modules involved?}
F --> J
H --> J
J -- no --> K[userspace cross build only]
J -- yes --> L[provide KERNEL_TAR + config + headers]
L --> M{CONFIG_MODVERSIONS?}
M -- no --> N[modules_prepare may be enough]
M -- yes --> O[full kernel build to get Module.symvers]
K --> P[verify ELF headers]
N --> Q[verify vermagic and symbols]
O --> Q
```
## Sources and notes
This report prioritizes official or project-authoritative sources. Docker guidance is from the official Docker documentation on multi-platform builds, `BUILDPLATFORM`/`TARGET*` build arguments, build secrets, local exporters, and Dockerfile best practices. Debian package metadata is from the official Debian package pages for `crossbuild-essential-arm64`, `gcc-aarch64-linux-gnu`, `g++-aarch64-linux-gnu`, `libc6-dev-arm64-cross`, `linux-libc-dev-arm64-cross`, and `libstdc++-14-dev-arm64-cross`. Kernel guidance is from the official Linux kernel docs on external modules, `modules_prepare`, `Module.symvers`, reproducible builds, Kconfig controls, Clang/LLVM kernel builds, and the in-tree `merge_config.sh` / `scripts/config` utilities. Alternative cross-toolchain options are grounded in the official `crosstool-ng` site, the musl site and `musl-cross-make`, and Linaro's downloads page pointing to the Arm Developer site for official Arm GNU toolchain releases. Alpine references are from the Alpine wiki and package index for `build-base`, musl, and `gcompat`.
# Reproducible Container Build System for ASK Tarball Sources
## Executive summary
The supplied ASK archive is buildable in containers, but in its current form it is not hermetic or reproducible enough for serious CI use. Inspection of the archive shows a Debian-oriented arm64 build that still fetches sources during the build, assumes a GNU/glibc cross toolchain, and expects a separate kernel tree for module builds. That combination is fine for local hacking and bad for supply-chain control.
The clean fix is straightforward: close the build over a `packages/` directory that contains **every** source archive the build needs, pass only archive **paths** as build arguments, verify checksums, replace every `git clone` and `wget` step with tarball extraction plus patch application, and use a multi-stage Dockerfile that exports only artifacts. Docker's own docs explicitly support build arguments for parameterizing instructions, recommend multi-stage builds to keep final outputs small, and provide local exporters so you can pull files out of a build without shipping the entire toolchain image.
The bad news is that a pure Alpine conversion is not free. Alpine uses musl, not glibc, and that changes ABI and runtime behavior. If the final ASK userspace binaries must run unchanged on a glibc-based target, Alpine-only is the wrong default unless you deliberately introduce a glibc sysroot, static-link what you can, or use compatibility mechanisms such as `gcompat` for simple cases. Alpine's own docs and the musl docs both make that point indirectly but unmistakably.
## What the supplied ASK archive implies
Inspection of the supplied ASK tarball shows a C and kernel-module build for NXP Layerscape hardware. The root Makefile already pins `fmlib` and `fmc` to `lf-6.12.49-2.2.0`, downloads `libnfnetlink-1.0.2` and `libnetfilter_conntrack-1.1.0` during the build, and assumes a separate kernel source tree through `KDIR`. The included `build/setup.sh` is Debian host bootstrap logic; inside a container, that script should be retired, not executed.
For Alpine, the dependency picture is mixed but manageable. Official Alpine packages exist for `libmnl-dev`, `libpcap-dev`, `libxml2-dev`, and `tclap-dev`, with `tclap-dev` living in `community`. The nuisance dependency is `libcli`: Alpine's package index only shows `libcli` in edge/testing, not as a stable v3.22 package, while upstream libcli's own README documents a plain `make && make install` flow into `/usr/local/lib`. For this project, vendoring libcli as a tarball and building it in a helper stage is the sane choice; enabling edge/testing for one leaf dependency is not.
There is one more important project-specific constraint: the archive does **not** include the kernel tree that ASKs out-of-tree modules need. So a fully reproducible module build requires one more input archive, such as `packages/linux.tar.xz`, or a deliberate decision to build only `userspace`.
## Recommended build architecture
The right shape is a **closed build context**. Put the ASK tarball, every required dependency tarball, and optionally the matching kernel source tarball in `packages/`. Keep the Docker context small with `.dockerignore`. Verify those archives with `SHA256SUMS`. Set `SOURCE_DATE_EPOCH` so timestamps stop drifting. In CI, pin the base image by digest instead of trusting a moving tag. Docker's best-practices guide is blunt about why: tags move, digests do not, multi-stage builds slim outputs, and `.dockerignore` matters. The `SOURCE_DATE_EPOCH` spec exists precisely to keep timestamps from poisoning reproducibility.
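Digest pinning is mechanical: resolve the tag once (for example with `docker buildx imagetools inspect alpine:3.22`), then commit the pinned `FROM` line. The digest below is a placeholder, not a real value:

```dockerfile
# Resolved once with: docker buildx imagetools inspect alpine:3.22
# Substitute the real digest; the one below is a placeholder.
FROM alpine:3.22@sha256:<resolved-digest> AS build
```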
For ASK specifically, the least painful Alpine route is to build natively for `linux/arm64` with Buildx rather than trying to recreate Debian's `crossbuild-essential-arm64` model inside Alpine. Debian has a purpose-built GNU/glibc cross meta-package; Alpine's stable equivalent is not that. Alpine's native toolchain story is `build-base` on musl, and Docker's multi-platform build support is exactly what makes `--platform=linux/arm64` practical here.
```mermaid
flowchart TD
A[packages/*.tar.* in build context] --> B[buildx builder stage]
B --> C[extract ASK tarball]
B --> D[extract vendored dependency tarballs]
D --> E[patch and build fmlib, fmc, libnfnetlink, libnetfilter_conntrack, libcli]
C --> F[build ASK userspace and optionally kernel modules]
E --> F
F --> G[dist or artifact directory]
G --> H[scratch artifacts stage]
H --> I[buildx local exporter to out/ask]
```
## Concrete implementation
The wrapper below fixes the invocation syntax. `docker buildx build` needs an explicit build context, repeated `--build-arg KEY=VALUE` flags, and a `--file` argument. The local exporter is the right default here because ASK's real deliverable is a directory of build artifacts, not a fat runtime image.
**Recommended `.dockerignore`**
```dockerignore
**
!Makefile
!docker/**
!packages/**
```
Docker recommends using `.dockerignore` to exclude irrelevant files from the build context, and that matters here because the whole point is to build from a tightly controlled set of source tarballs.
**Host-side vendoring script**
```sh
#!/usr/bin/env sh
set -eu
PACKAGES_DIR="${1:-packages}"
ASK_SRC="${ASK_SRC:-/absolute/path/to/ASK.tar.gz}"
NXP_TAG="lf-6.12.49-2.2.0"
mkdir -p "${PACKAGES_DIR}"
install -m 0644 "${ASK_SRC}" "${PACKAGES_DIR}/ASK.tar.gz"
curl -L --fail -o "${PACKAGES_DIR}/fmlib-${NXP_TAG}.tar.gz" \
"https://github.com/nxp-qoriq/fmlib/archive/refs/tags/${NXP_TAG}.tar.gz"
curl -L --fail -o "${PACKAGES_DIR}/fmc-${NXP_TAG}.tar.gz" \
"https://github.com/nxp-qoriq/fmc/archive/refs/tags/${NXP_TAG}.tar.gz"
curl -L --fail -o "${PACKAGES_DIR}/libnfnetlink-1.0.2.tar.bz2" \
"https://www.netfilter.org/projects/libnfnetlink/files/libnfnetlink-1.0.2.tar.bz2"
curl -L --fail -o "${PACKAGES_DIR}/libnetfilter_conntrack-1.1.0.tar.xz" \
"https://www.netfilter.org/projects/libnetfilter_conntrack/files/libnetfilter_conntrack-1.1.0.tar.xz"
curl -L --fail -o "${PACKAGES_DIR}/libcli-1.10.7.tar.gz" \
"https://github.com/dparrish/libcli/archive/refs/tags/V1.10.7.tar.gz"
# Optional: add a matching kernel source archive for full module builds.
# install -m 0644 /path/to/linux.tar.xz "${PACKAGES_DIR}/linux.tar.xz"
(
cd "${PACKAGES_DIR}"
find . -maxdepth 1 -type f \
\( -name '*.tar.gz' -o -name '*.tar.xz' -o -name '*.tar.bz2' \) \
-print0 | sort -z | xargs -0 sha256sum > SHA256SUMS
)
```
If local repacking is part of your process, normalize ownership, ordering, and mtimes and set `SOURCE_DATE_EPOCH`; otherwise you are leaking host-local timestamps into your supposedly reproducible input archive.
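That normalization fits in one small helper (the `repack` name is mine; it assumes GNU tar and gzip):

```shell
# repack SRCDIR OUT.tar.gz: produce a byte-stable archive by normalizing
# member order, ownership, and mtimes, and suppressing gzip's own timestamp.
repack() {
    srcdir="$1"; out="$2"
    tar --sort=name --owner=0 --group=0 --numeric-owner \
        --mtime="@${SOURCE_DATE_EPOCH:-0}" \
        -C "$(dirname "$srcdir")" -cf - "$(basename "$srcdir")" \
        | gzip -n > "$out"
}
```

Repacking the same tree twice, even after mtimes drift, should then yield byte-identical archives.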
**Root wrapper `Makefile`**
```make
.RECIPEPREFIX := >
PLATFORM ?= linux/arm64
ALPINE_VERSION ?= 3.22
BUILD_TARGET ?= dist
OUT_DIR ?= out/ask
IMAGE ?= ask-build:local
ASK_TAR ?= packages/ASK.tar.gz
FMLIB_TAR ?= packages/fmlib-lf-6.12.49-2.2.0.tar.gz
FMC_TAR ?= packages/fmc-lf-6.12.49-2.2.0.tar.gz
LIBNFNETLINK_TAR ?= packages/libnfnetlink-1.0.2.tar.bz2
LIBNFCT_TAR ?= packages/libnetfilter_conntrack-1.1.0.tar.xz
LIBCLI_TAR ?= packages/libcli-1.10.7.tar.gz
KERNEL_TAR ?= packages/linux.tar.xz
SOURCE_DATE_EPOCH ?= 1704067200
JOBS ?= 0
USERSPACE_CFLAGS ?=
USERSPACE_LDFLAGS ?=
.PHONY: ASK ASK_IMAGE
ASK:
> docker buildx build \
> --platform="$(PLATFORM)" \
> --file docker/ask.Dockerfile \
> --build-arg "ALPINE_VERSION=$(ALPINE_VERSION)" \
> --build-arg "ASK_TAR=$(ASK_TAR)" \
> --build-arg "FMLIB_TAR=$(FMLIB_TAR)" \
> --build-arg "FMC_TAR=$(FMC_TAR)" \
> --build-arg "LIBNFNETLINK_TAR=$(LIBNFNETLINK_TAR)" \
> --build-arg "LIBNFCT_TAR=$(LIBNFCT_TAR)" \
> --build-arg "LIBCLI_TAR=$(LIBCLI_TAR)" \
> --build-arg "KERNEL_TAR=$(KERNEL_TAR)" \
> --build-arg "BUILD_TARGET=$(BUILD_TARGET)" \
> --build-arg "SOURCE_DATE_EPOCH=$(SOURCE_DATE_EPOCH)" \
> --build-arg "JOBS=$(JOBS)" \
> --build-arg "USERSPACE_CFLAGS=$(USERSPACE_CFLAGS)" \
> --build-arg "USERSPACE_LDFLAGS=$(USERSPACE_LDFLAGS)" \
> --target artifacts \
> --output "type=local,dest=$(OUT_DIR)" \
> .
ASK_IMAGE:
> docker buildx build \
> --platform="$(PLATFORM)" \
> --file docker/ask.Dockerfile \
> --build-arg "ALPINE_VERSION=$(ALPINE_VERSION)" \
> --build-arg "ASK_TAR=$(ASK_TAR)" \
> --build-arg "FMLIB_TAR=$(FMLIB_TAR)" \
> --build-arg "FMC_TAR=$(FMC_TAR)" \
> --build-arg "LIBNFNETLINK_TAR=$(LIBNFNETLINK_TAR)" \
> --build-arg "LIBNFCT_TAR=$(LIBNFCT_TAR)" \
> --build-arg "LIBCLI_TAR=$(LIBCLI_TAR)" \
> --build-arg "KERNEL_TAR=$(KERNEL_TAR)" \
> --build-arg "BUILD_TARGET=$(BUILD_TARGET)" \
> --build-arg "SOURCE_DATE_EPOCH=$(SOURCE_DATE_EPOCH)" \
> --build-arg "JOBS=$(JOBS)" \
> --build-arg "USERSPACE_CFLAGS=$(USERSPACE_CFLAGS)" \
> --build-arg "USERSPACE_LDFLAGS=$(USERSPACE_LDFLAGS)" \
> --load \
> --tag "$(IMAGE)" \
> .
```
**`docker/overrides/toolchain.mk`**
```make
CROSS_COMPILE ?=
ARCH ?= arm64
PLATFORM ?= LS1043A
CC ?= $(if $(CROSS_COMPILE),$(CROSS_COMPILE)gcc,gcc)
CXX ?= $(if $(CROSS_COMPILE),$(CROSS_COMPILE)g++,g++)
AR ?= $(if $(CROSS_COMPILE),$(CROSS_COMPILE)ar,ar)
STRIP ?= $(if $(CROSS_COMPILE),$(CROSS_COMPILE)strip,strip)
HOST ?= $(shell $(CC) -dumpmachine 2>/dev/null || echo aarch64-alpine-linux-musl)
KDIR ?= /opt/kernel
```
**`docker/overrides/Makefile`**
This override intentionally removes the upstream host-setup/network-fetch behavior and keeps only the build targets that matter for containerized CI.
```make
.RECIPEPREFIX := >
include build/toolchain.mk
include build/sources.mk
DIST := $(CURDIR)/dist
SRCDIR := $(CURDIR)/sources
PATCHES := $(CURDIR)/patches
VENDOR_DIR ?= /vendor/packages
HOST ?= $(shell $(CC) -dumpmachine 2>/dev/null || echo aarch64-alpine-linux-musl)
JOBS ?= $(shell getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
USERSPACE_CFLAGS ?=
USERSPACE_LDFLAGS ?=
FMLIB_DIR := $(SRCDIR)/fmlib
FMC_DIR := $(SRCDIR)/fmc/source
LIBFCI_DIR := $(CURDIR)/fci/lib
SYSROOT := $(SRCDIR)/sysroot
ABM_DIR := $(CURDIR)/auto_bridge
FMLIB_TAR ?= $(VENDOR_DIR)/fmlib-$(NXP_TAG).tar.gz
FMC_TAR ?= $(VENDOR_DIR)/fmc-$(NXP_TAG).tar.gz
LIBNFNETLINK_TAR ?= $(VENDOR_DIR)/libnfnetlink-$(LIBNFNETLINK_VER).tar.bz2
LIBNFCT_TAR ?= $(VENDOR_DIR)/libnetfilter_conntrack-$(LIBNFCT_VER).tar.xz
KBUILD_ARGS := CROSS_COMPILE=$(CROSS_COMPILE) ARCH=$(ARCH)
CDX_ARGS := $(KBUILD_ARGS) KERNELDIR=$(KDIR) PLATFORM=$(PLATFORM)
FCI_ARGS := $(KBUILD_ARGS) KERNEL_SOURCE=$(KDIR) BOARD_ARCH=$(ARCH) \
KBUILD_EXTRA_SYMBOLS=$(CURDIR)/cdx/Module.symvers
ABM_ARGS := $(KBUILD_ARGS) KERNEL_SOURCE=$(KDIR) PLATFORM=$(PLATFORM)
S := $(SRCDIR)/.stamps
$(shell mkdir -p $(S))
define extract_strip1
rm -rf $(2) && mkdir -p $(2) && tar -xf $(1) --strip-components=1 -C $(2)
endef
.PHONY: all sources modules userspace cdx fci auto_bridge fmc cmm dpa_app dist clean clean-all
all: modules userspace
sources: $(S)/fmlib $(S)/fmc $(S)/libfci $(S)/libnfnetlink $(S)/libnfct
$(S)/fmlib:
> @echo "==> fmlib: extract, patch, build"
> test -f $(FMLIB_TAR)
> $(call extract_strip1,$(FMLIB_TAR),$(FMLIB_DIR))
> cd $(FMLIB_DIR) && patch -p1 -i $(PATCHES)/fmlib/01-mono-ask-extensions.patch
> $(MAKE) -j$(JOBS) -C $(FMLIB_DIR) CROSS_COMPILE=$(CROSS_COMPILE) KERNEL_SRC=$(KDIR) libfm-arm.a
> ln -sf libfm-arm.a $(FMLIB_DIR)/libfm.a
> touch $@
$(S)/fmc:
> @echo "==> fmc: extract, patch, build"
> test -f $(FMC_TAR)
> rm -rf $(SRCDIR)/fmc
> mkdir -p $(SRCDIR)/fmc
> tar -xf $(FMC_TAR) -C $(SRCDIR)/fmc --strip-components=1
> cd $(FMC_DIR) && patch -p1 -i $(PATCHES)/fmc/01-mono-ask-extensions.patch
> $(MAKE) -j$(JOBS) -C $(FMC_DIR) \
> CC=$(CC) \
> CFLAGS="$(USERSPACE_CFLAGS)" \
> LDFLAGS="$(USERSPACE_LDFLAGS)" \
> TCLAP_HEADER_PATH=/usr/include \
> LIBXML2_HEADER_PATH=/usr/include/libxml2 \
> SYSROOT=$(SYSROOT)
> touch $@
$(S)/libnfnetlink:
> @echo "==> libnfnetlink: extract, patch, install into sysroot"
> test -f $(LIBNFNETLINK_TAR)
> rm -rf $(SRCDIR)/libnfnetlink-$(LIBNFNETLINK_VER)
> mkdir -p $(SRCDIR) $(SYSROOT)
> tar -xf $(LIBNFNETLINK_TAR) -C $(SRCDIR)
> cd $(SRCDIR)/libnfnetlink-$(LIBNFNETLINK_VER) && \
> patch -p1 -i $(PATCHES)/libnfnetlink/0001-libnfnetlink-fix-for-ARM64-and-clang.patch && \
> ./configure --host=$(HOST) --prefix=$(SYSROOT) --disable-shared --enable-static && \
> $(MAKE) -j$(JOBS) && $(MAKE) install
> touch $@
$(S)/libnfct: $(S)/libnfnetlink
> @echo "==> libnetfilter_conntrack: extract, patch, install into sysroot"
> test -f $(LIBNFCT_TAR)
> rm -rf $(SRCDIR)/libnetfilter_conntrack-$(LIBNFCT_VER)
> mkdir -p $(SRCDIR) $(SYSROOT)
> tar -xf $(LIBNFCT_TAR) -C $(SRCDIR)
> cd $(SRCDIR)/libnetfilter_conntrack-$(LIBNFCT_VER) && \
> patch -p1 -i $(PATCHES)/libnetfilter_conntrack/0001-libnetfilter_conntrack-fix-for-ARM64-and-clang.patch && \
> ./configure --host=$(HOST) --prefix=$(SYSROOT) --with-libnfnetlink=$(SYSROOT) --disable-shared --enable-static && \
> $(MAKE) -j$(JOBS) && $(MAKE) install
> touch $@
$(S)/libfci: $(S)/fmlib $(S)/libnfnetlink $(S)/libnfct
> @echo "==> libfci"
> $(MAKE) -C fci/lib CC=$(CC) AR=$(AR)
> touch $@
modules: cdx fci auto_bridge
userspace: fmc cmm dpa_app
cdx:
> $(MAKE) -C cdx $(CDX_ARGS)
fci: $(S)/libfci
> $(MAKE) -C fci $(FCI_ARGS)
auto_bridge:
> $(MAKE) -C auto_bridge $(ABM_ARGS)
fmc: $(S)/fmc
cmm: $(S)/libfci $(S)/libnfnetlink $(S)/libnfct
> $(MAKE) -C cmm \
> CC=$(CC) \
> LIBFCI_DIR=$(LIBFCI_DIR) \
> ABM_DIR=$(ABM_DIR) \
> SYSROOT=$(SYSROOT) \
> CFLAGS="$(USERSPACE_CFLAGS)" \
> LDFLAGS="$(USERSPACE_LDFLAGS)"
dpa_app: $(S)/fmc
> $(MAKE) -C dpa_app \
> CC=$(CC) \
> CFLAGS="-DDPAA_DEBUG_ENABLE -DNCSW_LINUX $(USERSPACE_CFLAGS) -I$(FMC_DIR) -I$(FMC_DIR)/inc/integrations/drivers/netcfg" \
> LDFLAGS="-lpthread -lcli -lxml2 -lstdc++ $(USERSPACE_LDFLAGS)"
dist: all
> rm -rf $(DIST)
> mkdir -p $(DIST)
> cp -a $(CURDIR)/cmm/src/cmm $(DIST)/
> cp -a $(CURDIR)/dpa_app/dpa_app $(DIST)/
> cp -a $(CURDIR)/cdx/cdx.ko $(DIST)/
> cp -a $(CURDIR)/fci/fci.ko $(DIST)/
> cp -a $(CURDIR)/auto_bridge/auto_bridge.ko $(DIST)/
> cp -a $(SRCDIR)/fmc/source/fmc $(DIST)/
> cp -a scripts/init/* $(DIST)/
clean:
> $(MAKE) -C auto_bridge clean || true
> $(MAKE) -C cdx clean || true
> $(MAKE) -C cmm clean || true
> $(MAKE) -C dpa_app clean || true
> $(MAKE) -C fci clean || true
> $(MAKE) -C fci/lib clean || true
> rm -rf $(DIST)
clean-all: clean
> rm -rf $(SRCDIR)
```
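One non-obvious choice above is `.RECIPEPREFIX := >`: recipe lines begin with `>` instead of a literal tab, which sidesteps the classic stray-tab/"missing separator" failure mode when the file passes through editors and patch tooling. A throwaway lint sketch (the path matches this repo's layout; it is a suggestion, not part of the build):

```shell
# Fail if any line in the override Makefile begins with a literal tab,
# since GNU make no longer treats tab-prefixed lines as recipe lines
# once .RECIPEPREFIX is set to '>'.
mk="docker/overrides/Makefile"
tab="$(printf '\t')"
if [ -f "$mk" ] && grep -q "^${tab}" "$mk"; then
    echo "tab-indented recipe lines found in $mk" >&2
    exit 1
fi
echo "ok: no tab-indented recipes in $mk"
```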
**`docker/ask.Dockerfile`**
This Dockerfile deliberately enables Alpine `community`, because Alpine documents that `community` is not enabled by default in many configurations, and ASK needs `tclap-dev`, which is in that repository.
```dockerfile
# syntax=docker/dockerfile:1.7
ARG ALPINE_VERSION=3.22
FROM alpine:${ALPINE_VERSION} AS builder
ARG ALPINE_VERSION
ARG ASK_TAR=packages/ASK.tar.gz
ARG FMLIB_TAR=packages/fmlib-lf-6.12.49-2.2.0.tar.gz
ARG FMC_TAR=packages/fmc-lf-6.12.49-2.2.0.tar.gz
ARG LIBNFNETLINK_TAR=packages/libnfnetlink-1.0.2.tar.bz2
ARG LIBNFCT_TAR=packages/libnetfilter_conntrack-1.1.0.tar.xz
ARG LIBCLI_TAR=packages/libcli-1.10.7.tar.gz
ARG KERNEL_TAR=packages/linux.tar.xz
ARG BUILD_TARGET=dist
ARG SOURCE_DATE_EPOCH=1704067200
ARG JOBS=0
ARG USERSPACE_CFLAGS=
ARG USERSPACE_LDFLAGS=
WORKDIR /work
RUN set -eux; \
printf '%s\n' \
"https://dl-cdn.alpinelinux.org/alpine/v${ALPINE_VERSION}/main" \
"https://dl-cdn.alpinelinux.org/alpine/v${ALPINE_VERSION}/community" \
> /etc/apk/repositories; \
apk add --no-cache \
bash \
bc \
bison \
build-base \
bzip2 \
coreutils \
file \
findutils \
flex \
gawk \
libmnl-dev \
libpcap-dev \
libxml2-dev \
linux-headers \
openssl \
openssl-dev \
patch \
perl \
pkgconf \
python3 \
tar \
tclap-dev \
xz \
zlib-dev
COPY packages/ /vendor/packages/
COPY docker/overrides/ /docker-overrides/
RUN set -eux; \
if [ -f /vendor/packages/SHA256SUMS ]; then \
cd /vendor/packages && sha256sum -c SHA256SUMS; \
fi
RUN set -eux; \
test -f "/vendor/${ASK_TAR}"; \
mkdir -p /work/src; \
tar -xf "/vendor/${ASK_TAR}" -C /work/src; \
test -d /work/src/ASK; \
rm -rf /work/src/ASK/.git; \
install -m 0644 /docker-overrides/Makefile /work/src/ASK/Makefile; \
install -m 0644 /docker-overrides/toolchain.mk /work/src/ASK/build/toolchain.mk; \
if [ ! -f /work/src/ASK/cmm/src/version.h ]; then \
printf '/* Auto-generated */\n#ifndef VERSION_H\n#define VERSION_H\n#define CMM_VERSION "%s"\n#endif\n' "tarball" > /work/src/ASK/cmm/src/version.h; \
fi
RUN set -eux; \
test -f "/vendor/${LIBCLI_TAR}"; \
mkdir -p /tmp/libcli; \
tar -xf "/vendor/${LIBCLI_TAR}" --strip-components=1 -C /tmp/libcli; \
make -C /tmp/libcli; \
make -C /tmp/libcli install; \
rm -rf /tmp/libcli
RUN set -eux; \
case "${BUILD_TARGET}" in \
all|modules|kernel|dist) \
test -f "/vendor/${KERNEL_TAR}" || { echo "KERNEL_TAR is required for BUILD_TARGET=${BUILD_TARGET}"; exit 2; }; \
mkdir -p /opt/kernel; \
tar -xf "/vendor/${KERNEL_TAR}" --strip-components=1 -C /opt/kernel; \
;; \
*) : ;; \
esac
ENV LC_ALL=C
ENV TZ=UTC
ENV SOURCE_DATE_EPOCH=${SOURCE_DATE_EPOCH}
RUN set -eux; \
if [ "${JOBS}" = "0" ]; then JOBS="$(getconf _NPROCESSORS_ONLN)"; fi; \
make -C /work/src/ASK \
VENDOR_DIR=/vendor/packages \
FMLIB_TAR="/vendor/${FMLIB_TAR}" \
FMC_TAR="/vendor/${FMC_TAR}" \
LIBNFNETLINK_TAR="/vendor/${LIBNFNETLINK_TAR}" \
LIBNFCT_TAR="/vendor/${LIBNFCT_TAR}" \
KDIR=/opt/kernel \
JOBS="${JOBS}" \
USERSPACE_CFLAGS="${USERSPACE_CFLAGS}" \
USERSPACE_LDFLAGS="${USERSPACE_LDFLAGS}" \
"${BUILD_TARGET}"
RUN set -eux; \
mkdir -p /out; \
if [ -d /work/src/ASK/dist ]; then \
cp -a /work/src/ASK/dist/. /out/; \
else \
[ -f /work/src/ASK/cmm/src/cmm ] && cp /work/src/ASK/cmm/src/cmm /out/ || true; \
[ -f /work/src/ASK/dpa_app/dpa_app ] && cp /work/src/ASK/dpa_app/dpa_app /out/ || true; \
[ -f /work/src/ASK/sources/fmc/source/fmc ] && cp /work/src/ASK/sources/fmc/source/fmc /out/ || true; \
[ -f /work/src/ASK/cdx/cdx.ko ] && cp /work/src/ASK/cdx/cdx.ko /out/ || true; \
[ -f /work/src/ASK/fci/fci.ko ] && cp /work/src/ASK/fci/fci.ko /out/ || true; \
[ -f /work/src/ASK/auto_bridge/auto_bridge.ko ] && cp /work/src/ASK/auto_bridge/auto_bridge.ko /out/ || true; \
fi
FROM scratch AS artifacts
COPY --from=builder /out/ /
```
The Dockerfile above uses `COPY packages/` plus explicit `tar -xf` because that is the less magical pattern once you have **multiple optional archives** such as `KERNEL_TAR`, checksum verification, and tarballs whose extracted top-level directory names you do not want to trust blindly. Docker's docs are clear that `ADD` can auto-unpack local tar archives, but they are equally clear that `COPY` is the basic, explicit copy primitive. In practice, explicit wins here.
The normal invocation is `make ASK BUILD_TARGET=dist KERNEL_TAR=packages/linux.tar.xz` for a full module build, or `make ASK BUILD_TARGET=userspace` when only the source/dependency tarballs are available and the kernel tree is not. The artifact-export path is the better default; the image-loading target is only useful if you actually want a local OCI image as an intermediate.
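The checksum gate in the builder stage is opt-in: it runs only when `packages/SHA256SUMS` exists. A minimal sketch for producing that manifest (run from the repository root; the glob assumes every vendored archive is a `*.tar.*` file):

```shell
# Regenerate the manifest consumed by the Dockerfile's `sha256sum -c` step
# whenever anything in packages/ changes, then sanity-check it immediately.
cd packages
sha256sum -- *.tar.* > SHA256SUMS
sha256sum -c SHA256SUMS
```

Commit `SHA256SUMS` alongside the archives, so the build fails loudly on a silently corrupted or swapped tarball instead of producing subtly wrong artifacts.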
**Short `ARG + ADD` pattern and a secret-safe variant**
If all you need is the shortest path-based pattern, Docker supports it:
```dockerfile
ARG ASK_TAR=packages/ASK.tar.gz
ADD ${ASK_TAR} /work/src/
```
That works because local `ADD` sources are relative to the build context and local tar archives are unpacked automatically. But Docker also warns that build args are not secret channels. So if the archive is confidential, use a BuildKit secret mount instead:
```dockerfile
# build command:
# docker buildx build --secret id=ask,src=packages/ASK.tar.gz -f docker/ask.Dockerfile .
RUN --mount=type=secret,id=ask,target=/tmp/ASK.tar.gz \
mkdir -p /work/src && tar -xf /tmp/ASK.tar.gz -C /work/src
```
That is the secure answer. Build args should carry a **path or selector**, not secret bytes.
## Debian and Alpine package mapping
The table below is the practical package/command map for the common build tools requested, plus the one ASK-specific exception that matters in real life.
| Need | Debian Trixie | Alpine 3.22 | Practical note |
|---|---|---|---|
| Meta toolchain bundle | `apt-get install -y build-essential` | `apk add --no-cache build-base` | Rough equivalents for native builds |
| arm64 cross bundle | `apt-get install -y crossbuild-essential-arm64` | no single stable equivalent | On Alpine, use `buildx --platform=linux/arm64` or bring your own GNU sysroot |
| C compiler | `apt-get install -y gcc` | `apk add --no-cache gcc` | `build-base` already includes it |
| C++ compiler | `apt-get install -y g++` | `apk add --no-cache g++` | `build-base` already includes it |
| `make` | `apt-get install -y make` | `apk add --no-cache make` | `build-base` already includes it |
| CMake | `apt-get install -y cmake` | `apk add --no-cache cmake` | Direct name match |
| pkg-config tooling | `apt-get install -y pkgconf` | `apk add --no-cache pkgconf` | On Debian, `pkg-config` is itself provided by pkgconf these days |
| OpenSSL headers/libs | `apt-get install -y libssl-dev` | `apk add --no-cache openssl-dev` | Direct development-package equivalents |
| zlib headers/libs | `apt-get install -y zlib1g-dev` | `apk add --no-cache zlib-dev` | Direct development-package equivalents |
| C++ runtime | runtime usually via `libstdc++6`; headers via `g++`/`libstdc++-14-dev` | `apk add --no-cache libstdc++` | Alpine splits runtime from the compiler meta-package |
| libc headers | `apt-get install -y libc6-dev` | `apk add --no-cache musl-dev` | Alpine exposes `libc-dev` through musl |
| ASK-specific TCLAP | `apt-get install -y libtclap-dev` | `apk add --no-cache tclap-dev` | Needs Alpine `community` |
| ASK-specific libcli | `apt-get install -y libcli-dev` | vendor tarball | Stable Alpine v3.22 does not have a normal `libcli-dev` package |
These mappings are grounded in Debian's package pages and Alpine's official package index. The important details are: `build-base` bundles `gcc`, `g++`, `make`, `patch`, and `libc-dev`; `musl-dev` provides the `libc-dev` role on Alpine; `pkgconf` is the package you actually install for pkg-config tooling; and Debian's `crossbuild-essential-arm64` has no one-package Alpine twin.
For ASK specifically, the two Alpine deltas worth remembering are `tclap-dev` in `community` and the absence of stable `libcli`, which is why the helper-stage libcli build is worth doing.
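The table collapses into a small installer sketch; the package lists below are illustrative subsets of the mapping, and the Alpine branch assumes `community` is already enabled, as the Dockerfile above arranges:

```shell
# Print (rather than run) the install command for the detected base-image
# family, using an illustrative subset of the mapping table.
pkgs_debian="build-essential pkgconf libssl-dev zlib1g-dev libtclap-dev libcli-dev"
pkgs_alpine="build-base pkgconf openssl-dev zlib-dev tclap-dev"
if [ -f /etc/alpine-release ]; then
    echo "apk add --no-cache ${pkgs_alpine}"
else
    echo "apt-get update && apt-get install -y --no-install-recommends ${pkgs_debian}"
fi
```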
## Musl compatibility, troubleshooting, and assumptions
This is where most Alpine ports actually fail.
- **Do not trust glibc-specific `#ifdef` logic.** The musl FAQ explicitly calls out hardcoded glibc assumptions and wrong `__GLIBC__` checks as common failure causes. Fix the preprocessor logic first, not last. The same FAQ also calls out GNU `getopt` expectations, iconv/UCS2 assumptions, `off_t` width assumptions, and too-small default thread stacks as recurring causes of runtime failures.
- **Musl locale behavior is not glibc locale behavior.** Alpine's own musl page says musl does not implement most of the locale features that glibc implements. If your code relies on glibc locale internals, stop pretending Alpine is a drop-in replacement.
- **Plugin unload/reload semantics differ.** musl's dynamic loader keeps libraries loaded for the life of the process and treats `dlclose` as a no-op. If your software depends on glibc-style unload/reload behavior, that is a real portability bug, not a packaging bug.
- **Static versus dynamic needs a hard-headed choice.** `-static-libgcc` and `-static-libstdc++` are good pragmatic flags when the problem is just the GCC/C++ runtime. They are not magic. Full static linking only works cleanly when every dependency is available as a static archive and your deployment model actually benefits from it.
- **`gcompat` is for simple runtime cases, not for wishful thinking.** Alpine documents `gcompat` as a glibc compatibility layer for musl systems, and the package provides the relevant `libc6-compat`/loader compatibility pieces. For more complex glibc applications, Alpine's own docs point you toward a glibc chroot/container instead.
- **ASK-specific practical fix:** because ASK's source already contains at least one musl-friendly guard around `execinfo`/`backtrace`, compiling natively on Alpine is plausible. The unresolved question is not “can it compile?” It is “what libc must the resulting userspace binaries target on the final device?”
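That final question can be answered mechanically before anything ships. A sketch, assuming binutils' `readelf` is present in the builder stage and using `dist/cmm` as an example path:

```shell
# Classify which libc an ELF binary expects at load time by reading its
# PT_INTERP program interpreter; fully static binaries have none.
bin="dist/cmm"   # example path; substitute any built userspace binary
interp="$(readelf -l "$bin" 2>/dev/null \
    | sed -n 's/.*Requesting program interpreter: \(.*\)]$/\1/p')"
case "$interp" in
    */ld-musl-*)  echo "musl-linked: $interp" ;;
    */ld-linux-*) echo "glibc-linked: $interp" ;;
    "")           echo "static binary, non-ELF, or readelf unavailable" ;;
    *)            echo "other interpreter: $interp" ;;
esac
```

Run this against every exported binary: a musl-linked executable copied onto a glibc-only target rootfs fails at exec time with a missing-interpreter error, and vice versa.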
Where the upstream project is genuinely unspecified, the pattern stays the same and only the builder image family changes: C/C++/Make/CMake/autotools usually start from `alpine`, Go from `golang:*-alpine`, Python from `python:*-alpine`, and Node from `node:*-alpine`, all with the same multi-stage split. The exception rule is blunt: the moment the project depends on glibc-only binaries, ABI-sensitive vendor libraries, or packaging ecosystems that assume glibc, switch back to a glibc builder instead of burning time fighting musl. Docker's own guidance is to use trusted minimal images and multi-stage builds; Alpine's own docs make clear that musl/glibc compatibility is extra engineering work, not something the base image solves for you.
**Open questions / limitations**
The supplied ASK tarball did not include the matching kernel source tree, so a fully reproducible module build still needs one more input archive. The deploy target's libc requirement was also unspecified; that matters a lot, because a musl-linked `dpa_app` or `cmm` is not automatically a drop-in replacement for a glibc target userland. Finally, no existing Debian Dockerfiles were present in the supplied archive, so the Alpine conversion above is a concrete replacement design derived from the archive's real build logic, not a literal line-by-line rewrite of preexisting Dockerfiles.