monok8s/docs/ask/ask-deepresearch-3.md
2026-05-01 01:39:04 +08:00

Reproducible Docker Build System for a Tarball-Supplied Upstream Project

Executive summary

The right design is to treat the upstream project tarball as an input artifact, not something the build fetches for itself. That means: vendor every required source tarball into the build context, pin the container base image by digest, verify checksums before extraction, remove .git metadata after unpacking, set SOURCE_DATE_EPOCH, and export build artifacts with docker buildx build --output type=local rather than baking them into an opaque image layer. For container construction, the correct pattern is “copy a vendored packages directory, select the tarball path with ARG, extract in RUN, build in a throwaway stage, and export only the outputs.” Docker explicitly documents that multi-stage builds reduce final image size, that build arguments parameterize the Dockerfile but are not appropriate for secrets, that .dockerignore should keep the context small, and that buildx supports local artifact export and build secrets. Docker also recommends digest pinning for deterministic base-image selection.

For Alpine conversion, the key issue is not package-manager syntax; it is the libc boundary. Alpine uses musl, not glibc, and that changes package names, runtime behavior, and sometimes build logic. Alpine's own documentation is blunt that compiling codebases may be harder on Alpine because of musl, that build-base is the standard compiler meta-package, that stable reproducible builds should use stable repositories rather than edge/testing, and that gcompat is only a compatibility layer for some glibc-built binaries. musl's own documentation highlights the load-bearing differences: smaller default thread stacks, limited symbol-versioning support, dlclose() as a no-op, and common bugs caused by glibc-specific #ifdef logic.

Inspection of the supplied files changes the recommendation in one important way. The uploaded ASK source tarball is not self-contained: its upstream Makefile still clones fmlib and fmc, and downloads libnfnetlink and libnetfilter_conntrack. So a reproducible/offline ASK build requires vendoring all of those dependency tarballs as inputs alongside ASK.tar.gz, and overriding the upstream fetch logic to extract vendored tarballs instead of using git clone or wget. The supplied board build files also show a current kernel build flow that already merges kernel-extra.config into an NXP 6.12 kernel tree and checks the resolved .config. That is the right shape; one fragment line is wrong for this kernel. The reported error occurs because CONFIG_NETFILTER_XTABLES_LEGACY=y is required by the uploaded fragment, but that symbol does not exist in the uploaded lf-6.12.49-2.2.0 kernel tree. In that tree, the relevant legacy iptables symbols are CONFIG_IP_NF_IPTABLES_LEGACY and CONFIG_IP6_NF_IPTABLES_LEGACY, and both are already present in the supplied fragment. Comparing the supplied fragment against the uploaded kernel tree shows that CONFIG_NETFILTER_XTABLES_LEGACY is the only missing symbol. The fix is to drop or version-gate that single line, not to weaken the checker globally.

Findings from the supplied files

The supplied board-specific build files target an NXP Layerscape-oriented arm64 build with NXP_VERSION=lf-6.12.49-2.2.0, ARCH=arm64, CROSS_COMPILE=aarch64-linux-gnu-, and DEVICE_TREE_TARGET=mono-gateway-dk-sdk. The current root build uses a Debian Trixie-based builder image, downloads many artifacts into packages/, builds the kernel in a container, and exports artifacts locally with --output type=local. That overall shape is sound; the main weaknesses are network fetches during build and glibc-specific base images.

The supplied ASK tarball is a C/C++/Make-based project. Its upstream Makefile builds kernel modules and userspace components, but its “sources” phase still clones fmlib and fmc and downloads libnfnetlink and libnetfilter_conntrack. That means ASK.tar.gz alone is not enough for a fully reproducible build. You must vendor those dependency tarballs too, or the build will still depend on the network.

The uploaded ASK tree also contains a .git/ directory inside the tarball. That is unusual for release tarballs and bad for reproducibility unless you intentionally need VCS-derived version stamping. In this tree, cmm already falls back to "unknown" when git describe is unavailable, but cdx only generates version.h if .git exists. That means a tarball-only build should remove .git for determinism and generate a stable placeholder cdx/version.h before building.

The supplied cmm/src/cmm.c already guards execinfo.h / backtrace() under #if defined(__GLIBC__), which is exactly the right direction for musl portability: glibc-only code stays behind glibc checks, and non-glibc systems take the portable path. That is a good sign for Alpine migration.

Kernel mismatch diagnosis

The specific kernel error you reported:

kconfig mismatch: CONFIG_NETFILTER_XTABLES_LEGACY
  expected: CONFIG_NETFILTER_XTABLES_LEGACY=y
  actual:   <missing>
error: resolved kernel config does not satisfy /tmp/kernel-extra.config

is not a generic dependency failure. It is a fragment/tree mismatch.

The supplied kernel-extra.config contains all three of these settings:

CONFIG_NETFILTER_XTABLES_LEGACY=y
CONFIG_IP_NF_IPTABLES_LEGACY=y
CONFIG_IP6_NF_IPTABLES_LEGACY=y

In the uploaded NXP 6.12 kernel tree, CONFIG_IP_NF_IPTABLES_LEGACY and CONFIG_IP6_NF_IPTABLES_LEGACY exist, but CONFIG_NETFILTER_XTABLES_LEGACY does not. That newer top-level gate belongs to later kernels; it is not valid for this tree. So the direct fix is:

-CONFIG_NETFILTER_XTABLES_LEGACY=y

and nothing more. If you want one fragment to span multiple kernel version lines, you must filter unsupported symbols before olddefconfig, or enable Kconfig's unknown-symbol warnings and treat them as fatal. The kernel docs explicitly provide KCONFIG_WARN_UNKNOWN_SYMBOLS and KCONFIG_WERROR for this purpose, and also recommend make listnewconfig / scripts/diffconfig to inspect config drift.
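
As a diagnosis aid, the same existence check can be run over a whole fragment before merging. This is a sketch, assuming GNU grep and a fragment in the usual CONFIG_FOO=y / "# CONFIG_FOO is not set" form; report_missing_symbols is a hypothetical helper name, not part of the kernel tooling:

```shell
# Print every fragment symbol that is not declared anywhere in the kernel tree.
report_missing_symbols() {
  kernel_dir="$1"; fragment="$2"
  grep -oE '^(# )?CONFIG_[A-Za-z0-9_]+' "$fragment" | sed 's/^# //' | sort -u |
  while read -r sym; do
    # A symbol CONFIG_FOO is declared as "config FOO" (or "menuconfig FOO") in Kconfig files.
    grep -RqsE "^[[:space:]]*(menu)?config[[:space:]]+${sym#CONFIG_}\b" "$kernel_dir" ||
      echo "$sym"
  done
}
```

Per the diagnosis above, running this against the uploaded lf-6.12.49-2.2.0 tree and the supplied fragment should print only CONFIG_NETFILTER_XTABLES_LEGACY.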

Reproducible build design

The build should be split into four distinct responsibilities.

First, a host vendoring step collects all source archives into packages/ and writes SHA256SUMS. That step is allowed to touch the network if your policy permits; the container build should not. If the tarballs are private or sensitive, pass them as BuildKit secrets instead of regular build args. Docker's docs are explicit: build args are not for secrets, because they may appear in image history and provenance metadata, while RUN --mount=type=secret makes the file available only for that instruction.

Second, the container build should copy the vendored package directory into the builder stage, verify checksums, and extract only what the build needs. This is why the build arg should select a path inside the already-copied vendor directory, not try to trigger host-side fetching logic. In practice, that means ARG ASK_TAR=packages/ASK.tar.gz, then tar -xf "/vendor/${ASK_TAR}" ... inside RUN.

Third, for module builds, a matching KERNEL_TAR must be supplied and configured inside the build. The Linux kernel documentation is clear that external modules need a kernel tree with the right configuration and headers, and that modules_prepare is not enough when CONFIG_MODVERSIONS matters because it does not produce Module.symvers. If you are building out-of-tree modules against a board kernel, the safe rule is: if the project builds kernel modules, require a matching kernel source tarball and either run a full kernel build or at least a preparation step appropriate to your versioning model.
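
That rule can be made explicit in the build script. A minimal sketch, assuming the tree has already been configured; needs_full_kernel_build is a hypothetical helper, and the make invocations are commented out so the sketch stays inert:

```shell
# Decide whether modules_prepare is enough for out-of-tree modules.
needs_full_kernel_build() {
  # With CONFIG_MODVERSIONS=y, external modules need Module.symvers,
  # which modules_prepare does not generate; build the kernel (or at
  # least "make modules") first in that case.
  grep -q '^CONFIG_MODVERSIONS=y' "$1/.config"
}

# Illustrative use:
# if needs_full_kernel_build /opt/kernel; then
#   make -C /opt/kernel ARCH=arm64 modules
# else
#   make -C /opt/kernel ARCH=arm64 modules_prepare
# fi
```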

Fourth, export artifacts directly to the host filesystem using --output type=local,dest=.... Docker documents this as a first-class buildx output mode, and it is the cleanest way to make build results explicit and inspectable.

flowchart LR
    A[Vendor tarballs on host\nASK + dependency tarballs + optional KERNEL_TAR] --> B[Generate SHA256SUMS]
    B --> C[docker buildx build]
    C --> D[base-build stage\napk add toolchain]
    D --> E[Verify SHA256SUMS]
    E --> F[Extract ASK tarball]
    F --> G[Replace fetch logic with vendored tarball extraction]
    C --> H{KERNEL_TAR supplied?}
    H -- yes --> I[Extract kernel tree]
    I --> J[Filter kernel fragment\nmerge_config.sh\nolddefconfig]
    J --> K[Build kernel or modules_prepare]
    K --> L[Build ASK kernel modules]
    H -- no --> M[Build userspace only]
    G --> M
    M --> N[Collect dist/]
    L --> N
    N --> O[artifacts stage]
    O --> P[--output type=local]
    O --> Q[ASK_IMAGE minimal artifact image]

Concrete implementation

Assumptions

Where the prompt does not specify details, these are the assumptions I am making rather than inventing facts:

  • The supplied ASK project is a C/C++/Make project because that is what the uploaded tarball contains.
  • The generic design below is still valid for CMake, Autotools, Go, Python, and Node projects, but the builder stage package set and build commands should be adjusted accordingly.
  • The examples default to linux/arm64 because your uploaded board build files target arm64.
  • Tarballs are assumed to have a single top-level directory, which is normal for release archives and git archive outputs.
  • GNU tar semantics are assumed for the normalization snippet.
  • KERNEL_TAR is optional for userspace-only builds and mandatory for kernel-module builds.
  • Base-image digests are shown as placeholders because you asked for the pattern, not a locked digest for one exact tag.
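
The single-top-level-directory assumption is cheap to verify before vendoring. A sketch with hypothetical helper names:

```shell
# List the distinct top-level entries inside a tarball.
top_level_entries() {
  tar -tf "$1" | cut -d/ -f1 | sort -u
}

# Succeed only if the archive has exactly one top-level directory,
# as release archives and "git archive" outputs normally do.
check_single_root() {
  [ "$(top_level_entries "$1" | wc -l)" -eq 1 ]
}

# Illustrative use:
# check_single_root packages/ASK.tar.gz || echo "ASK.tar.gz has multiple roots" >&2
```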

Makefile

The example in your prompt needs one correction: the valid flag is --build-arg, not --args, and the Dockerfile is specified with -f/--file. This Makefile gives you a tarball-driven ASK target and an ASK_IMAGE target that loads a minimal artifact image locally.

.RECIPEPREFIX := >

DOCKER_BUILDKIT ?= 1
export DOCKER_BUILDKIT

PLATFORM        ?= linux/arm64
ALPINE_VERSION  ?= 3.22
ASK_NAME        ?= ask
ASK_OUT         ?= out/ask
ASK_IMAGE_NAME  ?= local/$(ASK_NAME):dev

# Required vendored inputs
ASK_TAR         ?= packages/ASK.tar.gz

# Optional vendored inputs for offline/reproducible dependency builds
FMLIB_TAR       ?= packages/fmlib-lf-6.12.49-2.2.0.tar.gz
FMC_TAR         ?= packages/fmc-lf-6.12.49-2.2.0.tar.gz
LIBNFNETLINK_TAR?= packages/libnfnetlink-1.0.2.tar.bz2
LIBNFCT_TAR     ?= packages/libnetfilter_conntrack-1.1.0.tar.xz
LIBCLI_TAR      ?= packages/libcli-1.10.7.tar.gz

# Required only if you build kernel modules
KERNEL_TAR      ?=
KERNEL_EXTRA    ?= docker/kernel-extra.config

ARCH            ?= arm64
CROSS_COMPILE   ?=
BUILD_TARGET    ?= dist
SOURCE_DATE_EPOCH ?= 1715731200
PROGRESS        ?= plain

COMMON_BUILD_ARGS = \
  --build-arg ALPINE_VERSION=$(ALPINE_VERSION) \
  --build-arg ASK_TAR=$(ASK_TAR) \
  --build-arg FMLIB_TAR=$(FMLIB_TAR) \
  --build-arg FMC_TAR=$(FMC_TAR) \
  --build-arg LIBNFNETLINK_TAR=$(LIBNFNETLINK_TAR) \
  --build-arg LIBNFCT_TAR=$(LIBNFCT_TAR) \
  --build-arg LIBCLI_TAR=$(LIBCLI_TAR) \
  --build-arg KERNEL_TAR=$(KERNEL_TAR) \
  --build-arg ARCH=$(ARCH) \
  --build-arg CROSS_COMPILE=$(CROSS_COMPILE) \
  --build-arg BUILD_TARGET=$(BUILD_TARGET) \
  --build-arg SOURCE_DATE_EPOCH=$(SOURCE_DATE_EPOCH)

.PHONY: ASK ASK_IMAGE clean

ASK:
> mkdir -p $(ASK_OUT)
> docker buildx build \
>   --platform $(PLATFORM) \
>   --progress=$(PROGRESS) \
>   -f docker/ask.Dockerfile \
>   $(COMMON_BUILD_ARGS) \
>   --target artifacts \
>   --output type=local,dest=$(ASK_OUT) \
>   .

ASK_IMAGE:
> docker buildx build \
>   --platform $(PLATFORM) \
>   --progress=$(PROGRESS) \
>   -f docker/ask.Dockerfile \
>   $(COMMON_BUILD_ARGS) \
>   --target runtime \
>   --load \
>   -t $(ASK_IMAGE_NAME) \
>   .

clean:
> rm -rf $(ASK_OUT)

Example invocations:

# Userspace-only artifact export
make ASK ASK_TAR=packages/ASK.tar.gz BUILD_TARGET=userspace

# Full artifact export, including module build against a matching kernel tree
make ASK \
  ASK_TAR=packages/ASK.tar.gz \
  KERNEL_TAR=packages/lf-6.12.49-2.2.0.tar.gz \
  BUILD_TARGET=dist \
  ARCH=arm64 \
  CROSS_COMPILE=

# Load a minimal artifact image into the local Docker image store
make ASK_IMAGE ASK_TAR=packages/ASK.tar.gz

# Override SOURCE_DATE_EPOCH for a release
make ASK SOURCE_DATE_EPOCH=1735689600

docker/ask.Dockerfile

This Dockerfile does four things that matter: it verifies vendored source archives, extracts the selected tarball based on a build arg, replaces fetch logic with vendored extraction through an override Makefile, and supports optional kernel-tree injection for module builds.

# syntax=docker/dockerfile:1.7

ARG ALPINE_VERSION=3.22

FROM alpine:${ALPINE_VERSION} AS base-build
ARG SOURCE_DATE_EPOCH
ENV SOURCE_DATE_EPOCH=${SOURCE_DATE_EPOCH}
ENV LC_ALL=C.UTF-8
ENV TZ=UTC

# Replace the tag with a digest in CI/release builds:
# FROM alpine:${ALPINE_VERSION}@sha256:<digest> AS base-build

RUN apk add --no-cache \
      bash bc bison build-base cpio coreutils diffutils file findutils flex \
      gawk git jq kmod libelf-dev libmnl-dev libpcap-dev libxml2-dev \
      linux-headers openssl-dev patch perl pkgconf python3 rsync tar tclap-dev \
      xz zlib-dev

WORKDIR /work

# Copy vendored packages once; select the specific tarball path with ARG later.
COPY packages/ /vendor/packages/
COPY docker/overrides/ /docker-overrides/
COPY scripts/filter-kconfig-fragment.sh /usr/local/bin/filter-kconfig-fragment.sh
COPY docker/kernel-extra.config /tmp/kernel-extra.config

RUN chmod +x /usr/local/bin/filter-kconfig-fragment.sh && \
    if [ -f /vendor/packages/SHA256SUMS ]; then \
      cd /vendor/packages && sha256sum -c SHA256SUMS; \
    fi

FROM base-build AS builder
ARG ASK_TAR=packages/ASK.tar.gz
ARG FMLIB_TAR=packages/fmlib-lf-6.12.49-2.2.0.tar.gz
ARG FMC_TAR=packages/fmc-lf-6.12.49-2.2.0.tar.gz
ARG LIBNFNETLINK_TAR=packages/libnfnetlink-1.0.2.tar.bz2
ARG LIBNFCT_TAR=packages/libnetfilter_conntrack-1.1.0.tar.xz
ARG LIBCLI_TAR=packages/libcli-1.10.7.tar.gz
ARG KERNEL_TAR=
ARG ARCH=arm64
ARG CROSS_COMPILE=
ARG BUILD_TARGET=dist
ARG SOURCE_DATE_EPOCH

ENV SOURCE_DATE_EPOCH=${SOURCE_DATE_EPOCH}
ENV KBUILD_BUILD_TIMESTAMP=@${SOURCE_DATE_EPOCH}
ENV KBUILD_BUILD_USER=repro
ENV KBUILD_BUILD_HOST=repro-host

# Optional vendored libcli, because Alpine's stable repositories do not provide a dependable libcli-dev package.
RUN if [ -n "${LIBCLI_TAR}" ] && [ -f "/vendor/${LIBCLI_TAR}" ]; then \
      mkdir -p /tmp/libcli-src /usr/local && \
      tar -xf "/vendor/${LIBCLI_TAR}" --strip-components=1 -C /tmp/libcli-src && \
      make -C /tmp/libcli-src && \
      make -C /tmp/libcli-src PREFIX=/usr/local install; \
    fi

# Extract ASK from the provided tarball and replace upstream fetch logic.
RUN mkdir -p /work/src/ASK && \
    tar -xf "/vendor/${ASK_TAR}" --strip-components=1 -C /work/src/ASK && \
    rm -rf /work/src/ASK/.git && \
    install -m 0644 /docker-overrides/Makefile /work/src/ASK/Makefile && \
    install -m 0644 /docker-overrides/toolchain.mk /work/src/ASK/build/toolchain.mk && \
    mkdir -p /work/src/ASK/cdx && \
    printf '%s\n' \
      '/* Auto-generated for tarball builds */' \
      '#ifndef CDX_VERSION_H' \
      '#define CDX_VERSION_H' \
      '#define CDX_GIT_VERSION "tarball"' \
      '#endif' \
      > /work/src/ASK/cdx/version.h

# Optional matching kernel tree, required for module builds.
RUN if [ -n "${KERNEL_TAR}" ]; then \
      mkdir -p /opt/kernel && \
      tar -xf "/vendor/${KERNEL_TAR}" --strip-components=1 -C /opt/kernel && \
      /usr/local/bin/filter-kconfig-fragment.sh /opt/kernel /tmp/kernel-extra.config > /tmp/kernel-extra.effective.config && \
      make -C /opt/kernel ARCH="${ARCH}" CROSS_COMPILE="${CROSS_COMPILE}" defconfig && \
      /opt/kernel/scripts/kconfig/merge_config.sh -m /opt/kernel/.config /tmp/kernel-extra.effective.config && \
      KCONFIG_WARN_UNKNOWN_SYMBOLS=1 KCONFIG_WERROR=1 \
      make -C /opt/kernel ARCH="${ARCH}" CROSS_COMPILE="${CROSS_COMPILE}" olddefconfig && \
      make -C /opt/kernel ARCH="${ARCH}" CROSS_COMPILE="${CROSS_COMPILE}" -j"$(nproc)" modules_prepare; \
    fi

WORKDIR /work/src/ASK
ENV KDIR=/opt/kernel
ENV TAR_DIR=/vendor/packages
ENV FMLIB_TAR=/vendor/${FMLIB_TAR}
ENV FMC_TAR=/vendor/${FMC_TAR}
ENV LIBNFNETLINK_TAR=/vendor/${LIBNFNETLINK_TAR}
ENV LIBNFCT_TAR=/vendor/${LIBNFCT_TAR}

RUN case "${BUILD_TARGET}" in \
      userspace) make userspace ;; \
      modules)   test -n "${KERNEL_TAR}" && make modules ;; \
      dist)      test -n "${KERNEL_TAR}" && make dist ;; \
      *)         echo "unsupported BUILD_TARGET=${BUILD_TARGET}" >&2; exit 2 ;; \
    esac && \
    mkdir -p /out && \
    cp -a dist/. /out/

FROM scratch AS artifacts
COPY --from=builder /out/ /

# Minimal artifact image, not a runtime service image.
FROM scratch AS runtime
COPY --from=builder /out/ /opt/ask/
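
For the digest-pinning comment in the base stage, the workflow is: resolve the digest once on a networked host, then build from the pinned reference. This is a sketch; the digest below is a placeholder, and base_tag / base_digest are hypothetical helpers:

```shell
# Resolve the current digest on a networked host (shown for reference):
#   docker buildx imagetools inspect alpine:3.22
# Record the result as a pinned reference; PLACEHOLDER_DIGEST stands in for
# the real sha256 hex string.
PINNED_BASE="alpine:3.22@sha256:PLACEHOLDER_DIGEST"

# Split a pinned reference back into its tag and digest parts.
base_tag()    { printf '%s\n' "${1%@*}"; }
base_digest() { printf '%s\n' "${1#*@}"; }
```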

docker/overrides/toolchain.mk

This keeps the upstream layout but makes the toolchain configurable and points the kernel directory at the injected kernel tree.

# Toolchain override for reproducible container builds
CROSS_COMPILE ?=
ARCH          ?= arm64
PLATFORM      ?= LS1043A

CC            := $(if $(CROSS_COMPILE),$(CROSS_COMPILE)gcc,gcc)
CXX           := $(if $(CROSS_COMPILE),$(CROSS_COMPILE)g++,g++)
AR            := $(if $(CROSS_COMPILE),$(CROSS_COMPILE)ar,ar)
STRIP         := $(if $(CROSS_COMPILE),$(CROSS_COMPILE)strip,strip)

# Injected by Dockerfile when building modules
KDIR          ?= /opt/kernel

docker/overrides/Makefile

This is the critical part. It preserves the upstream build shape but replaces network fetches with vendored tarball extraction. The intent is simple: no git clone, no wget, no curl inside the build.

include build/toolchain.mk
include build/sources.mk

DEFCONFIG  := $(KDIR)/.config
DIST       := $(CURDIR)/dist
SRCDIR     := $(CURDIR)/sources
PATCHES    := $(CURDIR)/patches
HOST       := $(CROSS_COMPILE:%-=%)
SYSROOT    := $(SRCDIR)/sysroot
FMLIB_DIR  := $(SRCDIR)/fmlib
FMC_DIR    := $(SRCDIR)/fmc/source
LIBFCI_DIR := $(CURDIR)/fci/lib
ABM_DIR    := $(CURDIR)/auto_bridge

S := $(SRCDIR)/.stamps
$(shell mkdir -p $(S))

FMLIB_TAR        ?= /vendor/packages/fmlib-$(NXP_TAG).tar.gz
FMC_TAR          ?= /vendor/packages/fmc-$(NXP_TAG).tar.gz
LIBNFNETLINK_TAR ?= /vendor/packages/libnfnetlink-$(LIBNFNETLINK_VER).tar.bz2
LIBNFCT_TAR      ?= /vendor/packages/libnetfilter_conntrack-$(LIBNFCT_VER).tar.xz

KBUILD_ARGS := CROSS_COMPILE=$(CROSS_COMPILE) ARCH=$(ARCH)
CDX_ARGS    := $(KBUILD_ARGS) KERNELDIR=$(KDIR) PLATFORM=$(PLATFORM)
FCI_ARGS    := $(KBUILD_ARGS) KERNEL_SOURCE=$(KDIR) BOARD_ARCH=$(ARCH) \
               KBUILD_EXTRA_SYMBOLS=$(CURDIR)/cdx/Module.symvers
ABM_ARGS    := $(KBUILD_ARGS) KERNEL_SOURCE=$(KDIR) PLATFORM=$(PLATFORM)

.PHONY: all sources modules userspace dist clean clean-all
all: modules userspace

sources: $(S)/fmlib $(S)/fmc $(S)/libfci $(S)/libnfnetlink $(S)/libnfct

$(S)/fmlib:
	test -f "$(FMLIB_TAR)"
	rm -rf "$(FMLIB_DIR)"
	mkdir -p "$(FMLIB_DIR)"
	tar -xf "$(FMLIB_TAR)" --strip-components=1 -C "$(FMLIB_DIR)"
	cd "$(FMLIB_DIR)" && patch -p1 < "$(PATCHES)/fmlib/01-mono-ask-extensions.patch"
	$(MAKE) -C "$(FMLIB_DIR)" CROSS_COMPILE="$(CROSS_COMPILE)" KERNEL_SRC="$(KDIR)" libfm-arm.a
	ln -sf libfm-arm.a "$(FMLIB_DIR)/libfm.a"
	touch $@

$(S)/fmc: $(S)/fmlib
	test -f "$(FMC_TAR)"
	rm -rf "$(SRCDIR)/fmc"
	mkdir -p "$(SRCDIR)/fmc"
	tar -xf "$(FMC_TAR)" --strip-components=1 -C "$(SRCDIR)/fmc"
	cd "$(SRCDIR)/fmc" && patch -p1 < "$(PATCHES)/fmc/01-mono-ask-extensions.patch"
	$(MAKE) -C "$(FMC_DIR)" \
		CC="$(CC)" CXX="$(CXX)" AR="$(AR)" \
		MACHINE=ls1046 \
		FMD_USPACE_HEADER_PATH="$(FMLIB_DIR)/include/fmd" \
		FMD_USPACE_LIB_PATH="$(FMLIB_DIR)" \
		LIBXML2_HEADER_PATH=/usr/include/libxml2 \
		TCLAP_HEADER_PATH=/usr/include
	touch $@

$(S)/libfci:
	$(MAKE) -C "$(LIBFCI_DIR)" CC="$(CC)" AR="$(AR)"
	touch $@

$(S)/libnfnetlink:
	test -f "$(LIBNFNETLINK_TAR)"
	rm -rf "$(SRCDIR)/libnfnetlink-$(LIBNFNETLINK_VER)"
	mkdir -p "$(SRCDIR)/libnfnetlink-$(LIBNFNETLINK_VER)"
	tar -xf "$(LIBNFNETLINK_TAR)" --strip-components=1 -C "$(SRCDIR)/libnfnetlink-$(LIBNFNETLINK_VER)"
	cd "$(SRCDIR)/libnfnetlink-$(LIBNFNETLINK_VER)" && \
		patch -p1 < "$(PATCHES)/libnfnetlink/01-nxp-ask-nonblocking-heap-buffer.patch" && \
		./configure --host="$(HOST)" --prefix="$(SYSROOT)" --enable-static --disable-shared -q && \
		$(MAKE) -j$$(nproc) -s && $(MAKE) install -s
	touch $@

$(S)/libnfct: $(S)/libnfnetlink
	test -f "$(LIBNFCT_TAR)"
	rm -rf "$(SRCDIR)/libnetfilter_conntrack-$(LIBNFCT_VER)"
	mkdir -p "$(SRCDIR)/libnetfilter_conntrack-$(LIBNFCT_VER)"
	tar -xf "$(LIBNFCT_TAR)" --strip-components=1 -C "$(SRCDIR)/libnetfilter_conntrack-$(LIBNFCT_VER)"
	cd "$(SRCDIR)/libnetfilter_conntrack-$(LIBNFCT_VER)" && \
		patch -p1 < "$(PATCHES)/libnetfilter-conntrack/01-nxp-ask-comcerto-fp-extensions.patch" && \
		PKG_CONFIG_PATH="$(SYSROOT)/lib/pkgconfig" \
		./configure --host="$(HOST)" --prefix="$(SYSROOT)" --enable-static --disable-shared -q \
			CFLAGS="-I$(SYSROOT)/include" LDFLAGS="-L$(SYSROOT)/lib" && \
		$(MAKE) -j$$(nproc) -s && $(MAKE) install -s
	touch $@

modules: sources
	test -d "$(KDIR)"
	$(MAKE) -C cdx $(CDX_ARGS)
	$(MAKE) -C fci $(FCI_ARGS)
	$(MAKE) -C auto_bridge $(ABM_ARGS)

userspace: sources
	$(MAKE) -C "$(FMC_DIR)" \
		CC="$(CC)" CXX="$(CXX)" AR="$(AR)" \
		MACHINE=ls1046 \
		FMD_USPACE_HEADER_PATH="$(FMLIB_DIR)/include/fmd" \
		FMD_USPACE_LIB_PATH="$(FMLIB_DIR)" \
		LIBXML2_HEADER_PATH=/usr/include/libxml2 \
		TCLAP_HEADER_PATH=/usr/include
	$(MAKE) -C cmm \
		CC="$(CC)" \
		PKG_CONFIG=pkg-config \
		PKG_CONFIG_PATH="$(SYSROOT)/lib/pkgconfig:/usr/lib/pkgconfig:/usr/share/pkgconfig" \
		LDFLAGS="-L$(SYSROOT)/lib -L/usr/local/lib" \
		CFLAGS="-I$(SYSROOT)/include -I/usr/local/include"
	$(MAKE) -C dpa_app \
		CC="$(CC)" \
		CFLAGS="-I$(FMLIB_DIR)/include -I$(FMC_DIR)/libfci_cli/src" \
		LDFLAGS="-lpthread -lcli -L/usr/local/lib -L$(FMC_DIR) -lfmc -L$(FMLIB_DIR) -lfm -lstdc++ -lxml2 -lm"

dist: modules userspace
	mkdir -p "$(DIST)"
	cp cdx/cdx.ko "$(DIST)/"
	cp fci/fci.ko "$(DIST)/"
	cp auto_bridge/auto_bridge.ko "$(DIST)/"
	cp "$(FMC_DIR)/fmc" "$(DIST)/"
	cp cmm/src/cmm "$(DIST)/"
	cp dpa_app/dpa_app "$(DIST)/"

clean:
	-$(MAKE) -C cdx $(CDX_ARGS) clean
	-$(MAKE) -C fci $(FCI_ARGS) clean
	-$(MAKE) -C auto_bridge $(ABM_ARGS) clean
	-$(MAKE) -C "$(LIBFCI_DIR)" clean
	-$(MAKE) -C cmm clean
	-$(MAKE) -C dpa_app clean
	rm -rf "$(DIST)"
	rm -f "$(S)"/*

clean-all: clean
	rm -rf "$(SRCDIR)"

.dockerignore

Keep the context brutally small. That improves cache behavior and reduces the chance that unrelated files perturb reproducibility.

**
!docker/**
!scripts/**
!packages/**
!Makefile

Host vendoring script

This is the clean place to fetch and lock dependency tarballs. It can pull from upstream or from your own artifact mirror; the container build stays offline either way.

#!/usr/bin/env bash
set -euo pipefail

mkdir -p packages

# Required
cp "${ASK_TAR_SRC:?set ASK_TAR_SRC}" packages/ASK.tar.gz

# Optional but required for full offline ASK builds
[ -n "${FMLIB_TAR_SRC:-}" ]        && cp "${FMLIB_TAR_SRC}"        packages/fmlib-lf-6.12.49-2.2.0.tar.gz
[ -n "${FMC_TAR_SRC:-}" ]          && cp "${FMC_TAR_SRC}"          packages/fmc-lf-6.12.49-2.2.0.tar.gz
[ -n "${LIBNFNETLINK_TAR_SRC:-}" ] && cp "${LIBNFNETLINK_TAR_SRC}" packages/libnfnetlink-1.0.2.tar.bz2
[ -n "${LIBNFCT_TAR_SRC:-}" ]      && cp "${LIBNFCT_TAR_SRC}"      packages/libnetfilter_conntrack-1.1.0.tar.xz
[ -n "${LIBCLI_TAR_SRC:-}" ]       && cp "${LIBCLI_TAR_SRC}"       packages/libcli-1.10.7.tar.gz
[ -n "${KERNEL_TAR_SRC:-}" ]       && cp "${KERNEL_TAR_SRC}"       packages/lf-6.12.49-2.2.0.tar.gz

(
  cd packages
  rm -f SHA256SUMS
  sha256sum *.tar.gz *.tar.xz *.tar.bz2 2>/dev/null | sort -k2 > SHA256SUMS
)
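
The checksum gate in the Dockerfile can be exercised the same way on the host before a build. A sketch, with verify_packages as a hypothetical helper mirroring the in-container sha256sum -c step:

```shell
# Fail (non-zero exit) if any vendored file no longer matches SHA256SUMS.
verify_packages() {
  (cd "$1" && sha256sum --quiet -c SHA256SUMS)
}

# Illustrative use:
# verify_packages packages || { echo "vendored inputs drifted" >&2; exit 1; }
```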

Tarball normalization snippet

Use this on the host if you need to strip VCS noise and normalize timestamps before vendoring.

#!/usr/bin/env bash
set -euo pipefail

: "${SOURCE_DATE_EPOCH:?set SOURCE_DATE_EPOCH}"
: "${IN_TAR:?set IN_TAR}"
: "${OUT_TAR:?set OUT_TAR}"

tmp="$(mktemp -d)"
trap 'rm -rf "$tmp"' EXIT

mkdir -p "$tmp/src"
tar -xf "$IN_TAR" -C "$tmp/src"
root="$(find "$tmp/src" -mindepth 1 -maxdepth 1 -type d | head -n1)"

rm -rf "$root/.git"
find "$root" -exec touch -h -d "@${SOURCE_DATE_EPOCH}" {} +

tar --sort=name \
    --mtime="@${SOURCE_DATE_EPOCH}" \
    --owner=0 --group=0 --numeric-owner \
    -czf "$OUT_TAR" -C "$tmp/src" "$(basename "$root")"
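
A quick determinism check is to pack the same tree twice and compare bytes. This sketch uses uncompressed GNU tar with the format pinned so extended pax headers (which can embed atime/ctime) cannot creep in; if you compress separately, remember that gzip can embed a timestamp, so prefer gzip -n. pack_deterministic is a hypothetical helper:

```shell
SOURCE_DATE_EPOCH="${SOURCE_DATE_EPOCH:-1715731200}"

# $1 = source directory, $2 = output .tar path
pack_deterministic() {
  tar --format=gnu \
      --sort=name \
      --mtime="@${SOURCE_DATE_EPOCH}" \
      --owner=0 --group=0 --numeric-owner \
      -cf "$2" -C "$(dirname "$1")" "$(basename "$1")"
}

# Illustrative check: two packs of the same tree must be byte-identical.
# pack_deterministic src one.tar && pack_deterministic src two.tar && cmp one.tar two.tar
```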

Kernel fragment filtering and concrete fix for CONFIG_NETFILTER_XTABLES_LEGACY

This script solves the exact mismatch you reported without weakening verification for valid symbols.

#!/usr/bin/env bash
set -euo pipefail

KERNEL_DIR="${1:?kernel dir required}"
FRAGMENT="${2:?fragment required}"

symbol_exists() {
  local sym="${1#CONFIG_}"
  grep -RqsE "^[[:space:]]*(menu)?config[[:space:]]+${sym}\b" \
    "$KERNEL_DIR"/Kconfig \
    "$KERNEL_DIR"/arch \
    "$KERNEL_DIR"/drivers \
    "$KERNEL_DIR"/fs \
    "$KERNEL_DIR"/init \
    "$KERNEL_DIR"/kernel \
    "$KERNEL_DIR"/lib \
    "$KERNEL_DIR"/mm \
    "$KERNEL_DIR"/net \
    "$KERNEL_DIR"/security \
    "$KERNEL_DIR"/sound \
    "$KERNEL_DIR"/virt
}

while IFS= read -r line || [ -n "$line" ]; do
  stripped="$(printf '%s\n' "$line" | sed 's/^[[:space:]]*//; s/[[:space:]]*$//')"
  [ -z "$stripped" ] && { printf '\n'; continue; }

  case "$stripped" in
    \#\ CONFIG_*" is not set")
      sym="$(printf '%s' "$stripped" | sed 's/^# \(CONFIG_[^ ]*\) is not set$/\1/')"
      symbol_exists "$sym" && printf '%s\n' "$stripped"
      ;;
    CONFIG_*=*)
      sym="${stripped%%=*}"
      symbol_exists "$sym" && printf '%s\n' "$stripped"
      ;;
    \#*)
      printf '%s\n' "$stripped"
      ;;
    *)
      printf '%s\n' "$stripped"
      ;;
  esac
done < "$FRAGMENT"

For your exact fragment, the direct patch is just this:

--- a/docker/kernel-extra.config
+++ b/docker/kernel-extra.config
@@
-CONFIG_NETFILTER_XTABLES_LEGACY=y

If you want to inject and verify a corrected kernel config inside the Docker build before module compilation, use this sequence:

RUN mkdir -p /opt/kernel && \
    tar -xf "/vendor/${KERNEL_TAR}" --strip-components=1 -C /opt/kernel && \
    /usr/local/bin/filter-kconfig-fragment.sh /opt/kernel /tmp/kernel-extra.config > /tmp/kernel-extra.effective.config && \
    make -C /opt/kernel ARCH="${ARCH}" CROSS_COMPILE="${CROSS_COMPILE}" defconfig && \
    /opt/kernel/scripts/kconfig/merge_config.sh -m /opt/kernel/.config /tmp/kernel-extra.effective.config && \
    KCONFIG_WARN_UNKNOWN_SYMBOLS=1 KCONFIG_WERROR=1 \
    make -C /opt/kernel ARCH="${ARCH}" CROSS_COMPILE="${CROSS_COMPILE}" olddefconfig && \
    grep -E '^CONFIG_(NETFILTER_XTABLES|IP_NF_IPTABLES|IP_NF_IPTABLES_LEGACY|IP6_NF_IPTABLES|IP6_NF_IPTABLES_LEGACY)=' /opt/kernel/.config

If you prefer imperative fixes over fragment files, scripts/config is fine too:

cd /opt/kernel
scripts/config --file .config \
  -e NETFILTER \
  -e NETFILTER_ADVANCED \
  -e NETFILTER_XTABLES \
  -e IP_NF_IPTABLES \
  -e IP_NF_IPTABLES_LEGACY \
  -e IP_NF_FILTER \
  -e IP_NF_MANGLE \
  -e IP_NF_NAT \
  -e IP6_NF_IPTABLES \
  -e IP6_NF_IPTABLES_LEGACY \
  -e IP6_NF_FILTER \
  -e IP6_NF_MANGLE \
  -e IP6_NF_NAT

# Only enable if the symbol actually exists in this tree.
grep -RqsE '^[[:space:]]*(menu)?config[[:space:]]+NETFILTER_XTABLES_LEGACY\b' . && \
  scripts/config --file .config -e NETFILTER_XTABLES_LEGACY || true

KCONFIG_WARN_UNKNOWN_SYMBOLS=1 KCONFIG_WERROR=1 \
make ARCH=arm64 CROSS_COMPILE="${CROSS_COMPILE}" olddefconfig

Build-arg versus BuildKit secret mount

For ordinary source tarballs, build args are fine:

docker buildx build \
  -f docker/ask.Dockerfile \
  --build-arg ASK_TAR=packages/ASK.tar.gz \
  --target artifacts \
  --output type=local,dest=out/ask \
  .

For private source archives or credentials, do not use build args. Use a secret mount:

docker buildx build \
  -f docker/ask.Dockerfile \
  --secret id=ask_tar,src=packages/ASK.tar.gz \
  --target artifacts \
  --output type=local,dest=out/ask \
  .

The corresponding Dockerfile instruction mounts the secret for that single RUN only:

RUN --mount=type=secret,id=ask_tar,target=/run/secrets/ASK.tar.gz \
    mkdir -p /work/src/ASK && \
    tar -xf /run/secrets/ASK.tar.gz --strip-components=1 -C /work/src/ASK

Notes on the code above: Docker's documented behavior is that build args are for Dockerfile parameterization, may persist in image metadata/history, and are therefore not appropriate for secrets; build secrets are mounted temporarily for the duration of a single RUN; multi-stage builds are the recommended way to shrink outputs; .dockerignore is the standard way to keep context small; and buildx supports type=local output directly to the client filesystem. Docker also recommends digest pinning for deterministic base-image selection.

Kernel-side notes on the config flow above: the kernel docs provide KCONFIG_WARN_UNKNOWN_SYMBOLS, KCONFIG_WERROR, KCONFIG_ALLCONFIG, listnewconfig, and scripts/diffconfig for config hygiene; they also show merge_config.sh in use; and they document that external modules need a prepared kernel tree, with modules_prepare not being sufficient for Module.symvers when module versioning is involved. Reproducible kernel builds should also set KBUILD_BUILD_TIMESTAMP, KBUILD_BUILD_USER, KBUILD_BUILD_HOST, and SOURCE_DATE_EPOCH where external code embeds timestamps.

Debian Trixie to Alpine conversion

The conversion rule is simple: translate package names, then re-evaluate libc assumptions. Do not assume a one-line mechanical translation is enough. Alpine stable gives you build-base, gcc, g++, musl-dev, pkgconf, openssl-dev, zlib-dev, cmake, and tclap-dev through its normal repositories, but repository choice matters: Alpine explicitly warns that testing is edge-only and should not be used for deterministic production-like container builds. That matters for libcli, where the official package index shows a libcli package in edge/testing, but not a stable libcli-dev path for the v3.22 line. In other words: for libcli, vendoring the source tarball is the safe answer.

| Tool / library | Debian Trixie package(s) | Debian install command | Alpine package(s) | Alpine install command | Practical note |
|---|---|---|---|---|---|
| gcc | gcc | apt-get update && apt-get install -y gcc | gcc | apk add --no-cache gcc | Same compiler family, different libc target by default |
| g++ | g++ | apt-get update && apt-get install -y g++ | g++ | apk add --no-cache g++ | Same caveat as gcc |
| make | make | apt-get update && apt-get install -y make | make | apk add --no-cache make | Straight mapping |
| cmake | cmake | apt-get update && apt-get install -y cmake | cmake | apk add --no-cache cmake | Straight mapping |
| pkg-config / pkgconf | pkgconf or pkg-config | apt-get update && apt-get install -y pkgconf | pkgconf | apk add --no-cache pkgconf | Prefer pkgconf on both |
| OpenSSL development | libssl-dev | apt-get update && apt-get install -y libssl-dev | openssl-dev | apk add --no-cache openssl-dev | Use openssl separately if you need the CLI |
| zlib development | zlib1g-dev | apt-get update && apt-get install -y zlib1g-dev | zlib-dev | apk add --no-cache zlib-dev | Straight mapping |
| libstdc++ runtime | libstdc++6 | apt-get update && apt-get install -y libstdc++6 | libstdc++ | apk add --no-cache libstdc++ | Headers/dev pieces still come from g++ / toolchain |
| libc development files | libc6-dev | apt-get update && apt-get install -y libc6-dev | musl-dev | apk add --no-cache musl-dev | This is the real ABI divide |
| build meta-package | build-essential | apt-get update && apt-get install -y build-essential | build-base | apk add --no-cache build-base | Debian and Alpine standard meta-packages |
| arm64 cross meta-package | crossbuild-essential-arm64 | apt-get update && apt-get install -y crossbuild-essential-arm64 | no direct equivalent | apk add --no-cache build-base plus Buildx/QEMU, or install a dedicated cross toolchain | Prefer Buildx --platform unless you truly need a host cross toolchain |
| TCLAP development | libtclap-dev | apt-get update && apt-get install -y libtclap-dev | tclap-dev | apk add --no-cache tclap-dev | Alpine package is in community |
| libcli development | libcli-dev | apt-get update && apt-get install -y libcli-dev | no stable equivalent surfaced for v3.22 | vendor and build from tarball | edge/testing shows libcli, not a dependable stable libcli-dev path |

Sources and notes for the table: Debians official package pages document build-essential, crossbuild-essential-arm64, and the default g++ metapackage; the Debian package index and source pages also show libtclap-dev and libcli-dev. Alpines official package pages and wiki document build-base, gcc, g++, cmake, musl-dev, libstdc++, openssl-dev, zlib-dev, tclap-dev, and repository support rules. Alpines repository documentation explicitly says testing is edge-only and not appropriate for deterministic production-like use, which is why vendoring libcli is the better answer than leaning on edge/testing in a stable builder. citeturn18view10turn18view9turn18view11turn7search0turn18view12turn18view0turn19search2turn18view3turn18view4turn18view1turn18view2turn19search5turn19search9turn18view7turn17view1turn17view3turn18view8
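Pulled together, the stable-repository mappings above amount to a builder stage along these lines. This is a sketch, not the final Dockerfile: the digest is a placeholder to be resolved once (for example with `docker buildx imagetools inspect alpine:3.22`) and then pinned, and the `packages/` path is a hypothetical vendored-tarball directory.

```dockerfile
# Placeholder digest: pin the real one for your chosen alpine:3.22 image.
FROM alpine:3.22@sha256:0000000000000000000000000000000000000000000000000000000000000000 AS builder

# build-base already pulls in gcc, g++, make, and musl-dev.
RUN apk add --no-cache \
        build-base \
        cmake \
        pkgconf \
        openssl-dev \
        zlib-dev \
        tclap-dev

# libcli has no dependable stable package for the 3.22 line: vendor it.
COPY packages/ /build/packages/
```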

Musl compatibility and troubleshooting

The musl migration problem is mostly about identifying which failures are toolchain/configuration issues and which are libc/ABI issues.

At the toolchain level, Alpines own documentation says build-base is the standard compiler meta-package and warns that ordinary Alpine systems use BusyBox, which means some GNU utility behavior is absent or reduced; if your build scripts assume full GNU userland, add bash, coreutils, findutils, diffutils, gawk, and similar packages explicitly instead of assuming Debian defaults. citeturn17view3turn17view4
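In Dockerfile terms, making that GNU-userland dependency explicit is one line (a sketch; trim the list to what your build scripts actually invoke):

```dockerfile
# BusyBox ships reduced variants of these tools; install the full GNU
# versions explicitly if configure scripts or Makefiles rely on them.
RUN apk add --no-cache bash coreutils findutils diffutils gawk grep sed tar
```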

At the libc level, musls documentation highlights the differences that actually break software. The big four are: smaller default thread stacks, limited symbol-versioning support compared with glibcs versioned symbol model, dlclose() being a no-op, and code taking the wrong path because the project wrote glibc-specific preprocessor logic backwards. The musl FAQ explicitly calls out getopt assumptions, iconv assumptions, off_t width assumptions, and bad #ifdef logic as common failure modes, and recommends making the preprocessor logic check for __GLIBC__ correctly rather than defaulting to glibc behavior. citeturn17view5turn17view6

For the supplied ASK sources specifically, the good news is that one of the classic glibc-only pain points is already handled. cmm/src/cmm.c wraps execinfo.h and backtrace() behind __GLIBC__ guards. Keep that pattern. Do not “fix” it by forcing musl through the glibc path.
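The guard shape, reduced to a standalone sketch rather than the actual cmm.c code: on glibc the `__GLIBC__` branch compiles in `backtrace()`, and on musl the fallback branch compiles instead of failing on the missing `execinfo.h`.

```shell
cat > guard_demo.c <<'EOF'
#include <stdio.h>
#ifdef __GLIBC__
#include <execinfo.h>   /* glibc-only header: musl does not ship it */
#endif

static void dump_stack(void) {
#ifdef __GLIBC__
    void *frames[32];
    int n = backtrace(frames, 32);
    backtrace_symbols_fd(frames, n, 2);   /* write frames to stderr */
#else
    /* musl path: degrade gracefully instead of failing to compile */
    fputs("backtrace unavailable on this libc\n", stderr);
#endif
}

int main(void) { dump_stack(); puts("done"); return 0; }
EOF
# Builds cleanly under both glibc and musl toolchains:
gcc guard_demo.c -o guard_demo
./guard_demo
```

Note the direction of the test: the code checks for `__GLIBC__` and treats everything else as the portable default, which is exactly what the musl FAQ recommends.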

A practical troubleshooting order is:

  • If the failure is missing headers or package names, fix package mappings first.
  • If the failure is undefined references or runtime loader errors, inspect whether the code or binary assumes glibc versioned symbols.
  • If the failure is random crashes in worker threads, suspect musls smaller default thread stack and either reduce stack usage or set an explicit stack size.
  • If the failure is plugin-reload behavior, remember that musls dlclose() is effectively a no-op: the library stays mapped for the life of the process, so unload-and-reload patterns do not behave as on glibc.
  • If the failure is from a third-party binary blob, try gcompat only for simple runtime compatibility. Do not assume it is a full glibc replacement.

Recommended fixes, in descending order of cleanliness:

  • Patch the source so it is libc-agnostic.
  • Adjust link flags for more self-contained outputs where appropriate, such as -static-libgcc and, if acceptable for your deployment model, -static-libstdc++.
  • Use gcompat only for relatively simple glibc-linked runtime binaries on Alpine, not as a blanket cure-all. Alpine documents it as a compatibility layer, not a full glibc runtime. citeturn17view2turn17view7turn17view8
  • Use a glibc builder stage for the one component that genuinely cannot be ported, while keeping the rest of the system on Alpine.
  • Keep a glibc runtime image for that component if you hit hard symbol-versioning or loader constraints that gcompat cannot cover.
```mermaid
flowchart TD
    A[Component fails on Alpine/musl] --> B{Package/toolchain issue?}
    B -- yes --> C[Fix apk package names\nadd GNU userland tools\nrebuild]
    C --> D{Now links and runs?}
    D -- yes --> E[Stay on Alpine]
    D -- no --> F{Source has glibc-only ifdefs or APIs?}
    B -- no --> F
    F -- yes --> G[Patch source\nuse __GLIBC__ guards\nfix flags]
    G --> H{Still failing?}
    H -- no --> E
    H -- yes --> I{Simple glibc runtime dependency only?}
    F -- no --> I
    I -- yes --> J[Try gcompat]
    J --> K{Reliable enough?}
    K -- yes --> E
    K -- no --> L[Use glibc builder or runtime for that component]
    I -- no --> L
    L --> M{Kernel module / ABI-coupled piece?}
    M -- yes --> N[Require matching KERNEL_TAR\nand matching kernel config]
    M -- no --> O[Split build:\nAlpine for musl-safe parts,\nglibc stage for offender]
```

Sources and notes

This report prioritizes official documentation. Docker guidance comes from the official Docker docs on best practices, multi-stage builds, build variables, build secrets, build context, .dockerignore, and buildx outputs and secret handling. Alpine guidance comes from the Alpine wiki and package index for musl, repositories, software management, BusyBox, build-base, tclap-dev, gcompat, and related packages. musl/glibc compatibility notes come from the musl wikis FAQ and functional-differences pages. Kernel configuration and reproducibility guidance comes from the official kernel docs on configuration targets, Kbuild environment variables, external modules, and reproducible builds. citeturn12view3turn12view0turn12view1turn12view4turn12view5turn16view0turn16view1turn16view3turn17view0turn17view1turn17view2turn17view3turn17view4turn17view7turn17view8turn17view5turn17view6turn21view0turn9view1turn22search0turn22search4turn11search0

Open questions and limitations: I did not pin one exact final base-image digest because you did not specify one exact image tag policy; the code uses placeholders for digest pinning. I also did not assume one exact ASK dependency inventory beyond what is visible in the supplied tarball: the report covers the dependencies the uploaded ASK Makefile demonstrably fetches (fmlib, fmc, libnfnetlink, libnetfilter_conntrack, and practically libcli). If your unrevealed local patches or downstream packaging add more source fetches, those need the same vendoring treatment.