AINode Release Notes
Last updated: Apr 15, 2026
AINode v0.4.0 — container-native distribution
AINode releases a single-container install that bundles the Web UI, OpenAI-compatible API, and GB10-patched vLLM in one version-locked image. It adds multi-node cluster support, auto-wired distributed launches in the UI, and verified TP=2 cross-node inference on NVIDIA GB10.
Highlights
AINode is now a single-container install: docker pull ghcr.io/getainode/ainode:0.4.0.
No host Python venv, no source-built vLLM. Web UI, OpenAI-compatible API, and GB10-patched vLLM all version-locked in one image.
What's in this release
Unified image — one docker run per node, with a systemd unit on the host to keep it running.
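A minimal sketch of the per-node launch. The image tag comes from this release; the container name, volume path, and flags are illustrative assumptions, not the documented interface:

```shell
# Hypothetical per-node launch (assumes the NVIDIA container toolkit
# is installed so --gpus all works).
docker run -d --name ainode \
  --gpus all \
  --network host \
  -v /var/lib/ainode:/data \
  --restart unless-stopped \
  ghcr.io/getainode/ainode:0.4.0
```

The host systemd unit would then simply wrap this container's lifecycle (start/stop), so supervision survives reboots.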
Three node modes: solo, head, member. The head orchestrates cross-node tensor parallelism via a patched NCCL (dgxspark-3node-ring); members broadcast their presence on UDP 5679 and reserve GPUs for Ray workers placed by the head.
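One way to sanity-check the discovery traffic (an assumption on our part, not a documented debugging step) is to watch for the member broadcasts on the head:

```shell
# Illustrative: confirm member nodes are broadcasting on UDP 5679.
# Assumes head and members share the same L2 segment.
sudo tcpdump -n udp port 5679
```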
UI auto-wires distributed launches — in Launch Instance, pick Minimum Nodes ≥ 2 and Tensor; the UI writes the config and hot-swaps the engine.
Real multi-node cluster topology — aggregated VRAM across members, peer IPs captured via UDP recvfrom, "DISTRIBUTED · TP=N" badges.
NFS-shared model storage pattern + docs.
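A rough sketch of the NFS-shared model storage pattern. The export path, subnet, and hostname below are assumptions; see the docs for the supported layout:

```shell
# On the head (NFS server): export a read-only model directory.
echo '/srv/models 10.0.0.0/24(ro,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On each member: mount the share at the same path so every node
# sees identical model weights.
sudo mkdir -p /srv/models
sudo mount -t nfs head-node:/srv/models /srv/models
```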
Verified TP=2 cross-node inference on NVIDIA GB10: 61 GB of model weights on each GPU, NCCL over RoCE at 200 Gb/s, ~35 tok/s for a warm 1.5B model.
An honest State-of-Distributed-Inference section in the README: what works, what doesn't, lessons learned, and our hypothesis on why 3 nodes is harder than 2 (and 4 is probably easier).
Install
curl -fsSL https://ainode.dev/install | bash
or directly:
docker pull ghcr.io/getainode/ainode:0.4.0
docker pull argentos/ainode:0.4.0 # Docker Hub mirror
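Once a node is up, a quick smoke test against the OpenAI-compatible API is one way to verify the install. The port is an assumption (vLLM-style servers commonly default to 8000); check the docs for the actual binding:

```shell
# Illustrative smoke test: list available models via the
# standard OpenAI-compatible endpoint.
curl -s http://localhost:8000/v1/models
```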
Screenshots
See README.md on main for the full product tour.
Known limitations
3-node TP requires a proper mesh or dedicated switch subnet — see Networking requirements in the README.
Browser-based fine-tuning UI is scaffolded but not yet validated end-to-end on real GPUs. Tracked as a roadmap item.
Ray over a VPN (Tailscale) doesn't work for NCCL — use physical cabling or a dedicated switch.
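If NCCL keeps picking the VPN interface, the standard NCCL environment variables can steer it onto the physical NIC. This is a generic NCCL technique, not an AINode-documented workaround, and the HCA name is an assumption:

```shell
# Exclude the Tailscale device from NCCL's socket interface selection
# (the leading ^ means "everything except").
export NCCL_SOCKET_IFNAME=^tailscale0
# Pin RDMA traffic to the assumed RoCE HCA.
export NCCL_IB_HCA=mlx5_0
```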
Artifacts
ghcr.io/getainode/ainode:0.4.0 (primary)
argentos/ainode:0.4.0 (Docker Hub mirror)
Contributors
This release was driven by Jason Brashear with AI pair-programming via Claude Opus 4.6 (1M context). All code and docs in this repo are Apache-2.0.