Starting to pick up Nix a bit!
While investigating how repl.it used Nix to secure access to their running containers, I went down the Nix rabbit hole for a week and came out a bit more enlightened. Nixpacks also piqued my interest with its ability to generate OCI-compliant container images directly from any Nix-supported dependency (though it's not perfect, and maybe I should contribute to it). Folks deploying on Fly.io are literally having love stories with Nix :) Interestingly, I also remembered that Saksham (one of my genius seniors at IITK) wrote about it here, which made me double down on Nix. The positive impact of a Nix-driven workflow is clearly visible in how much the Nix community has grown! This writeup is what I wish I had before challenging myself with Nix's steep learning curve. It is not a guide to using Nix in your workflows (though I have linked some great blogs from some super smart folks), but a collection of useful resources for anyone starting out: a headstarter!
Here I am focused on Nix as a package manager (with nixpkgs as the central repository of Nix packages), as an immutable graph database (run `nix-store -q --tree result` and see the cool diagrams here), and as a build system. When Nix builds something, it is stored in `/nix/store`, which is essentially an immutable graph database, with a hash that encodes the full dependency graph. In developer workflows, we can use `mkShell` together with `nix develop`. For a more wholesome explanation refer to this blog.
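As a tiny illustration of that workflow, here is roughly what a dev shell looks like (the packages listed are just placeholders, pick your own):

```nix
# shell.nix — a minimal dev-shell sketch; package names are illustrative
{ pkgs ? import <nixpkgs> { } }:

pkgs.mkShell {
  # tools you want available inside the shell
  packages = [ pkgs.git pkgs.jq ];

  # runs every time you enter the shell
  shellHook = ''
    echo "dev shell ready"
  '';
}
```

Enter it with `nix-shell`; the flake-based equivalent is exposing it as `devShells.<system>.default` and running `nix develop`.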
The concept is simple: we read a Nix expression and derive the build from specific build inputs. We can also fetch the needed inputs and the final derivation from a cache.
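A toy derivation, just to show the shape of the idea (it writes a single file, nothing more):

```nix
# default.nix — a toy derivation sketch
{ pkgs ? import <nixpkgs> { } }:

pkgs.stdenv.mkDerivation {
  pname = "hello-txt";
  version = "0.1";
  dontUnpack = true;  # no source to fetch for this toy example

  installPhase = ''
    mkdir -p $out
    echo "hello from nix" > $out/hello.txt
  '';
}
```

`nix-build` drops a `result` symlink pointing into `/nix/store`, and `nix-store -q --tree result` then prints the dependency graph mentioned earlier.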
The advantage of Nix is in keeping your development, CI, and production in sync (Devbox suffered through this in their internal dev workflow), plus the power of cross-compilation: a consistent environment across machines, a single source of truth. Yeah! Things would just work... You get always-working environments with little to no duplicated effort, and you avoid the green check mark on a PR only to realize that production is broken. Docker (with K8s), on the other hand, is a container-based deployment solution that has become so popular it has set a high standard for organizing systems and automating deployment. Today, in many environments, I have the feeling the question is no longer what kind of deployment solution is best for a particular system and organization, but rather: "how do we get it into containers and deploy it as microservices?". Docker is only partially fit for this (unless you aggressively pin versions), and the distinction between 'reproducible' and 'repeatable' builds is explained very well in this talk.
Docker approached building container images (immutable, self-contained root file systems containing all the files needed to run a program: binaries, libraries, configuration files, etc.) with multi-stage builds. As a best practice you start from the 'scratch' image. Though I casually call it a base image, it is not one: it is totally empty, yes, not even a shell is there. Container images work by relying on the underlying kernel, which provides the tools and system calls, and because in Linux everything is a file, you can add any self-contained binary, or an entire operating system, as files in this filesystem. Creating an image from scratch technically just refers to the kernel of the host system, with all the files loaded on top of it. That's why building from scratch is essentially a no-op, and when you add just a single binary the image is only the size of that binary plus a bit of overhead. (Resources are assigned to a running container via cgroups, and networking uses the Linux network namespacing technique.) This is starkly different from distroless images, which I often see used as parent images when the code cannot be compiled down to a single runnable binary. Thus the container life-cycle is bound to the life-cycle of a root process that runs in an isolated environment using the content of a Docker image as its root file system.
Finally, to optimize/reduce storage overhead, Docker uses layers and a union filesystem (there are a variety of file system options for this) to combine these layers by "stacking" them on top of each other.
A running container basically mounts an image's read-only layers on top of each other, and keeps the final layer writable so that processes in the container can create and modify files on the system.
Whenever you construct an image from a Dockerfile, each modification operation generates a new layer. Each layer is immutable (it will never change after it has been created) and is uniquely identifiable with a hash code, similar to Nix store paths.
But if you can create container images with Nix's dockerTools (boasted by Nix on their homepage, with docs, as well as via Dockerfiles), then why mess your brain up with Nix instead of simply using a Dockerfile? Nix vs. Docker for generating OCI-compliant images is surely a heated question, catered well by this. It's universally known that smaller images with less bloat mean a smaller attack surface, probably better security, and faster deployments. So let's compare the three options we have for getting our PERFECT container image, judging by build speed from caching, maintainability, and security:
- Using traditional multi-stage builds: Many DockerCon talks like this and this covered best practices for writing Dockerfiles, and as you can see, writing good Dockerfiles becomes increasingly difficult as you try to incorporate them all. This blog highlights the size comparison across various Dockerfile techniques. You'll see that the base image contains many files we might not need or want, things that could potentially be a security problem: it comes bundled with tools such as sh, wget, and apk. The Nix image has no such tools, only nginx and its dependencies.
- Using Nix dockerTools: Although this method of creating container images has existed for years, recent caching optimizations have made it a very appealing choice. The functional approach rewards you with better abstraction. The Hydra CI server obviates the need to pay for (or administer a self-hosted) Docker registry, and avoids the imperative push-and-pull model. Because a Docker image is just another Nix package, you get distributed building, caching, and signing for free. And since Nix caches intermediate package builds, building a Docker image via Nix will likely be faster than letting Docker do it. This sums up the sentiment of this blog really well!
- Using Nix as the base image in multi-stage builds: Mixing both worlds by using NixOS as the base image for your Dockerfiles is generally a comfortable approach. Each build runs as an unprivileged user that has no write access to any directory except its own build directory and the designated output Nix store paths. A network namespace prevents a build process from accessing the network. In Nix, only so-called fixed-output derivations (whose output hashes are known in advance) are allowed to download files from remote locations, because their results can be verified.
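To make the dockerTools option concrete, here is a sketch along the lines of the nginx example in the nixpkgs manual (the image name and tag are arbitrary):

```nix
# image.nix — build an OCI image with Nix instead of a Dockerfile (sketch)
{ pkgs ? import <nixpkgs> { } }:

pkgs.dockerTools.buildLayeredImage {
  name = "nginx-nix";
  tag = "latest";
  contents = [ pkgs.nginx ];

  config = {
    Cmd = [ "${pkgs.nginx}/bin/nginx" "-g" "daemon off;" ];
    ExposedPorts = { "80/tcp" = { }; };
  };
}
```

`nix-build image.nix` produces a tarball you can feed to `docker load < result`, with Nix store paths split across layers for better cache reuse.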
Why should you consider using Nix in your build systems? Well, you can even achieve what Vercel calls Remote Caching.
Nix Flakes
I generally think of flakes the way I think of Docker containers (flakes run natively on macOS, by the way, while Docker runs in a VM)! It's said that flakes are processors of Nix code. This blog series is very informative for working your way up! Sharing layers between flakes works better than in Docker, as flakes can simply depend on other flakes. And unlike Docker, there are no cgroups, namespaces, or VMs involved with Nix!
An excellent resource that I found for learning Nix was Ian Henry's How to learn Nix diary! If you write a lot of Nix, you might also need to unit test it.
It's also an interesting read how Supabase, a well-matured startup with a huge ecosystem around it, has now started to use Nix.
Note that you need to enable flakes and the new CLI commands with `experimental-features = nix-command flakes` in `~/.config/nix/nix.conf` to use `nix build`.
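In other words, the config file ends up looking like this (alternatively, you can pass `--extra-experimental-features 'nix-command flakes'` per invocation):

```
# ~/.config/nix/nix.conf
experimental-features = nix-command flakes
```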
This blog shows how to start!
Nix flakes with Rust :)
Nix has a pretty bad reputation for its documentation, but Jade's Nix guide to flakes is an excellent writeup!
A Flake for your Crate gives a great overview on how you can start using Nix with Rust. And nix-ifying a rust project discussion is wholesome.
If you are also using WebAssembly in your Rust project, this blog is quite helpful!
There is a fantastic template to use!
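For a feel of what such a flake looks like, here is a minimal sketch using nixpkgs' rustPlatform (the crate name and version are hypothetical, and the system is hard-coded for brevity):

```nix
# flake.nix — minimal Rust package + dev shell sketch
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      packages.x86_64-linux.default = pkgs.rustPlatform.buildRustPackage {
        pname = "my-crate";                 # hypothetical crate name
        version = "0.1.0";
        src = ./.;
        cargoLock.lockFile = ./Cargo.lock;  # pins all crate dependencies
      };

      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.cargo pkgs.rustc pkgs.rust-analyzer ];
      };
    };
}
```

`nix build` builds the crate; `nix develop` drops you into a shell with the toolchain.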
Nix flakes with Go
You can use gomod2nix and refer to this blog to get started!
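A rough sketch of what a gomod2nix-based flake might look like (attribute paths follow the project's README pattern, so double-check against the current version; the app name is hypothetical):

```nix
# flake.nix — Go build via gomod2nix (sketch)
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    gomod2nix.url = "github:nix-community/gomod2nix";
  };

  outputs = { self, nixpkgs, gomod2nix }:
    let
      system = "x86_64-linux";
      pkgs = import nixpkgs {
        inherit system;
        overlays = [ gomod2nix.overlays.default ];
      };
    in {
      packages.${system}.default = pkgs.buildGoApplication {
        pname = "my-go-app";        # hypothetical
        version = "0.1.0";
        src = ./.;
        modules = ./gomod2nix.toml; # generated with `gomod2nix generate`
      };
    };
}
```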
Nix flakes with OCaml
I found opam-nix with the corresponding blog to be a good enough starting point.
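Roughly following the opam-nix README pattern (the project name here is hypothetical):

```nix
# flake.nix — OCaml build via opam-nix (sketch)
{
  inputs = {
    opam-nix.url = "github:tweag/opam-nix";
    nixpkgs.follows = "opam-nix/nixpkgs";
  };

  outputs = { self, opam-nix, nixpkgs }:
    let
      system = "x86_64-linux";
      on = opam-nix.lib.${system};
      # builds the project plus its opam dependencies into a package scope
      scope = on.buildOpamProject { } "my_project" ./. {
        ocaml-base-compiler = "*";
      };
    in {
      packages.${system}.default = scope.my_project;  # "my_project" is hypothetical
    };
}
```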
Nix has more binary packages than Homebrew, so it is generally faster since we rarely need to build from source, and some folks are literally using Nix as their Homebrew replacement. The most practical reason might be that it's easier to set up a new machine with a `flake.nix` than with Homebrew: a single flake maintains all your packages. You can even run a NixOS VM on a Mac now!
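The "single flake for all my tools" idea can be as simple as a buildEnv bundling your CLI staples (swap in your own package list and system):

```nix
# flake.nix — declarative replacement for a pile of `brew install`s (sketch)
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.aarch64-darwin;  # Apple Silicon; adjust as needed
    in {
      packages.aarch64-darwin.default = pkgs.buildEnv {
        name = "my-tools";
        paths = [ pkgs.ripgrep pkgs.jq pkgs.fzf ];   # pick your own set
      };
    };
}
```

Then `nix profile install .` on a fresh machine pulls everything in one go.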
Bitcoin and Nix
Well you know, I am orange-pilled so I have to do this. There is Nix-Bitcoin for your Bitcoin infrastructure needs as well :)
You will face a lot of errors when starting out with Nix, so be ready to be frustrated, like this :-]
You can do a lot of fancy stuff with Nix, but I just started out with Nix!
That's a lot of resources, and I hope you'll be a Nix ninja in no time if you go through them all, so ALL THE BEST :)