Mirror of https://github.com/SebastianStork/nixos-config.git
Synced 2026-03-22 22:29:06 +01:00

Compare commits `c85c6619b7 ... 36a1e21a00` (3 commits: `36a1e21a00`, `bdd0d25c88`, `9ed3d13238`)
8 changed files with 93 additions and 252 deletions

`.github/copilot-instructions.md` (vendored, normal file, 91 lines)

@@ -0,0 +1,91 @@
# Copilot Instructions — nixos-config

## Architecture

This is a **NixOS flake** managing multiple hosts using [flake-parts](https://flake.parts). The flake output is composed entirely from `flake-parts/*.nix` — each file is auto-imported via `builtins.readDir`.
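
The auto-import could be sketched roughly like this (a hedged approximation of the idea; the real expression in the repo's flake may differ):

```nix
# Collect every file in flake-parts/ and import it (sketch, not the actual code).
imports =
  builtins.readDir ./flake-parts
  |> builtins.attrNames
  |> map (name: ./flake-parts + "/${name}");
```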

### Layers (top → bottom)

1. **Hosts** (`hosts/`) — minimal per-machine config: profile import, overlay IP, underlay interface, enabled services. All `.nix` files in a host directory are auto-imported recursively by `flake-parts/hosts.nix`.
   - **External hosts** (`external-hosts/`) — non-NixOS devices (e.g., a phone) that participate in the overlay network and syncthing cluster but aren't managed by NixOS. They import `nixosModules.default` directly (no profile) and only declare `custom.networking` and `custom.services` options so their config values are discoverable by other hosts. This enables auto-generating nebula certs, DNS records, and syncthing device lists for them.
   - `allHosts` = `nixosConfigurations // externalConfigurations` — passed to every module via `specialArgs`, so any module can query the full fleet including external devices.
2. **Profiles** (`profiles/`) — role presets. `core.nix` is the base for all hosts; `server.nix` and `workstation.nix` extend it. Profile names become `nixosModules.<name>-profile`.
3. **Modules** (`modules/system/`, `modules/home/`) — reusable NixOS/Home Manager modules auto-imported as `nixosModules.default` / `homeModules.default`. Every module is always imported; activation is gated by `lib.mkEnableOption` + `lib.mkIf`.
4. **Users** (`users/seb/`) — Home Manager config. Per-host overrides live in `@<hostname>/` subdirectories (e.g., `users/seb/@desktop/home.nix` imports `../home.nix` and adds host-specific settings).
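
Any module can therefore fold over the fleet. A minimal sketch (the consuming option `custom.example.peerAddresses` is hypothetical; only `allHosts` and `custom.networking.overlay.address` come from this repo's conventions, and the `host.config` shape is assumed from the test harness described below):

```nix
{ lib, allHosts, ... }:
{
  # Hypothetical option: collect every overlay address in the fleet,
  # external hosts included.
  custom.example.peerAddresses =
    allHosts
    |> lib.mapAttrsToList (_: host: host.config.custom.networking.overlay.address);
}
```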

### Networking model

- **Underlay** (`modules/system/networking/underlay.nix`) — physical network via systemd-networkd.
- **Overlay** (`modules/system/networking/overlay.nix`, `modules/system/services/nebula/`) — Nebula mesh VPN (`10.254.250.0/24`, domain `splitleaf.de`). All inter-host communication (DNS, Caddy, SSH) routes over the overlay.
- **DNS** (`modules/system/services/dns.nix`) — Unbound on the overlay, auto-generates records from `allHosts`.

### Web services pattern

Each web service module (`modules/system/web-services/*.nix`) follows a consistent structure:

```nix
options.custom.web-services.<name> = { enable; domain; port; doBackups; };

config = lib.mkIf cfg.enable {
  # upstream NixOS service config

  custom.services.caddy.virtualHosts.${cfg.domain}.port = cfg.port; # reverse proxy
  custom.services.restic.backups.<name> = lib.mkIf cfg.doBackups { ... }; # backup
  custom.persistence.directories = [ ... ]; # impermanence
};
```

Hosts enable services declaratively: `custom.web-services.forgejo = { enable = true; domain = "git.example.com"; doBackups = true; };`
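
A filled-in sketch of the pattern for a hypothetical Forgejo module (the upstream `services.forgejo` settings and the state path are illustrative assumptions, not copied from this repo):

```nix
config = lib.mkIf cfg.enable {
  # upstream NixOS service config (illustrative)
  services.forgejo = {
    enable = true;
    settings.server.HTTP_PORT = cfg.port;
  };

  custom.services.caddy.virtualHosts.${cfg.domain}.port = cfg.port; # reverse proxy
  custom.services.restic.backups.forgejo = lib.mkIf cfg.doBackups {
    paths = lib.singleton "/var/lib/forgejo"; # backup (path assumed)
  };
  custom.persistence.directories = lib.singleton "/var/lib/forgejo"; # impermanence
};
```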

## Conventions

- **All custom options** live under `custom.*` — never pollute the top-level NixOS namespace.
- **`cfg` binding**: always `let cfg = config.custom.<path>;` at module top.
- **Pipe operator** (`|>`): used pervasively instead of nested function calls.
- **No repeated attrpaths** (per `statix`): group assignments into a single attrset instead of repeating the path. E.g. `custom.networking.overlay = { address = "..."; role = "server"; };` — not `custom.networking.overlay.address = "..."; custom.networking.overlay.role = "server";`. Setting a single attribute with the full path is fine. Conversely, don't nest single-key attrsets unnecessarily — use `custom.networking.overlay.address = "...";` not `custom = { networking = { overlay = { address = "..."; }; }; };`.
- **`lib.singleton`** instead of `[ x ]` for single-element lists.
- **`lib.mkEnableOption ""`**: the empty string is intentional — descriptions come from the option path.
- **Secrets**: [sops-nix](https://github.com/Mic92/sops-nix) with age keys. Each host/user has `secrets.json` + `keys/age.pub`. The `.sops.yaml` at the repo root is a placeholder — the real config is generated via `nix build .#sops-config` (see `flake-parts/sops-config.nix`).
- **Impermanence**: servers use `custom.persistence.enable = true` with an explicit `/persist` mount. Modules add their state directories via `custom.persistence.directories`.
- **Formatting**: `nix fmt` runs `nixfmt` + `prettier` + `just --fmt` via treefmt.
- **Path references**: use `./` for files in the same directory or a subdirectory. Use `${self}/...` when the path would require going up a directory (`../`). Never use `../`.
- **Cross-host data**: modules receive `allHosts` via `specialArgs` (see the Hosts layer above). Used by DNS, nebula static host maps, syncthing device lists, and caddy service records.
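
Several of these conventions combined in one hedged sketch (the `custom.services.example` path, address, and directory values are placeholders):

```nix
let
  # `cfg` binding at module top
  cfg = config.custom.services.example;
in
{
  # Grouped attrset instead of repeated attrpaths (statix)
  custom.networking.overlay = {
    address = "10.254.250.10";
    role = "server";
  };

  # `lib.singleton` instead of [ x ]; pipe operator instead of nested calls
  custom.persistence.directories =
    lib.singleton "example"
    |> lib.map (dir: "/var/lib/${dir}");
}
```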

## Developer Workflows

| Task | Command |
|---|---|
| Rebuild & switch locally | `just switch` |
| Test config without switching | `just test` |
| Deploy to remote host(s) | `just deploy hostname1 hostname2` |
| Format all files | `just fmt` or `nix fmt` |
| Run flake checks + tests | `just check` |
| Check without building | `just check-lite` |
| Update flake inputs | `just update` |
| Edit SOPS secrets | `just sops-edit hosts/<host>/secrets.json` |
| Rotate all secrets | `just sops-rotate-all` |
| Install a new host | `just install <host> root@<ip>` |
| Open a nix repl for a host | `just repl <hostname>` |

SOPS commands auto-enter a `nix develop .#sops` shell if `sops` isn't available, which handles Bitwarden login and age key retrieval.

## Adding a New Module

1. Create `modules/system/services/<name>.nix` (or `web-services/`, `programs/`, etc.).
2. Define options under `options.custom.<category>.<name>` with `lib.mkEnableOption ""`.
3. Guard all config with `lib.mkIf cfg.enable { ... }`.
4. For web services: set `custom.services.caddy.virtualHosts`, optionally `custom.services.restic.backups`, and `custom.persistence.directories`.
5. No imports needed — the file is auto-discovered by `flake-parts/modules.nix`.
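
Putting the steps together, a minimal skeleton might look like this (the `example` name and option path are placeholders):

```nix
# modules/system/services/example.nix (auto-discovered; no manual registration)
{ config, lib, ... }:
let
  cfg = config.custom.services.example;
in
{
  options.custom.services.example.enable = lib.mkEnableOption "";

  config = lib.mkIf cfg.enable {
    # upstream service config, persistence, etc. goes here
  };
}
```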

## Adding a New Host

1. Create `hosts/<hostname>/` with `default.nix`, `disko.nix`, `hardware.nix`, `secrets.json`, and `keys/` (containing `age.pub`, `nebula.pub`).
2. In `default.nix`, import the appropriate profile (`self.nixosModules.server-profile` or `self.nixosModules.workstation-profile`) and set `custom.networking.overlay.address` + `custom.networking.underlay.*`.
3. The host is auto-discovered by `flake-parts/hosts.nix` — no registration needed.
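
A hedged sketch of what such a `default.nix` might contain (the address, interface value, and the exact option name under `custom.networking.underlay.*` are assumptions):

```nix
# hosts/example/default.nix (hypothetical)
{ self, ... }:
{
  imports = [ self.nixosModules.server-profile ];

  custom.networking = {
    overlay.address = "10.254.250.42";
    underlay.interface = "enp1s0"; # assumed option name
  };
}
```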

## Tests

Integration tests live in `tests/` and use NixOS VM testing (`pkgs.testers.runNixOSTest`). Run via `just check`. Key details:

- Each test directory contains a `default.nix` that returns a test attrset (with `defaults`, `nodes`, `testScript`, etc.).
- The `defaults` block imports `self.nixosModules.default` and **overrides `allHosts`** with the test's own `nodes` variable: `_module.args.allHosts = nodes |> lib.mapAttrs (_: node: { config = node; });`. This scopes cross-host lookups (DNS records, nebula static maps, etc.) to only the test's VMs, preventing evaluation of real host configs.
- Test nodes define their own overlay addresses and underlay interfaces, and use pre-generated nebula keys from `tests/*/keys/`.
- The `testScript` is written in Python, using helpers like `wait_for_unit`, `succeed`, and `fail` to assert behavior.
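
Combined, a test's `default.nix` might be sketched like this (node name, address, and any attrset shape beyond what is described above are assumptions):

```nix
{ self, lib, ... }:
{
  name = "example";

  defaults =
    { nodes, ... }:
    {
      imports = [ self.nixosModules.default ];
      # Scope cross-host lookups to this test's VMs only
      _module.args.allHosts = nodes |> lib.mapAttrs (_: node: { config = node; });
    };

  nodes.machine = {
    custom.networking.overlay.address = "10.254.250.1";
  };

  testScript = ''
    machine.wait_for_unit("multi-user.target")
  '';
}
```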

`.sops.yaml`

@@ -1,5 +1,5 @@
 # This is a placeholder.
-# The real .sops.yaml is generated via `nix build .#sops-config`
-# See flake-parts/sops.nix for details.
+# The real .sops.yaml is generated via `nix build .#sops-config`.
+# See flake-parts/sops-config.nix for details.
 
 creation_rules: []
`flake.lock` (generated, 54)
@@ -37,27 +37,6 @@
         "type": "github"
       }
     },
-    "crowdsec": {
-      "inputs": {
-        "flake-utils": "flake-utils",
-        "nixpkgs": [
-          "nixpkgs"
-        ]
-      },
-      "locked": {
-        "lastModified": 1752497357,
-        "narHash": "sha256-9epXn1+T6U4Kfyw8B9zMzbERxDB3VfaPXhVebtai6CE=",
-        "ref": "refs/heads/main",
-        "rev": "84db7dcea77f7f477d79e69e35fb0bb560232667",
-        "revCount": 42,
-        "type": "git",
-        "url": "https://codeberg.org/kampka/nix-flake-crowdsec.git"
-      },
-      "original": {
-        "type": "git",
-        "url": "https://codeberg.org/kampka/nix-flake-crowdsec.git"
-      }
-    },
     "disko": {
       "inputs": {
         "nixpkgs": [
@@ -135,23 +114,6 @@
         "type": "github"
       }
     },
-    "flake-utils": {
-      "inputs": {
-        "systems": "systems"
-      },
-      "locked": {
-        "lastModified": 1731533236,
-        "narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
-        "owner": "numtide",
-        "repo": "flake-utils",
-        "rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
-        "type": "github"
-      },
-      "original": {
-        "id": "flake-utils",
-        "type": "indirect"
-      }
-    },
     "home-manager": {
       "inputs": {
         "nixpkgs": [
@@ -312,7 +274,6 @@
       "inputs": {
         "betterfox": "betterfox",
         "comin": "comin",
-        "crowdsec": "crowdsec",
         "disko": "disko",
         "firefox-addons": "firefox-addons",
         "flake-parts": "flake-parts",
@@ -347,21 +308,6 @@
         "type": "github"
       }
     },
-    "systems": {
-      "locked": {
-        "lastModified": 1681028828,
-        "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
-        "owner": "nix-systems",
-        "repo": "default",
-        "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
-        "type": "github"
-      },
-      "original": {
-        "owner": "nix-systems",
-        "repo": "default",
-        "type": "github"
-      }
-    },
     "treefmt": {
       "inputs": {
         "nixpkgs": [

`flake.nix`

@@ -35,11 +35,6 @@
       inputs.nixpkgs.follows = "nixpkgs";
     };
 
-    crowdsec = {
-      url = "git+https://codeberg.org/kampka/nix-flake-crowdsec.git";
-      inputs.nixpkgs.follows = "nixpkgs";
-    };
-
     vscode-extensions = {
       url = "github:nix-community/nix-vscode-extensions";
       inputs.nixpkgs.follows = "nixpkgs";
@@ -32,9 +32,6 @@ in
       caddy = lib.mkEnableOption "" // {
         default = config.services.caddy.enable;
       };
-      crowdsec = lib.mkEnableOption "" // {
-        default = config.services.crowdsec.enable;
-      };
     };
     logs.openssh = lib.mkEnableOption "" // {
       default = config.services.openssh.enable;
@@ -139,20 +136,6 @@ in
         }
       '';
     };
-    "alloy/crowdsec-metrics.alloy" = {
-      enable = cfg.collect.metrics.crowdsec;
-      text = ''
-        prometheus.scrape "crowdsec" {
-          targets = [{
-            __address__ = "localhost:${toString config.custom.services.crowdsec.prometheusPort}",
-            job = "crowdsec",
-            instance = constants.hostname,
-          }]
-          forward_to = [prometheus.remote_write.default.receiver]
-          scrape_interval = "15s"
-        }
-      '';
-    };
     "alloy/sshd-logs.alloy" = {
       enable = cfg.collect.logs.openssh;
       text = ''

@@ -1,40 +0,0 @@
{
  config,
  inputs,
  pkgs,
  lib,
  ...
}:
let
  cfg = config.custom.services.crowdsec;
in
{
  imports = [ inputs.crowdsec.nixosModules.crowdsec-firewall-bouncer ];
  disabledModules = [ "services/security/crowdsec-firewall-bouncer.nix" ];

  options.custom.services.crowdsec.bouncers.firewall = lib.mkEnableOption "";

  config = lib.mkIf cfg.bouncers.firewall {
    services.crowdsec-firewall-bouncer = {
      enable = true;
      package = inputs.crowdsec.packages.${pkgs.stdenv.hostPlatform.system}.crowdsec-firewall-bouncer;
      settings = {
        api_key = "cs-firewall-bouncer";
        api_url = "http://localhost:${toString cfg.apiPort}";
      };
    };

    systemd.services.crowdsec.serviceConfig.ExecStartPre = lib.mkAfter (
      lib.getExe (
        pkgs.writeShellApplication {
          name = "crowdsec-add-bouncer";
          text = ''
            if ! cscli bouncers list | grep -q "firewall"; then
              cscli bouncers add "firewall" --key "cs-firewall-bouncer"
            fi
          '';
        }
      )
    );
  };
}

@@ -1,115 +0,0 @@
{
  config,
  inputs,
  pkgs,
  lib,
  ...
}:
let
  cfg = config.custom.services.crowdsec;

  user = config.users.users.crowdsec.name;
in
{
  disabledModules = [ "services/security/crowdsec.nix" ];
  imports = [ inputs.crowdsec.nixosModules.crowdsec ];

  options.custom.services.crowdsec = {
    enable = lib.mkEnableOption "";
    apiPort = lib.mkOption {
      type = lib.types.port;
      default = 8080;
    };
    prometheusPort = lib.mkOption {
      type = lib.types.port;
      default = 6060;
    };
    sources = {
      iptables = lib.mkEnableOption "" // {
        default = true;
      };
      caddy = lib.mkEnableOption "" // {
        default = config.services.caddy.enable;
      };
      sshd = lib.mkEnableOption "" // {
        default = config.services.openssh.enable;
      };
    };
  };

  config = lib.mkIf cfg.enable {
    sops.secrets."crowdsec/enrollment-key" = {
      owner = user;
      restartUnits = [ "crowdsec.service" ];
    };

    users.groups.caddy.members = lib.mkIf cfg.sources.caddy [ user ];

    services.crowdsec = {
      enable = true;
      package = inputs.crowdsec.packages.${pkgs.stdenv.hostPlatform.system}.crowdsec;
      enrollKeyFile = config.sops.secrets."crowdsec/enrollment-key".path;
      settings = {
        api.server.listen_uri = "localhost:${toString cfg.apiPort}";
        cscli.prometheus_uri = "http://localhost:${toString cfg.prometheusPort}";
        prometheus = {
          listen_addr = "localhost";
          listen_port = cfg.prometheusPort;
        };
      };

      allowLocalJournalAccess = true;
      acquisitions = [
        (lib.mkIf cfg.sources.iptables {
          source = "journalctl";
          journalctl_filter = [ "-k" ];
          labels.type = "syslog";
        })
        (lib.mkIf cfg.sources.caddy {
          filenames = [ "${config.services.caddy.logDir}/*.log" ];
          labels.type = "caddy";
        })
        (lib.mkIf cfg.sources.sshd {
          source = "journalctl";
          journalctl_filter = [ "_SYSTEMD_UNIT=sshd.service" ];
          labels.type = "syslog";
        })
      ];
    };

    systemd.services.crowdsec.serviceConfig = {
      # Fix journalctl acquisitions
      PrivateUsers = false;

      ExecStartPre =
        let
          installCollection = collection: ''
            if ! cscli collections list | grep -q "${collection}"; then
              cscli collections install ${collection}
            fi
          '';
          mkScript =
            name: text:
            lib.getExe (
              pkgs.writeShellApplication {
                inherit name text;
              }
            );
          collectionsScript =
            [
              (lib.singleton "crowdsecurity/linux")
              (lib.optional cfg.sources.iptables "crowdsecurity/iptables")
              (lib.optional cfg.sources.caddy "crowdsecurity/caddy")
              (lib.optional cfg.sources.sshd "crowdsecurity/sshd")
            ]
            |> lib.concatLists
            |> lib.map installCollection
            |> lib.concatLines
            |> mkScript "crowdsec-install-collections";
        in
        lib.mkAfter collectionsScript;
    };

    custom.persistence.directories = [ "/var/lib/crowdsec" ];
  };
}
@@ -57,9 +57,6 @@ in
       victorialogs.enable = lib.mkEnableOption "" // {
         default = config.custom.web-services.victorialogs.enable;
       };
-      crowdsec.enable = lib.mkEnableOption "" // {
-        default = config.custom.services.crowdsec.enable;
-      };
     };
   };
 
@@ -176,22 +173,6 @@ in
           ''
         );
     };
-    # https://grafana.com/grafana/dashboards/19012-crowdsec-details-per-instance/
-    "grafana-dashboards/crowdsec-details-per-instance-patched.json" = {
-      enable = cfg.dashboards.crowdsec.enable;
-      source =
-        pkgs.fetchurl {
-          name = "crowdsec-details-per-instance.json";
-          url = "https://grafana.com/api/dashboards/19012/revisions/1/download";
-          hash = "sha256-VRPWAbPRgp+2pqfmey53wMqaOhLBzXVKUZs/pJ28Ikk=";
-        }
-        |> (
-          src:
-          pkgs.runCommand "crowdsec-details-per-instance-patched.json" { buildInputs = [ pkgs.gnused ]; } ''
-            sed 's/''${DS_PROMETHEUS}/Prometheus/g' ${src} > $out
-          ''
-        );
-    };
   };
 
   custom.services.caddy.virtualHosts.${cfg.domain}.port = cfg.port;