1081 Commits

SHA1 Message Date
7daad01eb8 changes from mac-demarco-mini on Sat Apr 18 19:05:32 PDT 2026 2026-04-18 19:05:37 -07:00
34ecc09def Fix Emacs Elpaca bootstrap and startup 2026-04-18 19:05:37 -07:00
403fe0deef Update nix-darwin codex flake and switch config 2026-04-18 19:05:37 -07:00
8557b82a3b chore(ryzen-shine): disable nixified.ai temporarily 2026-04-18 19:05:37 -07:00
f96b363dbb feat(nixified-ai): configure comfyui cuda model support 2026-04-18 19:05:37 -07:00
522491383b chore(ryzen-shine): disable railbird k3s temporarily 2026-04-18 19:05:37 -07:00
048415e317 fix(k3s): stop rebuilds waiting on cluster readiness 2026-04-18 19:05:37 -07:00
7e6907a0af feat(hyprland): add optional lua-config package stack 2026-04-18 19:05:37 -07:00
7a593f4418 feat(sni): improve kdeconnect tray behavior 2026-04-18 19:05:37 -07:00
16ccd3a914 chore(codex): trust local work directories 2026-04-18 19:05:37 -07:00
6d02f9ec1c chore: remove gtk settings backup 2026-04-18 19:05:37 -07:00
f86ae23055 feat: add Hyprland screensaver helper 2026-04-18 19:05:37 -07:00
82de11eb3c chore: migrate Gmail tooling docs to gws 2026-04-18 19:05:37 -07:00
069a0e61db chore: stop tracking generated checkout caches 2026-04-18 19:05:37 -07:00
9c8335ec2b Use nth-last-child for end widget pills 2026-04-18 19:05:37 -07:00
4587f3a7af Record local taffybar color fix state 2026-04-18 19:05:37 -07:00
fa5145c1d7 Commit current dotfiles changes 2026-04-18 19:05:37 -07:00
af093f45ce chore: adjust flake input pins 2026-04-18 19:05:37 -07:00
f2e4881bab config: trust coqui streamer project 2026-04-18 19:05:37 -07:00
eb25a76d7d fix: simplify org-mode package recipe 2026-04-18 19:05:37 -07:00
c75a5743ad config: deprioritize tailscale tray item 2026-04-18 19:05:37 -07:00
429f51605a chore: add bubblewrap to essential packages 2026-04-18 19:05:37 -07:00
f447145ada fix: configure Strixi dGPU desk profile 2026-04-18 19:05:37 -07:00
1299bc9d41 fix: keep Hyprland workspace focus local 2026-04-18 19:05:37 -07:00
3426ab8678 fix: launch separate ghostty instances 2026-04-18 19:05:37 -07:00
c4c2b1e8bb fix: support POST requests in coqui-read 2026-04-18 19:05:37 -07:00
ad449a3416 feat: add KEF speaker control command 2026-04-18 19:05:37 -07:00
4340c62518 chore: update taffybar submodule 2026-04-18 19:05:37 -07:00
89c8c0bdc3 fix: patch kanshi-sni reconnect handling 2026-04-18 19:05:37 -07:00
18c8e0324f dotfiles: add coqui-read helper 2026-04-18 19:05:37 -07:00
3813af4bd2 nixos: enable local coqui tts 2026-04-18 19:05:37 -07:00
7485dfc423 nixos: vendor local package definitions 2026-04-18 19:05:37 -07:00
74d00e7ca3 taffybar: restore end-widget color wraparound 2026-04-18 19:05:37 -07:00
1d831f0aef Disable Mullvad VPN on ryzen-shine 2026-04-18 19:05:37 -07:00
49faa9376f taffybar: restore single css entrypoint loading 2026-04-18 19:05:37 -07:00
04baba4433 Fix taffybar startup and restore pill colors 2026-04-18 19:05:37 -07:00
4774ff5e8f Update local desktop and secrets configuration 2026-04-18 19:05:37 -07:00
65a1b11605 Update vendored taffybar 2026-04-18 19:05:37 -07:00
0ce886b202 Fix taffybar host CSS composition 2026-04-18 19:05:37 -07:00
8bd6d80ffb Fix Chrome remote debugging launchers 2026-04-18 19:05:37 -07:00
7208ee09ad Improve MIME defaults and Home Manager backups 2026-04-18 19:05:37 -07:00
f6589c3b13 nixos: reduce ryzen-shine text scaling 2026-04-18 19:05:37 -07:00
724a65e499 hyprland: persist ryzen-shine kanshi display profile 2026-04-18 19:05:37 -07:00
4befbc42df taffybar: propagate config and flake updates 2026-04-18 19:05:37 -07:00
45582c6411 nixos: add railbird system user for k3s secrets 2026-04-18 19:05:37 -07:00
4c946bc17f Avoid package.el in org-wild-notifier async worker 2026-04-18 19:05:37 -07:00
b8960524b4 Tune ryzen-shine taffybar density 2026-04-18 19:05:37 -07:00
8115871845 Tune ryzen-shine text scaling 2026-04-18 19:05:37 -07:00
ecb2206e4a revert: undo unintended submodule updates 2026-04-18 19:05:37 -07:00
d6ed8752b4 chore(nixos): refresh flake inputs 2026-04-18 19:05:37 -07:00
f6fc5791ee chore(xmonad): bump submodule revisions 2026-04-18 19:05:37 -07:00
de398fe124 chore(taffybar): remove split widget submodules 2026-04-18 19:05:37 -07:00
da8570d747 nixos: update inputs and restore switch 2026-04-18 19:05:37 -07:00
9ff62d4850 docs: note picom debug log cleanup 2026-04-18 19:05:37 -07:00
1fbed155df chore(codex): lower default reasoning effort 2026-04-18 19:05:37 -07:00
d6b24ee3bd chore(nixos): update t3code patch hash 2026-04-18 19:05:37 -07:00
e8f2842805 fix(nixos): normalize GPG key imports 2026-04-18 19:05:37 -07:00
eb4e3b236a feat(nixos): add Ghostty desktop entry 2026-04-18 19:05:37 -07:00
30cc03dadc Add password reset skill 2026-04-18 19:05:37 -07:00
a95505fc3a hyprland: use builtin previous-workspace toggle 2026-04-18 19:05:37 -07:00
56cb75e072 chore(taffybar): refresh pinned flake inputs 2026-04-18 19:05:37 -07:00
a4e3a50db0 fix(taffybar): make live checkout opt-in 2026-04-18 19:05:37 -07:00
13f446e685 nixos: refresh flake inputs 2026-04-18 19:05:37 -07:00
7d226ce790 codex: update local config defaults 2026-04-18 19:05:37 -07:00
78157e7782 codex: add imagegen and plugin-creator skills 2026-04-18 19:05:37 -07:00
91d22f053d hyprland: cycle workspaces per monitor 2026-04-18 19:05:37 -07:00
c7c4ff9df3 Drop redundant vendored taffybar flake inputs 2026-04-18 19:05:37 -07:00
cd91742e35 add t3code 2026-04-18 19:05:37 -07:00
c9eb7db464 hyprland: serialize systemd session env import 2026-04-18 19:05:37 -07:00
7de2a59dfb Set zlib library path in taffybar dev shell 2026-04-18 19:05:37 -07:00
b56fc7f9d4 Fix taffybar dev shell tool resolution 2026-04-18 19:05:37 -07:00
73393ace58 chore(taffybar): update submodule 2026-04-18 19:05:37 -07:00
b98d4b3d0b docs(skills): add nixpkgs review guidance 2026-04-18 19:05:36 -07:00
f4b07fcb15 chore(flake): update pinned inputs 2026-04-18 19:05:36 -07:00
c20f48037c feat(home-manager): set GTK theme and git signing format 2026-04-18 19:05:36 -07:00
59bdad2aad fix(nixos): patch quill ledger.did during build 2026-04-18 19:05:36 -07:00
1c2fc23e6b feat(desktop): improve launcher and window picker presentation 2026-04-18 19:05:36 -07:00
f0222e5528 Fix Go mode eldoc setup 2026-04-18 19:05:36 -07:00
65d1967d94 fix org-agenda-api container startup 2026-04-18 19:05:36 -07:00
6a64103569 Update taffybar for bus-name churn dedupe fix 2026-04-18 19:05:36 -07:00
51f4a53a40 Add taffybar startup tray diagnostics 2026-04-18 19:05:36 -07:00
069d97cea9 Update taffybar for stale tray rebuild fix 2026-04-18 19:05:36 -07:00
25a84fa685 Propagate taffybar tray startup fix through local flakes 2026-04-18 19:05:36 -07:00
74ccd6ac98 Record taffybar tray startup fix commit 2026-04-18 19:05:36 -07:00
4e7ad30661 Update taffybar for tray startup fix 2026-04-18 19:05:36 -07:00
74133555a7 chore(users): drop explicit imalison uid 2026-04-18 19:05:36 -07:00
be0e42b534 feat(nix): add git-blame-rank to essential packages 2026-04-18 19:05:36 -07:00
de5e2bcfb0 chore(nix): refresh flake inputs 2026-04-18 19:05:36 -07:00
918967e502 chore(taffybar): sync local overlay with vendored submodule 2026-04-18 19:05:36 -07:00
33fcdb6b75 fix(git): use stable gh credential helper path 2026-04-18 19:05:36 -07:00
cb08b3e4e1 chore(codex): tune local assistant config 2026-04-18 19:05:36 -07:00
b7e2ff88b7 feat(skills): add OpenAI docs system skill 2026-04-18 19:05:36 -07:00
e0b59777c6 chore(taffybar): bump vendored submodule 2026-04-18 19:05:36 -07:00
ae1976da59 chore(codex): trust local project directories 2026-04-18 19:05:36 -07:00
2b3cc19613 chore: update flake inputs 2026-04-18 19:05:36 -07:00
66afaba67c chore: trust additional Codex projects 2026-04-18 19:05:36 -07:00
01b79dc771 Improve Rust target cleanup skill 2026-04-18 19:05:36 -07:00
df2f78d374 feat(taffybar): build against local vendored packages 2026-04-18 19:05:36 -07:00
0ce93be240 feat(hypridle): stop locking before dpms off 2026-04-18 19:05:36 -07:00
43bc47df9d chore(skills): prune bundled slides and spreadsheets docs 2026-04-18 19:05:36 -07:00
021b23eb5c fix: restore just switch on current nixpkgs 2026-04-18 19:05:36 -07:00
a978aadebf Update taffybar and Emacs configuration 2026-04-18 19:05:36 -07:00
69a0842892 chore: update codex tooling and taffybar 2026-04-18 19:05:36 -07:00
12610abf90 chore(nixos): update flake lock inputs 2026-04-18 19:05:36 -07:00
8dc5d146b8 feat(nixos): add gws and fix switch blockers 2026-04-18 19:05:36 -07:00
b9cc925b19 chore(taffybar): update input wiring and tray priorities 2026-04-18 19:05:36 -07:00
88714c7747 chore(agents): refresh metadata and skill creator loader 2026-04-18 19:05:36 -07:00
005d799a71 chore(git): scope org-agenda-api identity overrides 2026-04-18 19:05:36 -07:00
c84dd02cf2 nixos: bump flake lock inputs 2026-04-18 19:05:36 -07:00
f32eb16ed0 taffybar: add CPU widget and advance local pin 2026-04-18 19:05:36 -07:00
3ee63d7ef6 hyprland: add hypridle and hyprlock integration 2026-04-18 19:05:36 -07:00
a06da4f647 gitconfig: add org-agenda-api identity and safe directories 2026-04-18 19:05:36 -07:00
1c603b2d28 docs(skill): expand disk-space cleanup playbook 2026-04-18 19:05:36 -07:00
aafc93b138 nixos: remove stale synergy patch override 2026-04-18 19:05:36 -07:00
c0196d1e87 Flake lock bump 2026-04-18 19:05:36 -07:00
c011668cb2 Update flake locks for status-notifier-item KDE SNI fix
Update status-notifier-item to 0.3.2.10 which fixes KDE apps
(kdeconnect-indicator) not registering tray icons due to an overly
strict ownership check rejecting multi-connection SNI registration.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:36 -07:00
f45f84fa18 Prevent org from saving org-agenda-files to customize
org-agenda-files is managed programmatically. Override
org-store-new-agenda-file-list to use setq instead of
customize-save-variable, and reset org-agenda-files to nil
before building it so stale customize values are ignored.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:36 -07:00
749700fc6c Add recurring.org to org-agenda-files
Include ~/org/recurring.org in both Emacs and the org-agenda-api
container so recurring tasks appear in the agenda.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:36 -07:00
e3d1e2e2d4 Add personal information to AGENTS.md 2026-04-18 19:05:36 -07:00
43c86bc011 Update ACME/nginx org-agenda-api host config and stage taffybar repos 2026-04-18 19:05:36 -07:00
fe3862d4ec Fix org-agenda-api host 2026-04-18 19:05:36 -07:00
c0c1e73b02 Another taffy bump 2026-04-18 19:05:36 -07:00
18ac3b66a7 taffybar bumps 2026-04-18 19:05:36 -07:00
9fe5db95e1 Add disk space cleanup skill 2026-04-18 19:05:36 -07:00
4eea50edb9 Update wallpaper source dir and fix README badge link 2026-04-18 19:05:36 -07:00
085f9693d5 nixos: skip taffybar startup in KDE sessions 2026-04-18 19:05:36 -07:00
25e0c9edbe Set taffybar position back to top 2026-04-18 19:05:36 -07:00
c02c1361a6 gh-pages: make org export resilient in CI 2026-04-18 19:05:36 -07:00
88cf64ef58 taffybar: refine tray icon priorities and icon lookup 2026-04-18 19:05:36 -07:00
a2b2ce5635 Update taffybar submodule (workspace fixes + v5.2.1) 2026-04-18 19:05:36 -07:00
35d7e52897 Track taffybar sni-priorities.yaml 2026-04-18 19:05:36 -07:00
f41df1e032 chore(taffybar): bump taffybar to 6cfc1f6 2026-04-18 19:05:36 -07:00
e82c9847a8 chore(taffybar): bump upstream input and refresh locks 2026-04-18 19:05:36 -07:00
15a25fed32 chore(taffybar): bump upstream submodule to latest tray/workspace fixes 2026-04-18 19:05:36 -07:00
d596b7f6c0 ignore(taffybar): ignore local SNI priority state files 2026-04-18 19:05:36 -07:00
aaec022a0c Use upstream prioritized collapsible SNI tray in local config 2026-04-18 19:05:36 -07:00
2db7f07982 style(taffybar): align combined labels and keep clock centered 2026-04-18 19:05:36 -07:00
384d2dfd6a refactor(taffybar): unify workspace widgets across backends 2026-04-18 19:05:36 -07:00
0517dd74f3 Fix a few imalison-taffybar issues 2026-04-18 19:05:36 -07:00
4997d194c8 chore(taffybar): align flake follows and refresh locks 2026-04-18 19:05:36 -07:00
6b484de128 chore(nixos): simplify switch recipe aliases 2026-04-18 19:05:36 -07:00
2ccdfdf1ad taffybar: Remove icons from date/time 2026-04-18 19:05:36 -07:00
8a21883629 Disable railbird cache 2026-04-18 19:05:36 -07:00
06740fdffb Update taffybar submodule for wakeup debug widget 2026-04-18 19:05:36 -07:00
7cfddd16df feat(sni): enable Flameshot grim adapter 2026-04-18 19:05:36 -07:00
7b412c0c67 feat(config): use SSH git protocol with gh credential helper 2026-04-18 19:05:36 -07:00
791e63d616 feat(taffybar): add wakeup debug widget and safer hyprctl handling 2026-04-18 19:05:36 -07:00
dfb3b79ec5 Update taffybar submodule for wakeup manager fix 2026-04-18 19:05:36 -07:00
2ae4a8ba82 feat(clock): stack date above time 2026-04-18 19:05:36 -07:00
9172d882d5 chore(nix): bump flake lock 2026-04-18 19:05:36 -07:00
18c1edba6d taffybar: add configurable persistent SNI tray priorities 2026-04-18 19:05:36 -07:00
bf77f44889 docs(skill): define unsubscribe scan execution defaults 2026-04-18 19:05:36 -07:00
e0ec9175f4 feat(wm): add rofi wallpaper launcher and keybindings 2026-04-18 19:05:36 -07:00
5e6f66e132 nixos: bump keepbook and expose keepbook binaries 2026-04-18 19:05:36 -07:00
dc41b71e46 nixos: drop invalid agenix.service ordering for tailscale autoconnect 2026-04-18 19:05:36 -07:00
07b2c56f8b Remove broken action-cache gitlink entries 2026-04-18 19:05:36 -07:00
f5bdee7018 Add emacs snippets and workspace stubs 2026-04-18 19:05:36 -07:00
bf3126f414 Add Claude settings and cached GitHub actions 2026-04-18 19:05:36 -07:00
d6eef4c001 Adjust NixOS config and tailscale setup 2026-04-18 19:05:36 -07:00
22486c4cf5 Update xmonad dev environment setup 2026-04-18 19:05:36 -07:00
30e93dd949 Update Tailscale auth key secret 2026-04-18 19:05:36 -07:00
acaa644e25 Clarify credential handling in AGENTS instructions 2026-04-18 19:05:36 -07:00
9dd86cfac9 Combine laptop battery/network and asus/disk widgets 2026-04-18 19:05:36 -07:00
d61b4bbdc6 Sanitize MPRIS metadata newlines while keeping stacked label 2026-04-18 19:05:36 -07:00
1d5cc54b21 nixos(keepbook): drop obsolete sync daemon patch 2026-04-18 19:05:36 -07:00
10dcbc8813 claude: skip dangerous mode permission prompt 2026-04-18 19:05:36 -07:00
3ee5f226ee nixos/taffybar: pin status-notifier-item package 2026-04-18 19:05:36 -07:00
3c1c20d8c1 nixos: add switch-local recipe 2026-04-18 19:05:36 -07:00
6bda9bfc75 nixos: bump keepbook flake input 2026-04-18 19:05:36 -07:00
35b48c7877 Remove asciinema config from dotfiles 2026-04-18 19:05:36 -07:00
8f3aff0cb8 Fix cachix-populate just recipe (shebang) 2026-04-18 19:05:36 -07:00
12ce1f1fd2 railbird-sf: serve syncthing/docs via nginx 2026-04-18 19:05:36 -07:00
716c28750e nixos: patch keepbook-sync-daemon for updated sync_all_if_stale 2026-04-18 19:05:36 -07:00
4505d9f3cb taffybar config: fix tray/battery order for packEnd 2026-04-18 19:05:36 -07:00
7a9e4254d7 nixos: nix flake update 2026-04-18 19:05:36 -07:00
fb478e74ba taffybar: fix parse error in laptop widget list 2026-04-18 19:05:36 -07:00
aa82c5a71f Move sni tray back 2026-04-18 19:05:36 -07:00
fac7bb9491 Fix cachix-populate just recipe 2026-04-18 19:05:36 -07:00
002381c098 taffybar config: place battery after tray 2026-04-18 19:05:36 -07:00
4a2c7eeb68 Add just commands to auth and populate Cachix 2026-04-18 19:05:36 -07:00
bd1f690f46 taffybar config: drop barLevels; tray back on main row; restore sun/lock text 2026-04-18 19:05:36 -07:00
ae3d3d937f CI: include taffybar Cachix substituter 2026-04-18 19:05:36 -07:00
82a3209eaa CI: free disk and pin substituters 2026-04-18 19:05:36 -07:00
6f4c5e120d repo hygiene: move secrets to pass; add examples; misc updates 2026-04-18 19:05:36 -07:00
a5f3ffc21b Add Cachix cache and CI workflow 2026-04-18 19:05:36 -07:00
d4dfaae6fd taffybar config: stack sun+lock widget 2026-04-18 19:05:36 -07:00
c6d9a3f909 taffybar config: stacked RAM/SWAP, barLevels tray row, bump locks 2026-04-18 19:05:36 -07:00
89d470e489 chore(nixos): bump org-agenda-api (mova v5.20.3) 2026-04-18 19:05:36 -07:00
39054b48e9 chore(nixos): update playwright-cli override hash 2026-04-18 19:05:36 -07:00
4280698766 chore(nixos): point switch recipes at GitHub flake URL 2026-04-18 19:05:36 -07:00
c74585443b feat(rumno): tune notification rules and timeout handling 2026-04-18 19:05:36 -07:00
39721b1341 feat(hyprland): patch hyprexpo with workspace numbers and bring mode 2026-04-18 19:05:36 -07:00
f9561d419c nixos: remove retired user accounts and related config 2026-04-18 19:05:36 -07:00
fe90b19271 chore(secrets): rotate org-agenda-api auth password 2026-04-18 19:05:36 -07:00
acf01382ad fix(org-agenda-api): import single-line secrets via flyctl 2026-04-18 19:05:36 -07:00
4b2cb3a078 feat(taffybar): split now-playing label and bump submodule 2026-04-18 19:05:36 -07:00
26d2b967fc Update unsubscribe skill cleanup guidance 2026-04-18 19:05:36 -07:00
e7547f4300 Rotate org-agenda-api prod auth password 2026-04-18 19:05:36 -07:00
485b618bc5 nixos: apply nixpkgs PR 490230 (playwright-cli) 2026-04-18 19:05:36 -07:00
eae2c76aa2 nixos: bump keepbook 2026-04-18 19:05:36 -07:00
c509071ff9 nixos: add playwright-cli 2026-04-18 19:05:36 -07:00
27c6b2af3c nixos: pin taffybar via github input (override locally when needed) 2026-04-18 19:05:36 -07:00
6f7926cd48 nixos: bump git-sync-rs 2026-04-18 19:05:36 -07:00
485718b103 nixos: lock taffybar flake input via git+file 2026-04-18 19:05:36 -07:00
be6f4d8bb8 nixos: update lock for taffybar path input 2026-04-18 19:05:36 -07:00
85ccfc622e nixos: allow agenix to decrypt tailscale authkey via user ssh key 2026-04-18 19:05:36 -07:00
5d16fb00c0 nixos: tailscale auto-connect via agenix auth key 2026-04-18 19:05:35 -07:00
14c86c61d4 nixos: bump kanshi-sni 2026-04-18 19:05:35 -07:00
0272c00fa9 taffybar: collapse mpris when no visible children 2026-04-18 19:05:35 -07:00
8e588f81eb nixos: fix oci-containers Restart conflict for org-agenda-api 2026-04-18 19:05:35 -07:00
d0459c517e nixos: enable tailscale module 2026-04-18 19:05:35 -07:00
68ccc823e3 nixos(keepbook): remove removed keepbook-sync-daemon tray-icon arg 2026-04-18 19:05:35 -07:00
c06384c1b3 Enable flake nixConfig and add cache.railbird.ai substituter 2026-04-18 19:05:35 -07:00
2cd76c38b2 chore(nixos): bump keepbook 2026-04-18 19:05:35 -07:00
45f20e876e chore(nixos): bump git-sync-rs 2026-04-18 19:05:35 -07:00
8c820ff38b nixquick: force-disable k3s 2026-04-18 19:05:35 -07:00
d50fb307a7 Bump keepbook 2026-04-18 19:05:35 -07:00
8f568be3ae Add journaling skill 2026-04-18 19:05:35 -07:00
97041009d9 Remove debug taffybar logging 2026-04-18 19:05:35 -07:00
71c624326e nixos: update locks and rootless podman prune 2026-04-18 19:05:35 -07:00
e9266b3b10 hypr: treat rumno as overlay 2026-04-18 19:05:35 -07:00
20d8c13656 taffybar: refresh lockfiles 2026-04-18 19:05:35 -07:00
795416e967 gitignore: ignore build artifacts and generated configs 2026-04-18 19:05:35 -07:00
afd93588a5 nixos: quiet warn-dirty and scope nixified-ai import 2026-04-18 19:05:35 -07:00
298ff71042 nixos: replace rcm with home-manager dotfile links 2026-04-18 19:05:35 -07:00
7861a7f61f Move taffybar back to top 2026-04-18 19:05:35 -07:00
0dcf4a7cd6 taffybar: bottom bar + fix Hyprland workspace widget config 2026-04-18 19:05:35 -07:00
f1a26f6be9 chore(nixos): bump git-sync-rs to v0.7.0 2026-04-18 19:05:35 -07:00
9b7b5c02e0 Add rofi launcher for tmux Codex and nixos agent notes 2026-04-18 19:05:35 -07:00
c250e2a4ff nixos/taffybar: propagate status-notifier-item dbus fix from upstream master 2026-04-18 19:05:35 -07:00
a2fa543115 Treat rbsf.tplinkdns.com as managed ssh host 2026-04-18 19:05:35 -07:00
de6a2df8a1 flake bump 2026-04-18 19:05:35 -07:00
c2c87f767b chore: unify codex skills under agents 2026-04-18 19:05:35 -07:00
d51a32910c feat: add logical-commits skill 2026-04-18 19:05:35 -07:00
6fe8a61d0a Add agent project constellation guides and link policy 2026-04-18 19:05:35 -07:00
ba8f8adbba Propagate taffybar lock updates into nixos flakes 2026-04-18 19:05:35 -07:00
a429eced1a Propagate status-notifier-item update into imalison-taffybar 2026-04-18 19:05:35 -07:00
c7e0b484dc Enable all terminfo entries for system 2026-04-18 19:05:35 -07:00
7e3c2d1b19 gitignore: ignore local status-notifier-item checkout 2026-04-18 19:05:35 -07:00
496943ba6f dotfiles: bump taffybar submodule 2026-04-18 19:05:35 -07:00
bd1d9fe385 nixos: refresh flake lock 2026-04-18 19:05:35 -07:00
0dd6575c67 dotfiles: add rofi agentic skill launcher binding 2026-04-18 19:05:35 -07:00
b7b2b4fbb5 nixos: fix switch on strixi-minaj nvidia build 2026-04-18 19:05:35 -07:00
d78b702b90 nixos: disable status-notifier-item checks in taffybar overlay 2026-04-18 19:05:35 -07:00
9fa6c87b7a nixos: add SSH TERM compatibility wrapper and ghostty terminfo 2026-04-18 19:05:35 -07:00
573856adb4 emacs: guard org-mode yasnippet disable hook 2026-04-18 19:05:35 -07:00
9e429a7634 hyprland: tune window animations and restart hyprscratch 2026-04-18 19:05:35 -07:00
5d9f2719e7 codex: add hyprland trust and OpenAI docs MCP server 2026-04-18 19:05:35 -07:00
a9dd87e8b4 flake: update inputs and lockfile sources 2026-04-18 19:05:35 -07:00
43e3a3db6f nixquick: disable railbird-k3s 2026-04-18 19:05:35 -07:00
ac8956895b nixos: adjust desktop package selection 2026-04-18 19:05:35 -07:00
e109698c23 nixos: set NetworkManager rc-manager to symlink 2026-04-18 19:05:35 -07:00
49d974341e nixos: bump keepbook flake input 2026-04-18 19:05:35 -07:00
6141cccaaa nixos: remove taffybar ecosystem follows and fix bitwarden rename
Remove top-level gtk-sni-tray, gtk-strut, status-notifier-item,
dbus-menu, and dbus-hslogger inputs that only existed as follows
targets. Let taffybar and imalison-taffybar resolve their own
ecosystem deps, eliminating cascading lock update headaches.

Also rename bitwarden -> bitwarden-desktop in kat.nix (nixpkgs rename).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
d61e5d7b48 Add email unsubscribe skill 2026-04-18 19:05:35 -07:00
b905f53e9e agents: Add worktrees instructions 2026-04-18 19:05:35 -07:00
dadeb01219 Update SNI tray inputs
Bump flake.lock pins for gtk-sni-tray and status-notifier-item to pick up watcher-restart dedupe and nix-friendly test bus config.
2026-04-18 19:05:35 -07:00
c956a12e87 nixos: enable keepbook sync 2026-04-18 19:05:35 -07:00
b71ee754c7 org-agenda-api: guard yas disable hook in container config 2026-04-18 19:05:35 -07:00
14606a2351 nixos: bump org-agenda-api input to v4.4.1 2026-04-18 19:05:35 -07:00
510937034c nixos: propagate status-notifier-item v0.3.2.3 2026-04-18 19:05:35 -07:00
4697105b2a taffybar: propagate status-notifier-item v0.3.2.3 2026-04-18 19:05:35 -07:00
2654d809d0 flake: bump status-notifier-item to v0.3.2.2 2026-04-18 19:05:35 -07:00
5334cda9a2 nixos: update status-notifier-item input to v0.3.2.2 2026-04-18 19:05:35 -07:00
56bc68d553 podman: Enable daily auto-prune with --all flag
Without --all, only dangling (untagged) images were pruned,
allowing tagged-but-unused CI images to accumulate (~223G).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
0b1c1554d6 Update tmux titling guidance in AGENTS 2026-04-18 19:05:35 -07:00
1010682e00 Ignore local .worktrees directory 2026-04-18 19:05:35 -07:00
97cb9312ce Ignore generated hyprscratch.conf symlink 2026-04-18 19:05:35 -07:00
3240b1c86b taffybar: Reorder widgets 2026-04-18 19:05:35 -07:00
89c4f43854 nixos: fix switch space issues and set hourly upgrades 2026-04-18 19:05:35 -07:00
a5a97e0dd4 Make railbird-sf update hourly 2026-04-18 19:05:35 -07:00
044f1ba3ca Bump flake.lock 2026-04-18 19:05:35 -07:00
fc7293493e Don't save plans 2026-04-18 19:05:35 -07:00
9d4706f70a docs: add hyprscratch migration plan 2026-04-18 19:05:35 -07:00
2c322f19fb codex: trust notifications-tray-icon project path 2026-04-18 19:05:35 -07:00
d42db28301 nixos: fix org icon path and disable conflicting power services 2026-04-18 19:05:35 -07:00
7757fc1719 emacs: retangle org config when any generated file is stale 2026-04-18 19:05:35 -07:00
1c461048d9 taffybar: refine tray behavior and add SNI menu debug tooling 2026-04-18 19:05:35 -07:00
5bfb1a5884 taffybar: align flake inputs and drop local overlay patch 2026-04-18 19:05:35 -07:00
ce25ccd975 Tighten taffybar widget spacing 2026-04-18 19:05:35 -07:00
4bf7ae75c1 nixos/sni: add kanshi-sni tray service 2026-04-18 19:05:35 -07:00
cf8892152c feat: add hyprscratch scratchpad management with rule reapply fork
- Add hyprscratch from colonelpanic8/reapply-rules-on-toggle fork that
  reapplies size/position/float rules on every toggle (not just spawn)
- Configure 9 scratchpads: htop, volume, spotify, element, slack,
  transmission, dropdown, gmail, messages
- Use clean mode (hide on workspace change) and auto-dismiss (only
  dropdown persists)
- Fix pavucontrol class (org.pulseaudio.pavucontrol) and dropdown
  position (offset by taffybar height)
- Add kanshi-sni, dbus-menu, dbus-hslogger flake inputs and follows

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
a15fee5b30 Make completion matching case-insensitive by default 2026-04-18 19:05:35 -07:00
07b8bb5aff chore: bump git-sync-rs and notifications-tray-icon flake inputs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
feb5883e29 refactor: split taffybar skill into ecosystem release and NixOS flake chain
Separate the taffybar ecosystem release workflow (Hackage publishing,
dependency graph, version bounds) from the NixOS flake chain integration
(three-layer flake.lock cascade, bottom-up update strategy). Each skill
cross-references the other.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
fb8df6e242 feat: propagate module enables (hyprland/xmonad -> taffybar -> sni)
Move enable propagation into the modules themselves instead of
desktop.nix. Relax the assertion to only prevent both taffybar
and waybar from being enabled simultaneously.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
59afc172b5 docs: note manual doc upload requirement for GTK-dependent packages
Hackage can't build docs for packages with GTK/GI system deps. This
affects taffybar, gtk-sni-tray, gtk-strut, and dbus-menu — docs must
be built locally and uploaded manually for those.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
a2dfc31787 refactor: separate ecosystem release from NixOS config in taffybar skill
Split the skill into two clear concerns: the taffybar org package
release workflow (dependency graph, Hackage publishing, version bounds)
and the personal NixOS flake chain integration. The NixOS section
explains the three-layer flake.lock cascade and why bottom-up updates
matter, without being prescriptive about always doing all layers.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
4081f9bcca feat: remove katnivan git-sync, add per-repo tray icons
Remove katnivan repository from both imalison and kat git-sync configs.
Add per-repo tray icon support with icon mapping for org and password-store.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
ab30a7f671 fix: remove muted color override on mpris label
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
3bef4d1460 feat: reorganize skills with canonical agents/ locations and symlinks
- Symlink dotfiles/claude/CLAUDE.md -> ../agents/AGENTS.md
- Move global skills to dotfiles/agents/skills/ as canonical location
  (hackage-release, org-agenda-api-production, release)
- Add new taffybar-ecosystem-release skill documenting the taffybar
  package dependency graph and release propagation workflow
- ~/.claude/skills/ entries are now symlinks to canonical locations
- email-cleanup and weekly-scheduling moved to ~/org/agents/skills/
- Removed stale debug-video-processing symlink

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
719a19473c feat: rename passgen to xkcdpassgen, fix arg passing, add number and char class controls
Fix first-invocation failure by passing "$@" through to the function.
Add number to default password suffix and allow opt-out of each
character class (-U uppercase, -N number, -S symbol).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
7db022c818 feat: add wlsunset home-manager module for Wayland night light
Replace the manual wlsunset package + commented exec-once with a proper
home-manager services.wlsunset module tied to hyprland-session.target.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
01834dcbda agents: add credential retrieval instructions using pass
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
edd967e163 feat: add ASUS platform profile widget and fix flake deps
- Add ASUS widget to laptop bar showing profile icon, CPU freq, and temp
- Add dbus-menu and dbus-hslogger flake inputs to fix gtk-sni-tray build
- Simplify CSS color rules for end-widget pills
- Update taffybar submodule with ASUS Information/Widget modules

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
e06782a435 feat: add screenLock and wlsunset widgets to taffybar config
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
b573745072 taffybar: remove accidentally committed debug code
Remove debugPopupSNIMenuHook, withDebugServer, and associated debug
imports/deps that were accidentally included in 8d6664d8.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
505ba47485 justfile: add remote-switch recipe for remote NixOS rebuilds
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
68609a1b49 flake: use direct commit URL for happy-coder patch
Moves the happy-coder patch from PR-based template to custom patches
using a stable commit URL instead of the PR number.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
2f0b733fd9 [NixOS] Disable k3s and enable autoUpgrade on jimi-hendnix 2026-04-18 19:05:35 -07:00
334eeefa76 nix flake update 2026-04-18 19:05:35 -07:00
3a23ad2960 nixquick: enable hourly auto-upgrade
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
f8e08cba31 nix: wrap git-sync-rs with convenience symlinks
Adds git-sync and git-sync-on-inotify as symlinks to git-sync-rs binary.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
454e13575f git-sync: remove config repository
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
57be8c2d47 hyprland: add empty monitors and workspaces config files
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
86fd99c588 taffybar CSS: remove inline menu/popover styling
The narrowed :not(menu):not(menuitem):not(popover):not(window) selectors
now prevent bar styles from bleeding into popup menus, making the
explicit menu overrides unnecessary. Menus inherit clean styling from
the GTK theme instead.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
ba952b4b57 taffybar CSS: extract per-widget colors to nth-child rotation
Replace individual .outer-pad.audio, .outer-pad.network, etc. color
rules with a 5-color palette in end-widget-colors.css that cycles via
:nth-child(5n+N). Add workspace pill reset to prevent the rotation
from bleeding into workspace widgets. Move per-widget color variables
from theme.css into end-widget-colors.css (keep tray colors in theme
since the SNI tray is a center widget outside the rotation).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
a258af58d6 taffybar CSS: narrow selectors to prevent menu/popover color bleed
Use :not(menu):not(menuitem):not(popover):not(window) guards on all
wildcard selectors so bar typography and background-color rules don't
bleed into SNI popup menus and popovers attached via menuAttachToWidget.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
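The guard pattern from this commit, sketched in isolation (the bar class name and color are illustrative, not the repo's actual values):

```css
/* A wildcard bar rule would otherwise also match menus and popovers
   that GTK reparents under the bar via menuAttachToWidget. The :not()
   chain excludes those widget types so they keep theme styling. */
.taffy-box *:not(menu):not(menuitem):not(popover):not(window) {
  color: #f1f1f1;
  background-color: transparent;
}
```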
0302cadd22 taffybar: reduce SNI tray overlay icon size (2/5 -> 1/3)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
f7cc33fc36 flake.lock: update inputs and remove stale HLS-related locks
Update home-manager, NixOS-WSL, status-notifier-item, and
notifications-tray-icon. Remove lock entries for the old
haskell-language-server input and its transitive dependencies.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
b67aeb20be notifications-tray-icon: add dedicated module and update flake input
Add notifications-tray-icon.nix module with overlay and home-manager
service config. Update flake input to colonelpanic8 fork and remove
stale HLS input follows. Clean up xmonad.nix by removing the old
overlay reference and commented-out service definition.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
72f09a1c8b hyprland: replace services.kanshi with package, add nwg-displays
services.kanshi is a Home Manager option, not a NixOS module option,
so the NixOS build was failing. Install kanshi as a package instead.
Also add nwg-displays for GUI monitor arrangement.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
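The distinction this commit fixes, sketched as a NixOS module fragment (a hedged sketch, not the repo's actual module):

```nix
{ pkgs, ... }: {
  # services.kanshi is a Home Manager option; referencing it from a
  # NixOS module fails evaluation. Install the tools as packages instead.
  environment.systemPackages = [ pkgs.kanshi pkgs.nwg-displays ];
}
```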
6f57129de9 git-sync: enable tray indicator via GIT_SYNC_TRAY env var
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
e67f760b68 Replace CLAUDE.md symlink with standalone file
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
9608658be5 flake: add git-sync-rs as top-level input and deduplicate
Move git-sync-rs from a nixpkgs PR patch to a direct flake input,
add overlay and package, and have org-agenda-api follow it.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
46156387a7 taffybar: fix popup menu styling and submenu color bleed
Use :not(menu):not(menuitem):not(popover) selectors to prevent
forcing transparent backgrounds on popup menus attached via
menuAttachToWidget. Add dedicated .dbusmenu-submenu styling.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
8a75f7a8f4 hyprland: add ghostty dropdown scratchpad and specialWorkspace animation
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
c89593946a Move dunst config from xmonad.nix to desktop.nix
Dunst works on both X11 and Wayland, so it belongs in the shared
desktop config rather than under the xmonad-specific module.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
e05120791b Add IMALISON_SESSION_TYPE=x11 condition to picom and autorandr
Both are X11-only services that shouldn't start in Wayland sessions.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
feaca43946 Set IMALISON_SESSION_TYPE env var for reliable X11/Wayland detection
systemd user environment persists across login sessions, so
XDG_SESSION_TYPE can be stale. Proactively set a custom variable
on each session start so ConditionEnvironment checks are reliable.

Use it for xsettingsd and random-background (X11-only services).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
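A hedged sketch of the mechanism described here (the unit shape and wiring are assumptions; only the variable name and ConditionEnvironment usage come from the message). A session-start hook publishes the variable into the systemd user manager, and X11-only user units gate on it:

```ini
# Published once per session start, e.g.:
#   systemctl --user set-environment IMALISON_SESSION_TYPE=x11
[Unit]
Description=xsettingsd (X11 sessions only)
# Evaluated against the user manager environment; because the custom
# variable is re-set on every session start, it stays reliable where
# XDG_SESSION_TYPE can be stale.
ConditionEnvironment=IMALISON_SESSION_TYPE=x11

[Service]
ExecStart=/usr/bin/env xsettingsd
```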
63392614b1 Enable daily auto-upgrade on strixi-minaj, railbird-sf, and ryzen-shine
Set the autoUpgrade flake reference globally in configuration.nix so
machines only need to opt in with system.autoUpgrade.enable.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
ee0d21f8db taffybar: fix SNI popup menu styling (white bg, black text, blue hover)
GTK's menuAttachToWidget makes popup menus CSS descendants of the tray
widget, so .outer-pad.sni-tray * (specificity 0,2,0) was bleeding light
tray text colors into menu items.  Fix by using the same parent selectors
with menu descendant types (.outer-pad.sni-tray menu menuitem *) for
specificity 0,2,2+ that definitively overrides the tray color rule.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
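The specificity arithmetic in this message, written out (selectors are from the message; the color values are illustrative):

```css
/* (0,2,0): two class selectors, no type selectors. This is the rule
   that was bleeding light tray colors into menu items. */
.outer-pad.sni-tray * { color: #f1f1f1; }

/* (0,2,2): the same two classes plus two type selectors (menu,
   menuitem), so it outranks the rule above without !important. */
.outer-pad.sni-tray menu menuitem * { color: #000; background-color: #fff; }
```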
134c1aef8f fix: disable use-package :ensure in container config
The tangled org-config now includes the org-window-habit use-package
block (added in efc50ec1) which uses elpaca-style :ensure recipes.
Standard use-package cannot parse these. Override the normalizer to
accept and ignore :ensure since all packages are provided via Nix.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
2ebf3f6493 flake: deduplicate inputs with follows
Add follows declarations to reduce duplicate dependency copies:
- nixpkgs: 16 → 8 copies
- flake-utils: 9 → 1 copy
- systems: 12 → 2 copies

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
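The `follows` pattern behind these numbers, as a hedged flake sketch (input names are placeholders, not the repo's actual inputs):

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    flake-utils.url = "github:numtide/flake-utils";
    # Point a dependency's own copies at the top-level inputs so the
    # lock file records one shared instance instead of duplicates.
    some-input = {
      url = "github:example/some-input";
      inputs.nixpkgs.follows = "nixpkgs";
      inputs.flake-utils.follows = "flake-utils";
    };
  };
  outputs = _: { };
}
```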
2458560e17 chore: bump org-agenda-api (mova 5.20.0)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
da9850a007 Update flake inputs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
523835e2e9 Add happy-coder package
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
c2f8898e50 taffybar: add SNI menu, withLogLevels, and clean up deps
- Wrap network widget with withNmAppletMenu for click-to-open menu
- Use withLogLevels hook instead of manual enableLogger
- Remove unused aeson, directory, yaml dependencies

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
ce71555c20 hyprland: switch htop scratchpad terminal to alacritty
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
ddf35984d3 Fix deprecated xorg package references
Rename xorg.{libXdmcp,libXtst,xev,xwininfo} to their new top-level names.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
873a769a54 taffybar: change default log level from INFO to WARNING
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
44593a204f taffybar: add yaml/aeson deps and log-levels config file
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
48507ad95d Bump taffybar submodule and update flake.lock
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
51f11df390 Update taffybar (ScalingImage refactor), add toggle keybinding
- Bump taffybar submodule with ScalingImage refactor replacing autoSizeImage
- Add hyper+slash keybinding for toggle_taffybar
- Simplify mprisWidget wrapper in taffybar.hs
- Update flake.locks for gtk-sni-tray
- Add codex trust for taffybar submodule
- Add waybar widget ideas notes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
7cbacd3ff4 chore: bump taffybar submodule (gtk-sni-tray 0.1.11.2)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
3beec0117c taffybar: bump gtk-sni-tray and update widget layout
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
b9f3f0c28a Fix tmclaude to avoid sandboxing 2026-04-18 19:05:35 -07:00
6b3fbc83bd Use ghostty 2026-04-18 19:05:35 -07:00
c80f1addb2 taffybar: accent-tinted active workspace, white border active window
Use accent color (#f1b2b2) tint for the active workspace pill instead of
a white outline, so it's visually distinct from the active window
highlight. Add white border and background to the active window icon
container within workspaces. Also add nerd font family to mpris icon.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
1db1da6378 taffybar: add nerd font icon to mpris widget
Wrap the mpris grid in a box with a nerd font music note icon (U+F075A)
so it follows the same icon+label pattern as other widgets.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
5eee144236 taffybar: bump font size to 11pt and add icon-label spacing
Increase global font size from 9pt to 11pt for better readability.
Add padding-right on icon-label icon elements to prevent nerd font
glyphs from overlapping adjacent text. Consolidate battery CSS
selectors to match the new single-widget structure.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
5a5758720b taffybar: use nerd font icon+label pairs for battery and disk widgets
Replace the separate batteryIconWidget + batteryTextWidget with a single
batteryWidget using batteryTextIconNew paired with textBatteryNew via
buildIconLabelBox. Switch diskUsageWidget from diskUsageLabelNew to
diskUsageNew which includes a nerd font disk icon.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
2fa6a364ad chore: bump taffybar submodule (gtk-sni-tray 0.1.10.3)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
d2f718e04e chore: bump taffybar submodule (gtk-sni-tray update)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
4c66ad7626 chore: update user config to use new icon-label widget variants
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
979376c2b0 taffybar: add comments explaining recent CSS additions
Document the @import url() requirement, per-widget color variables,
workspace label positioning via padding (not margin, which GTK
overlays ignore), asymmetric workspace padding, active workspace
outline targeting, and bar border-radius.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
efd18cc704 taffybar: add white outline to active workspace squircle
Target .workspaces .active .outer-pad with an inset box-shadow to
highlight the currently focused workspace pill.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
4bf8ff09fe taffybar: lighter bar background and rounded corners
Reduce bar alpha from 0.55 to 0.35 for more transparency and add
6px border-radius to match widget squircles.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
61cec69e67 taffybar: reduce right-side padding on workspace pills
Asymmetric inner-pad padding (10px left, 3px right) so workspace
number labels have room on the left without extra space on the right.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
661ec9e73b taffybar: reduce widget squircle border-radius for squarer shape
outer-pad: 12px -> 6px, inner-pad: 9px -> 4px

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
792a86d44a taffybar: position workspace number labels inside squircle pills
Use padding on the workspace-label (not margin on the overlay-box,
which GTK overlays ignore) to inset the number into the pill area.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
59af32457b taffybar: add distinct colored squircle backgrounds per widget
Fix CSS import syntax (bare `import` -> `@import url()`) so
@define-color variables from theme.css are available. Define
per-widget background/foreground/border colors and add CSS rules
for clock, disk-usage, sni-tray, battery, and backlight widgets.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
87b03b51f9 hyprland: add rofi icons to window go/bring/replace pickers
Add desktop-entry icon lookup to the rofi window picker scripts,
matching the XMonad setup's icon support. A shared helper script
(window-icon-map.sh) builds a class→icon mapping from .desktop files
and each picker uses rofi's dmenu icon protocol (\0icon\x1f).

Also replaces the X11-only "rofi -show window" with a native
Hyprland window picker using hyprctl clients.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
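The `\0icon\x1f` row protocol mentioned above can be sketched as follows (the `entry` helper and the icon name are illustrative; only the byte layout comes from the message):

```shell
# rofi dmenu icon protocol: each row is
#   <label> NUL "icon" 0x1f <icon-name> NL
# \0 and \037 are the octal printf escapes for NUL and 0x1f.
entry() {
  printf '%s\0icon\037%s\n' "$1" "$2"
}

# "Firefox" (7) + NUL (1) + "icon" (4) + 0x1f (1) + "firefox" (7) + NL (1)
entry "Firefox" "firefox" | wc -c   # 21 bytes
```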
87868495f8 Replace codex_tmux with generic trw, tmclaude, tmcodex helpers
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
89716b927f nixos: load module-dbus-protocol for PulseAudio DBus support
The taffybar audio widget requires PulseAudio's DBus interface
(module-dbus-protocol) to read volume/mute state. Without it the
widget shows "n/a".

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
999883ea9a taffybar: use library detectBackend for wayland/hyprland discovery
Remove detectBackendRobust and discovery helpers from taffybar.hs now
that the equivalent logic lives in System.Taffybar.Context.Backend
(taffybar PR #625). Update submodule and flake.lock accordingly.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
b8fce3dfd4 nixos: update flake.lock
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
8214c886d8 taffybar: update submodule and flake.lock
Bump taffybar submodule, gtk-sni-tray, status-notifier-item, xmonad,
and nixpkgs inputs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
a20c53348e taffybar: add AGENTS.md
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
b633b6e696 taffybar: add taffybar-crop-bar utility
Shell script that crops the top bar region from a screenshot using
ffmpeg, auto-detecting height from Hyprland's reserved area.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
189dd9c339 taffybar: use button and overlay controllers for workspace widget
Wire up hyprlandBuildButtonController and
hyprlandBuildCustomOverlayController so workspace buttons are
clickable and the overlay layout is explicitly configured.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
ddc93c8a2a taffybar: robust Wayland/Hyprland environment discovery
Instead of relying solely on environment variables (which can be stale
from systemd --user), actively discover wayland sockets and Hyprland
instance signatures from XDG_RUNTIME_DIR.  Fix up the process
environment so taffybar's internal backend detection agrees, and also
correct XDG_SESSION_TYPE in both directions.  Add INFO-level logging
for backend selection.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
05c7d20b5c taffybar: add theme.css and update CSS styling
Extract color variables into a dedicated theme.css and import it from
the main stylesheet.  Remove the bar gradient in favor of a flat
background, adjust workspace overlay-box margins, add SNI tray
double-padding fix, and clean up whitespace.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:35 -07:00
a22e4cfb8b remove ivanm-dfinity-razer and uber-loaner host references
Both hosts are long dead. Removes their CSS files, taffybar host
config entries, and synergy aliases.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:34 -07:00
09a57e4076 ci: fix gh-pages build by setting user-emacs-directory
The org-config macro reads preface/custom/config/bind .el files from
user-emacs-directory at macro-expansion time. In CI this defaulted to
~/.emacs.d/ where those files don't exist, causing the build to fail.
Point it at the repo's emacs.d directory instead.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:34 -07:00
eb7338ba37 remove xremap from NixOS config
keyd handles all key remapping now, so xremap is no longer needed.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:34 -07:00
581cb40c08 emacs: add missing tangle directive for org-window-habit
The org-window-habit use-package block had no :tangle header, so it
was never written to org-config-config.el. This meant the mode was
never activated and the advice on org-habit-parse-todo was never
installed, causing errors for habits without a scheduled repeater.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:34 -07:00
04f92f632d emacs: fix transient void-variable on pgtk builds
The pgtk Emacs build lacks HAVE_TEXT_CONVERSION so
overriding-text-conversion-style is void, but transient's .elc
compiled on X11 has static-if expanded to reference it directly.
Define the variable before transient loads when it's missing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-18 19:05:34 -07:00
dece70564d hypr: improve minimize/unminimize workflow 2026-04-18 19:05:34 -07:00
151a886b7a taffybar: restore SNI tray; name wayland widgets hyprland 2026-04-18 19:05:34 -07:00
ac4fa3b189 flakes: switch codex-cli-nix back to sadjow/main 2026-04-18 19:05:34 -07:00
3a68b8e23d agents: encourage nix run/shell for ad-hoc tools 2026-04-18 19:05:34 -07:00
9c801771e6 taffybar: fix Hyprland context deadlock 2026-04-18 19:05:34 -07:00
789b1d96b5 nixos: unify wallpapers via syncthing
Start hyprpaper via a user service and set wallpaper via IPC on session start.
Point xmonad random-background at the same Syncthing wallpaper directory (X11 only).
2026-04-18 19:05:34 -07:00
9f3f5c5a9d nixos: add emacs-auto wrapper + desktop files 2026-04-18 19:05:34 -07:00
bffae091d4 xmonad: bind Hyper keys via Ctrl+Alt+Super chord 2026-04-18 19:05:34 -07:00
bfba9f58d0 taffybar: avoid Wayland backend with stale systemd env
When taffybar is started via systemd --user, the manager environment can keep WAYLAND_DISPLAY/HYPRLAND_INSTANCE_SIGNATURE from an older Wayland session even when we're currently running an X11 xmonad session.

This causes taffybar's backend detection to select the Wayland backend, which prevents X11 struts from being applied and leads to incorrect placement.

Add a small wrapper backend detector that checks for an actual wayland socket and, if missing, sanitizes the env so taffybar's internal context uses the X11 backend.
2026-04-18 19:05:34 -07:00
5f5bc8ec54 taffybar: menu css debugging tweaks 2026-04-18 19:05:34 -07:00
4a431d410d Update codex input and switch command 2026-04-18 19:05:34 -07:00
1f0d46ef3e flake: update inputs
Updated nixpkgs, home-manager, nix, and related inputs after running nix flake update.
2026-04-18 19:05:34 -07:00
a0780c6abe strixi-minaj: stop managing waybar disks file
Avoid home-manager overwriting the repo-tracked waybar disks list when ~/.config is symlinked into this repo.
2026-04-18 19:05:34 -07:00
8c14ba9d9d taffybar: bump submodule for new widgets
Track the local taffybar submodule commit that adds NetworkManager-backed widgets.
2026-04-18 19:05:34 -07:00
92de6c2c01 nixos: add chrome devtools desktop entry
- Add a google-chrome launcher with remote debugging enabled
- Add a tmux attach shell alias
2026-04-18 19:05:34 -07:00
0c6363c793 hyprland: cap workspaces and add empty-workspace helpers
- Introduce HYPR_MAX_WORKSPACE (default 9) and enforce it in scripts
- Replace 'workspace empty' bindings with scripts that pick an empty id
- Add scroll-to-workspace helper for mouse wheel binds
2026-04-18 19:05:34 -07:00
9dc6aacb25 codex: bump default model
Switch to gpt-5.3-codex.
2026-04-18 19:05:34 -07:00
769fe29de0 taffybar: refresh config and add helpers
- Remove stack config in favor of cabal/flake
- Add helper scripts for running/restarting and screenshots
- Update bar CSS/HS config
2026-04-18 19:05:34 -07:00
937b1c218c waybar: add nowplaying module
- Add a playerctl-backed now playing widget
- Track a default disk list file instead of a home-manager symlink
2026-04-18 19:05:34 -07:00
b089d7701c nixos: change home-manager backup extension
Use 'hm-backup' globally and remove the per-user override.
2026-04-18 19:05:34 -07:00
1f60631e6c nixos: add quickshell/waybar/taffybar modules
- Switch taffybar input to the local submodule
- Add caelestia quickshell (home-manager module)
- Make waybar/taffybar mutually exclusive, defaulting based on xmonad
- Move tray ordering and status notifier watcher config into the right modules
2026-04-18 19:05:34 -07:00
2d8d5d7fcb Hyprland: share workspace CSS helpers 2026-04-18 19:05:34 -07:00
c5c0647542 Hyprland: reuse workspace widgets 2026-04-18 19:05:34 -07:00
e617e29197 Hyprland: default special workspace filter 2026-04-18 19:05:34 -07:00
9c40ba1013 taffybar: use submodule and improve hyprland config 2026-04-18 19:05:34 -07:00
7005b042f0 hyprland: move cursor with moved window 2026-04-18 19:05:34 -07:00
76d9145e88 Fix gather-class to move all windows 2026-04-18 19:05:34 -07:00
fea5f56555 Add codex_tmux_resume helper 2026-04-18 19:05:34 -07:00
d3d7eb3586 Manage mimeapps via home-manager 2026-04-18 19:05:34 -07:00
b7fd7e60ba waybar: tweak sizing defaults 2026-04-18 19:05:34 -07:00
a88e431a7d waybar: center widgets above bottom line 2026-04-18 19:05:34 -07:00
d9f5415660 Remove legacy XKB config 2026-04-18 19:05:34 -07:00
bc6f713de9 nixos: split SNI tray services from xmonad 2026-04-18 19:05:34 -07:00
ca7116d49a railbird-sf: enable k3s-single-node module
Switch from the multi-node railbird-k3s agent setup to the new
single-node k3s module with integrated GPU/CDI support.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-04-18 19:05:34 -07:00
f6e512f63f Add k3s-single-node module with GPU/CDI support
Provides a NixOS module for running a single-node k3s cluster with
NVIDIA GPU support via CDI (Container Device Interface). Includes
automatic deployment of the generic-cdi-plugin DaemonSet for GPU
resource allocation.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-04-18 19:05:34 -07:00
dcbea6ee58 Use custom waybar fork 2026-04-18 19:05:34 -07:00
e0892d780d keyd: add MoErgo hyper mapping 2026-04-18 19:05:34 -07:00
ed5341e988 docs: note nixos flake + just switch 2026-04-18 19:05:34 -07:00
35475da5b8 nix-darwin: add claude-code and codex via dedicated flakes
Add flake inputs for codex-cli-nix and claude-code-nix with cachix
caching for pre-built binaries, matching the NixOS setup.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 14:27:52 -08:00
6d283347ee Add Waybar disk list via HM 2026-02-04 12:35:23 -08:00
0d47dba590 chore: bump org-agenda-api input 2026-02-04 12:34:36 -08:00
657a80ac52 waybar: enlarge bar and tighten spacing 2026-02-04 12:33:56 -08:00
b1284f787d rofi-pass: include x11 + wayland deps 2026-02-04 12:15:10 -08:00
de64062078 Enable UWSM for Hyprland and tie Waybar to session 2026-02-04 12:13:15 -08:00
b1c1cf9ddc Add Hyprland workspace swap command 2026-02-04 12:02:00 -08:00
304376397b waybar: use workspace taskbar icons 2026-02-04 11:47:28 -08:00
68f9809798 hypr: fix volume scratchpad matching 2026-02-04 09:24:49 -08:00
71b38811dd Add codex_tmux 2026-02-04 02:49:16 -08:00
3ff056e6d9 codex: allow apps 2026-02-04 02:49:03 -08:00
98c85c5727 Enable frame-mode under Hyprland 2026-02-04 02:48:55 -08:00
3a405c0520 hyprland: restore mod+backslash workspace toggle 2026-02-04 02:47:12 -08:00
65a329258e Add agentic config files 2026-02-04 02:15:59 -08:00
46f16c406c Improve Hyprland scratchpads 2026-02-04 01:41:00 -08:00
b78c7b7f2c Revert "[Emacs] claude-code-ide"
This reverts commit 6dc320ff1c.
2026-02-04 01:31:31 -08:00
9f554b3976 emacs: tangle org-config outputs before load 2026-02-04 01:31:04 -08:00
239013386f Switch back to workspaces 2026-02-04 01:15:49 -08:00
1e3321c2b0 tmux: Remove tmux auto naming 2026-02-04 00:46:02 -08:00
f1142f58a8 tmux: remove prompt from codex session shortcut 2026-02-04 00:36:58 -08:00
2a24fde229 nixos: add ghostty to desktop packages 2026-02-04 00:28:44 -08:00
7f61090a82 tmux: add Codex session shortcut 2026-02-04 00:28:40 -08:00
56c9ddf508 Load org-config preface on direct load 2026-02-04 00:25:49 -08:00
5c3c55c582 waybar: switch to wlr taskbar 2026-02-04 00:18:53 -08:00
b234cbec56 hyprland: configure hyprexpo overview 2026-02-04 00:18:45 -08:00
e3ad0d857e Add shared agent instructions and tmux titling hook 2026-02-04 00:17:56 -08:00
2f2bb59693 org: tangle org-config during export 2026-02-04 00:13:04 -08:00
9b42d002cf docs: drop old githook export notes 2026-02-04 00:12:36 -08:00
9187b7381a docs: remove unused packages from README 2026-02-04 00:11:20 -08:00
0c47603d8d emacs: bind vertico TAB to embark 2026-02-03 23:33:30 -08:00
0aa3dc14f3 rofi: adjust colorful style font 2026-02-03 23:33:26 -08:00
c25da8d505 terminal: switch back to alacritty 2026-02-03 23:33:18 -08:00
389f746a94 hypr: switch to hyprexpo and stabilize waybar 2026-02-03 23:32:47 -08:00
ae5036721a ui: tweak alacritty and rofi sizing 2026-02-03 22:03:37 -08:00
d3259bbb26 hypr: tweak scaling, fullscreen, and scratchpads 2026-02-03 22:03:02 -08:00
d87da32dee terminal: switch to ghostty 2026-02-03 22:01:59 -08:00
b2ff5f1ae5 hypr: add hyprspace overview and waybar config 2026-02-03 22:01:14 -08:00
478ceed777 Ignore generated fontconfig defaults 2026-02-03 20:52:32 -08:00
31b7174624 emacs: ignore elpaca directory 2026-02-03 20:40:28 -08:00
ab8cf11b80 nixpkgs: use emacs30-pgtk 2026-02-03 20:37:05 -08:00
b3b405ec7e keyd: align hyper chord with hyprland 2026-02-03 20:36:58 -08:00
0e16fbaeb0 audio: support Hyprland in toggle_mute_current_window 2026-02-03 20:33:46 -08:00
af9c3299cb hyprland: restore config and drop hyprexpo 2026-02-03 20:33:30 -08:00
b1f578b248 hyprland: drop hyprexpo and update bindings 2026-02-03 20:31:57 -08:00
ebfbca4827 nixos: remove forEachUser helper 2026-02-03 20:30:39 -08:00
8620cc6287 Fix rofi DPI auto-detect 2026-02-03 20:16:02 -08:00
c75d1e59e0 hyprland: update config and workspace script 2026-02-03 20:15:54 -08:00
781c3b8297 picom: use my-picom branch with spring physics and animation fixes
- Point to my-picom branch (clean merge of spring-physics + animation-fixes PRs)
- Includes: spring physics curve, adaptive settling threshold, position detection fix
- Add suppressions to geometry animation to prevent opacity changes from
  interrupting position/scale animations

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 17:20:25 -08:00
5d44ec0aaa Bump org-agenda-api to de1c60b
Includes:
- Mova bump to e9cfb65
- org-window-habit fix for logbook parser reading next entry's LOGBOOK
- fetchYarnDeps hash update for new mova

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 14:37:25 -08:00
f52635d139 taffybar: update submodule to latest
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 20:01:07 -08:00
2bcfa3df1a picom: use spring physics for open/close animations
- Scale: spring with bounce (clamping=false) for a "pop" effect
- Opacity: spring with clamping=true to prevent going above 1 or below 0
- Close uses clamping=true on scale to avoid bounce when closing

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 13:21:24 -08:00
dd3931249f picom: slow down spring animations and add debug logging
Spring parameters changed from (250, 20, 1) to (150, 18, 1.5):
- Reduced stiffness: 250 -> 150 (slower oscillation)
- Slightly reduced dampening: 20 -> 18
- Increased mass: 1 -> 1.5 (more "weight", slower motion)

Also added --log-level=debug --log-file to picom service for
monitoring animation triggers.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 13:18:46 -08:00
daf2911540 picom: add scale animation for size changes in geometry trigger
When both size AND position change (common in tiling WMs), picom's
win_position_changed() returns false. Only size_changed is true.
This means the geometry trigger fires but only scale-x/y change,
not offset-x/y.

Added scale-x and scale-y with spring physics to the geometry
animation so windows animate properly when resized.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 13:10:33 -08:00
8bce7b05c6 taffybar: remove crypto price widgets
Remove ICP, BTC, and ETH crypto price widgets and related CMC API key setup.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 12:51:28 -08:00
3e8ff68f15 picom: completely replace home-manager's picom with custom service
Home-manager was concatenating its generated settings with our custom
config. Disable home-manager's picom entirely and create our own
systemd service that uses only our config file.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 12:42:42 -08:00
f22a96e37b picom: disable home-manager settings generation entirely
Set settings = {} to prevent home-manager from generating any picom
config settings that would be appended to our custom config file.
This was causing duplicate/conflicting settings.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 12:39:40 -08:00
8045c4cc9d chore: update org-agenda-api to v4.4.0 (overdue_behavior feature)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 12:37:45 -08:00
ff90c0ab3f picom: update hash for settling threshold fix
Increases settling threshold to 1.0 pixel to more aggressively stop
the spring animation when visually settled, preventing sub-pixel
jitter at the tail end.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 12:32:34 -08:00
baf8b17098 Remove codex PR patch (merged upstream)
PR 483705 has been merged into nixpkgs-unstable.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 10:52:53 -08:00
cd51704a0c Fix appearance setup: enable font and add nerd-icons
- Remove :tangle no from font setting so JetBrainsMono gets applied
- Add nerd-icons package (required by doom-modeline v4+)
- Add :after nerd-icons to doom-modeline for proper load order

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 22:31:48 -08:00
03c8fd52a3 picom: write complete config directly to fix animations syntax
Give up on @include workarounds - libconfig doesn't support ~.
Write the complete picom config directly with correct () list syntax
for animations, using xdg.configFile with force=true to override
home-manager's generated config.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 13:44:15 -08:00
11cbe51ab8 picom: use absolute paths in @include directives
The config files are symlinks to nix store, so relative paths don't
resolve correctly. Try using ~/.config/picom paths.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 13:41:24 -08:00
aacb4e5e99 picom: use @include directive for config composition
picom doesn't support multiple --config flags. Use a wrapper config
with libconfig @include directive to merge the base config and
animations config.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 13:38:47 -08:00
dd3c638461 picom: use %h systemd specifier for home directory
XDG_CONFIG_HOME is not set in the systemd environment, causing the
animations config path to resolve incorrectly.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 13:35:29 -08:00
ba91ff5b82 picom: fix animations config syntax for libconfig
Home-manager generates [] (arrays) but picom needs () (lists) for
the animations setting. Move animations to a separate config file
with correct syntax and override the picom service to load both.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 13:32:50 -08:00
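The libconfig distinction at issue, in a minimal hedged sketch (option names loosely follow picom's animations config and may differ from the actual files):

```
# Arrays use [] and hold scalars of a single type:
triggers = [ "open", "show" ];

# Lists use () and may hold groups, which is what the animations
# setting expects; home-manager was emitting [] here instead.
animations = (
  { triggers = [ "open" ]; preset = "appear"; }
);
```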
0493a60f95 picom: fix version to match mainline v13
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 13:10:21 -08:00
68d554d2b2 picom: switch to mainline with spring physics animations
Update picom from dccsillag fork to colonelpanic8/picom spring-physics
branch, which adds spring physics animation support to mainline picom.

Spring curve syntax: spring(stiffness, dampening, mass, clamping)
- stiffness: spring constant (higher = faster)
- dampening: resistance (higher = less oscillation)
- mass: inertia (higher = slower)
- clamping: prevent overshoot (false for bounce effects)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 13:07:30 -08:00
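A hedged config sketch of the curve syntax documented in this message (the surrounding animations grammar is an assumption about the fork, not a tested config; only the `spring(stiffness, dampening, mass, clamping)` form comes from the message):

```
animations = (
  {
    triggers = [ "open" ];
    # spring(stiffness, dampening, mass, clamping)
    # clamping = false allows overshoot for the bounce/"pop" effect.
    scale-x = { curve = "spring(250, 20, 1, false)"; start = 0.8; end = 1; };
  }
);
```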
5b92b7ee01 Update elpaca migration for current master
- Convert remaining :straight to :ensure (claude-code-ide)
- Update elpaca-installer.el from v0.7 to v0.11
2026-02-01 00:54:25 -08:00
Nicholas Vollmer
0d21252693 fixup use Elpaca commit
- Ensure vertico-directory :after vertico
- Load org config using expand-file-name instead of load-file-name
- Add :ensure nil to subsequent Org use-package declarations
2026-02-01 00:54:25 -08:00
Nicholas Vollmer
8794829f81 use Elpaca for package management 2026-02-01 00:54:25 -08:00
e12f65de10 Elpaca migration wip 2026-02-01 00:54:25 -08:00
06dde4652d Add extraDomains support and register rbsf.railbird.ai
- org-agenda-api-host now supports extraDomains option for additional
  domain names, each with its own ACME certificate
- Add org-agenda-api.rbsf.railbird.ai as extra domain on railbird-sf

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 22:43:12 -08:00
91db521dfd Update org-api-passwords 2026-01-31 22:30:26 -08:00
9c51555847 nixos: Use codex-cli-nix and claude-code-nix flakes with cachix
Switch from manual version overrides to dedicated flakes:
- github:sadjow/codex-cli-nix for codex
- github:sadjow/claude-code-nix for claude-code

Added cachix substituters and keys for pre-built binaries.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 21:45:25 -08:00
89ce2116cf Update org-agenda-api input to include modular org-window-habit
- org-window-habit refactored from single file to modular structure
- org-agenda-api test linting fixes

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 21:10:29 -08:00
3b7c053575 chore: update org-agenda-api with modular org-window-habit
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 20:35:11 -08:00
865957d9ac Fix org-api-ssh-key secret (was empty) 2026-01-31 20:18:54 -08:00
22360ddeea Add split org-api secrets for auth password and SSH key
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 20:15:48 -08:00
301e2a1479 Split org-api secrets into auth password and SSH key
- Auth password uses env file format for systemd EnvironmentFile
- SSH key is mounted as a file at /secrets/ssh_key in container
- Fixes multi-line SSH key parsing issue in environment files
- Update codex PR patch hash

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 20:14:48 -08:00
34cacdc40d Update org-api-passwords.age with proper env file format
Include AUTH_PASSWORD and GIT_SSH_PRIVATE_KEY for container deployment.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 19:52:02 -08:00
7a1a612397 Use home-manager.sharedModules for shared user config
Replace manual forEachUser pattern with built-in sharedModules for
applying config to all home-manager users. Add automatic garbage
collection of old generations (weekly, older than 7 days) and remove
the now-unnecessary expire-home-manager-generations justfile recipe.

Also update codex PR patch hash (upstream patch was modified).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 19:48:50 -08:00
317019d5bc Keep imalison: prefix for reschedule-past-to-today
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 18:49:52 -08:00
ad5c3fc8ed Re-encrypt org-api-passwords.age with railbird-sf key
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 14:00:31 -08:00
6b1e5a1aec Update org-agenda-api for exposed functions feature 2026-01-31 14:00:03 -08:00
f28a78b053 Add org-reschedule-past-to-today as exposed function
- Rename imalison:reschedule-past-to-today to org-reschedule-past-to-today
- Keep old name as alias for backwards compatibility
- Register function in org-agenda-api-exposed-functions for API access

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 13:59:28 -08:00
c4867e7845 Re-encrypt org-api-passwords.age with railbird-sf key
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 13:56:35 -08:00
c36ae5f4a8 Re-encrypt age secrets with updated keys
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 12:08:15 -08:00
4d8447ccb2 Increase gitea-runner container shared memory to 2GB
The default 64MB shm-size is too small for Metro/Gradle builds.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 11:41:45 -08:00
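With the NixOS oci-containers module, the shared-memory bump would be passed through the container's extra runtime options; a hedged sketch (the exact attribute path depends on how the runner container is declared):

```nix
# hypothetical sketch -- raise /dev/shm from the 64MB default to 2GB
virtualisation.oci-containers.containers.gitea-runner.extraOptions = [
  "--shm-size=2g"
];
```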
f241d76734 Fix xmonad greenclip binding and flake submodule issues
- Inline rofi greenclip command in xmonad.hs instead of calling script
- Update xmonad flake.nix to use github sources instead of path:
  (path: inputs don't work with git submodules in nix flakes)
- Update flake.lock with new inputs

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 11:41:09 -08:00
bf02a7ee4c Add codex 0.92.0 PR patch and fix deprecation warnings
- Add nixpkgs PR 483705 to bump codex from 0.89.0 to 0.92.0
- Disable overlay version overrides in favor of PR patches
- Replace deprecated pkgs.system with pkgs.stdenv.hostPlatform.system
- Replace deprecated xfce.thunar with thunar

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 00:15:33 -08:00
a716ec1694 Update org-agenda-api to 9f5b9d4 (org-window-habit bump)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 17:59:10 -08:00
e7a455ced9 feat(nixos): add org-agenda-api hosting with nginx + Let's Encrypt
Add NixOS module to host org-agenda-api container on railbird-sf:
- org-agenda-api-host.nix: New module with nginx reverse proxy and ACME
- nginx configured for rbsf.tplinkdns.com with automatic TLS
- Container runs on port 51847 (random high port)
- Supports nix-built container images via imageFile option

Configure railbird-sf to use the new module:
- Build org-agenda-api container from flake
- Pass container to machine config via specialArgs
- Set up agenix secret for container environment

Note: Requires creating secrets file with AUTH_PASSWORD and
GIT_SSH_PRIVATE_KEY environment variables.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 09:40:19 -08:00
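The nginx/ACME wiring described above might reduce to something like this (a sketch; the domain and port come from the commit message, the module internals are assumptions):

```nix
# hypothetical sketch of the reverse-proxy portion of org-agenda-api-host.nix
services.nginx.virtualHosts."rbsf.tplinkdns.com" = {
  enableACME = true;   # automatic Let's Encrypt certificate
  forceSSL = true;
  locations."/".proxyPass = "http://127.0.0.1:51847";
};
```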
53afba8b40 Add org-agenda-api cachix to deploy script
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 08:09:19 -08:00
4d7350b3fe Update org-agenda-api to 9bc594c (mova/org-window-habit bump)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 02:39:51 -08:00
df2c033d0f Switch from clipit/gpaste to greenclip for clipboard management
- Remove clipit config and helper scripts
- Add greenclip service and rofi integration
- Update xmonad keybinding to use rofi_clipboard.sh

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 00:09:48 -08:00
9cdd201c41 Update org-agenda-api to b8c66d0 (mova bump)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 23:47:36 -08:00
50fc48ec36 [NixOS] Add bump-overlay-versions skill to repo
Move skill from global ~/.claude/skills to project-local .claude/skills
so it's version controlled with the dotfiles.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 11:57:11 -08:00
595dc39945 [NixOS] Enable codex override 2026-01-29 11:33:21 -08:00
e1a5d2af5d Update org-agenda-api to 9e85a39 (git identity fix)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 11:32:51 -08:00
fc481ababa Update colonelpanic git user identity
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 11:29:45 -08:00
4c6ff0bd10 Fix tzupdate service failing at boot
The service was failing because network-online.target is reached before
network is actually online (NetworkManager-wait-online is disabled to
avoid boot hangs). Remove the service from multi-user.target.wants and
let only the timer trigger it, which already has a 30s startup delay.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 19:44:25 -08:00
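A hedged sketch of the fix, assuming the stock NixOS tzupdate units: drop the boot-time activation and let the timer (which already delays 30s after startup) do all the triggering.

```nix
# hypothetical sketch -- keep the timer, stop the service unit from
# racing network-online.target at boot
systemd.services.tzupdate.wantedBy = lib.mkForce [ ];
```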
0d13dd022a Update org-agenda-api to aeb59ee (SSH fix)
Includes fix for SSH setup in container startup script.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 14:45:49 -08:00
43e118b3e3 Rename container outputs to {instance}-org-agenda-api
More descriptive naming convention:
- colonelpanic-org-agenda-api
- kat-org-agenda-api

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 14:27:58 -08:00
504ec1a105 Add org-agenda-api container builds and fly.io deployment
Consolidates container builds from colonelpanic-org-agenda-api repo:
- Add org-agenda-api input to nixos flake
- Add container-colonelpanic and container-kat package outputs
- Add org-agenda-api cachix as substituter
- Add org-agenda-api devShell for deployment work

New org-agenda-api directory contains:
- container.nix: Container build logic using mkContainer
- configs/: Instance configs (custom-config.el, fly.toml, secrets)
- deploy.sh: Fly.io deployment script
- secrets.nix: agenix secret declarations

Build with: nix build .#container-colonelpanic
Deploy with: cd org-agenda-api && ./deploy.sh colonelpanic

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 14:24:41 -08:00
ccd63ba066 Add org-agenda-api consolidation design plan
Documents the plan to:
- Move container builds from colonelpanic-org-agenda-api to dotfiles
- Consolidate fly.io deployment into dotfiles/org-agenda-api/
- Add cachix substituter for faster builds
- Keep custom-config.el for container-specific glue only

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 14:17:13 -08:00
96e019310b [NixOS] Add opencode 2026-01-28 11:28:56 -08:00
fc2903eb5d [tmux] Remove session changed hook 2026-01-28 11:28:47 -08:00
71580f3a84 [NixOS] Fix stale hashes and remove claude-code override 2026-01-26 00:51:48 -08:00
f280b91595 feat: add high priority unscheduled agenda view
Adds a new custom agenda view (key "u") that shows high priority (A/B)
items that don't have a scheduled date or deadline. Useful for finding
important tasks that might be falling through the cracks.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 17:54:01 -08:00
d64a97e2c1 [Emacs] Include repeating in agenda again 2026-01-25 12:06:59 -08:00
f692c9f6c4 feat: add standalone agenda views for inbox, next, and wait states
Adds three new org-agenda-custom-commands:
- "i" for Inbox tasks (INBOX state)
- "n" for Next tasks (NEXT state)
- "w" for Waiting tasks (WAIT state)

These were previously only available as part of the composite "M" view.
Now they're exposed as standalone views for the org-agenda-api.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 13:33:45 -08:00
1876c22b46 feat: add inbox capture template
Add a new capture template (key "i") that captures to INBOX state
instead of TODO. This template appears first in the capture list.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 01:12:56 -08:00
aecc7afb4c [NixOS] Add passgen 2026-01-22 01:10:15 -08:00
b9f4a8731e Remove some stuff from biskcomp 2026-01-22 01:01:23 -08:00
180d9e7c50 [NixOS] Add gimp and inkscape 2026-01-22 01:01:05 -08:00
b383cd0cd2 [NixOS] Remove railbird and interview 2026-01-22 01:00:50 -08:00
28ead6994d Fix tmux auto-naming for duplicate directory basenames
- Handle duplicate session names by appending -2, -3, etc.
- Add tn function to manually rename current session to cwd basename

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 00:57:59 -08:00
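The dedup rule above (append -2, -3, and so on until the name is free) can be sketched as a small shell helper; the function name and calling convention here are hypothetical, not the actual tmux hook:

```shell
#!/usr/bin/env bash
# Given a desired session name plus the existing session names,
# return the first variant that does not collide.
unique_session_name() {
  local base="$1"; shift
  local name="$base" n=2
  # try base, then base-2, base-3, ... against the existing names
  while printf '%s\n' "$@" | grep -qxF -- "$name"; do
    name="${base}-${n}"
    n=$((n + 1))
  done
  printf '%s\n' "$name"
}
```

In the real config the existing names would come from `tmux list-sessions -F '#S'`.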
205d9528b3 [Emacs] Make hydra-yank work with non-file buffers
Add imalison:buffer-file-name-or-directory helper that falls back to
default-directory when buffer-file-name is nil. This allows the yank
hydra to work with magit buffers and other buffers associated with a
directory but not visiting a file.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 00:45:30 -08:00
91e36cd0ba [NixOS] Claude code to 2.1.4 2026-01-21 19:36:03 -08:00
f3871b233a Revert "fix: enable org-window-habit in tangled config"
This reverts commit 674b6ca7cb.
2026-01-21 10:14:58 -08:00
674b6ca7cb fix: enable org-window-habit in tangled config
Add :tangle directive to org-window-habit source block so it gets
included in org-config-config.el for the container deployment.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 10:03:28 -08:00
08cf4ae492 Add autorandr profiles for strixi-minaj with BC Alienware monitors
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 13:35:40 -05:00
986966bbc1 Refactor emacs README to use noweb for shared code blocks
Use noweb syntax to share imalison:join-paths and imalison:projects-directory
definitions between the main config and org-config-bootstrap.el.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 13:35:15 -05:00
b3b425833b Add poetry fix and comment out codex override
- Fix poetry pbs-installer version constraint issue with dontCheckRuntimeDeps
- Comment out codex override (using nixpkgs version)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 13:35:03 -05:00
658f424cf8 Enable Nvidia sync mode on strixi-minaj
Switch from offload mode to sync mode for better performance
with external displays.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 13:34:56 -05:00
cc05a1d790 Add Hyprland with hy3 plugin for XMonad-like tiling
Configure Hyprland to use the hy3 plugin for dynamic tiling similar to
XMonad. Uses official Hyprland and hy3 flakes pinned to v0.53.0 for
proper plugin compatibility (nixpkgs packaging had header issues).

Key changes:
- Add hyprland and hy3 flake inputs with version pinning
- Rewrite hyprland.conf with hy3 layout and XMonad-like keybindings
- Add helper scripts for window management (bring, replace, gather, etc.)
- WASD directional navigation using hy3:movefocus/movewindow
- Tab groups, horizontal/vertical splits via hy3:makegroup
- Scratchpads via Hyprland special workspaces

Also removes org-agenda-api flake integration (moved elsewhere).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 13:34:45 -05:00
68b3e5d83c feat: add Next (Scheduled Today) capture template
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 13:21:00 -05:00
c70d6a9e99 Add automatic ID property generation to org capture templates
Use org-id-new to generate UUIDs for new entries, matching the format
used by org-roam. Updates imalison:created-property-string, both
template helper functions, and the habit capture template.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 07:22:27 -05:00
3e805b172e Simplify org-agenda-api.nix to only produce tangled elisp files
Container construction moved to colonelpanic-org-agenda-api repo.
This flake now only exports org-agenda-custom-config (tangled files).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-17 12:11:59 -05:00
b27ff12180 Update org-agenda-api for LOGBOOK fix
Fixes state change logging (LOGBOOK entries) not being created in
non-interactive contexts.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 02:01:54 -08:00
fb2cc5672c Update org-agenda-api with debug-config endpoint 2026-01-16 01:52:18 -08:00
2291a14e84 [NixOS] Bump cc version 2026-01-16 04:15:41 -05:00
e7ee6c7d3d [NixOS] Add a codex overlay 2026-01-16 04:15:41 -05:00
74d95edcae Update org-agenda-api with agenda date override fix
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 01:07:31 -08:00
4954c40f85 Update org-agenda-api to fix off-by-one date error
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 00:39:22 -08:00
d0c92889ee Update org-agenda-api with field validation 2026-01-15 13:59:23 -08:00
ff3bb1e492 Update org-agenda-api with request logging 2026-01-15 13:37:41 -08:00
94216a3ec0 Update org-agenda-api with title-based fallback matching
Fixes position drift issue where cached positions become stale
after todo updates. API now falls back to title matching.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 13:25:38 -08:00
a848468d09 Fix shared-org-file-p to handle nil shared-org-dir in container
The org-schedule advice calls imalison:shared-org-file-p which uses
file-truename on imalison:shared-org-dir. When this variable is nil (as in
the container), it causes a 'Wrong type argument: arrayp, nil' error.

Override the function after loading tangled config to check for nil first.
2026-01-15 12:52:49 -08:00
5d73857125 Update org-agenda-api flake input 2026-01-15 12:31:23 -08:00
3f88a30149 Update flake.lock and fix synergy build with GCC 15
- Update flake inputs (home-manager, nix, nixos-wsl, nixpkgs, org-agenda-api)
- Add overlay to fix synergy missing #include <cstdint>

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 08:34:22 -08:00
b81e2699d6 Strip :straight keywords from tangled config for container
straight.el is not available in the minimal container Emacs,
so we need to remove :straight nil and :straight t from
use-package declarations in the tangled config files.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 21:24:08 -08:00
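A minimal sketch of the stripping step described above (the real build may well do this in elisp or nix; the function name and regex are assumptions):

```shell
#!/usr/bin/env bash
# Remove ':straight nil' / ':straight t' keyword pairs from tangled
# use-package forms so a straight-less Emacs can load them.
strip_straight() {
  sed -E 's/:straight +(nil|t)//g'
}
```

Usage: `strip_straight < org-config-config.el` (file name illustrative).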
323ffb935e Update org-agenda-api input
Updated to include logging to stderr and debug-on-error for better container debugging.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 15:02:52 -08:00
ec5411df87 Add org-config-bootstrap.el tangle target for container builds
Adds :tangle org-config-bootstrap.el to blocks defining minimal
utilities needed by org-config.el for use in org-agenda-api containers.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 12:42:32 -08:00
6b54747f9a Add missing helper functions for custom agenda views
The org-agenda-custom-commands reference get-date-created-from-agenda-entry
and imalison:compare-int-list, but these were only defined in README.org
(main emacs config), not tangled into org-config.org.

This caused /custom-view?key=r (Recently created) to fail with
"void-function get-date-created-from-agenda-entry".

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 12:32:34 -08:00
76755b24e1 Use org-agenda-api from GitHub instead of local path
- Change org-agenda-api input from path:../dotfiles/emacs.d/straight/repos/org-agenda-api to github:colonelpanic8/org-agenda-api
- Refactor org-agenda package building to separate org-agenda-api.nix file
- Update flake.lock with new input and fix hercules-ci-effects metadata mismatch

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 11:58:32 -08:00
231f84364c Fix org-config-custom.el loading: convert customize format to setq
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 10:05:14 -08:00
535a6c2521 Set imalison:shared-org-dir to nil in container
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 09:58:20 -08:00
7a471048dc Load full tangled org-config for container
- Set imalison:org-dir and imalison:shared-org-dir for container paths
- Load org-config-preface.el, org-config-custom.el, org-config-config.el
- This properly sets org-agenda-files and org-agenda-custom-commands
- Add capture templates for API

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 09:57:17 -08:00
ccc49ce341 Add custom agenda commands and capture templates for org-agenda-api container
- Add simplified org-agenda-custom-commands for API (n, s, i, w, h, M)
- Add org-agenda-api-capture-templates (gtd-todo, scheduled-todo, deadline-todo)
- Templates use INBOX state and include CREATED property

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 09:49:12 -08:00
5304640c79 Update org-api.el to use org-agenda-api package
- Replace custom endpoint code with org-agenda-api require
- Add gtd-todo, scheduled-todo, and tagged-todo capture templates
- Custom commands from init.el will be available via /custom-views

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 09:45:24 -08:00
3e9f67c432 Update org-agenda-api flake input
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 06:57:22 -08:00
3e121b2944 [NixOS] Update flake.lock 2026-01-14 02:02:13 -08:00
7627ae7361 [NixOS] Update how android tools are managed 2026-01-14 02:00:25 -08:00
58f727b65d [Emacs] Log items in the closed state 2026-01-14 02:00:02 -08:00
57fc6a4d53 Add org-agenda-api customized container package
- Export org-agenda-api-container with org settings baked in
- Tangle org-config.org and convert to setq format
- Container can be used by colonelpanic-org-agenda-api for Fly.io deployment

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 01:44:41 -08:00
444af768fc Migrate gen-gh-pages from Travis CI to GitHub Actions
- Add GitHub Actions workflow using peaceiris/actions-gh-pages
- Update README.org badge and documentation to reference GitHub Actions
- Simplify compile.sh for local use (removes Travis-specific evm setup)
- Mark deploy.sh as deprecated (workflow handles deployment now)
- Remove deploy_key.enc (no longer needed, uses GITHUB_TOKEN)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 23:53:44 -08:00
f2ca4f3530 [nix-darwin] Bump flake.lock, add claude-code
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 11:59:41 -08:00
62cc99e951 [Emacs] Add function to reschedule past items to today
Adds imalison:reschedule-past-to-today which iterates through agenda
files and reschedules any incomplete TODO items with a SCHEDULED date
in the past to the current date.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 11:36:53 -08:00
049d0ca45c [NixOS] Don't use private tmp for gitea runner 2026-01-08 10:59:20 -08:00
7a55c32da2 [NixOS] Remove unneeded patches 2026-01-08 10:59:15 -08:00
d2a0b86024 [taffybar] [NixOS] Just use remote taffybar for now 2025-12-19 15:49:57 -08:00
f79e7a527c [NixOS] A ton of stuff 2025-12-19 15:03:12 -08:00
b2230c993a [NixOS] Add ability to easily apply a claude code overlay 2025-12-19 15:02:30 -08:00
91e8fcb06a [NixOS] Bump rumno service PR 2025-11-28 01:02:22 -08:00
03356b9280 [NixOS] Get taffybar building again with flake update / Make it easier to specify nixpkgs PRs as patches 2025-11-28 01:00:29 -08:00
c3f4f92a09 [NixOS] [taffybar] Hack around the patching of gi-gtk-hs cabal file 2025-11-28 00:59:56 -08:00
8799310710 [NixOS] Run alejandra on flake.nix 2025-11-25 19:45:56 -08:00
493bd42966 [NixOS] Random tweaks 2025-11-21 11:42:37 -08:00
d98e7ae597 [NixOS] Fix warnings 2025-11-21 11:40:52 -08:00
bfd4a53b85 [NixOS] Bump nixpkgs 2025-11-18 12:53:05 -08:00
643096d98c [NixOS] Add flox cache 2025-10-03 12:29:29 -07:00
32cd05a2b8 [NixOS] Bump flake 2025-09-15 17:56:49 -07:00
23afe4a8b6 [NixOS] Enable all services on jimi-hendnix 2025-08-27 13:47:27 -07:00
acf683f334 [NixOS] Disable silabs flasher 2025-08-27 13:04:37 -07:00
4b7fdcd68f [NixOS] Remove nixos mcp 2025-08-27 12:52:27 -07:00
9ecb62e13d [NixOS] Remove breeze-gtk 2025-08-27 12:27:59 -07:00
ce9910daed [NixOS] Remove logind extra configuration 2025-08-27 12:21:51 -07:00
cb2598af14 [NixOS] Fix railbird-sf 2025-08-23 15:06:38 -07:00
ea4a577076 [NixOS] Bump flake.lock, make railbird-sf full 2025-08-23 21:02:15 +00:00
7d15907ee3 [NixOS] Bump claude, add mcp-language-server 2025-08-23 13:24:29 -07:00
9c5dab9ecc Update git-sync-rs hash 2025-08-16 14:10:25 -07:00
8b9c71b77e [NixOS] Use git-sync-rs 2025-08-16 12:55:08 -07:00
b3c3a7249c [NixOS] Add mcp servers 2025-08-15 10:36:06 -06:00
82f3a0eda3 [NixOS] Bump rumno patch hash 2025-08-14 14:16:02 -06:00
7d603e3d4f [NixOS] Small tweaks to flake definitions for xmonad and taffybar 2025-08-14 14:02:27 -06:00
612c0ef78d [Emacs] Remove repeating in agenda for now 2025-08-14 14:00:58 -06:00
db7e115542 [NixOS] Force brightness management to work 2025-08-14 03:26:36 -06:00
7c25d8d578 [NixOS] Bump rumno patch hash 2025-08-14 02:35:13 -06:00
37070171c2 [NixOS] Better integration with rumno in brightness.sh 2025-08-14 02:04:57 -06:00
5dc82b90e2 [NixOS] Bump rumno patch hash 2025-08-14 01:39:47 -06:00
2cc0a54e07 [NixOS] Add gh 2025-08-14 01:39:38 -06:00
f9d7375f7b [NixOS] Finish integrating rumno 2025-08-14 00:43:19 -06:00
4d714d4416 [xmonad] Bump submodules 2025-08-14 00:26:44 -06:00
96f35ab9d6 [NixOS] Enable rumno service 2025-08-13 23:44:19 -06:00
f092bc782e [NixOS] Use evalConfig/applyPatches to patch nixpkgs 2025-08-13 21:50:02 -06:00
e84d333ea6 [NixOS] Only use backlight class in brightness.sh 2025-08-13 21:38:28 -06:00
05ab80c13f [NixOS] Modify brightness.sh to use brightnessctl 2025-08-13 21:36:34 -06:00
8a972a72f3 [taffybar] Bump 2025-08-13 21:26:07 -06:00
01081d25c7 [tmux] Automatically rename sessions 2025-08-13 21:25:33 -06:00
25a1afa317 [NixOS] Add brightnessctl and relevant permissions to user groups 2025-08-13 21:25:33 -06:00
17efe79dfa [NixOS] Bump flake.lock 2025-08-13 21:25:33 -06:00
90f7a5dc90 [NixOS] Install codex 2025-08-08 14:10:43 -06:00
6dc320ff1c [Emacs] claude-code-ide 2025-08-08 14:10:08 -06:00
ff4c9c8e9a [hyprland] Tweaks, still not working 2025-08-08 14:09:54 -06:00
2bcd5dc9bd [NixOS] Bump flake.lock 2025-08-07 19:13:18 -06:00
e130df3c70 [NixOS] Add overlay for claude code 2025-08-07 19:12:48 -06:00
307710e7a5 [NixOS] Remove nativeSystemd and add ryzen-shine-wsl 2025-07-19 18:10:36 +00:00
4349671e14 [taffybar] Bump submodule 2025-07-19 18:10:36 +00:00
33e4758389 [NixOS] Add ryzen-shine-wsl 2025-07-19 18:10:36 +00:00
0004a1d715 [NixOS] Enable hyprland 2025-07-15 15:32:04 -06:00
abfc369407 [NixOS] Add yarn and prettier 2025-07-15 12:43:39 -06:00
9d61a15337 [NixOS] Remove strong swan 2025-07-14 18:08:16 -06:00
733beb094b [NixOS] Add yarn 2025-07-14 18:07:29 -06:00
4848a20a8d [NixOS] Bump flake.lock 2025-07-09 11:24:34 -06:00
24346c9e88 [Emacs] Add vundo 2025-07-09 11:10:54 -06:00
bb3ba5d702 Remove assumption that user is imalison 2025-07-08 23:35:01 -06:00
77d0a8504e [NixOS] Adell updates 2025-07-02 11:34:44 -06:00
6ec03f7821 [NixOS] Fix k3s etcd flag 2025-07-02 11:33:54 -06:00
d4db3b81a8 [NixOS] Increase etcd db size in k3s 2025-07-02 11:25:02 -06:00
69f982526d [Emacs] Remove overseer 2025-06-14 13:15:34 -06:00
c952702742 [NixOS] Try another k3s nvidia-container-toolkit fix 2025-06-13 14:42:37 -06:00
5963113964 [NixOS] Oops move container toolkit enable 2025-06-13 13:45:51 -06:00
4cc07c65ae [NixOS] Reenable nvidia-container-toolkit 2025-06-13 13:45:06 -06:00
5c2b810a4f [NixOS] Fix k3s? 2025-06-13 13:17:24 -06:00
33ca6d490a [NixOS] Configure runtime and grpc in containedConfigTemplate 2025-06-13 12:59:59 -06:00
2acc6d25c7 [NixOS] Remove label, its not a thing 2025-06-13 00:22:04 -06:00
a74e5ab4b6 [NixOS] Bump flake.lock 2025-06-13 00:20:00 -06:00
39af365839 [NixOS] Set container-runtime-endpoint in k3s 2025-06-13 00:18:45 -06:00
973c5dc134 [NixOS] Bump flake.lock 2025-06-07 16:05:17 -06:00
a244ee2223 [NixOS] Add just to gitea-runner environment 2025-06-01 01:25:06 -06:00
1eaf5166d9 [NixOS] Add uv 2025-06-01 01:24:30 -06:00
a081f743a5 [NixOS] Reenable tzupdate 2025-06-01 01:24:30 -06:00
aa19cc3204 [NixOS] Bump to plasma 6 2025-05-30 10:46:52 -06:00
8d92c45ffe [NixOS] Bump flake.lock 2025-05-28 17:08:34 -06:00
96c74ac95e [NixOS] Add windsurf 2025-05-28 17:08:25 -06:00
b5ffc833fb [NixOS] Try to fix railbird bucket mount timeout 2025-05-21 17:00:04 -06:00
e18c41cf90 [NixOS] Bump flake.lock 2025-05-21 16:52:35 -06:00
291e77b4b4 [NixOS] Don't override dependencies for taffybar overlay 2025-05-15 12:28:58 -07:00
5cbf3ac32e [NixOS] Bump nixpkgs 2025-05-07 09:32:21 -06:00
e7d06c8b91 [NixOS] Add another will key 2025-05-07 09:30:43 -06:00
07a367dc67 [NixOS] Restore obsidian to kat 2025-05-07 09:30:14 -06:00
a0e6ecd222 [NixOS] Enable accounts daemon in plasma 2025-04-25 13:45:04 -06:00
331ce9eec5 [NixOS] Re-enable nixified-ai 2025-04-25 13:42:48 -06:00
3ed35fd553 [NixOS] Reenable heroic games 2025-04-25 13:42:13 -06:00
2df1d71367 [NixOS] Fix key duplication 2025-03-10 15:51:28 -06:00
5341a75a08 [NixOS] Switch to plasma6 2025-03-10 11:21:31 -06:00
d33fc584d0 [NixOS] Allow "electron-32.3.3" 2025-03-10 07:44:44 -06:00
375a7ed910 [NixOS] Fix okular 2025-03-10 07:39:35 -06:00
81e88f6610 [NixOS] Bump emacs version 2025-03-10 07:39:22 -06:00
3da6262856 [NixOS] Fix kleopatra 2025-03-10 07:34:27 -06:00
f8cb82fd60 [NixOS] Use qt6 dolphin 2025-03-10 07:33:00 -06:00
3a3dbad845 [NixOS] Bump flake 2025-03-10 07:31:32 -06:00
017d47ca41 [NixOS] Z-wave js works 2025-02-20 00:52:49 -07:00
78046685f9 [NixOS] Fix keys for zwave json 2025-02-19 22:35:50 -07:00
2046f360a6 [NixOS] Add z-wave js 2025-02-19 20:33:45 -07:00
e70146fd1d [NixOS] Enable home-assistant on biskcomp 2025-02-19 20:09:53 -07:00
ea9d4145d4 [NixOS] Bump flake inputs and fix home manager fallout from module rename 2025-02-19 20:09:20 -07:00
d1814a3072 [NixOS] Fix org-api basic auth 2025-02-18 21:10:41 -07:00
08db2c3a75 [NixOS] Set ota-provider-directory for matter-server 2025-02-07 18:30:32 -07:00
2c384fb003 [NixOS] Add notification sound for wyoming satellite 2025-02-07 18:30:26 -07:00
8ae53c14bd Add obsidian to kat nix 2025-02-07 11:50:29 -07:00
b189e1fa3e [NixOS] Add tts via coqui 2025-02-07 01:24:27 -07:00
107d3cfdb3 [NixOS] Use pulseaudio 2025-02-06 22:53:02 -07:00
e31f684f7b [NixOS] Enable wyoming for jay-lenovo 2025-02-06 19:49:58 -07:00
ab87bb325f [NixOS] Tweak wyoming service names 2025-02-06 19:49:14 -07:00
fd9ceb1dda [NixOS] Use the turbo model for faster-whisper 2025-02-06 17:51:24 -07:00
6fe2f72025 [NixOS] Fix issue with propagatedBuildInputs for wyoming-satellite 2025-02-06 13:27:20 -07:00
94e3c08f88 [NixOS] Reindent strixi-minaj 2025-02-06 13:00:37 -07:00
7138b67f59 [NixOS] Add voice assistant integrations for home assistant 2025-02-06 12:39:15 -07:00
1958d2ebf7 [Emacs] Add swift mode 2025-02-06 12:23:50 -07:00
a744a8fc2d [NixOS] Fix missing propagatedBuildInputs 2025-02-06 12:13:09 -07:00
30d50d72ec [NixOS] Bump flake.lock 2025-02-06 12:04:53 -07:00
a938447b8a [NixOS] Explicitly enable pipewire 2025-02-06 12:04:29 -07:00
9408eeff52 [NixOS] Fix mic for wyoming-satellite 2025-02-06 11:23:42 -07:00
947eaad2f1 [NixOS] Add wyoming protocol setup for home assistant voice assistant 2025-02-06 03:51:40 -07:00
eb6f67559e [NixOS] Enable external access to home assistant 2025-02-06 02:29:57 -07:00
166c3a24ea [NixOS] Fix google home service account key 2025-02-06 01:44:55 -07:00
3f3de17097 [NixOS] Add host key for justin-bieber-creek 2025-02-06 01:32:04 -07:00
d7ceec572f [NixOS] Add google service account for home assistant integration 2025-02-06 01:14:11 -07:00
50bb8561d8 [NixOS] Disable attestation verification in chip/matter-server for Home Assistant 2025-01-29 21:56:34 -07:00
6ece92b75d [NixOS] Set up virt-manager 2025-01-29 21:27:48 -07:00
accb330589 [NixOS] Configure otbr matter server and more for Home Assistant 2025-01-29 00:51:36 -07:00
9d8777e85c [NixOS] Add extensions that fix home-assistant to justin-bieber-creek 2025-01-19 17:35:50 +00:00
6b7a428145 [NixOS] Use my-unstable 2025-01-17 12:33:13 -07:00
6278da83fa Revert "[NixOS] Add matter_server python component"
This reverts commit bf5009fdd4.
2025-01-17 02:04:06 -07:00
bf5009fdd4 [NixOS] Add matter_server python component 2025-01-17 02:02:13 -07:00
4f573be120 [NixOS] More home-assistant fix 2025-01-17 01:59:55 -07:00
aba47c6ce9 [NixOS] Fix home assistant config 2025-01-17 01:57:33 -07:00
14d24534f9 [NixOS] Add home-assistant components needed to complete onboarding 2025-01-17 01:53:47 -07:00
e1752368b4 [NixOS] Xmonad on justin-bieber-creek 2025-01-17 08:52:23 +00:00
ce9c752cbe [NixOS] Set some annoying home-assistant defaults 2025-01-17 01:51:58 -07:00
0b1591642b [NixOS] Remove configuration for home assistant 2025-01-17 01:39:25 -07:00
998099ae10 [NixOS] Set int for elevation 2025-01-17 01:38:04 -07:00
d01659c1b0 [NixOS] Add the matter-server to justin-bieber-creek 2025-01-17 01:20:56 -07:00
251f03838b [NixOS] More adele -> adell fallout 2025-01-17 00:54:39 -07:00
2a303c445c [taffybar] Change adele -> adell in taffybar 2025-01-15 00:20:10 -07:00
7515a871a7 [NixOS] Finish exposing org-mode api on biskcomp 2025-01-03 01:02:09 -07:00
1ead310c05 [NixOS] Add org-api-passwords 2025-01-03 00:21:28 -07:00
76253e34ef [NixOS] Enable emacs org-api server 2025-01-02 23:51:20 -07:00
bbb0017bee [Emacs] Add org-api code 2025-01-02 23:40:27 -07:00
718cf756b9 [Emacs] Generalize kat's org-mode journal system so I can use it too 2025-01-02 16:07:40 -07:00
05e135d61d [NixOS] Remove redundant disableRegistration setting for gitea 2025-01-01 14:40:01 -07:00
9a66f3fc5a [NixOS] Disable ghostty for now 2025-01-01 14:16:18 -07:00
0d3b15c072 [NixOS] Fix nixified.ai to use comfyui 2025-01-01 13:58:27 -07:00
ebc7c2ede5 [NixOS] Attempt to disable registration in gitea 2025-01-01 13:24:59 -07:00
8035ae008b [NixOS] Disable k3s on biskcomp 2025-01-01 13:12:35 -07:00
a6d9bdb7a9 [NixOS] Replace shutter with flameshot (for screenshots) 2025-01-01 12:20:30 -07:00
4f4168768d [NixOS] Enable vaultwarden admin page 2025-01-01 12:03:48 -07:00
eb61989a59 [NixOS] Backup vaultwarden 2025-01-01 11:55:23 -07:00
3dcb49fc1b [NixOS] Disable discourse 2025-01-01 11:39:52 -07:00
442ed2aca4 [NixOS] Enable podman 2024-12-31 23:50:40 -07:00
5df6c5aecf [Emacs] Use new version of org-mode branch 2024-12-31 23:50:40 -07:00
884a8b31ae [Emacs] Reenable habits 2024-12-31 23:02:58 -07:00
fef852f4bf [NixOS] Remove any cdi hook 2024-12-31 03:49:35 -07:00
8453cc92b6 [NixOS] Temporarily remove create-symlinks hooks 2024-12-31 03:47:54 -07:00
e273e34662 [NixOS] Trap errors 2024-12-31 02:22:42 -07:00
b681e4b5b4 [NixOS] Fix executable permission 2024-12-31 02:18:01 -07:00
24c5bb3ec6 [NixOS] Try simple no errors nvidia-cdi-hook 2024-12-31 02:15:10 -07:00
70c5c011f8 [NixOS] Add some buildInputs to nvidia-container-toolkit overlay 2024-12-31 01:43:50 -07:00
fcae542755 [NixOS] More tweaks 2024-12-31 01:34:30 -07:00
0e20737cb3 [NixOS] Allow nvidia-container-toolkit failure 2024-12-31 01:27:07 -07:00
58ea719bed [NixOS] Remove program wrap 2024-12-31 01:17:16 -07:00
15ffb7355e [NixOS] Remove stracing from nvidia-container-toolkit 2024-12-31 01:09:46 -07:00
ca1b22ba98 [NixOS] Wrap nvidia-cdi-hook with LD_LIBRARY_PATH setting 2024-12-31 01:09:26 -07:00
d2add34317 [NixOS] Run ldd on nvidia-cdi-hook 2024-12-31 00:34:40 -07:00
5da32bceea [NixOS] Move nvidia-container-toolkit overlay into its own file and disable 2024-12-30 23:54:39 -07:00
92c2d613af [NixOS] strace nvidia-container-toolkit 2024-12-30 23:47:23 -07:00
c1a2c404e9 [NixOS] Add --debug flag to nvidia-cdi-hook automatically 2024-12-30 23:18:28 -07:00
5b3915ad27 [NixOS] Build runc from source 2024-12-30 23:07:09 -07:00
2d4c1df31f [NixOS] Log runc outputs 2024-12-30 20:31:16 -07:00
0f1895c5d2 [NixOS] Add overlay to log all runc invocations 2024-12-30 20:11:59 -07:00
990b7f0180 Restore environment override 2024-12-30 18:33:23 -07:00
a895c2471d [NixOS] A few more logging nvidia-container-toolkit tweaks 2024-12-30 18:20:46 -07:00
8fd220c919 Debug nvidia-container-toolkit commands 2024-12-30 18:11:01 -07:00
626d719e16 [NixOS] Bump nvidia container toolkit 2024-12-30 16:57:29 -07:00
7873981341 Revert "[NixOS] Remove a possibly unnecessary addition to nvidia-container-toolkit-cdi-generator"
This reverts commit fca6d487f0.
2024-12-30 16:47:09 -07:00
fca6d487f0 [NixOS] Remove a possibly unnecessary addition to nvidia-container-toolkit-cdi-generator 2024-12-30 16:39:31 -07:00
e297235517 [NixOS] Try to fix containerdconfig 2024-12-30 15:38:25 -07:00
29ab9150f8 [NixOS] Put k3s-containerd config in the right place 2024-12-30 15:36:29 -07:00
953d57be15 [NixOS] Debug k3s containerd 2024-12-30 15:20:54 -07:00
7b63af8aae [NixOS] Bump flake.lock 2024-12-30 14:26:33 -07:00
697d216397 [NixOS] Reenable service that enabled cdi for k3s containerd
2024-12-30 01:25:53 -07:00
794f3c1eb8 [NixOS] Remove container runtime endpoint setting 2024-12-30 01:06:17 -07:00
4a8e077b5d Reapply "[NixOS] Use plugins path"
This reverts commit 7d76728651.
2024-12-30 00:59:17 -07:00
7d76728651 Revert "[NixOS] Use plugins path"
This reverts commit 957b94e1cc.
2024-12-30 00:56:02 -07:00
957b94e1cc [NixOS] Use plugins path 2024-12-30 00:25:10 -07:00
2445e6e7d6 [NixOS] Back to /opt/cni/bin 2024-12-29 23:45:17 -07:00
f5ddd2e4c5 [NixOS] Remove bin dir again 2024-12-29 23:33:50 -07:00
f071068e6d [NixOS] Expose flannel plugins 2024-12-29 19:53:13 -07:00
c8ffe51c66 [NixOS] Remove external flannel 2024-12-29 19:45:16 -07:00
e12d261a9f [NixOS] Remove cni binary directory 2024-12-29 19:44:37 -07:00
2557a2b538 [NixOS] Enable flannel 2024-12-29 19:20:40 -07:00
ea3cfe9604 Revert "[NixOS] Switch to gnome as backup desktop environment"
This reverts commit 9fbdead63f.
2024-12-29 19:12:14 -07:00
8f3802a010 [NixOS] Add more cni plugins to containers 2024-12-29 18:56:52 -07:00
4d42e5c89d [NixOS] Use /opt/cni/bin as dir for network plugins 2024-12-29 18:10:45 -07:00
b8872e957f [NixOS] Add bin to the plugins path 2024-12-29 17:58:31 -07:00
fe8b6caf3c [NixOS] Use custom cni directory 2024-12-29 17:46:46 -07:00
3f0311b127 [NixOS] Add calico plugin 2024-12-29 17:20:00 -07:00
0b56680911 [NixOS] Use calico cni plugin 2024-12-29 17:17:03 -07:00
9fbdead63f [NixOS] Switch to gnome as backup desktop environment 2024-12-29 15:59:41 -07:00
5db03a0695 [NixOS] Disable rabbitmq by default 2024-12-29 15:48:16 -07:00
c697b5684a [NixOS] Make the unencrypted ryzen-shine permanent 2024-12-29 15:40:59 -07:00
95bd8dd280 [NixOS] Add ghostty 2024-12-29 15:22:01 -07:00
aa9d7b2d88 [NixOS] Temporarily disable nixified-ai 2024-12-29 14:11:28 -07:00
a1b5f3838d [NixOS] More disables 2024-12-29 14:09:50 -07:00
fe710dac80 [NixOS] Disable clipit 2024-12-29 13:07:44 -07:00
18aee952be [NixOS] Fix biskcomp 2024-12-29 13:06:54 -07:00
728e5ee02f [NixOS] Disable shutter 2024-12-29 13:06:27 -07:00
cb9f478cbc [NixOS] Fail to fix discourse-admin-password 2024-12-29 12:23:01 -07:00
6654470109 [NixOS] Set permissions on discourse-admin-password 2024-12-29 12:21:05 -07:00
4913622bad [NixOS] Remove nixified-ai setting from jimi-hendnix 2024-12-29 12:16:33 -07:00
ed9bed85d9 Remove taffybar follows flake deps 2024-12-29 12:13:14 -07:00
8881b704ca Revert "[NixOS] Disable a bunch of stuff to make ryzen-shine-unencrypted work"
This reverts commit 354b54b772.
2024-12-29 12:11:44 -07:00
89bd7e9a4c [NixOS] Remove enable nvidia 2024-12-29 01:55:16 -07:00
354b54b772 [NixOS] Disable a bunch of stuff to make ryzen-shine-unencrypted work 2024-12-28 20:30:34 -07:00
7e445e7fd3 [NixOS] Mount nvidia executables 2024-12-28 19:58:07 -07:00
a0f75a0f4d [NixOS] Add ryzen-shine-unencrypted 2024-12-28 18:09:06 -07:00
04b7672f0e [Emacs] Add variable to control org repeating files 2024-12-28 17:36:13 -07:00
b643092237 [NixOS] Remove unused imports 2024-12-28 17:35:52 -07:00
f4b753d750 [NixOS] Set container runtime endpoint 2024-12-23 17:48:40 -07:00
0d14cc41a8 [NixOS] Bring back dccsillag picom 2024-12-23 14:36:49 -07:00
3904b09b8c [NixOS] Add discourse secret key base 2024-12-22 23:48:19 -07:00
36e43c3f27 [NixOS] Set admin email 2024-12-22 19:47:19 -07:00
e178958e4f [NixOS] Ignore postgres version discourse 2024-12-22 19:44:59 -07:00
e89501f139 [NixOS] Enable discourse on biskcomp 2024-12-22 18:11:37 -07:00
7f3fe70cac [NixOS] Fix adell 2024-12-18 00:52:28 -07:00
8acb093f34 [NixOS] enableRedistributableFirmware on justin-bieber-creek 2024-12-17 00:20:13 -07:00
d2ff285109 [NixOS] Add iwlwifi to justin-bieber-creek 2024-12-17 00:16:29 -07:00
80c6ec0080 [NixOS] No source code pro nerd font 2024-12-16 17:23:16 -07:00
af706c8f40 [NixOS] Fix nerd fonts 2024-12-16 17:21:36 -07:00
01d2d1d31b [NixOS] Bump flake.lock 2024-12-12 01:02:12 -07:00
df045e44b5 [NixOS] Use nvidia_x11 kernel packages 2024-12-12 01:01:58 -07:00
1c8def8999 [NixOS] xkcdpass 2024-12-12 01:01:40 -07:00
f40788cd15 Revert "[NixOS] Use default taffybar package"
This reverts commit 3e774e37f9.
2024-12-12 01:01:34 -07:00
f077cc647b [NixOS] Temporarily disable volnoti and nerdfonts 2024-12-09 16:22:01 -07:00
6428ec9f2a [NixOS] Fix katnivan git repo 2024-12-09 16:21:29 -07:00
a97cd99394 [NixOS] Bump strixy-minaj packages 2024-12-09 16:21:13 -07:00
27258da627 [NixOS] Rename: adele -> adell 2024-12-01 23:47:48 -07:00
ce962bad1a [NixOS] Rekey 2024-12-01 12:23:41 -07:00
1b29407793 [NixOS] Add justin-bieber-creek key 2024-12-01 12:22:56 -07:00
81f9cb6cf9 [NixOS] Add home-assistant config 2024-11-24 14:14:18 -07:00
37d1109bc3 [NixOS] Enable home-assistant on justin-bieber-creek 2024-11-24 14:02:35 -07:00
5cb32ff923 [NixOS] Bump nixpkgs 2024-11-24 09:11:19 +00:00
f972642cfa [NixOS] Add justin-bieber-creek 2024-11-24 09:10:44 +00:00
3e774e37f9 [NixOS] Use default taffybar package 2024-11-24 01:31:19 -07:00
574885f327 [NixOS] Add beelink key 2024-11-23 21:22:55 -07:00
46899bf76a [NixOS] Reorder flake inputs 2024-11-20 16:06:11 -07:00
f7af858e16 [NixOS] Move dean's home directory back for now 2024-11-20 14:34:04 -07:00
5e3452c091 [NixOS] Disable mission-center for now 2024-11-20 14:07:24 -07:00
bb87510a0c [NixOS] Use google dns and mount shared with users group 2024-11-20 13:25:45 -07:00
4d554f50c1 [NixOS] A few more cdi/k3s fixes 2024-11-11 19:18:30 -07:00
4d72cbc1b4 [NixOS] Reenable containerd cdi 2024-11-11 18:58:48 -07:00
88d85f11b2 [NixOS] Don't try to mount nvidia executables 2024-11-11 18:48:51 -07:00
be7448b710 [NixOS] Use open nvidia drivers 2024-11-11 18:39:32 -07:00
b742fc78cb [NixOS] Remove nvidia-container-runtime 2024-11-11 18:18:57 -07:00
b703588b79 [NixOS] Disable ventura nixquick 2024-11-11 14:06:12 -07:00
8e4d8ac662 [taffybar] Bump 2024-11-10 15:17:49 -07:00
27888a7a3e [NixOS] Always make imalison user id 1000 2024-11-10 15:17:35 -07:00
0d6624bc09 [NixOS] Comment out taint flags 2024-11-10 15:17:23 -07:00
50c28d68c2 [NixOS] Add kotlin language server 2024-11-10 15:17:06 -07:00
7e301c1452 [NixOS] Bump flake.lock 2024-11-10 15:16:50 -07:00
ad73cfebde Remove expressvpn 2024-11-10 15:16:44 -07:00
2ba7a7f805 [alacritty] Fix deprecation warning 2024-10-30 05:50:44 -06:00
867ebad8ea [NixOS] Fix nvidia issues on strixi-minaj 2024-10-29 23:39:08 -06:00
ebf91de2d8 [NixOS] Fix dns issues 2024-10-29 23:38:59 -06:00
e6832e3c1e [NixOS] Fix deprecation 2024-10-29 18:14:02 -06:00
15499b292a [NixOS] Add wantedBy to mount-railbird-bucket 2024-10-24 17:10:45 -06:00
1af9a5497b [NixOS] Restart mount bucket service on failure 2024-10-24 17:05:12 -06:00
a7b24c0fa4 [NixOS] Set gitlab host 2024-10-21 00:09:09 -06:00
661a6b6c2f [NixOS] Remove extra config from networkmanager 2024-10-20 20:31:35 -06:00
d4faa061dc [NixOS] Try gitlab on biskcomp 2024-10-20 16:29:38 -06:00
a2bbd4e04e [NixOS] Add mullvad 2024-10-20 16:16:14 -06:00
7e62881c4d Fix ns name collision 2024-10-17 20:31:15 -06:00
9f69f16471 [NixOS] Periodically check on railbird-bucket state 2024-10-09 13:21:30 -06:00
5525fda4bf [Emacs] Make magit faster for nixpkgs 2024-10-09 11:41:12 -06:00
ee7c0ed11c [NixOS] Fix pavolume 2024-10-09 11:40:56 -06:00
e2875e1741 [NixOS] Use my branch with multiple backup files 2024-10-09 11:32:54 -06:00
57e13b8319 [NixOS] Try to add a taint to ryzen-shine k3s 2024-10-09 11:32:35 -06:00
1d31f870c5 Fix actions runner in macos 2024-10-08 23:24:45 -06:00
9f3f835253 Actions runner working 2024-10-08 23:24:45 -06:00
066902e37a Actions runner runs as kat 2024-10-08 23:24:45 -06:00
d790bc9e25 Put gitea actions runner in its own user 2024-10-08 23:24:45 -06:00
1ea8333994 Gitea runner working 2024-10-08 23:24:45 -06:00
e464d8fec5 [nix-darwin] Updates 2024-10-08 23:24:45 -06:00
14a32c151c [NixOS] Add mac mini key 2024-10-08 23:24:45 -06:00
bfdf5f221e [Darwin-nix] Add cocoapods 2024-10-08 23:24:45 -06:00
ae29832dbc [NixOS] Disable k3s for now on railbird-sf 2024-10-08 13:17:26 -06:00
ae6ce6b19c [NixOS] Fix command 2024-10-07 15:16:16 -06:00
8e1abde359 [NixOS] Fix permissions 2024-10-07 15:12:41 -06:00
c25cd05b15 [NixOS] Just run bucket mounting as root 2024-10-07 15:01:43 -06:00
5deba06fb0 [NixOS] Trying to mount bucket 2024-10-07 15:00:14 -06:00
7dcc785da6 [Emacs] Add import shortcuts for numpy and sqlalchemy 2024-10-06 17:58:49 -06:00
5eb3654d0c [git] Remove dumb gitconfig 2024-10-03 18:43:05 -06:00
cbcf03c784 [NixOS] Make gitea-runner a trusted user 2024-10-03 15:18:46 -06:00
a9d5ee5eb0 Revert "[NixOS] Disable gitea-runner"
This reverts commit 8402c6f1d2.
2024-10-03 14:28:41 -06:00
8402c6f1d2 [NixOS] Disable gitea-runner 2024-10-03 14:16:52 -06:00
da8b6b3b75 [NixOS] Bump runner token 2024-10-03 01:22:57 -06:00
526bf6e2a9 [NixOS] New gitea-runner secret 2024-10-03 01:22:57 -06:00
04870cd682 [NixOS] Biskcomp dev.railbird.ai for k3s 2024-10-02 23:04:36 -06:00
46108ab249 [NixOS] Fix 2024-10-02 22:05:11 -06:00
a8e23460f9 [NixOS] Fix 2024-10-02 22:03:01 -06:00
a88018fe47 [NixOS] Remove flags that don't work with agent for railbird-sf 2024-10-02 22:02:24 -06:00
5757681ce0 [NixOS] railbird-sf is only an agent 2024-10-02 21:55:36 -06:00
6c393b3837 [NixOS] Fix gpg key import 2024-10-02 19:55:09 -06:00
618f927cb9 [NixOS] Fix cdi issues with k3s containerd 2024-10-02 18:54:27 -06:00
bb259bf358 [NixOS] Add nixos-nvidia-cdi=enabled label to k3s 2024-10-02 16:24:02 -06:00
3f7de563db [NixOS] Fix 2024-10-02 16:15:52 -06:00
6ae5f4c503 [NixOS] Remove labels 2024-10-02 16:10:48 -06:00
e6c3d55fc8 [NixOS] label -> labels 2024-10-02 16:05:20 -06:00
36331ea60c [NixOS] Add label for nvidia cdi 2024-10-02 16:04:16 -06:00
6b18d0accf [NixOS] Set cdi spec dirs 2024-10-02 15:43:59 -06:00
9a764fc7c8 [NixOS] It's registries.yaml, not registry.yaml 2024-10-02 14:32:21 -06:00
9100167e4d [NixOS] Another registry.yaml fix 2024-10-01 19:09:12 -06:00
e0e98bc237 [NixOS] Fix whitespace issue in registry.yaml 2024-10-01 18:59:42 -06:00
4c989fcda3 [NixOS] Make registry.yaml real 2024-10-01 18:29:32 -06:00
35f8c10e7c [NixOS] k3s registry file working in principle 2024-10-01 16:27:34 -06:00
ac49823b4c Try a local serverAddr for biskcomp 2024-09-30 21:36:34 -06:00
daaead9c1e [Emacs] Align with all cursors 2024-09-30 21:34:10 -06:00
c5c86145b1 [NixOS] Encrypt k3s token to railbird-sf 2024-09-30 21:05:39 -06:00
32755e1411 [NixOS] Enable k3s on biskcomp nixquick and railbird-sf 2024-09-30 20:47:12 -06:00
de27a133e7 [NixOS] Take 3 2024-09-30 20:40:39 -06:00
f89155e4d2 [NixOS] Actually fix 2024-09-30 20:40:03 -06:00
f345cf8f18 [NixOS] Disable tmp2 2024-09-30 20:38:51 -06:00
4cb9c006d7 [NixOS] railbird-sf tweaks 2024-10-01 00:41:48 +00:00
1dd54ba638 [NixOS] Allow another alias for api connection 2024-09-30 18:26:14 -06:00
517c2f333e [NixOS] Another fix 2024-09-30 17:25:02 -06:00
d850ba999d [NixOS] Add tls aliases 2024-09-30 17:23:42 -06:00
dd9f5ccf88 [NixOS] Try to fix insecure skip arg 2024-09-30 17:17:41 -06:00
59da59c74f [NixOS] Ignore insecure tls 2024-09-30 17:15:13 -06:00
1f36c4942b [NixOS] Fix serverAddr port for k3s 2024-09-30 16:49:10 -06:00
eaa46e7034 [NixOS] Fix k3s definition 2024-09-30 16:42:58 -06:00
f00d9bdb12 [NixOS] Try to connect jimi-hendnix to ryzen-shine in k3s 2024-09-30 16:35:50 -06:00
1003c33dee [NixOS] Use myModules.nvidia in jimi-hendnix 2024-09-30 15:19:29 -06:00
a493a530be [NixOS] k3s draft 2024-09-30 00:05:50 -06:00
01361b7217 [NixOS] Kubelet->gke kind of works but not really 2024-09-29 17:28:42 -06:00
cdd8ed60e9 [NixOS] Get kubelet partially working 2024-09-28 21:45:25 -06:00
fd033ba72c [NixOS] Add bencbox to syncthing 2024-09-27 19:45:57 -06:00
9aae7c0c16 [NixOS] Remove old virtualization style 2024-09-27 19:37:23 -06:00
aa9be16abf [NixOS] Add macos virtualmachine to nixquick 2024-09-27 14:28:23 -06:00
342fc4f4c6 [NixOS] Use nvidia module for nixquick 2024-09-27 14:23:32 -06:00
d62d538562 [NixOS] Put nvidia configuration into a file 2024-09-27 13:09:45 -06:00
ffb55c157b [NixOS] Fix deprecations in nixquick.nix 2024-09-26 15:43:37 -06:00
cd7698bebf [NixOS] Remove picom overlay for now 2024-09-26 15:26:26 -06:00
700cccfd60 [NixOS] Switch to pipewire and other fixes 2024-09-26 14:25:48 -06:00
814966b172 [Emacs] Disable indent-bars 2024-09-26 14:15:51 -06:00
c4a9a60112 [NixOS] modules -> myModules 2024-09-26 14:15:27 -06:00
9b9da29b7a [NixOS] Enable nvidia-container-toolkit 2024-09-26 11:16:58 -06:00
6cced9dad4 [NixOS] Enable nvidia dockerisation 2024-09-26 11:14:33 -06:00
50542d9b24 [NixOS] Enable kat on nixquick 2024-09-07 18:21:58 +00:00
701e8d7d82 With cursor 2024-08-31 17:03:08 +00:00
10b8f61d27 [NixOS] Picom is only needed for xmonad 2024-08-20 00:43:37 -06:00
58432fe908 [NixOS] Host docs.railbird.ai 2024-08-19 14:32:48 -06:00
79908fae93 [NixOS] Add ryzen-shine kubernetes token 2024-08-18 15:32:36 -06:00
b597b6e239 [Emacs] Use rubocop in ruby-mode 2024-08-18 15:32:10 -06:00
9e8cd58d7f [nix-darwin] Add gitea-runner token for mac-demarco-mini 2024-08-15 01:52:05 -06:00
6643428fca Revert "[NixOS] Use yshui's picom version from git"
This reverts commit 6cb9597df7.
2024-08-14 01:33:19 -06:00
10a732ad75 [taffybar] Bump version 2024-08-11 17:54:49 -06:00
e6a75734fb [NixOS] [zsh] Enable bracketed-paste-magic 2024-08-11 17:54:03 -06:00
eb22968ab4 [NixOS] Package renames 2024-08-11 17:52:53 -06:00
b12f84c007 [NixOS] Bump nixpkgs version 2024-08-11 17:52:37 -06:00
54645ba497 [NixOS] Fix deprecated option use in ryzen-shine 2024-08-11 17:52:17 -06:00
6cb9597df7 [NixOS] Use yshui's picom version from git 2024-08-11 17:51:56 -06:00
51e6116100 [Emacs] Search hidden files when doing directory based consult-rg search 2024-08-10 12:40:01 -06:00
4fd99eae63 [NixOS] Add interview user 2024-08-09 12:41:32 -06:00
b8f2452d11 [NixOS] Add ns function for incrementally searching through nixpkgs 2024-08-04 22:21:13 -06:00
ddb4a257cf Add nix-darwin justfile 2024-08-04 22:20:33 -06:00
Kat
3a6c9fbb49 Add nix-darwin 2024-08-04 22:20:33 -06:00
Kat
f3b8a769c6 [Emacs] Add swift-mode 2024-08-04 22:20:33 -06:00
4a27118f24 Delete travis 2024-08-04 22:20:33 -06:00
7ddc215dcc [Emacs] Remove indent-guide 2024-08-04 22:02:59 -06:00
3559edb3a5 [git] Don't ignore .sw* files (swift) 2024-07-28 14:20:58 -06:00
0e611dbb57 Fix location of DISABLE_REGISTRATION 2024-07-20 01:18:56 -06:00
3a71c8600c [NixOS] Remove sound.enable 2024-07-18 15:57:42 -06:00
280debd530 [NixOS] Disable gitea registration 2024-07-18 15:55:59 -06:00
e34248ede3 [NixOS] Add dean to syncthing 2024-07-18 14:09:37 -06:00
ea3ba8e2d6 [Emacs] Replace highlight-indent-guides with indent-bars 2024-07-11 01:44:13 -06:00
69411c14f6 [NixOS] Use testing kernel on strixi-minaj to fix audio 2024-07-04 12:44:23 -06:00
f560d87aa3 [linux] Allow selection of specific path for case where multiple brightness paths 2024-07-03 22:49:29 -06:00
e188936253 [NixOS] Use module from nixos-hardware for strixy-minaj 2024-07-03 02:12:56 -06:00
f91ff8c987 Revert "[NixOS] Use intel drivers on strixi-minaj"
This reverts commit 204569fff1.
2024-07-03 02:12:56 -06:00
f4b87c40eb Remove gtkrc 2024-07-02 17:19:51 -06:00
26fb168ec5 [NixOS] Set backup extension for home-manager 2024-07-02 17:18:03 -06:00
16e6b980ba [NixOS] Flake update past ssh issue 2024-07-01 22:45:03 -06:00
204569fff1 [NixOS] Use intel drivers on strixi-minaj 2024-07-01 22:44:44 -06:00
fb671d1401 [NixOS] Disable home-assistant 2024-06-13 04:49:59 +00:00
a18188d3b1 [NixOS] Fix zulip only works on x86 2024-06-12 21:09:50 -06:00
4515ea2e05 [NixOS] Allow dev keys for railbird user 2024-06-11 16:37:33 -06:00
KAT
60e1947dd8 [NixOS] Add swap to jimi-hendnix 2024-06-10 19:45:21 -06:00
0305fa4683 [NixOS] Bump railbird secrets 2024-06-11 01:27:29 +00:00
84a1f22326 Delete htop configuration 2024-06-10 18:26:56 -06:00
4cb057109f [Emacs] Add logg snippet 2024-06-10 18:25:40 -06:00
51d2863cdc [NixOS] Rename razer to david-blade 2024-06-10 18:25:25 -06:00
0c1cd15391 [Emacs] Fix avy in eat 2024-06-10 18:24:50 -06:00
cef3b04ebd [NixOS] Add railbird user 2024-06-10 16:13:19 -06:00
b9f87ac490 [NixOS] Allow agent forwarding 2024-06-03 04:08:10 +00:00
3b55c26a2c [NixOS] Enable ssh agent auth 2024-06-03 03:58:13 +00:00
dcd38e777a [NixOS] Add cuda-maintainers cache 2024-06-03 03:57:57 +00:00
2116f650f7 Revert "[starship] Switch prompt"
This reverts commit 2af8204750.
2024-06-03 02:07:31 +00:00
00139ef2fe [NixOS] [Emacs] Enable eat shell integration 2024-06-02 18:10:49 -06:00
2af8204750 [starship] Switch prompt 2024-06-02 18:10:49 -06:00
cd64244bd8 [Emacs] Disable org-wild notifications for kat 2024-06-02 18:06:48 -06:00
4cc68dedea [Emacs] Finish switching to eat including migrating term-projectile 2024-06-02 17:55:13 -06:00
77fe614b7b [Emacs] Add eat 2024-06-02 06:08:34 +00:00
6bbe7f186a [NixOS] Setup argcomplete completion for prb, prod-prb and railbird 2024-06-02 04:53:48 +00:00
77fc296e9e [NixOS] Add strixy-minaj-wsl 2024-06-02 04:04:05 +00:00
807944f182 [Emacs] Add ign and 401 snippets 2024-06-01 13:56:06 -06:00
97c2779d1b Merge pull request #25 from bcorner/master
Add ben to realUsers, users.nix
2024-05-29 00:11:41 -06:00
9d900057f6 Working ben.nix file.
Probably unrelated, had to remove ~/.zshrc and ~/.zprofile in order for
nixos-rebuild switch to work.
2024-05-29 01:03:49 -05:00
86b545761f Move home-manager.backupFileExtension entry to bottom of ben.nix 2024-05-29 00:02:55 -05:00
a8a66916f4 Add shellAliases, set backupFileExtension in ben.nix. 2024-05-28 23:01:07 -05:00
32d68061a5 Make sure user ben has sudo; quick fix, prefer no repeat extraGroups 2024-05-28 22:17:40 -05:00
94e7e738fb Add ben to realUsers, users.nix; key to keys.nix. zsh now default??
uhh paste contents of .profile into .zprofile I guess?
2024-05-28 21:07:48 -06:00
77cf8d46a3 Add ben module 2024-05-28 20:55:02 -06:00
b5fb07519c [NixOS] Add dean's new ssh key 2024-05-27 14:49:19 -06:00
0f7e3596de Merge pull request #24 from bcorner/master
[NixOS] Add bencbox
2024-05-15 20:07:31 -06:00
7aeed13a34 [NixOS] Add bencbox 2024-05-15 20:54:01 -05:00
7f87156a58 Merge pull request #23 from deanwenstrand/master
Add dean-zephyrus
2024-05-13 17:43:43 -06:00
8a7cec11cf Add dean-zephyrus 2024-05-13 21:50:27 +00:00
da865671ad [NixOS] Flake update 2024-05-11 23:42:45 -06:00
3004f57c1a [NixOS] Update vscode-server flake 2024-05-07 14:16:23 -06:00
f82c4fb659 [NixOS] Enable vscode-server 2024-05-07 14:09:44 -06:00
a4d6664b77 [NixOS] Add zulip 2024-05-05 22:25:58 -06:00
dd256a24f4 [git] Add find-merge 2024-04-28 17:18:24 -06:00
03829b74ce [NixOS] Reenable element-desktop 2024-04-28 17:18:24 -06:00
29e68d1714 [NixOS] Move my-python-packages to overlay 2024-04-28 17:18:24 -06:00
c0c51f571d [NixOS] Add global argcomplete completion support 2024-04-28 17:18:24 -06:00
9c54be10e1 [Emacs] Update mc-lists 2024-04-28 17:18:24 -06:00
eb69712a7c [Emacs] Add org-drill 2024-04-08 23:44:09 -06:00
b86cce1c12 [NixOS] Typo 2024-04-08 07:47:07 +00:00
1b44c66902 [NixOS] Add unprivileged 2024-04-08 07:46:22 +00:00
b54cb9fceb [NixOS] Make kanivan keys work for syncthing 2024-04-08 07:39:11 +00:00
5fae69b391 [NixOS] Gitea secret can access syncthing user 2024-04-08 05:49:38 +00:00
b8d4cf59b8 [NixOS] Fix syncthing directory location 2024-03-23 21:20:40 -06:00
9ab7b41780 [NixOS] Update nixquick port 2024-03-22 16:50:54 -06:00
4f37050c1c Make nixquick the cache server instead of ryzen-shine 2024-03-22 16:22:48 -06:00
49d98cbca1 [NixOS] Fix for jay-lenovo 2024-03-21 18:11:06 -06:00
47ecc2a0c4 [NixOS] Fix strixi-minaj vs wsl identification in syncthing 2024-03-21 15:22:23 -06:00
149de8faae [NixOS] Fix perms issues with syncthing 2024-03-21 15:16:11 -06:00
487aae9a58 [NixOS] Remove picom inactive dim 2024-03-21 15:16:01 -06:00
fde42131d2 [NixOS] Remove 1896Office config 2024-03-20 22:01:15 -06:00
ddbe91c669 [taffybar] Fix 2024-03-20 03:43:21 -06:00
74a6e98e90 [taffybar] Fix taffybar.hs 2024-03-20 03:38:50 -06:00
1aaeeaedf0 [Linux] Generalize brightness manager to work for nvidia 2024-03-20 03:36:37 -06:00
e66a48a311 [NixOS] Rename strixy-minaj to strixi-minaj 2024-03-19 20:33:00 -06:00
ce1ba6dd90 [taffybar] Make strixy-minaj use laptop widgets 2024-03-19 20:31:04 -06:00
165646b395 [NixOS] Add new computers to syncthing 2024-03-19 20:27:37 -06:00
4de88f623c [NixOS] Set dpi for strixy-minaj 2024-03-14 20:52:13 -06:00
c656ebf8dc [NixOS] Add asusd on strixy-minaj 2024-03-14 19:48:38 -06:00
4c3ec2a29b [NixOS] Fix warning 2024-03-15 01:32:11 +00:00
6157a2b047 [NixOS] Add just 2024-03-15 01:32:03 +00:00
d149bc7755 [NixOS] Deprioritize railbird cache 2024-03-15 01:29:08 +00:00
4f669db603 [NixOS] Replace scripts with justfile 2024-03-15 01:28:56 +00:00
eccceb0d31 Add keymap for strixy-minaj 2024-03-15 00:52:28 +00:00
277432379c [NixOS] Add strixy-minaj 2024-03-15 00:24:38 +00:00
4e3e75c3e2 [NixOS] Add strixy-minaj 2024-03-14 16:56:27 -06:00
b2bf550aa0 [NixOS] Bump flake.lock 2024-03-12 00:10:59 -06:00
fdc4bda993 [NixOS] Allow an insecure nix 2024-03-12 00:10:11 -06:00
23a4d50b4a [NixOS] d-spy replaces dfeet 2024-03-12 00:08:32 -06:00
96012de10c Work 2024-03-12 00:00:10 -06:00
83940416d2 Hmm fix gitea runner 2024-03-11 23:58:51 -06:00
6e0427d4e0 Make gitea-runner docker work 2024-03-11 23:53:55 -06:00
6f2f2c54a7 Add fix_nix shell alias 2024-03-11 20:43:55 -06:00
3d121c8908 [Emacs] Add graphql-mode 2024-03-09 20:44:01 -07:00
66842d545a [Emacs] Add imalison:lsp-deferred-when-enabled 2024-03-06 18:15:21 -07:00
dc0dea2695 [git] Ignore the untracked directory 2024-03-01 00:58:52 -07:00
3017ea1b63 [NixOS] Add git-fame 2024-03-01 00:58:37 -07:00
ef7e685007 [Emacs] Add just-mode 2024-03-01 00:58:11 -07:00
4700030548 [NixOS] Fix picom service 2024-02-15 00:01:19 -07:00
45fd9aadef [NixOS] Flake update 2024-02-13 12:29:30 -07:00
b034f12adc [NixOS] Disable pgadmin 2024-02-13 12:27:54 -07:00
2458cb6081 [Alacritty] More fixes 2024-02-13 12:06:41 -07:00
2f49c3f86b [Alacritty] Move to toml 2024-02-12 22:54:04 -07:00
e7d536417c [NixOS] Changes for bump 2024-02-12 22:05:06 -07:00
61752b600d [NixOS] Remove gnupg override 2024-02-11 18:00:37 -07:00
00cfd236ab [NixOS] Add android studio to adele 2024-02-04 01:33:50 -07:00
88937e9acc [NixOS] Add rabbitmq 2024-01-29 19:51:02 -07:00
30146f4fdb [XMonad] Bump contrib 2024-01-29 01:55:14 -07:00
737167fe3f [git] Fix del-merged-branches 2024-01-29 01:55:14 -07:00
5809b012c5 taffybar fixes 2024-01-17 17:02:25 -07:00
90952a3dc0 [NixOS] Enable postgres by default 2024-01-05 01:34:26 -07:00
d8dee1504e Add postgres to jay lenovo 2024-01-04 20:50:46 -07:00
b744be3b00 [nixos] Dont haddock taffybar 2024-01-04 20:38:59 -07:00
8b83c429d1 [NixOS] Remove initialization for postgres 2024-01-04 20:02:11 -07:00
0c685bee88 [NixOS] Enable postgres on jimi-hendnix 2024-01-04 19:55:09 -07:00
830499c7d6 [NixOS] More sophisticated postgres initialization 2024-01-04 19:52:46 -07:00
18ef010bf1 [NixOS] Add postgres configuration 2024-01-03 13:45:20 -07:00
a75824ee2a [NixOS] Programatically get the list of users 2023-12-30 22:42:43 -07:00
0f65575a35 [NixOS] Add ffmpeg to essential 2023-12-22 22:13:36 -07:00
c9638dbbcd [NixOS] Allow openssl-1.1.1w 2023-12-21 16:21:32 -07:00
aff5aff63f [NixOS] Add neovim systemwide 2023-12-21 16:08:34 -07:00
0dd9976a38 [NixOS] Fix avahi warning 2023-12-21 16:04:06 -07:00
a98b45590a [NixOS] Add neovim for micah 2023-12-21 16:01:34 -07:00
6f1242b02c [NixOS] Add dean syncthing and railbird directory 2023-12-21 13:26:46 -07:00
1eea8c61e7 [NixOS] Backup gitea 2023-12-20 23:02:27 -07:00
f0d35c59dd [NixOS] Add ed25519 key for micah 2023-12-20 14:43:09 -07:00
e36ba46f34 [NixOS] Add micah 2023-12-20 14:30:55 -07:00
08ef11f1ec [NixOS] Use nvidia drivers on adele 2023-12-19 22:20:05 -07:00
fa9466dadd [NixOS] Add android-studio to jay-lenovo 2023-12-16 21:18:23 -07:00
0aa212b83d [Emacs] Run alejandra automatically 2023-12-16 13:46:53 -07:00
6b8789d566 [NixOS] Add script to expire home manager generations 2023-12-16 13:35:37 -07:00
1db30187cb [XMonad] Bump contrib pointer 2023-12-16 13:35:21 -07:00
e425d5c1f6 [Emacs] Add terraform-mode 2023-12-16 13:34:35 -07:00
98b49c63d4 [NixOS] Disable nix overlay 2023-12-15 20:23:19 -07:00
f933f5527d [Emacs] Add functions to allow git-link to work with dev.railbird.ai 2023-12-15 20:11:56 -07:00
8c49d47324 [Emacs] Adds groovy mode 2023-12-15 20:11:16 -07:00
42015c284a [NixOS] Fix vscode? 2023-12-13 11:56:10 -07:00
58b9a395d7 [NixOS] Use home manager for vscode support 2023-12-13 11:55:20 -07:00
b102fd2b85 [NixOS] Move nginx mods to biskcomp 2023-12-13 11:41:50 -07:00
4a7edcda75 [NixOS] Set system user for nginx 2023-12-13 11:39:43 -07:00
d3f552afda [NixOS] Fix nginx group 2023-12-13 11:38:45 -07:00
e8095c2081 [NixOS] Add primary group for nginx 2023-12-13 11:37:25 -07:00
f810880d90 [NixOS] Put vscode-server in baseModules 2023-12-13 11:35:47 -07:00
4b71ea64fe [NixOS] Share syncthing directory from biskcomp 2023-12-13 11:31:58 -07:00
5692999aa1 [NixOS] Add vscode server fix 2023-12-13 11:31:01 -07:00
416 changed files with 29253 additions and 3947 deletions


@@ -0,0 +1,11 @@
{
  "permissions": {
    "allow": [
      "Bash(rg:*)",
      "Bash(wmctrl:*)",
      "Bash(grep:*)",
      "Bash(hyprctl:*)"
    ],
    "deny": []
  }
}

.github/workflows/cachix.yml (new file, +95 lines)

@@ -0,0 +1,95 @@
name: Build and Push Cachix (NixOS)
on:
  push:
    branches: [master]
    paths:
      - "nixos/**"
      - "org-agenda-api/**"
      - ".github/workflows/cachix.yml"
  pull_request:
    branches: [master]
    paths:
      - "nixos/**"
      - "org-agenda-api/**"
      - ".github/workflows/cachix.yml"
  workflow_dispatch: {}
jobs:
  nixos-strixi-minaj:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    env:
      # Avoid flaky/stalled CI due to unreachable substituters referenced in flake config
      # (e.g. LAN caches). We keep this list explicit for CI reliability.
      NIX_CONFIG: |
        experimental-features = nix-command flakes
        connect-timeout = 5
        substituters = https://cache.nixos.org https://colonelpanic8-dotfiles.cachix.org https://org-agenda-api.cachix.org https://taffybar.cachix.org https://codex-cli.cachix.org https://claude-code.cachix.org
        trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= colonelpanic8-dotfiles.cachix.org-1:O6GF3nptpeMFapX29okzO92eSWXR36zqW6ZF2C8P0eQ= org-agenda-api.cachix.org-1:liKFemKkOLV/rJt2txDNcpDjRsqLuBneBjkSw/UVXKA= taffybar.cachix.org-1:beZotJ1nVEsAnJxa3lWn0zwzZM7oeXmGh4ADRpHeeIo= codex-cli.cachix.org-1:1Br3H1hHoRYG22n//cGKJOk3cQXgYobUel6O8DgSing= claude-code.cachix.org-1:YeXf2aNu7UTX8Vwrze0za1WEDS+4DuI2kVeWEE4fsRk=
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Free disk space
        run: |
          set -euxo pipefail
          df -h
          sudo rm -rf /usr/share/dotnet || true
          sudo rm -rf /usr/local/lib/android || true
          sudo rm -rf /opt/ghc || true
          sudo rm -rf /usr/local/share/boost || true
          sudo apt-get clean || true
          df -h
      - name: Install Nix
        uses: DeterminateSystems/nix-installer-action@v16
      - name: Use GitHub Actions Cache for /nix/store
        uses: DeterminateSystems/magic-nix-cache-action@v7
      - name: Require Cachix config (push only)
        if: github.event_name == 'push'
        env:
          CACHIX_CACHE_NAME: ${{ vars.CACHIX_CACHE_NAME }}
          CACHIX_AUTH_TOKEN: ${{ secrets.CACHIX_AUTH_TOKEN }}
        run: |
          set -euo pipefail
          if [ -z "${CACHIX_CACHE_NAME:-}" ]; then
            echo "Missing repo variable CACHIX_CACHE_NAME (Settings -> Secrets and variables -> Actions -> Variables)." >&2
            exit 1
          fi
          if [ -z "${CACHIX_AUTH_TOKEN:-}" ]; then
            echo "Missing repo secret CACHIX_AUTH_TOKEN (Settings -> Secrets and variables -> Actions -> Secrets)." >&2
            exit 1
          fi
      - name: Setup Cachix (push)
        if: github.event_name == 'push'
        uses: cachix/cachix-action@v15
        with:
          name: ${{ vars.CACHIX_CACHE_NAME }}
          authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
          skipPush: false
      - name: Setup Cachix (PR, no push)
        if: github.event_name == 'pull_request' && vars.CACHIX_CACHE_NAME != ''
        uses: cachix/cachix-action@v15
        with:
          name: ${{ vars.CACHIX_CACHE_NAME }}
          skipPush: true
      - name: Build NixOS system (strixi-minaj)
        run: |
          set -euxo pipefail
          nix build \
            --no-link \
            --print-build-logs \
            ./nixos#nixosConfigurations.strixi-minaj.config.system.build.toplevel \
            --override-input railbird-secrets ./nixos/ci/railbird-secrets-stub

.github/workflows/gh-pages.yml (new file, +54 lines)

@@ -0,0 +1,54 @@
name: Deploy to GitHub Pages
on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: write
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Setup Emacs
        uses: purcell/setup-emacs@master
        with:
          version: 29.1
      - name: Setup Cask
        uses: conao3/setup-cask@master
        with:
          version: snapshot
      - name: Install dependencies
        working-directory: gen-gh-pages
        run: cask install
      - name: Generate HTML
        working-directory: gen-gh-pages
        run: |
          cask exec emacs --script generate-html.el
          mv ../dotfiles/emacs.d/README.html ./index.html
      - name: Deploy to GitHub Pages
        if: github.event_name == 'push' && github.ref == 'refs/heads/master'
        uses: peaceiris/actions-gh-pages@v4
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./gen-gh-pages
          publish_branch: gh-pages
          user_name: 'github-actions[bot]'
          user_email: 'github-actions[bot]@users.noreply.github.com'
          commit_message: 'Deploy to GitHub Pages: ${{ github.sha }}'
          keep_files: false

.gitignore

@@ -23,5 +23,28 @@ gotools
/dotfiles/config/taffybar/result
/dotfiles/emacs.d/*.sqlite
/dotfiles/config/gtk-3.0/colors.css
/dotfiles/config/gtk-3.0/settings.ini
/dotfiles/emacs.d/.cache/
/dotfiles/emacs.d/projectile.cache
/dotfiles/emacs.d/projectile-bookmarks.eld
/dotfiles/config/fontconfig/conf.d/10-hm-fonts.conf
/dotfiles/config/fontconfig/conf.d/52-hm-default-fonts.conf
/dotfiles/config/taffybar/_scratch/
/dotfiles/config/taffybar/taffybar-*/
/dotfiles/config/taffybar/status-notifier-item/
/dotfiles/config/taffybar/.direnv/
/dotfiles/config/taffybar/dist-newstyle/
/dotfiles/config/taffybar/sni-priorities.dat
/dotfiles/config/xmonad/dist-newstyle/
/dotfiles/config/hypr/hyprscratch.conf
/.worktrees/
# Secrets and machine-local state (managed via agenix/pass instead of git)
/dotfiles/config/asciinema/config
/dotfiles/config/remmina/remmina.pref
/dotfiles/config/screencloud/ScreenCloud.conf
# Local tool state
/.playwright-cli/
/nixos/action-cache-dir/
/dotfiles/config/taffybar/dbus-menu/


@@ -1,8 +0,0 @@
language: generic
script: bash ./gen-gh-pages/deploy.sh
env:
global:
- ENCRYPTION_LABEL: "73e6c870aa87"
- COMMIT_AUTHOR_EMAIL: "IvanMalison@gmail.com"
- COMMIT_AUTHOR_NAME: "Ivan Malison"

docs/cachix.md

@@ -0,0 +1,37 @@
# Cachix for this repo
This repo's NixOS flake lives under `nixos/`.
The workflow in `.github/workflows/cachix.yml` can build the `strixi-minaj`
system closure on GitHub Actions and push the results to a Cachix cache.
## One-time setup
1. Create a Cachix cache (on cachix.org).
2. Create a Cachix auth token with write access to that cache.
3. In the GitHub repo settings:
- Add a repo variable `CACHIX_CACHE_NAME` (the cache name).
- Add a repo secret `CACHIX_AUTH_TOKEN` (the write token).
After that, pushes to `master` will populate the cache.
## Using the cache locally
Option A: ad-hoc (non-declarative)
```sh
cachix use <your-cache-name>
```
Option B: declarative via flake `nixConfig` (recommended for NixOS)
1. Get the cache public key from the Cachix UI:
- Open `https://app.cachix.org/cache/<your-cache-name>#pull`
- Copy the `Public Key` value shown there.
2. Add it to `nixos/flake.nix` under `nixConfig.extra-substituters` and
`nixConfig.extra-trusted-public-keys`.
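As a sketch, the resulting flake fragment might look like this (the cache name and key here are placeholders, not real values):

```nix
nixConfig = {
  extra-substituters = [ "https://<your-cache-name>.cachix.org" ];
  extra-trusted-public-keys = [
    "<your-cache-name>.cachix.org-1:<public-key-from-the-cachix-ui>"
  ];
};
```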
Note: `nixos/nix.nix` sets `nix.settings.accept-flake-config = true`, so the
flake `nixConfig` is honored during rebuilds.


@@ -0,0 +1,152 @@
# Org-Agenda-API Consolidation Design
## Overview
Consolidate org-agenda-api container builds and fly.io deployment into the dotfiles repository. This eliminates the separate `colonelpanic-org-agenda-api` repo and provides:
- Container outputs available to NixOS machines directly
- Fly.io deployment from the same repo
- Fewer repos to maintain
- Cachix integration for faster builds
## Directory Structure
```
/home/imalison/dotfiles/
├── nixos/
│   ├── flake.nix              # Main flake, adds container output
│   ├── org-agenda-api.nix     # Existing tangling module (stays here)
│   └── ...
├── org-agenda-api/
│   ├── container.nix          # Container build logic (mkContainer, etc.)
│   ├── configs/
│   │   ├── colonelpanic/
│   │   │   ├── custom-config.el
│   │   │   └── overrides.el (optional)
│   │   └── kat/
│   │       └── custom-config.el
│   ├── fly/
│   │   ├── fly.toml
│   │   ├── deploy.sh
│   │   └── config-{instance}.env
│   └── secrets/
│       ├── secrets.nix        # agenix declarations
│       └── *.age              # encrypted secrets
└── dotfiles/emacs.d/
    └── org-config.org         # Source of truth for org config
```
## Flake Integration
The main dotfiles flake at `/home/imalison/dotfiles/nixos/flake.nix` exposes container outputs:
```nix
outputs = inputs @ { self, nixpkgs, flake-utils, ... }:
  {
    nixosConfigurations = { ... }; # existing
  } // flake-utils.lib.eachDefaultSystem (system:
    let
      pkgs = import nixpkgs { inherit system; };
      containerLib = import ../org-agenda-api/container.nix {
        inherit pkgs system;
        tangledConfig = (import ./org-agenda-api.nix {
          inherit pkgs system;
          inputs = inputs;
        }).org-agenda-custom-config;
      };
    in {
      packages = {
        container-colonelpanic = containerLib.mkInstanceContainer "colonelpanic";
        container-kat = containerLib.mkInstanceContainer "kat";
      };
    }
  );
```
Build with: `nix build .#container-colonelpanic`
## Custom Elisp & Tangling
Single source of truth: `org-config.org` tangles to elisp files loaded by containers.
**What stays in custom-config.el (container-specific glue):**
- Path overrides (`/data/org` instead of `~/org`)
- Stubs for unavailable packages (`org-bullets-mode` no-op)
- Customize-to-setq format conversion
- Template conversion for org-agenda-api format
- Instance-specific settings
**Audit:** During implementation, verify no actual org logic is duplicated in custom-config.el.
## Cachix Integration
### Phase 1: Use upstream cache as substituter
Add to dotfiles flake's `nixConfig`:
```nix
nixConfig = {
  extra-substituters = [
    "https://org-agenda-api.cachix.org"
  ];
  extra-trusted-public-keys = [
    "org-agenda-api.cachix.org-1:PUBLIC_KEY_HERE"
  ];
};
```
Benefits:
- `container-base` (~500MB+ dependencies) fetched from cache
- Rebuilds only process the small custom config layer
### Phase 2 (future): Push custom builds
Set up GitHub Action or local push for colonelpanic-specific container builds.
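A minimal local push step, assuming a write token for the cache is already configured via `cachix authtoken`, could look like:

```sh
nix build .#container-colonelpanic --print-out-paths | cachix push org-agenda-api
```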
## Fly.io Deployment
**What moves:**
- `fly.toml` → `dotfiles/org-agenda-api/fly/fly.toml`
- `deploy.sh` → `dotfiles/org-agenda-api/fly/deploy.sh`
- `configs/*/config.env` → `dotfiles/org-agenda-api/fly/config-{instance}.env`
- Agenix secrets → `dotfiles/org-agenda-api/secrets/`
**Deploy script changes:**
- Build path: `nix build "../nixos#container-${INSTANCE}"`
- Secrets path adjusts to new location
- Otherwise same logic
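The adjusted build step in `deploy.sh` might look roughly like this (the variable handling is illustrative, not the script's actual contents):

```sh
INSTANCE="${1:?usage: deploy.sh <instance>}"
nix build "../nixos#container-${INSTANCE}" --no-link --print-out-paths
```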
## Implementation Phases
### Phase 1: Pull latest & verify current state
- Pull latest changes in org-agenda-api and colonelpanic-org-agenda-api
- Build container, verify it works
- Fix any issues before restructuring
### Phase 2: Create dotfiles structure
- Create `/home/imalison/dotfiles/org-agenda-api/` directory
- Move container.nix logic (adapted from current colonelpanic-org-agenda-api flake)
- Move instance configs (colonelpanic/, kat/)
- Move fly.io deployment files
- Move agenix secrets
### Phase 3: Integrate with dotfiles flake
- Update `/home/imalison/dotfiles/nixos/flake.nix` to expose container outputs
- Add cachix substituter configuration
- Test build from dotfiles: `nix build .#container-colonelpanic`
### Phase 4: Verify deployment
- Test deploy.sh from new location
- Verify fly.io deployment works
- Run the container locally on a NixOS machine
### Phase 5: Audit & cleanup
- Review custom-config.el for any duplicated org logic
- Archive colonelpanic-org-agenda-api repo
- Update any references/documentation
## Repos Affected
- **dotfiles** - Receives container build + fly.io deployment
- **colonelpanic-org-agenda-api** - Becomes obsolete after migration
- **org-agenda-api** (upstream) - No changes, used as flake input

dotfiles/agents/AGENTS.md

@@ -0,0 +1,128 @@
# Agentic Session Preferences
## Tmux session titling
- Treat this chat as the controller for the current tmux session.
- Run the tmux rename one-liner eagerly without checking whether TMUX is set first.
- Maintain a session/window/pane title that updates when the task focus changes substantially.
- Prefer automatic titling: infer a concise <task> from the current user request and context without asking.
- Title format: "<project> - <task>".
  - <project> is the basename of the current project directory.
    - Prefer git repo root basename if available; otherwise use basename of the current working directory.
  - <task> is a short, user-friendly description of what we are doing.
- Ask for a short descriptive <task> only when the task is ambiguous or you are not confident in an inferred title.
- When the task changes substantially, update the <task> automatically if clear; otherwise ask for an updated <task>.
- When a title is provided or updated, immediately run this one-liner:
  `tmux rename-session '<project> - <task>' \; rename-window '<project> - <task>' \; select-pane -T '<project> - <task>'`
- Assume you are inside tmux, so do not use -t unless the user asks to target a specific session.
- For Claude Code sessions, a UserPromptSubmit hook will also update titles automatically based on the latest prompt.
## Pane usage
- Do not create extra panes or windows unless the user asks.
## Git worktrees
- Default to creating git worktrees under a project-local `.worktrees/` directory at the repository root.
- For a repository at `<repo_root>`, use worktree paths like `<repo_root>/.worktrees/<task-or-branch>`.
- Create `.worktrees/` if needed before running `git worktree add`.
- Only use a non-`.worktrees/` location when the user explicitly asks for a different path.
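Concretely, the default flow corresponds to commands like these (the branch name is illustrative):

```sh
repo_root=$(git rev-parse --show-toplevel)
mkdir -p "$repo_root/.worktrees"
git worktree add "$repo_root/.worktrees/tray-fix" -b tray-fix
```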
## NixOS workflow
- This system is managed with a Nix flake at `~/dotfiles/nixos`.
- Use `just switch` from that directory for rebuilds instead of plain `nixos-rebuild`.
- Host configs live under `machines/`; choose the appropriate host when needed.
## Ad-hoc utilities via Nix
- If you want to use a CLI utility you know about but it is not currently available on PATH, prefer using `nix run` / `nix shell` to get it temporarily rather than installing it globally.
- Use `nix run` for a single command:
  `nix run nixpkgs#ripgrep -- rg -n "pattern" .`
- Use `nix shell` when you need multiple tools available for a short sequence of commands:
  `nix shell nixpkgs#{jq,ripgrep} --command bash -lc 'rg -n "pattern" . | head'`
- If you are not sure what the package is called in nixpkgs, use:
  `nix search nixpkgs <name-or-keyword>`
## Personal Information
- Full Legal Name: Ivan Anthony Malison
- Email: IvanMalison@gmail.com
- Country of Citizenship: United States of America
- Birthday: August 2, 1990 (1990-08-02)
- Address: 100 Broderick St APT 401, San Francisco, CA 94117, United States
- Employer: Railbird Inc.
- GitHub: colonelpanic8
- Phone: 301-244-8534
- Primary Credit Card: Chase-Reserve
## Repository Overview
This is an org-mode repository containing personal task management, calendars, habits, and project tracking files. It serves as the central hub for Ivan's personal organization.
## Available Tools
### Chrome DevTools MCP
A browser automation MCP is available for interacting with web pages. Use it to:
- Navigate to websites and fill out forms
- Take screenshots and snapshots of pages
- Click elements, type text, and interact with web UIs
- Read page content and extract information
- Automate multi-step web workflows (booking, purchasing, form submission, etc.)
### Google Workspace CLI (`gws`)
The local `gws` CLI is available for Google Workspace operations. Use it to:
- Search, read, and send Gmail messages
- Manage Gmail labels and filters
- Download attachments and inspect message payloads
- Access Drive, Calendar, Docs, Sheets, and other Google Workspace APIs
## Credentials via `pass`
Many credentials and personal details are stored in `pass` (the standard unix password manager). There are hundreds of entries covering a wide range of things, so always search before asking the user for information. Use `pass find <keyword>` to search and `pass show <entry>` to retrieve values.
Examples of what's stored:
- Personal documents - driver's license, passport number, etc.
- Credit/debit cards - card numbers, expiration, CVV for various cards
- Banking - account numbers, online banking logins
- Travel & loyalty - airline accounts, hotel programs, CLEAR, etc.
- Website logins - credentials for hundreds of services
- API keys & tokens - GitHub, various services
- The store is regularly updated with new entries. Always do a dynamic lookup with `pass find` rather than assuming what's there.
- Provide credentials to tools/config at runtime via environment variables or inline `pass` usage instead of committing them.
- Never hardcode credentials or store them in plain text files.
## Guidelines
- When filling out forms or making purchases, pull personal info from this file and credentials from `pass` rather than asking the user to provide them.
- For web tasks, prefer using the Chrome DevTools MCP to automate interactions directly.
- For email tasks, prefer using `gws gmail` over navigating to Gmail in the browser.
- If a task requires a credential not found in `pass`, ask the user rather than guessing.
- This repo's org files (gtd.org, calendar.org, habits.org, projects.org) contain task and scheduling data. The org-agenda-api skill/service can also be used to query agenda data programmatically.
## Project links (local symlink index)
- Paths in this section are relative to this file's directory (`dotfiles/agents/`).
- Keep a local symlink index under `./project-links/` for projects that are frequently referenced.
- Treat these links as machine-local discovery state maintained by agents (do not commit machine-specific targets).
- Reuse existing symlinks first. If a link is missing or stale, search for the repo, then update the link with:
  `ln -sfn "<absolute-path-to-repo>" "./project-links/<link-name>"`
- If a project cannot be found quickly, do a targeted search (starting from likely roots) and only then widen the search.
## Project constellation guides
- Keep per-constellation context in `./project-guides/` and keep this file minimal.
- When a request involves one of these projects:
- Open the guide first.
- If a mentioned repo/package name matches a guide's related-project list, open that guide even if the user did not name the constellation explicitly.
- Ensure required links exist under `./project-links/`.
- If links are missing, run a targeted search from likely roots, then create/update the symlink.
- Guide index:
- `./project-guides/mova-org-agenda-api.md`
- `./project-guides/taffybar.md`
- `./project-guides/railbird.md`
- `./project-guides/org-emacs-packages.md`


@@ -0,0 +1,70 @@
#!/usr/bin/env bash
set -euo pipefail
if [[ -z "${TMUX:-}" ]]; then
exit 0
fi
input=$(cat)
{ read -r cwd; IFS= read -r prompt; } < <(python3 - <<'PY'
import json, os, sys
try:
data = json.load(sys.stdin)
except Exception:
data = {}
cwd = data.get("cwd") or os.getcwd()
prompt = (data.get("prompt") or "").strip()
print(cwd)
print(prompt)
PY
)
if [[ -z "${cwd}" ]]; then
cwd="$PWD"
fi
project_root=$(git -C "$cwd" rev-parse --show-toplevel 2>/dev/null || true)
if [[ -n "$project_root" ]]; then
project=$(basename "$project_root")
else
project=$(basename "$cwd")
fi
prompt_first_line=$(printf '%s' "$prompt" | head -n 1 | tr '\n' ' ' | sed -e 's/[[:space:]]\+/ /g' -e 's/^ *//; s/ *$//')
lower=$(printf '%s' "$prompt_first_line" | tr '[:upper:]' '[:lower:]')
case "$lower" in
""|"ok"|"okay"|"thanks"|"thx"|"cool"|"yep"|"yes"|"no"|"sure"|"done"|"k")
exit 0
;;
esac
task="$prompt_first_line"
if [[ -z "$task" ]]; then
task="work"
fi
# Trim to a reasonable length for tmux status bars.
if [[ ${#task} -gt 60 ]]; then
task="${task:0:57}..."
fi
title="$project - $task"
state_dir="${HOME}/.agents/state"
state_file="$state_dir/tmux-title"
mkdir -p "$state_dir"
if [[ -f "$state_file" ]]; then
last_title=$(cat "$state_file" 2>/dev/null || true)
if [[ "$last_title" == "$title" ]]; then
exit 0
fi
fi
printf '%s' "$title" > "$state_file"
# Update session, window, and pane titles.
tmux rename-session "$title" \; rename-window "$title" \; select-pane -T "$title"
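The title derivation above can be sketched (and unit-tested) in Python; this mirrors the hook's logic rather than replacing it, and the function name is just for illustration:

```python
import re

# Prompts that should not retitle the session (mirrors the case statement).
TRIVIAL = {"", "ok", "okay", "thanks", "thx", "cool", "yep", "yes", "no",
           "sure", "done", "k"}

def make_title(project, prompt):
    """Derive the tmux title the way the hook does: take the first line of
    the prompt, squeeze whitespace, drop trivial prompts, and cap the task
    at 60 characters with a trailing ellipsis."""
    stripped = prompt.strip()
    first = stripped.splitlines()[0] if stripped else ""
    first = re.sub(r"\s+", " ", first).strip()
    if first.lower() in TRIVIAL:
        return None  # the hook exits 0 without renaming anything
    task = first or "work"
    if len(task) > 60:
        task = task[:57] + "..."
    return f"{project} - {task}"
```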


@@ -0,0 +1,29 @@
# Mova / org-agenda-api constellation
## Scope
- Use this guide for requests involving the mova constellation, including `org-agenda-api`.
- Primary anchor is the mova root repo; start there and branch out.
## Related packages/projects (trigger list)
- If any of these names are mentioned, open this guide for context.
- `mova-dev`: coordination repo for the mova ecosystem and cross-repo workflows.
- `mova`: React Native app (iOS/Android/Web).
- `org-agenda-api`: Emacs Lisp HTTP API and deployment container.
- `org-window-habit`: habit-tracking logic used by org workflows.
- `org-wild-notifier`: org notification logic and scheduling behavior.
- `dotfiles` (within mova-dev context): infra/config and deployment glue for org-agenda-api.
## Symlink targets
- `./project-links/mova-dev` -> mova constellation root.
## Discovery hints
- Check likely roots first, especially `~/Projects`.
- Common local path is `~/Projects/mova-dev`, but do not assume it exists.
- If the symlink is missing or stale, search by directory name first, then by repo names.
## Read-first docs
- `./project-links/mova-dev/README.md`
- `./project-links/mova-dev/org-agenda-api/README.md` (if present)
## Notes
- Prefer treating mova root docs as canonical project context.


@@ -0,0 +1,25 @@
# Org / Emacs package constellation
## Scope
- Use this guide for org-related package repos, including `org-window-habit`.
- This is especially relevant when repos are managed through local Emacs package trees.
## Related packages/projects (trigger list)
- If any of these names are mentioned, open this guide for context.
- `org-window-habit`: org habit-tracking package/repo.
- `org-wild-notifier`: org notification package/repo.
- `org-agenda-api`: Emacs Lisp HTTP API project that loads org package deps.
- `elpaca`: Emacs package manager tree where local checkouts may live.
- `elpa`: traditional Emacs package install tree (fallback search area).
## Symlink targets
- `./project-links/org-window-habit` -> org-window-habit repo/root.
## Discovery hints
- Start with Emacs roots, especially `~/.emacs.d`.
- Prefer checking package manager trees (including `elpaca`) before broader searches.
- Common pattern is nested repos under `~/.emacs.d` package directories.
## Read-first docs
- `./project-links/org-window-habit/README.md`
- `./project-links/org-window-habit/README.org` (if present)


@@ -0,0 +1,28 @@
# Railbird constellation
## Scope
- Use this guide for requests involving railbird backend/main repo and railbird mobile app work.
## Related packages/projects (trigger list)
- If any of these names are mentioned, open this guide for context.
- `railbird`: primary backend/main railbird repository.
- `railbird-mobile`: primary mobile app repository.
- `railbird2`: alternate/new-generation backend repo.
- `railbird-mobile2`: alternate/new-generation mobile repo.
- `railbird-docs`: documentation repository.
- `railbird-landing-page`: marketing/landing site repository.
- `railbird-alert-tuning`: alert/tuning and operational experimentation repo.
- `railbird-agents-architecture`: architecture notes/prototypes for agent workflows.
## Symlink targets
- `./project-links/railbird` -> primary railbird repo.
- `./project-links/railbird-mobile` -> railbird mobile app repo.
## Discovery hints
- Start from `~/Projects`.
- Common backend location is `~/Projects/railbird`.
- Mobile repo often also lives under `~/Projects`, but name/path may vary by machine.
## Read-first docs
- `./project-links/railbird/README.md`
- `./project-links/railbird-mobile/README.md` (if present)


@@ -0,0 +1,30 @@
# Taffybar constellation
## Scope
- Use this guide for requests involving taffybar itself or local taffybar configuration.
## Related packages/projects (trigger list)
- If any of these names are mentioned, open this guide for context.
- `taffybar`: top-level desktop bar library/app.
- `imalison-taffybar`: personal taffybar configuration package/repo.
- `gtk-sni-tray`: StatusNotifier tray integration for taffybar.
- `gtk-strut`: X11/WM strut handling used by taffybar ecosystem.
- `status-notifier-item`: StatusNotifier protocol/types library.
- `dbus-menu`: DBus menu protocol support used by tray integrations.
- `dbus-hslogger`: DBus logging helper used in ecosystem packages.
## Symlink targets
- `./project-links/taffybar-main` -> main taffybar repo.
- `./project-links/taffybar-config` -> local taffybar config root.
## Discovery hints
- Start with `~/.config/taffybar`.
- Common layout is:
- config root at `~/.config/taffybar`
- main repo at `~/.config/taffybar/taffybar`
- Other taffybar-related repos may exist elsewhere; find them from docs in the main repo.
## Read-first docs
- `./project-links/taffybar-main/README.md`
- `./project-links/taffybar-config/README.md` (if present)
- `./project-links/taffybar-config/AGENTS.md` (if present)


@@ -0,0 +1,2 @@
*
!.gitignore


@@ -0,0 +1 @@
79bd4e36950d6270


@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf of
any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,279 @@
---
name: "imagegen"
description: "Generate or edit raster images when the task benefits from AI-created bitmap visuals such as photos, illustrations, textures, sprites, mockups, or transparent-background cutouts. Use when Codex should create a brand-new image, transform an existing image, or derive visual variants from references, and the output should be a bitmap asset rather than repo-native code or vector. Do not use when the task is better handled by editing existing SVG/vector/code-native assets, extending an established icon or logo system, or building the visual directly in HTML/CSS/canvas."
---
# Image Generation Skill
Generates or edits images for the current project (for example website assets, game assets, UI mockups, product mockups, wireframes, logo design, photorealistic images, or infographics).
## Top-level modes and rules
This skill has exactly two top-level modes:
- **Default built-in tool mode (preferred):** built-in `image_gen` tool for normal image generation and editing. Does not require `OPENAI_API_KEY`.
- **Fallback CLI mode (explicit-only):** `scripts/image_gen.py` CLI. Use only when the user explicitly asks for the CLI path. Requires `OPENAI_API_KEY`.
Within the explicit CLI fallback only, the CLI exposes three subcommands:
- `generate`
- `edit`
- `generate-batch`
Rules:
- Use the built-in `image_gen` tool by default for all normal image generation and editing requests.
- Never switch to CLI fallback automatically.
- If the built-in tool fails or is unavailable, tell the user the CLI fallback exists and that it requires `OPENAI_API_KEY`. Proceed only if the user explicitly asks for that fallback.
- If the user explicitly asks for CLI mode, use the bundled `scripts/image_gen.py` workflow. Do not create one-off SDK runners.
- Never modify `scripts/image_gen.py`. If something is missing, ask the user before doing anything else.
Built-in save-path policy:
- In built-in tool mode, Codex saves generated images under `$CODEX_HOME/*` by default.
- Do not describe or rely on OS temp as the default built-in destination.
- Do not describe or rely on a destination-path argument (if any) on the built-in `image_gen` tool. If a specific location is needed, generate first and then move or copy the selected output from `$CODEX_HOME/generated_images/...`.
- Save-path precedence in built-in mode:
1. If the user names a destination, move or copy the selected output there.
2. If the image is meant for the current project, move or copy the final selected image into the workspace before finishing.
3. If the image is only for preview or brainstorming, render it inline; the underlying file can remain at the default `$CODEX_HOME/*` path.
- Never leave a project-referenced asset only at the default `$CODEX_HOME/*` path.
- Do not overwrite an existing asset unless the user explicitly asked for replacement; otherwise create a sibling versioned filename such as `hero-v2.png` or `item-icon-edited.png`.
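The precedence above can be sketched in shell. This is a minimal illustration only: `demo_home` stands in for `$CODEX_HOME`, and `hero.png` and `assets/` are hypothetical names, not paths the tool guarantees.

```shell
# Sketch of "generate first, then move/copy into the workspace".
# demo_home stands in for $CODEX_HOME; hero.png stands in for a generated file.
demo_home="$(mktemp -d)"
mkdir -p "$demo_home/generated_images" assets
touch "$demo_home/generated_images/hero.png"
# Copy under a versioned sibling name so an existing assets/hero.png is never clobbered.
cp "$demo_home/generated_images/hero.png" assets/hero-v2.png
echo "saved: assets/hero-v2.png"
```

The same pattern applies to a user-named destination: copy the selected output there rather than relying on any destination argument on the built-in tool.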
Shared prompt guidance for both modes lives in `references/prompting.md` and `references/sample-prompts.md`.
Fallback-only docs/resources for CLI mode:
- `references/cli.md`
- `references/image-api.md`
- `references/codex-network.md`
- `scripts/image_gen.py`
## When to use
- Generate a new image (concept art, product shot, cover, website hero)
- Generate a new image using one or more reference images for style, composition, or mood
- Edit an existing image (inpainting, lighting or weather transformations, background replacement, object removal, compositing, transparent background)
- Produce many assets or variants for one task
## When not to use
- Extending or matching an existing SVG/vector icon set, logo system, or illustration library inside the repo
- Creating simple shapes, diagrams, wireframes, or icons that are better produced directly in SVG, HTML/CSS, or canvas
- Making a small project-local asset edit when the source file already exists in an editable native format
- Any task where the user clearly wants deterministic code-native output instead of a generated bitmap
## Decision tree
Think about two separate questions:
1. **Intent:** is this a new image or an edit of an existing image?
2. **Execution strategy:** is this one asset or many assets/variants?
Intent:
- If the user wants to modify an existing image while preserving parts of it, treat the request as **edit**.
- If the user provides images only as references for style, composition, mood, or subject guidance, treat the request as **generate**.
- If the user provides no images, treat the request as **generate**.
Built-in edit semantics:
- Built-in edit mode is for images already visible in the conversation context, such as attached images or images generated earlier in the thread.
- If the user wants to edit a local image file with the built-in tool, first load it with the built-in `view_image` tool so the image is visible in the conversation context, then proceed with the built-in edit flow.
- Do not promise arbitrary filesystem-path editing through the built-in tool.
- If a local file still needs direct file-path control, masks, or other explicit CLI-only parameters, use the explicit CLI fallback only when the user asks for it.
- For edits, preserve invariants aggressively and save non-destructively by default.
Execution strategy:
- In the built-in default path, produce many assets or variants by issuing one `image_gen` call per requested asset or variant.
- In the explicit CLI fallback path, use the CLI `generate-batch` subcommand only when the user explicitly chose CLI mode and needs many prompts/assets.
Assume the user wants a new image unless they clearly ask to change an existing one.
## Workflow
1. Decide the top-level mode: built-in by default, fallback CLI only if explicitly requested.
2. Decide the intent: `generate` or `edit`.
3. Decide whether the output is preview-only or meant to be consumed by the current project.
4. Decide the execution strategy: single asset vs repeated built-in calls vs CLI `generate-batch`.
5. Collect inputs up front: prompt(s), exact text (verbatim), constraints/avoid list, and any input images.
6. For every input image, label its role explicitly:
- reference image
- edit target
- supporting insert/style/compositing input
7. If the edit target is only on the local filesystem and you are staying on the built-in path, inspect it with `view_image` first so the image is available in conversation context.
8. If the user asked for a photo, illustration, sprite, product image, banner, or other explicitly raster-style asset, use `image_gen` rather than substituting SVG/HTML/CSS placeholders. If the request is for an icon, logo, or UI graphic that should match existing repo-native SVG/vector/code assets, prefer editing those directly instead.
9. Augment the prompt based on specificity:
- If the user's prompt is already specific and detailed, normalize it into a clear spec without adding creative requirements.
- If the user's prompt is generic, add tasteful augmentation only when it materially improves output quality.
10. Use the built-in `image_gen` tool by default.
11. If the user explicitly chooses the CLI fallback, then and only then use the fallback-only docs for quality, `input_fidelity`, masks, output format, output paths, and network setup.
12. Inspect outputs and validate: subject, style, composition, text accuracy, and invariants/avoid items.
13. Iterate with a single targeted change, then re-check.
14. For preview-only work, render the image inline; the underlying file may remain at the default `$CODEX_HOME/generated_images/...` path.
15. For project-bound work, move or copy the selected artifact into the workspace and update any consuming code or references. Never leave a project-referenced asset only at the default `$CODEX_HOME/generated_images/...` path.
16. For batches, persist only the selected finals in the workspace unless the user explicitly asked to keep discarded variants.
17. Always report the final saved path for any workspace-bound asset, plus the final prompt and whether the built-in tool or fallback CLI mode was used.
## Prompt augmentation
Reformat user prompts into a structured, production-oriented spec. Make the user's goal clearer and more actionable, but do not blindly add detail.
Treat this as prompt-shaping guidance, not a closed schema. Use only the lines that help, and add a short extra labeled line when it materially improves clarity.
### Specificity policy
Use the user's prompt specificity to decide how much augmentation is appropriate:
- If the prompt is already specific and detailed, preserve that specificity and only normalize/structure it.
- If the prompt is generic, you may add tasteful augmentation when it will materially improve the result.
Allowed augmentations:
- composition or framing hints
- polish level or intended-use hints
- practical layout guidance
- reasonable scene concreteness that supports the stated request
Not allowed augmentations:
- extra characters or objects that are not implied by the request
- brand names, slogans, palettes, or narrative beats that are not implied
- arbitrary side-specific placement unless the surrounding layout supports it
## Use-case taxonomy (exact slugs)
Classify each request into one of these buckets and keep the slug consistent across prompts and references.
Generate:
- photorealistic-natural — candid/editorial lifestyle scenes with real texture and natural lighting.
- product-mockup — product/packaging shots, catalog imagery, merch concepts.
- ui-mockup — app/web interface mockups and wireframes; specify the desired fidelity.
- infographic-diagram — diagrams/infographics with structured layout and text.
- logo-brand — logo/mark exploration, vector-friendly.
- illustration-story — comics, children's book art, narrative scenes.
- stylized-concept — style-driven concept art, 3D/stylized renders.
- historical-scene — period-accurate/world-knowledge scenes.
Edit:
- text-localization — translate/replace in-image text, preserve layout.
- identity-preserve — try-on, person-in-scene; lock face/body/pose.
- precise-object-edit — remove/replace a specific element (including interior swaps).
- lighting-weather — time-of-day/season/atmosphere changes only.
- background-extraction — transparent background / clean cutout.
- style-transfer — apply reference style while changing subject/scene.
- compositing — multi-image insert/merge with matched lighting/perspective.
- sketch-to-render — drawing/line art to photoreal render.
## Shared prompt schema
Use the following labeled spec as shared prompt scaffolding for both top-level modes:
```text
Use case: <taxonomy slug>
Asset type: <where the asset will be used>
Primary request: <user's main prompt>
Input images: <Image 1: role; Image 2: role> (optional)
Scene/backdrop: <environment>
Subject: <main subject>
Style/medium: <photo/illustration/3D/etc>
Composition/framing: <wide/close/top-down; placement>
Lighting/mood: <lighting + mood>
Color palette: <palette notes>
Materials/textures: <surface details>
Text (verbatim): "<exact text>"
Constraints: <must keep/must avoid>
Avoid: <negative constraints>
```
Notes:
- `Asset type` and `Input images` are prompt scaffolding, not dedicated CLI flags.
- `Scene/backdrop` refers to the visual setting. It is not the same as the fallback CLI `background` parameter, which controls output transparency behavior.
- Fallback-only execution notes such as `Quality:`, `Input fidelity:`, masks, output format, and output paths belong in the explicit CLI path only. Do not treat them as built-in `image_gen` tool arguments.
Augmentation rules:
- Keep it short.
- Add only the details needed to improve the prompt materially.
- For edits, explicitly list invariants (`change only X; keep Y unchanged`).
- If any critical detail is missing and blocks success, ask a question; otherwise proceed.
## Examples
### Generation example (hero image)
```text
Use case: product-mockup
Asset type: landing page hero
Primary request: a minimal hero image of a ceramic coffee mug
Style/medium: clean product photography
Composition/framing: wide composition with usable negative space for page copy if needed
Lighting/mood: soft studio lighting
Constraints: no logos, no text, no watermark
```
### Edit example (invariants)
```text
Use case: precise-object-edit
Asset type: product photo background replacement
Primary request: replace only the background with a warm sunset gradient
Constraints: change only the background; keep the product and its edges unchanged; no text; no watermark
```
## Prompting best practices
- Structure the prompt as scene/backdrop -> subject -> details -> constraints.
- Include intended use (ad, UI mock, infographic) to set the mode and polish level.
- Use camera/composition language for photorealism.
- Only use SVG/vector stand-ins when the user explicitly asked for vector output or a non-image placeholder.
- Quote exact text and specify typography + placement.
- For tricky words, spell them letter-by-letter and require verbatim rendering.
- For multi-image inputs, reference images by index and describe how they should be used.
- For edits, repeat invariants every iteration to reduce drift.
- Iterate with single-change follow-ups.
- If the prompt is generic, add only the extra detail that will materially help.
- If the prompt is already detailed, normalize it instead of expanding it.
- For explicit CLI fallback only, see `references/cli.md` and `references/image-api.md` for `quality`, `input_fidelity`, masks, output format, and output-path guidance.
More principles shared by both modes: `references/prompting.md`.
Copy/paste specs shared by both modes: `references/sample-prompts.md`.
## Guidance by asset type
Asset-type templates (website assets, game assets, wireframes, logo) are consolidated in `references/sample-prompts.md`.
## Fallback CLI mode only
### Temp and output conventions
These conventions apply only to the explicit CLI fallback. They do not describe built-in `image_gen` output behavior.
- Use `tmp/imagegen/` for intermediate files (for example JSONL batches); delete them when done.
- Write final artifacts under `output/imagegen/`.
- Use `--out` or `--out-dir` to control output paths; keep filenames stable and descriptive.
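The lifecycle of those directories can be sketched as follows; the directory names come from the conventions above, while the JSONL filename is illustrative:

```shell
mkdir -p tmp/imagegen output/imagegen   # scratch area and finals area
: > tmp/imagegen/jobs.jsonl             # intermediate batch input (illustrative name)
# ... run CLI jobs that read from tmp/imagegen and write finals into output/imagegen ...
rm -rf tmp/imagegen                     # delete intermediates when done
test -d output/imagegen && echo "finals kept, scratch removed"
```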
### Dependencies
Prefer `uv` for dependency management in this repo.
Required Python package:
```bash
uv pip install openai
```
Optional for downscaling only:
```bash
uv pip install pillow
```
Portability note:
- If you are using the installed skill outside this repo, install dependencies into that environment with its package manager.
- In uv-managed environments, `uv pip install ...` remains the preferred path.
### Environment
- `OPENAI_API_KEY` must be set for live API calls.
- Do not ask the user for `OPENAI_API_KEY` when using the built-in `image_gen` tool.
- Never ask the user to paste the full key in chat. Ask them to set it locally and confirm when ready.
If the key is missing, give the user these steps:
1. Create an API key in the OpenAI platform UI: https://platform.openai.com/api-keys
2. Set `OPENAI_API_KEY` as an environment variable in their system.
3. Offer to guide them through setting the environment variable for their OS/shell if needed.
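A hedged sketch for bash/zsh — the key value below is a placeholder, never a real key, and real keys should be set locally rather than pasted into chat:

```shell
export OPENAI_API_KEY="sk-placeholder-not-a-real-key"   # set the real value locally instead
# Confirm the variable is set without printing its value:
[ -n "${OPENAI_API_KEY:-}" ] && echo "OPENAI_API_KEY is set"
```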
If installation is not possible in this environment, tell the user which dependency is missing and how to install it into their active environment.
### Script-mode notes
- CLI commands + examples: `references/cli.md`
- API parameter quick reference: `references/image-api.md`
- Network approvals / sandbox settings for CLI mode: `references/codex-network.md`
## Reference map
- `references/prompting.md`: shared prompting principles for both modes.
- `references/sample-prompts.md`: shared copy/paste prompt recipes for both modes.
- `references/cli.md`: fallback-only CLI usage via `scripts/image_gen.py`.
- `references/image-api.md`: fallback-only API/CLI parameter reference.
- `references/codex-network.md`: fallback-only network/sandbox troubleshooting for CLI mode.
- `scripts/image_gen.py`: fallback-only CLI implementation. Do not load or use it unless the user explicitly chooses CLI mode.

interface:
display_name: "Image Gen"
short_description: "Generate or edit images for websites, games, and more"
icon_small: "./assets/imagegen-small.svg"
icon_large: "./assets/imagegen.png"
default_prompt: "Generate or edit the visual assets for this task with the built-in `image_gen` tool by default. First confirm that the task actually calls for a raster image; if the project already has SVG/vector/code-native assets and the user wants to extend or match those, do not use this skill. If the task includes reference images, treat them as references unless the user clearly wants an existing image modified. For multi-asset requests, loop built-in calls rather than treating batch as a separate top-level mode. Only use the fallback CLI if the user explicitly asks for it, and keep CLI-only controls such as `generate-batch`, `quality`, `input_fidelity`, masks, and output paths on that fallback path."

<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
<path fill="currentColor" d="M7.51 6.827a1 1 0 1 1 .278 1.982 1 1 0 0 1-.278-1.982Z"/>
<path fill="currentColor" fill-rule="evenodd" d="M8.31 4.47c.368-.016.699.008 1.016.124l.186.075c.423.194.786.5 1.047.888l.067.107c.148.253.235.533.3.848.073.354.126.797.193 1.343l.277 2.25.088.745c.024.224.041.425.049.605.013.322-.004.615-.085.896l-.04.12a2.53 2.53 0 0 1-.802 1.115l-.16.118c-.281.189-.596.292-.956.366a9.46 9.46 0 0 1-.6.1l-.743.094-2.25.277c-.547.067-.99.121-1.35.136a2.765 2.765 0 0 1-.896-.085l-.12-.039a2.533 2.533 0 0 1-1.115-.802l-.118-.161c-.189-.28-.292-.596-.366-.956a9.42 9.42 0 0 1-.1-.599l-.094-.744-.276-2.25a17.884 17.884 0 0 1-.137-1.35c-.015-.367.009-.698.124-1.015l.076-.185c.193-.423.5-.787.887-1.048l.107-.067c.253-.148.534-.234.849-.3.354-.073.796-.126 1.343-.193l2.25-.277.744-.088c.224-.024.425-.041.606-.049Zm-2.905 5.978a1.47 1.47 0 0 0-.875.074c-.127.052-.267.146-.475.344-.212.204-.462.484-.822.889l-.314.351c.018.115.036.219.055.313.061.295.127.458.206.575l.07.094c.167.211.39.372.645.465l.109.032c.119.027.273.038.499.029.308-.013.7-.06 1.264-.13l2.25-.275.727-.093.198-.03-2.05-1.64a16.848 16.848 0 0 0-.96-.738c-.18-.121-.31-.19-.421-.23l-.106-.03Zm2.95-4.915c-.154.006-.33.021-.536.043l-.729.086-2.25.276c-.564.07-.956.118-1.257.18a1.937 1.937 0 0 0-.478.15l-.097.057a1.47 1.47 0 0 0-.515.608l-.044.107c-.048.133-.073.307-.06.608.012.307.06.7.129 1.264l.22 1.8.178-.197c.145-.159.278-.298.403-.418.255-.243.507-.437.809-.56l.181-.067a2.526 2.526 0 0 1 1.328-.06l.118.029c.27.079.517.215.772.387.287.194.619.46 1.03.789l2.52 2.016c.146-.148.26-.326.332-.524l.031-.109c.027-.119.039-.273.03-.499a8.311 8.311 0 0 0-.044-.536l-.086-.728-.276-2.25c-.07-.564-.118-.956-.18-1.258a1.935 1.935 0 0 0-.15-.477l-.057-.098a1.468 1.468 0 0 0-.608-.515l-.107-.043c-.133-.049-.306-.074-.607-.061Z" clip-rule="evenodd"/>
<path fill="currentColor" d="M7.783 1.272c.36.014.803.07 1.35.136l2.25.277.743.095c.224.03.423.062.6.099.36.074.675.177.955.366l.161.118c.364.29.642.675.802 1.115l.04.12c.081.28.098.574.085.896a9.42 9.42 0 0 1-.05.605l-.087.745-.277 2.25c-.067.547-.12.989-.193 1.343a2.765 2.765 0 0 1-.3.848l-.067.107a2.534 2.534 0 0 1-.415.474l-.086.064a.532.532 0 0 1-.622-.858l.13-.13c.04-.046.077-.094.111-.145l.057-.098c.055-.109.104-.256.15-.477.062-.302.11-.694.18-1.258l.276-2.25.086-.728c.022-.207.037-.382.043-.536.01-.226-.002-.38-.029-.5l-.032-.108a1.469 1.469 0 0 0-.464-.646l-.094-.069c-.118-.08-.28-.145-.575-.206a8.285 8.285 0 0 0-.53-.088l-.728-.092-2.25-.276c-.565-.07-.956-.117-1.264-.13a1.94 1.94 0 0 0-.5.029l-.108.032a1.469 1.469 0 0 0-.647.465l-.068.094c-.054.08-.102.18-.146.33l-.04.1a.533.533 0 0 1-.98-.403l.055-.166c.059-.162.133-.314.23-.457l.117-.16c.29-.365.675-.643 1.115-.803l.12-.04c.28-.08.574-.097.896-.084Z"/>
</svg>

# CLI reference (`scripts/image_gen.py`)
This file is for the fallback CLI mode only. Read it only after the user explicitly asks to use `scripts/image_gen.py` instead of the built-in `image_gen` tool.
`generate-batch` is a CLI subcommand in this fallback path. It is not a top-level mode of the skill.
## What this CLI does
- `generate`: generate a new image from a prompt
- `edit`: edit one or more existing images
- `generate-batch`: run many generation jobs from a JSONL file
Real API calls require **network access** + `OPENAI_API_KEY`. `--dry-run` does not.
## Quick start (works from any repo)
Set a stable path to the skill CLI (default `CODEX_HOME` is `~/.codex`):
```
export CODEX_HOME="${CODEX_HOME:-$HOME/.codex}"
export IMAGE_GEN="$CODEX_HOME/skills/imagegen/scripts/image_gen.py"
```
Install dependencies into that environment with its package manager. In uv-managed environments, `uv pip install ...` remains the preferred path.
## Basic usage
Dry-run (no API call; no network required; does not require the `openai` package):
```bash
python "$IMAGE_GEN" generate \
--prompt "Test" \
--out output/imagegen/test.png \
--dry-run
```
Notes:
- One-off dry-runs print the API payload and the computed output path(s).
- Repo-local finals should live under `output/imagegen/`.
Generate (requires `OPENAI_API_KEY` + network):
```bash
python "$IMAGE_GEN" generate \
--prompt "A cozy alpine cabin at dawn" \
--size 1024x1024 \
--out output/imagegen/alpine-cabin.png
```
Edit:
```bash
python "$IMAGE_GEN" edit \
--image input.png \
--prompt "Replace only the background with a warm sunset" \
--out output/imagegen/sunset-edit.png
```
## Guardrails
- Use the bundled CLI directly (`python "$IMAGE_GEN" ...`) after activating the correct environment.
- Do **not** create one-off runners (for example `gen_images.py`) unless the user explicitly asks for a custom wrapper.
- **Never modify** `scripts/image_gen.py`. If something is missing, ask the user before doing anything else.
## Defaults
- Model: `gpt-image-1.5`
- Supported model family for this CLI: GPT Image models (`gpt-image-*`)
- Size: `1024x1024`
- Quality: `auto`
- Output format: `png`
- Default one-off output path: `output/imagegen/output.png`
- Background: unspecified unless `--background` is set
## Quality, input fidelity, and masks (CLI fallback only)
These are explicit CLI controls. They are not built-in `image_gen` tool arguments.
- `--quality` works for `generate`, `edit`, and `generate-batch`: `low|medium|high|auto`
- `--input-fidelity` is **edit-only** and validated as `low|high`
- `--mask` is **edit-only**
Example:
```bash
python "$IMAGE_GEN" edit \
--image input.png \
--prompt "Change only the background" \
--quality high \
--input-fidelity high \
--out output/imagegen/background-edit.png
```
Mask notes:
- For multi-image edits, pass repeated `--image` flags. Their order is meaningful, so describe each image by index and role in the prompt.
- The CLI accepts a single `--mask`.
- Use a PNG mask when possible; the script treats mask handling as best-effort and does not perform full preflight validation beyond file checks/warnings.
- In the edit prompt, repeat invariants (`change only the background; keep the subject unchanged`) to reduce drift.
## Output handling
- Use `tmp/imagegen/` for temporary JSONL inputs or scratch files.
- Use `output/imagegen/` for final outputs.
- Reruns fail if a target file already exists unless you pass `--force`.
- `--out-dir` changes one-off naming to `image_1.<ext>`, `image_2.<ext>`, and so on.
- Downscaled copies use the default suffix `-web` unless you override it.
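The `--out-dir` naming pattern can be illustrated without any API call; the files below are local stand-ins, not CLI output:

```shell
# Stand-in files showing the image_1.<ext>, image_2.<ext>, ... naming scheme.
mkdir -p output/imagegen/demo
for i in 1 2 3; do touch "output/imagegen/demo/image_${i}.png"; done
ls output/imagegen/demo
```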
## Common recipes
Generate with augmentation fields:
```bash
python "$IMAGE_GEN" generate \
--prompt "A minimal hero image of a ceramic coffee mug" \
--use-case "product-mockup" \
--style "clean product photography" \
--composition "wide product shot with usable negative space for page copy" \
--constraints "no logos, no text" \
--out output/imagegen/mug-hero.png
```
Generate + also write a downscaled copy for fast web loading:
```bash
python "$IMAGE_GEN" generate \
--prompt "A cozy alpine cabin at dawn" \
--size 1024x1024 \
--downscale-max-dim 1024 \
--out output/imagegen/alpine-cabin.png
```
Generate multiple prompts concurrently (async batch):
```bash
mkdir -p tmp/imagegen output/imagegen/batch
cat > tmp/imagegen/prompts.jsonl << 'EOF'
{"prompt":"Cavernous hangar interior with a compact shuttle parked near the center","use_case":"stylized-concept","composition":"wide-angle, low-angle","lighting":"volumetric light rays through drifting fog","constraints":"no logos or trademarks; no watermark","size":"1536x1024"}
{"prompt":"Gray wolf in profile in a snowy forest","use_case":"photorealistic-natural","composition":"eye-level","constraints":"no logos or trademarks; no watermark","size":"1024x1024"}
EOF
python "$IMAGE_GEN" generate-batch \
--input tmp/imagegen/prompts.jsonl \
--out-dir output/imagegen/batch \
--concurrency 5
rm -f tmp/imagegen/prompts.jsonl
```
Notes:
- `generate-batch` requires `--out-dir`.
- Use `--concurrency` to control parallelism (default `5`).
- Per-job overrides are supported in JSONL (for example `size`, `quality`, `background`, `output_format`, `output_compression`, `moderation`, `n`, `model`, `out`, and prompt-augmentation fields).
- `--n` generates multiple variants for a single prompt; `generate-batch` is for many different prompts.
- In batch mode, per-job `out` is treated as a filename under `--out-dir`.
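For example, a single JSONL job with per-job overrides; the prompt and values are illustrative, and `out` resolves under `--out-dir` in batch mode:

```shell
mkdir -p tmp/imagegen
cat > tmp/imagegen/override-demo.jsonl << 'EOF'
{"prompt":"Forest clearing at dusk","size":"1536x1024","quality":"high","out":"clearing.png"}
EOF
cat tmp/imagegen/override-demo.jsonl
```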
## CLI notes
- Supported sizes: `1024x1024`, `1536x1024`, `1024x1536`, or `auto`.
- Transparent backgrounds require `output_format` to be `png` or `webp`.
- `--prompt-file`, `--output-compression`, `--moderation`, `--max-attempts`, `--fail-fast`, `--force`, and `--no-augment` are supported.
- This CLI is intended for GPT Image models. Do not assume older non-GPT image-model behavior applies here.
## See also
- API parameter quick reference for fallback CLI mode: `references/image-api.md`
- Prompt examples shared across both top-level modes: `references/sample-prompts.md`
- Network/sandbox notes for fallback CLI mode: `references/codex-network.md`

# Codex network approvals / sandbox notes
This file is for the fallback CLI mode only. Read it only after the user explicitly asks to use `scripts/image_gen.py`.
This guidance is intentionally isolated from `SKILL.md` because it can vary by environment and may become stale. Prefer the defaults in your environment when in doubt.
## Why am I asked to approve image generation calls?
The fallback CLI uses the OpenAI Image API, so it needs outbound network access. In many Codex setups, network access is disabled by default and/or the approval policy requires confirmation before networked commands run.
## Important note about approvals vs network
- `--ask-for-approval never` suppresses approval prompts.
- It does **not** by itself enable network access.
- In `workspace-write`, network access still depends on your Codex configuration (for example `[sandbox_workspace_write] network_access = true`).
## How do I reduce repeated approval prompts?
If you trust the repo and want fewer prompts, use a configuration or profile that both:
- enables network for the sandbox mode you plan to use
- sets an approval policy that matches your risk tolerance
Example `~/.codex/config.toml` pattern:
```toml
approval_policy = "on-request"
sandbox_mode = "workspace-write"
[sandbox_workspace_write]
network_access = true
```
If you want quieter automation after network is enabled, you can choose a more permissive approval policy, but do that intentionally and with care.
## Safety note
Enabling network and reducing approvals lowers friction, but increases risk if you run untrusted code or work in an untrusted repository.

# Image API quick reference
This file is for the fallback CLI mode only. Use it only after the user explicitly asks to use `scripts/image_gen.py` instead of the built-in `image_gen` tool.
These parameters describe the Image API and bundled CLI fallback surface. Do not assume they are normal arguments on the built-in `image_gen` tool.
## Scope
- This fallback CLI is intended for GPT Image models (`gpt-image-1.5`, `gpt-image-1`, and `gpt-image-1-mini`).
- The built-in `image_gen` tool and the fallback CLI do not expose the same controls.
## Endpoints
- Generate: `POST /v1/images/generations` (`client.images.generate(...)`)
- Edit: `POST /v1/images/edits` (`client.images.edit(...)`)
## Core parameters for GPT Image models
- `prompt`: text prompt
- `model`: image model
- `n`: number of images (1-10)
- `size`: `1024x1024`, `1536x1024`, `1024x1536`, or `auto`
- `quality`: `low`, `medium`, `high`, or `auto`
- `background`: output transparency behavior (`transparent`, `opaque`, or `auto`) for generated output; this is not the same thing as the prompt's visual scene/backdrop
- `output_format`: `png` (default), `jpeg`, `webp`
- `output_compression`: 0-100 (jpeg/webp only)
- `moderation`: `auto` (default) or `low`
## Edit-specific parameters
- `image`: one or more input images. For GPT Image models, you can provide up to 16 images.
- `mask`: optional mask image
- `input_fidelity`: `low` (default) or `high`
Model-specific note for `input_fidelity`:
- `gpt-image-1` and `gpt-image-1-mini` preserve all input images, but the first image gets richer textures and finer details.
- `gpt-image-1.5` preserves the first 5 input images with higher fidelity.
## Output
- `data[]` list with `b64_json` per image
- The bundled `scripts/image_gen.py` CLI decodes `b64_json` and writes output files for you.
## Limits and notes
- Input images and masks must be under 50MB.
- Use the edits endpoint when the user requests changes to an existing image.
- Masking is prompt-guided; exact shapes are not guaranteed.
- Large sizes and high quality increase latency and cost.
- High `input_fidelity` can materially increase input token usage.
- If a request fails because a specific option is unsupported by the selected GPT Image model, retry manually without that option.
## Important boundary
- `quality`, `input_fidelity`, explicit masks, `background`, `output_format`, and related parameters are fallback-only execution controls.
- Do not assume they are built-in `image_gen` tool arguments.

# Prompting best practices
These prompting principles are shared by both top-level modes of the skill:
- built-in `image_gen` tool (default)
- explicit `scripts/image_gen.py` CLI fallback
This file is about prompt structure, specificity, and iteration. Fallback-only execution controls such as `quality`, `input_fidelity`, masks, output format, and output paths live in the fallback docs.
## Contents
- [Structure](#structure)
- [Specificity policy](#specificity-policy)
- [Allowed and disallowed augmentation](#allowed-and-disallowed-augmentation)
- [Composition and layout](#composition-and-layout)
- [Constraints and invariants](#constraints-and-invariants)
- [Text in images](#text-in-images)
- [Input images and references](#input-images-and-references)
- [Iterate deliberately](#iterate-deliberately)
- [Fallback-only execution controls](#fallback-only-execution-controls)
- [Use-case tips](#use-case-tips)
- [Where to find copy/paste recipes](#where-to-find-copypaste-recipes)
## Structure
- Use a consistent order: scene/backdrop -> subject -> key details -> constraints -> output intent.
- Include intended use (ad, UI mock, infographic) to set the level of polish.
- For complex requests, use short labeled lines instead of one long paragraph.
## Specificity policy
- If the user prompt is already specific and detailed, normalize it into a clean spec without adding creative requirements.
- If the prompt is generic, you may add tasteful detail when it materially improves the output.
- Treat examples in `sample-prompts.md` as fully-authored recipes, not as the default amount of augmentation to add to every request.
## Allowed and disallowed augmentation
Allowed augmentation for generic prompts:
- composition and framing cues
- intended-use or polish-level hints
- practical layout guidance
- reasonable scene concreteness that supports the request
Do not add:
- extra characters, props, or objects that are not implied
- brand palettes, slogans, or story beats that are not implied
- arbitrary side-specific placement unless the surrounding layout supports it
## Composition and layout
- Specify framing and viewpoint (close-up, wide, top-down) and placement only when it materially helps.
- Call out negative space if the asset clearly needs room for UI or copy.
- Avoid making left/right layout decisions unless the user or surrounding layout supports them.
## Constraints and invariants
- State what must not change (`keep background unchanged`).
- For edits, say `change only X; keep Y unchanged` and repeat invariants on every iteration to reduce drift.
## Text in images
- Put literal text in quotes or ALL CAPS and specify typography (font style, size, color, placement).
- Spell uncommon words letter-by-letter if accuracy matters.
- For in-image copy, require verbatim rendering and no extra characters.
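These rules can be made concrete with a tiny formatter (a hypothetical helper, shown only to illustrate the convention):

```python
# Format an in-image text requirement: quote the literal text, and
# optionally spell it letter-by-letter for accuracy-critical words.
def verbatim_text_line(text, spell_out=False):
    line = f'Text (verbatim): "{text}"'
    if spell_out:
        line += " (spelled " + "-".join(text.upper()) + ")"
    return line
```

For example, `verbatim_text_line("Tolva", spell_out=True)` produces `Text (verbatim): "Tolva" (spelled T-O-L-V-A)`.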
## Input images and references
- Do not assume that every provided image is an edit target.
- Label each image by index and role (`Image 1: edit target`, `Image 2: style reference`).
- If the user provides images for style, composition, or mood guidance and does not ask to modify them, treat the request as generation with references.
- If the user asks to preserve an existing image while changing specific parts, treat the request as an edit.
- For compositing, describe how the images interact (`place the subject from Image 2 into Image 1`).
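The index-and-role labeling above can be generated mechanically (an illustrative sketch; the helper name is hypothetical):

```python
# Label each input image by index and role before describing the edit,
# so the model never has to guess which image is the edit target.
def label_inputs(roles):
    return "Input images: " + "; ".join(
        f"Image {i}: {role}" for i, role in enumerate(roles, start=1)
    )


print(label_inputs(["edit target", "style reference"]))
```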
## Iterate deliberately
- Start with a clean base prompt, then make small single-change edits.
- Re-specify critical constraints when you iterate.
- Prefer one targeted follow-up at a time over rewriting the whole prompt.
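Re-specifying constraints on each iteration can be as simple as carrying a fixed invariants string through every follow-up (a sketch; the names and wording here are examples, not required phrasing):

```python
# Carry invariants through every follow-up edit so they are restated
# verbatim on each iteration, reducing drift across edits.
INVARIANTS = "keep background unchanged; preserve subject identity"


def follow_up(change, invariants=INVARIANTS):
    return f"change only {change}; {invariants}"


print(follow_up("the chair color"))
```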
## Fallback-only execution controls
- `quality`, `input_fidelity`, explicit masks, output format, and output paths are fallback-only execution controls.
- Do not assume they are built-in `image_gen` tool arguments.
- If the user explicitly chooses CLI fallback, see `references/cli.md` and `references/image-api.md` for those controls.
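For orientation only, a fallback invocation might look like the following. The flag names are assumptions inferred from the script's argument handling; confirm them against `references/cli.md` before use:
```
python scripts/image_gen.py \
  --prompt "premium product photo of a matte black shampoo bottle" \
  --quality high \
  --output-format png \
  --out output/imagegen/bottle.png
```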
## Use-case tips
Generate:
- photorealistic-natural: Prompt as if a real photo is captured in the moment; use photography language (lens, lighting, framing); call for real texture; avoid over-stylized polish unless requested.
- product-mockup: Describe the product/packaging and materials; ensure clean silhouette and label clarity; if in-image text is needed, require verbatim rendering and specify typography.
- ui-mockup: Describe the target fidelity first (shippable mockup or low-fi wireframe), then focus on layout, hierarchy, and practical UI elements; avoid concept-art language.
- infographic-diagram: Define the audience and layout flow; label parts explicitly; require verbatim text.
- logo-brand: Keep it simple and scalable; ask for a strong silhouette and balanced negative space; avoid decorative flourishes unless requested.
- illustration-story: Define panels or scene beats; keep each action concrete.
- stylized-concept: Specify style cues, material finish, and rendering approach (3D, painterly, clay) without inventing new story elements.
- historical-scene: State the location/date and required period accuracy; constrain clothing, props, and environment to match the era.
Edit:
- text-localization: Change only the text; preserve layout, typography, spacing, and hierarchy; no extra words or reflow unless needed.
- identity-preserve: Lock identity (face, body, pose, hair, expression); change only the specified elements; match lighting and shadows.
- precise-object-edit: Specify exactly what to remove/replace; preserve surrounding texture and lighting; keep everything else unchanged.
- lighting-weather: Change only environmental conditions (light, shadows, atmosphere, precipitation); keep geometry, framing, and subject identity.
- background-extraction: Request a clean cutout; crisp silhouette; no halos; preserve label text exactly; no restyling.
- style-transfer: Specify style cues to preserve (palette, texture, brushwork) and what must change; add `no extra elements` to prevent drift.
- compositing: Reference inputs by index; specify what moves where; match lighting, perspective, and scale; keep the base framing unchanged.
- sketch-to-render: Preserve layout, proportions, and perspective; choose materials and lighting that support the supplied sketch without adding new elements.
## Where to find copy/paste recipes
For copy/paste prompt specs (examples only), see `references/sample-prompts.md`. This file focuses on principles, specificity, and iteration patterns.

# Sample prompts (copy/paste)
These prompt recipes are shared across both top-level modes of the skill:
- built-in `image_gen` tool (default)
- explicit `scripts/image_gen.py` CLI fallback
Use these as starting points. They are intentionally complete prompt recipes, not the default amount of augmentation to add to every user request.
When adapting a user's prompt:
- keep user-provided requirements
- only add detail according to the specificity policy in `SKILL.md`
- do not treat every example below as permission to invent extra story elements
The labeled lines are prompt scaffolding, not a closed schema. `Asset type` and `Input images` appear only in the prompt text; the CLI does not expose them as dedicated flags.
Execution details such as explicit CLI flags, `quality`, `input_fidelity`, masks, output formats, and local output paths depend on mode. Use the built-in tool by default; only apply CLI-specific controls after the user explicitly opts into fallback mode.
For prompting principles (structure, specificity, invariants, iteration), see `references/prompting.md`.
## Generate
### photorealistic-natural
```
Use case: photorealistic-natural
Primary request: candid photo of an elderly sailor on a small fishing boat adjusting a net
Scene/backdrop: coastal water with soft haze
Subject: weathered skin with wrinkles and sun texture
Style/medium: photorealistic candid photo
Composition/framing: medium close-up, eye-level
Lighting/mood: soft coastal daylight, shallow depth of field, subtle film grain
Materials/textures: real skin texture, worn fabric, salt-worn wood
Constraints: natural color balance; no heavy retouching; no glamorization; no watermark
Avoid: studio polish; staged look
```
### product-mockup
```
Use case: product-mockup
Primary request: premium product photo of a matte black shampoo bottle with a minimal label
Scene/backdrop: clean studio gradient from light gray to white
Subject: single bottle centered with subtle reflection
Style/medium: premium product photography
Composition/framing: centered, slight three-quarter angle, generous padding
Lighting/mood: softbox lighting, clean highlights, controlled shadows
Materials/textures: matte plastic, crisp label printing
Constraints: no logos or trademarks; no watermark
```
### ui-mockup
```
Use case: ui-mockup
Primary request: mobile app home screen for a local farmers market with vendors and daily specials
Asset type: mobile app screen
Style/medium: realistic product UI, not concept art
Composition/framing: clean vertical mobile layout with clear hierarchy
Constraints: practical layout, clear typography, no logos or trademarks, no watermark
```
### infographic-diagram
```
Use case: infographic-diagram
Primary request: detailed infographic of an automatic coffee machine flow
Scene/backdrop: clean, light neutral background
Subject: bean hopper -> grinder -> brew group -> boiler -> water tank -> drip tray
Style/medium: clean vector-like infographic with clear callouts and arrows
Composition/framing: vertical poster layout, top-to-bottom flow
Text (verbatim): "Bean Hopper", "Grinder", "Brew Group", "Boiler", "Water Tank", "Drip Tray"
Constraints: clear labels, strong contrast, no logos or trademarks, no watermark
```
### logo-brand
```
Use case: logo-brand
Primary request: original logo for "Field & Flour", a local bakery
Style/medium: vector logo mark; flat colors; minimal
Composition/framing: single centered logo on a plain background with generous padding
Constraints: strong silhouette, balanced negative space; original design only; no gradients unless essential; no trademarks; no watermark
```
### illustration-story
```
Use case: illustration-story
Primary request: 4-panel comic about a pet left alone at home
Scene/backdrop: cozy living room across panels
Subject: pet reacting to the owner leaving, then relaxing, then returning to a composed pose
Style/medium: comic illustration with clear panels
Composition/framing: 4 equal-sized vertical panels, readable actions per panel
Constraints: no text; no logos or trademarks; no watermark
```
### stylized-concept
```
Use case: stylized-concept
Primary request: cavernous hangar interior with tall support beams and drifting fog
Scene/backdrop: industrial hangar interior, deep scale, light haze
Subject: compact shuttle parked near the center
Style/medium: cinematic concept art, industrial realism
Composition/framing: wide-angle, low-angle
Lighting/mood: volumetric light rays cutting through fog
Constraints: no logos or trademarks; no watermark
```
### historical-scene
```
Use case: historical-scene
Primary request: outdoor crowd scene in Bethel, New York on August 16, 1969
Scene/backdrop: open field with period-appropriate staging
Subject: crowd in period-accurate clothing, authentic environment
Style/medium: photorealistic photo
Composition/framing: wide shot, eye-level
Constraints: period-accurate details; no modern objects; no logos or trademarks; no watermark
```
## Asset type templates (taxonomy-aligned)
### Website assets template
```
Use case: <photorealistic-natural|stylized-concept|product-mockup|infographic-diagram|ui-mockup>
Asset type: <hero image / section illustration / blog header>
Primary request: <short description>
Scene/backdrop: <environment or abstract backdrop>
Subject: <main subject>
Style/medium: <photo/illustration/3D>
Composition/framing: <wide/centered; note usable negative space only if needed>
Lighting/mood: <soft/bright/neutral>
Color palette: <brand colors or neutral>
Constraints: <no text; no logos; no watermark; leave room for UI if needed>
```
### Website assets example: minimal hero background
```
Use case: stylized-concept
Asset type: landing page hero background
Primary request: minimal abstract background with a soft gradient and subtle texture
Style/medium: matte illustration / soft-rendered abstract background
Composition/framing: wide composition with usable negative space for page copy
Lighting/mood: gentle studio glow
Color palette: restrained neutral palette
Constraints: no text; no logos; no watermark
```
### Website assets example: feature section illustration
```
Use case: stylized-concept
Asset type: feature section illustration
Primary request: simple abstract shapes suggesting connection and flow
Scene/backdrop: subtle light-gray backdrop with faint texture
Style/medium: flat illustration; soft shadows; restrained contrast
Composition/framing: centered cluster; open margins for UI
Color palette: muted neutral palette
Constraints: no text; no logos; no watermark
```
### Website assets example: blog header image
```
Use case: photorealistic-natural
Asset type: blog header image
Primary request: overhead desk scene with notebook, pen, and coffee cup
Scene/backdrop: warm wooden tabletop
Style/medium: photorealistic photo
Composition/framing: wide crop with clean room for page copy
Lighting/mood: soft morning light
Constraints: no text; no logos; no watermark
```
### Game assets template
```
Use case: stylized-concept
Asset type: <game environment concept art / game character concept / game UI icon / tileable game texture>
Primary request: <biome/scene/character/icon/material>
Scene/backdrop: <location + set dressing> (if applicable)
Subject: <main focal element(s)>
Style/medium: <realistic/stylized>; <concept art / character render / UI icon / texture>
Composition/framing: <wide/establishing/top-down>; <camera angle>; <focal point placement>
Lighting/mood: <time of day>; <mood>; <volumetric/fog/etc>
Constraints: no logos or trademarks; no watermark
```
### Game assets example: environment concept art
```
Use case: stylized-concept
Asset type: game environment concept art
Primary request: cavernous hangar interior with tall support beams and drifting fog
Scene/backdrop: industrial hangar interior, deep scale, light haze
Subject: compact shuttle parked near the center
Style/medium: cinematic concept art, industrial realism
Composition/framing: wide-angle, low-angle
Lighting/mood: volumetric light rays cutting through fog
Constraints: no logos or trademarks; no watermark
```
### Game assets example: character concept
```
Use case: stylized-concept
Asset type: game character concept
Primary request: desert scout character with layered travel gear
Subject: long coat, satchel, practical travel clothing
Style/medium: character render; stylized realism
Composition/framing: neutral hero pose on a simple backdrop
Constraints: no logos or trademarks; no watermark
```
### Game assets example: UI icon
```
Use case: stylized-concept
Asset type: game UI icon
Primary request: round shield icon with a subtle rune pattern
Style/medium: painted game UI icon
Composition/framing: centered icon; generous padding; clear silhouette
Constraints: no text; no background scene elements; no logos or trademarks; no watermark
```
### Game assets example: tileable texture
```
Use case: stylized-concept
Asset type: tileable game texture
Primary request: worn sandstone blocks
Style/medium: seamless tileable texture; PBR-ish look
Scene/backdrop: neutral lighting reference only
Constraints: seamless edges; no obvious focal elements; no text; no logos or trademarks; no watermark
```
### Wireframe template
```
Use case: ui-mockup
Asset type: website wireframe
Primary request: <page or flow to sketch>
Style/medium: low-fi grayscale wireframe
Composition/framing: <landscape or portrait to match expected device>
Subject: <sections in order; grid/columns; key labels>
Constraints: no color; no logos; no real photos; no watermark
```
### Wireframe example: homepage (desktop)
```
Use case: ui-mockup
Asset type: website wireframe
Primary request: SaaS homepage layout with clear hierarchy
Style/medium: low-fi grayscale wireframe
Subject: top nav; hero with headline and CTA; three feature cards; testimonial strip; pricing preview; footer
Composition/framing: landscape desktop layout
Constraints: label major blocks; no color; no logos; no real photos; no watermark
```
### Wireframe example: pricing page
```
Use case: ui-mockup
Asset type: website wireframe
Primary request: pricing page layout with comparison table
Style/medium: low-fi grayscale wireframe
Subject: header; plan toggle; 3 pricing cards; comparison table; FAQ accordion; footer
Composition/framing: desktop or tablet layout
Constraints: label key areas; no color; no logos; no real photos; no watermark
```
### Wireframe example: mobile onboarding flow
```
Use case: ui-mockup
Asset type: mobile onboarding wireframe
Primary request: three-screen mobile onboarding flow
Style/medium: low-fi grayscale wireframe
Subject: screen 1 headline and CTA; screen 2 feature bullets; screen 3 form fields and CTA
Composition/framing: portrait mobile layout
Constraints: label screens and blocks; no color; no logos; no real photos; no watermark
```
### Logo template
```
Use case: logo-brand
Asset type: logo concept
Primary request: <brand idea or symbol concept>
Style/medium: vector logo mark; flat colors; minimal
Composition/framing: centered mark; clear silhouette; generous margin
Color palette: <1-2 colors; high contrast>
Text (verbatim): "<exact name>" (only if needed)
Constraints: no gradients; no mockups; no 3D; no watermark
```
### Logo example: abstract symbol mark
```
Use case: logo-brand
Asset type: logo concept
Primary request: geometric leaf symbol suggesting sustainability and growth
Style/medium: vector logo mark; flat colors; minimal
Composition/framing: centered mark; clear silhouette
Color palette: deep green and off-white
Constraints: no text unless requested; no gradients; no mockups; no 3D; no watermark
```
### Logo example: monogram mark
```
Use case: logo-brand
Asset type: logo concept
Primary request: interlocking monogram of the letters "AV"
Style/medium: vector logo mark; flat colors; minimal
Composition/framing: centered mark; balanced spacing
Color palette: black on white
Constraints: no gradients; no mockups; no 3D; no watermark
```
### Logo example: wordmark
```
Use case: logo-brand
Asset type: logo concept
Primary request: clean wordmark for a modern studio
Style/medium: vector wordmark; flat colors; minimal
Text (verbatim): "Studio North"
Composition/framing: centered text; even letter spacing
Constraints: no gradients; no mockups; no 3D; no watermark
```
## Edit
### text-localization
```
Use case: text-localization
Input images: Image 1: original infographic
Primary request: replace "Bean Hopper", "Grinder", "Brew Group", "Boiler", "Water Tank", and "Drip Tray" with "Tolva", "Molino", "Grupo de infusión", "Caldera", "Depósito de agua", and "Bandeja de goteo"
Constraints: change only the text; preserve layout, typography, spacing, and hierarchy; no extra words; do not alter logos or imagery
```
### identity-preserve
```
Use case: identity-preserve
Input images: Image 1: person photo; Image 2..N: clothing references
Primary request: replace only the clothing with the provided garments
Constraints: preserve face, body shape, pose, hair, expression, and identity; match lighting and shadows; keep the background unchanged; no accessories or text
```
### precise-object-edit
```
Use case: precise-object-edit
Input images: Image 1: room photo
Primary request: replace only the white chairs with wooden chairs
Constraints: preserve camera angle, room lighting, floor shadows, and surrounding objects; keep all other aspects unchanged
```
### lighting-weather
```
Use case: lighting-weather
Input images: Image 1: original photo
Primary request: make it look like a winter evening with gentle snowfall
Constraints: preserve subject identity, geometry, camera angle, and composition; change only lighting, atmosphere, and weather
```
### background-extraction
```
Use case: background-extraction
Input images: Image 1: product photo
Primary request: isolate the product on a clean transparent background
Constraints: crisp silhouette; no halos or fringing; preserve label text exactly; no restyling
```
### style-transfer
```
Use case: style-transfer
Input images: Image 1: style reference
Primary request: apply Image 1's visual style to a man riding a motorcycle on a plain white backdrop
Constraints: preserve palette, texture, and brushwork; no extra elements
```
### compositing
```
Use case: compositing
Input images: Image 1: base scene; Image 2: subject to insert
Primary request: place the subject from Image 2 next to the person in Image 1
Constraints: match lighting, perspective, and scale; keep the base framing unchanged; no extra elements
```
### sketch-to-render
```
Use case: sketch-to-render
Input images: Image 1: drawing
Primary request: turn the drawing into a photorealistic image
Constraints: preserve layout, proportions, and perspective; choose realistic materials and lighting; do not add new elements or text
```

#!/usr/bin/env python3
"""Fallback CLI for explicit image generation or editing with GPT Image models.

Used only when the user explicitly opts into CLI fallback mode.
Defaults to gpt-image-1.5 and a structured prompt augmentation workflow.
"""
from __future__ import annotations

import argparse
import asyncio
import base64
import json
import os
import re
import sys
import time
from io import BytesIO
from pathlib import Path
from typing import Any, Dict, Iterable, List, Optional, Tuple

DEFAULT_MODEL = "gpt-image-1.5"
DEFAULT_SIZE = "1024x1024"
DEFAULT_QUALITY = "auto"
DEFAULT_OUTPUT_FORMAT = "png"
DEFAULT_CONCURRENCY = 5
DEFAULT_DOWNSCALE_SUFFIX = "-web"
DEFAULT_OUTPUT_PATH = "output/imagegen/output.png"
GPT_IMAGE_MODEL_PREFIX = "gpt-image-"

ALLOWED_SIZES = {"1024x1024", "1536x1024", "1024x1536", "auto"}
ALLOWED_QUALITIES = {"low", "medium", "high", "auto"}
ALLOWED_BACKGROUNDS = {"transparent", "opaque", "auto", None}
ALLOWED_INPUT_FIDELITIES = {"low", "high", None}

MAX_IMAGE_BYTES = 50 * 1024 * 1024
MAX_BATCH_JOBS = 500

def _die(message: str, code: int = 1) -> None:
    print(f"Error: {message}", file=sys.stderr)
    raise SystemExit(code)


def _warn(message: str) -> None:
    print(f"Warning: {message}", file=sys.stderr)


def _dependency_hint(package: str, *, upgrade: bool = False) -> str:
    command = f"uv pip install {'-U ' if upgrade else ''}{package}"
    return (
        "Activate the repo-selected environment first, then install it with "
        f"`{command}`. If this repo uses a local virtualenv, start with "
        "`source .venv/bin/activate`; otherwise use this repo's configured shared fallback "
        "environment. If your project declares dependencies, prefer that project's normal "
        "`uv sync` flow."
    )


def _ensure_api_key(dry_run: bool) -> None:
    if os.getenv("OPENAI_API_KEY"):
        print("OPENAI_API_KEY is set.", file=sys.stderr)
        return
    if dry_run:
        _warn("OPENAI_API_KEY is not set; dry-run only.")
        return
    _die("OPENAI_API_KEY is not set. Export it before running.")


def _read_prompt(prompt: Optional[str], prompt_file: Optional[str]) -> str:
    if prompt and prompt_file:
        _die("Use --prompt or --prompt-file, not both.")
    if prompt_file:
        path = Path(prompt_file)
        if not path.exists():
            _die(f"Prompt file not found: {path}")
        return path.read_text(encoding="utf-8").strip()
    if prompt:
        return prompt.strip()
    _die("Missing prompt. Use --prompt or --prompt-file.")
    return ""  # unreachable

def _check_image_paths(paths: Iterable[str]) -> List[Path]:
    resolved: List[Path] = []
    for raw in paths:
        path = Path(raw)
        if not path.exists():
            _die(f"Image file not found: {path}")
        if path.stat().st_size > MAX_IMAGE_BYTES:
            _warn(f"Image exceeds 50MB limit: {path}")
        resolved.append(path)
    return resolved


def _normalize_output_format(fmt: Optional[str]) -> str:
    if not fmt:
        return DEFAULT_OUTPUT_FORMAT
    fmt = fmt.lower()
    if fmt not in {"png", "jpeg", "jpg", "webp"}:
        _die("output-format must be png, jpeg, jpg, or webp.")
    return "jpeg" if fmt == "jpg" else fmt


def _validate_size(size: str) -> None:
    if size not in ALLOWED_SIZES:
        _die(
            "size must be one of 1024x1024, 1536x1024, 1024x1536, or auto for GPT image models."
        )


def _validate_quality(quality: str) -> None:
    if quality not in ALLOWED_QUALITIES:
        _die("quality must be one of low, medium, high, or auto.")


def _validate_background(background: Optional[str]) -> None:
    if background not in ALLOWED_BACKGROUNDS:
        _die("background must be one of transparent, opaque, or auto.")


def _validate_input_fidelity(input_fidelity: Optional[str]) -> None:
    if input_fidelity not in ALLOWED_INPUT_FIDELITIES:
        _die("input-fidelity must be one of low or high.")


def _validate_model(model: str) -> None:
    if not model.startswith(GPT_IMAGE_MODEL_PREFIX):
        _die(
            "model must be a GPT Image model (for example gpt-image-1.5, gpt-image-1, or gpt-image-1-mini)."
        )


def _validate_transparency(background: Optional[str], output_format: str) -> None:
    if background == "transparent" and output_format not in {"png", "webp"}:
        _die("transparent background requires output-format png or webp.")


def _validate_generate_payload(payload: Dict[str, Any]) -> None:
    _validate_model(str(payload.get("model", DEFAULT_MODEL)))
    n = int(payload.get("n", 1))
    if n < 1 or n > 10:
        _die("n must be between 1 and 10")
    size = str(payload.get("size", DEFAULT_SIZE))
    quality = str(payload.get("quality", DEFAULT_QUALITY))
    background = payload.get("background")
    _validate_size(size)
    _validate_quality(quality)
    _validate_background(background)
    oc = payload.get("output_compression")
    if oc is not None and not (0 <= int(oc) <= 100):
        _die("output_compression must be between 0 and 100")

def _build_output_paths(
    out: str,
    output_format: str,
    count: int,
    out_dir: Optional[str],
) -> List[Path]:
    ext = "." + output_format
    if out_dir:
        out_base = Path(out_dir)
        out_base.mkdir(parents=True, exist_ok=True)
        return [out_base / f"image_{i}{ext}" for i in range(1, count + 1)]
    out_path = Path(out)
    if out_path.exists() and out_path.is_dir():
        out_path.mkdir(parents=True, exist_ok=True)
        return [out_path / f"image_{i}{ext}" for i in range(1, count + 1)]
    if out_path.suffix == "":
        out_path = out_path.with_suffix(ext)
    elif output_format and out_path.suffix.lstrip(".").lower() != output_format:
        _warn(
            f"Output extension {out_path.suffix} does not match output-format {output_format}."
        )
    if count == 1:
        return [out_path]
    return [
        out_path.with_name(f"{out_path.stem}-{i}{out_path.suffix}")
        for i in range(1, count + 1)
    ]


def _augment_prompt(args: argparse.Namespace, prompt: str) -> str:
    fields = _fields_from_args(args)
    return _augment_prompt_fields(args.augment, prompt, fields)


def _augment_prompt_fields(augment: bool, prompt: str, fields: Dict[str, Optional[str]]) -> str:
    if not augment:
        return prompt
    sections: List[str] = []
    if fields.get("use_case"):
        sections.append(f"Use case: {fields['use_case']}")
    sections.append(f"Primary request: {prompt}")
    if fields.get("scene"):
        # Label matches the "Scene/backdrop" convention used in the sample recipes.
        sections.append(f"Scene/backdrop: {fields['scene']}")
    if fields.get("subject"):
        sections.append(f"Subject: {fields['subject']}")
    if fields.get("style"):
        sections.append(f"Style/medium: {fields['style']}")
    if fields.get("composition"):
        sections.append(f"Composition/framing: {fields['composition']}")
    if fields.get("lighting"):
        sections.append(f"Lighting/mood: {fields['lighting']}")
    if fields.get("palette"):
        sections.append(f"Color palette: {fields['palette']}")
    if fields.get("materials"):
        sections.append(f"Materials/textures: {fields['materials']}")
    if fields.get("text"):
        sections.append(f"Text (verbatim): \"{fields['text']}\"")
    if fields.get("constraints"):
        sections.append(f"Constraints: {fields['constraints']}")
    if fields.get("negative"):
        sections.append(f"Avoid: {fields['negative']}")
    return "\n".join(sections)


def _fields_from_args(args: argparse.Namespace) -> Dict[str, Optional[str]]:
    return {
        "use_case": getattr(args, "use_case", None),
        "scene": getattr(args, "scene", None),
        "subject": getattr(args, "subject", None),
        "style": getattr(args, "style", None),
        "composition": getattr(args, "composition", None),
        "lighting": getattr(args, "lighting", None),
        "palette": getattr(args, "palette", None),
        "materials": getattr(args, "materials", None),
        "text": getattr(args, "text", None),
        "constraints": getattr(args, "constraints", None),
        "negative": getattr(args, "negative", None),
    }

def _print_request(payload: dict) -> None:
    print(json.dumps(payload, indent=2, sort_keys=True))


def _decode_and_write(images: List[str], outputs: List[Path], force: bool) -> None:
    for idx, image_b64 in enumerate(images):
        if idx >= len(outputs):
            break
        out_path = outputs[idx]
        if out_path.exists() and not force:
            _die(f"Output already exists: {out_path} (use --force to overwrite)")
        out_path.parent.mkdir(parents=True, exist_ok=True)
        out_path.write_bytes(base64.b64decode(image_b64))
        print(f"Wrote {out_path}")


def _derive_downscale_path(path: Path, suffix: str) -> Path:
    if suffix and not suffix.startswith("-") and not suffix.startswith("_"):
        suffix = "-" + suffix
    return path.with_name(f"{path.stem}{suffix}{path.suffix}")


def _downscale_image_bytes(image_bytes: bytes, *, max_dim: int, output_format: str) -> bytes:
    try:
        from PIL import Image
    except Exception:
        _die(f"Downscaling requires Pillow. {_dependency_hint('pillow')}")
    if max_dim < 1:
        _die("--downscale-max-dim must be >= 1")
    with Image.open(BytesIO(image_bytes)) as img:
        img.load()
        w, h = img.size
        scale = min(1.0, float(max_dim) / float(max(w, h)))
        target = (max(1, int(round(w * scale))), max(1, int(round(h * scale))))
        resized = img if target == (w, h) else img.resize(target, Image.Resampling.LANCZOS)
        fmt = output_format.lower()
        if fmt == "jpg":
            fmt = "jpeg"
        if fmt == "jpeg":
            if resized.mode in ("RGBA", "LA") or ("transparency" in getattr(resized, "info", {})):
                bg = Image.new("RGB", resized.size, (255, 255, 255))
                bg.paste(resized.convert("RGBA"), mask=resized.convert("RGBA").split()[-1])
                resized = bg
            else:
                resized = resized.convert("RGB")
        out = BytesIO()
        resized.save(out, format=fmt.upper())
        return out.getvalue()

def _decode_write_and_downscale(
    images: List[str],
    outputs: List[Path],
    *,
    force: bool,
    downscale_max_dim: Optional[int],
    downscale_suffix: str,
    output_format: str,
) -> None:
    for idx, image_b64 in enumerate(images):
        if idx >= len(outputs):
            break
        out_path = outputs[idx]
        if out_path.exists() and not force:
            _die(f"Output already exists: {out_path} (use --force to overwrite)")
        out_path.parent.mkdir(parents=True, exist_ok=True)
        raw = base64.b64decode(image_b64)
        out_path.write_bytes(raw)
        print(f"Wrote {out_path}")
        if downscale_max_dim is None:
            continue
        derived = _derive_downscale_path(out_path, downscale_suffix)
        if derived.exists() and not force:
            _die(f"Output already exists: {derived} (use --force to overwrite)")
        derived.parent.mkdir(parents=True, exist_ok=True)
        resized = _downscale_image_bytes(raw, max_dim=downscale_max_dim, output_format=output_format)
        derived.write_bytes(resized)
        print(f"Wrote {derived}")


def _create_client():
    try:
        from openai import OpenAI
    except ImportError:
        _die(f"openai SDK not installed in the active environment. {_dependency_hint('openai')}")
    return OpenAI()


def _create_async_client():
    try:
        from openai import AsyncOpenAI
    except ImportError:
        try:
            import openai as _openai  # noqa: F401
        except ImportError:
            _die(
                f"openai SDK not installed in the active environment. {_dependency_hint('openai')}"
            )
        _die(
            "AsyncOpenAI not available in this openai SDK version. "
            f"{_dependency_hint('openai', upgrade=True)}"
        )
    return AsyncOpenAI()

def _slugify(value: str) -> str:
    value = value.strip().lower()
    value = re.sub(r"[^a-z0-9]+", "-", value)
    value = re.sub(r"-{2,}", "-", value).strip("-")
    return value[:60] if value else "job"


def _normalize_job(job: Any, idx: int) -> Dict[str, Any]:
    if isinstance(job, str):
        prompt = job.strip()
        if not prompt:
            _die(f"Empty prompt at job {idx}")
        return {"prompt": prompt}
    if isinstance(job, dict):
        if "prompt" not in job or not str(job["prompt"]).strip():
            _die(f"Missing prompt for job {idx}")
        return job
    _die(f"Invalid job at index {idx}: expected string or object.")
    return {}  # unreachable


def _read_jobs_jsonl(path: str) -> List[Dict[str, Any]]:
    p = Path(path)
    if not p.exists():
        _die(f"Input file not found: {p}")
    jobs: List[Dict[str, Any]] = []
    for line_no, raw in enumerate(p.read_text(encoding="utf-8").splitlines(), start=1):
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        try:
            item: Any
            if line.startswith("{"):
                item = json.loads(line)
            else:
                item = line
            jobs.append(_normalize_job(item, idx=line_no))
        except json.JSONDecodeError as exc:
            _die(f"Invalid JSON on line {line_no}: {exc}")
    if not jobs:
        _die("No jobs found in input file.")
    if len(jobs) > MAX_BATCH_JOBS:
        _die(f"Too many jobs ({len(jobs)}). Max is {MAX_BATCH_JOBS}.")
    return jobs


def _merge_non_null(dst: Dict[str, Any], src: Dict[str, Any]) -> Dict[str, Any]:
    merged = dict(dst)
    for k, v in src.items():
        if v is not None:
            merged[k] = v
    return merged

def _job_output_paths(
    *,
    out_dir: Path,
    output_format: str,
    idx: int,
    prompt: str,
    n: int,
    explicit_out: Optional[str],
) -> List[Path]:
    out_dir.mkdir(parents=True, exist_ok=True)
    ext = "." + output_format
    if explicit_out:
        base = Path(explicit_out)
        if base.suffix == "":
            base = base.with_suffix(ext)
        elif base.suffix.lstrip(".").lower() != output_format:
            _warn(
                f"Job {idx}: output extension {base.suffix} does not match output-format {output_format}."
            )
        base = out_dir / base.name
    else:
        slug = _slugify(prompt[:80])
        base = out_dir / f"{idx:03d}-{slug}{ext}"
    if n == 1:
        return [base]
    return [
        base.with_name(f"{base.stem}-{i}{base.suffix}")
        for i in range(1, n + 1)
    ]


def _extract_retry_after_seconds(exc: Exception) -> Optional[float]:
    # Best-effort: openai SDK errors vary by version. Prefer a conservative fallback.
    for attr in ("retry_after", "retry_after_seconds"):
        val = getattr(exc, attr, None)
        if isinstance(val, (int, float)) and val >= 0:
            return float(val)
    msg = str(exc)
    m = re.search(r"retry[- ]after[:= ]+([0-9]+(?:\.[0-9]+)?)", msg, re.IGNORECASE)
    if m:
        try:
            return float(m.group(1))
        except Exception:
            return None
    return None

def _is_rate_limit_error(exc: Exception) -> bool:
    name = exc.__class__.__name__.lower()
    if "ratelimit" in name or "rate_limit" in name:
        return True
    msg = str(exc).lower()
    return "429" in msg or "rate limit" in msg or "too many requests" in msg


def _is_transient_error(exc: Exception) -> bool:
    if _is_rate_limit_error(exc):
        return True
    name = exc.__class__.__name__.lower()
    if "timeout" in name or "timedout" in name or "tempor" in name:
        return True
    msg = str(exc).lower()
    return "timeout" in msg or "timed out" in msg or "connection reset" in msg


async def _generate_one_with_retries(
    client: Any,
    payload: Dict[str, Any],
    *,
    attempts: int,
    job_label: str,
) -> Any:
    last_exc: Optional[Exception] = None
    for attempt in range(1, attempts + 1):
        try:
            return await client.images.generate(**payload)
        except Exception as exc:
            last_exc = exc
            if not _is_transient_error(exc):
                raise
            if attempt == attempts:
                raise
            sleep_s = _extract_retry_after_seconds(exc)
            if sleep_s is None:
                sleep_s = min(60.0, 2.0 ** attempt)
            print(
                f"{job_label} attempt {attempt}/{attempts} failed ({exc.__class__.__name__}); retrying in {sleep_s:.1f}s",
                file=sys.stderr,
            )
            await asyncio.sleep(sleep_s)
    raise last_exc or RuntimeError("unknown error")
async def _run_generate_batch(args: argparse.Namespace) -> int:
jobs = _read_jobs_jsonl(args.input)
out_dir = Path(args.out_dir)
base_fields = _fields_from_args(args)
base_payload = {
"model": args.model,
"n": args.n,
"size": args.size,
"quality": args.quality,
"background": args.background,
"output_format": args.output_format,
"output_compression": args.output_compression,
"moderation": args.moderation,
}
if args.dry_run:
for i, job in enumerate(jobs, start=1):
prompt = str(job["prompt"]).strip()
fields = _merge_non_null(base_fields, job.get("fields", {}))
# Allow flat job keys as well (use_case, scene, etc.)
fields = _merge_non_null(fields, {k: job.get(k) for k in base_fields.keys()})
augmented = _augment_prompt_fields(args.augment, prompt, fields)
job_payload = dict(base_payload)
job_payload["prompt"] = augmented
job_payload = _merge_non_null(job_payload, {k: job.get(k) for k in base_payload.keys()})
job_payload = {k: v for k, v in job_payload.items() if v is not None}
_validate_generate_payload(job_payload)
effective_output_format = _normalize_output_format(job_payload.get("output_format"))
_validate_transparency(job_payload.get("background"), effective_output_format)
job_payload["output_format"] = effective_output_format
n = int(job_payload.get("n", 1))
outputs = _job_output_paths(
out_dir=out_dir,
output_format=effective_output_format,
idx=i,
prompt=prompt,
n=n,
explicit_out=job.get("out"),
)
downscaled = None
if args.downscale_max_dim is not None:
downscaled = [
str(_derive_downscale_path(p, args.downscale_suffix)) for p in outputs
]
_print_request(
{
"endpoint": "/v1/images/generations",
"job": i,
"outputs": [str(p) for p in outputs],
"outputs_downscaled": downscaled,
**job_payload,
}
)
return 0
client = _create_async_client()
sem = asyncio.Semaphore(args.concurrency)
any_failed = False
async def run_job(i: int, job: Dict[str, Any]) -> Tuple[int, Optional[str]]:
nonlocal any_failed
prompt = str(job["prompt"]).strip()
job_label = f"[job {i}/{len(jobs)}]"
fields = _merge_non_null(base_fields, job.get("fields", {}))
fields = _merge_non_null(fields, {k: job.get(k) for k in base_fields.keys()})
augmented = _augment_prompt_fields(args.augment, prompt, fields)
payload = dict(base_payload)
payload["prompt"] = augmented
payload = _merge_non_null(payload, {k: job.get(k) for k in base_payload.keys()})
payload = {k: v for k, v in payload.items() if v is not None}
n = int(payload.get("n", 1))
_validate_generate_payload(payload)
effective_output_format = _normalize_output_format(payload.get("output_format"))
_validate_transparency(payload.get("background"), effective_output_format)
payload["output_format"] = effective_output_format
outputs = _job_output_paths(
out_dir=out_dir,
output_format=effective_output_format,
idx=i,
prompt=prompt,
n=n,
explicit_out=job.get("out"),
)
try:
async with sem:
print(f"{job_label} starting", file=sys.stderr)
started = time.time()
result = await _generate_one_with_retries(
client,
payload,
attempts=args.max_attempts,
job_label=job_label,
)
elapsed = time.time() - started
print(f"{job_label} completed in {elapsed:.1f}s", file=sys.stderr)
images = [item.b64_json for item in result.data]
_decode_write_and_downscale(
images,
outputs,
force=args.force,
downscale_max_dim=args.downscale_max_dim,
downscale_suffix=args.downscale_suffix,
output_format=effective_output_format,
)
return i, None
except Exception as exc:
any_failed = True
print(f"{job_label} failed: {exc}", file=sys.stderr)
if args.fail_fast:
raise
return i, str(exc)
tasks = [asyncio.create_task(run_job(i, job)) for i, job in enumerate(jobs, start=1)]
try:
await asyncio.gather(*tasks)
except Exception:
for t in tasks:
if not t.done():
t.cancel()
raise
return 1 if any_failed else 0
def _generate_batch(args: argparse.Namespace) -> None:
exit_code = asyncio.run(_run_generate_batch(args))
if exit_code:
raise SystemExit(exit_code)
def _generate(args: argparse.Namespace) -> None:
prompt = _read_prompt(args.prompt, args.prompt_file)
prompt = _augment_prompt(args, prompt)
payload = {
"model": args.model,
"prompt": prompt,
"n": args.n,
"size": args.size,
"quality": args.quality,
"background": args.background,
"output_format": args.output_format,
"output_compression": args.output_compression,
"moderation": args.moderation,
}
payload = {k: v for k, v in payload.items() if v is not None}
output_format = _normalize_output_format(args.output_format)
_validate_transparency(args.background, output_format)
payload["output_format"] = output_format
output_paths = _build_output_paths(args.out, output_format, args.n, args.out_dir)
downscaled = None
if args.downscale_max_dim is not None:
downscaled = [str(_derive_downscale_path(p, args.downscale_suffix)) for p in output_paths]
if args.dry_run:
_print_request(
{
"endpoint": "/v1/images/generations",
"outputs": [str(p) for p in output_paths],
"outputs_downscaled": downscaled,
**payload,
}
)
return
print(
"Calling Image API (generation). This can take up to a couple of minutes.",
file=sys.stderr,
)
started = time.time()
client = _create_client()
result = client.images.generate(**payload)
elapsed = time.time() - started
print(f"Generation completed in {elapsed:.1f}s.", file=sys.stderr)
images = [item.b64_json for item in result.data]
_decode_write_and_downscale(
images,
output_paths,
force=args.force,
downscale_max_dim=args.downscale_max_dim,
downscale_suffix=args.downscale_suffix,
output_format=output_format,
)
def _edit(args: argparse.Namespace) -> None:
prompt = _read_prompt(args.prompt, args.prompt_file)
prompt = _augment_prompt(args, prompt)
image_paths = _check_image_paths(args.image)
mask_path = Path(args.mask) if args.mask else None
if mask_path:
if not mask_path.exists():
_die(f"Mask file not found: {mask_path}")
if mask_path.suffix.lower() != ".png":
_warn(f"Mask should be a PNG with an alpha channel: {mask_path}")
if mask_path.stat().st_size > MAX_IMAGE_BYTES:
_warn(f"Mask exceeds 50MB limit: {mask_path}")
payload = {
"model": args.model,
"prompt": prompt,
"n": args.n,
"size": args.size,
"quality": args.quality,
"background": args.background,
"output_format": args.output_format,
"output_compression": args.output_compression,
"input_fidelity": args.input_fidelity,
"moderation": args.moderation,
}
payload = {k: v for k, v in payload.items() if v is not None}
output_format = _normalize_output_format(args.output_format)
_validate_transparency(args.background, output_format)
payload["output_format"] = output_format
_validate_input_fidelity(args.input_fidelity)
output_paths = _build_output_paths(args.out, output_format, args.n, args.out_dir)
downscaled = None
if args.downscale_max_dim is not None:
downscaled = [str(_derive_downscale_path(p, args.downscale_suffix)) for p in output_paths]
if args.dry_run:
payload_preview = dict(payload)
payload_preview["image"] = [str(p) for p in image_paths]
if mask_path:
payload_preview["mask"] = str(mask_path)
_print_request(
{
"endpoint": "/v1/images/edits",
"outputs": [str(p) for p in output_paths],
"outputs_downscaled": downscaled,
**payload_preview,
}
)
return
print(
f"Calling Image API (edit) with {len(image_paths)} image(s).",
file=sys.stderr,
)
started = time.time()
client = _create_client()
with _open_files(image_paths) as image_files, _open_mask(mask_path) as mask_file:
request = dict(payload)
request["image"] = image_files if len(image_files) > 1 else image_files[0]
if mask_file is not None:
request["mask"] = mask_file
result = client.images.edit(**request)
elapsed = time.time() - started
print(f"Edit completed in {elapsed:.1f}s.", file=sys.stderr)
images = [item.b64_json for item in result.data]
_decode_write_and_downscale(
images,
output_paths,
force=args.force,
downscale_max_dim=args.downscale_max_dim,
downscale_suffix=args.downscale_suffix,
output_format=output_format,
)
def _open_files(paths: List[Path]):
return _FileBundle(paths)
def _open_mask(mask_path: Optional[Path]):
if mask_path is None:
return _NullContext()
return _SingleFile(mask_path)
class _NullContext:
def __enter__(self):
return None
def __exit__(self, exc_type, exc, tb):
return False
class _SingleFile:
def __init__(self, path: Path):
self._path = path
self._handle = None
def __enter__(self):
self._handle = self._path.open("rb")
return self._handle
def __exit__(self, exc_type, exc, tb):
if self._handle:
try:
self._handle.close()
except Exception:
pass
return False
class _FileBundle:
def __init__(self, paths: List[Path]):
self._paths = paths
self._handles: List[object] = []
def __enter__(self):
self._handles = [p.open("rb") for p in self._paths]
return self._handles
def __exit__(self, exc_type, exc, tb):
for handle in self._handles:
try:
handle.close()
except Exception:
pass
return False
def _add_shared_args(parser: argparse.ArgumentParser) -> None:
parser.add_argument("--model", default=DEFAULT_MODEL)
parser.add_argument("--prompt")
parser.add_argument("--prompt-file")
parser.add_argument("--n", type=int, default=1)
parser.add_argument("--size", default=DEFAULT_SIZE)
parser.add_argument("--quality", default=DEFAULT_QUALITY)
parser.add_argument("--background")
parser.add_argument("--output-format")
parser.add_argument("--output-compression", type=int)
parser.add_argument("--moderation")
parser.add_argument("--out", default=DEFAULT_OUTPUT_PATH)
parser.add_argument("--out-dir")
parser.add_argument("--force", action="store_true")
parser.add_argument("--dry-run", action="store_true")
parser.add_argument("--augment", dest="augment", action="store_true")
parser.add_argument("--no-augment", dest="augment", action="store_false")
parser.set_defaults(augment=True)
# Prompt augmentation hints
parser.add_argument("--use-case")
parser.add_argument("--scene")
parser.add_argument("--subject")
parser.add_argument("--style")
parser.add_argument("--composition")
parser.add_argument("--lighting")
parser.add_argument("--palette")
parser.add_argument("--materials")
parser.add_argument("--text")
parser.add_argument("--constraints")
parser.add_argument("--negative")
# Post-processing (optional): generate an additional downscaled copy for fast web loading.
parser.add_argument("--downscale-max-dim", type=int)
parser.add_argument("--downscale-suffix", default=DEFAULT_DOWNSCALE_SUFFIX)
def main() -> int:
parser = argparse.ArgumentParser(
description="Fallback CLI for explicit image generation or editing via GPT Image models"
)
subparsers = parser.add_subparsers(dest="command", required=True)
gen_parser = subparsers.add_parser("generate", help="Create a new image")
_add_shared_args(gen_parser)
gen_parser.set_defaults(func=_generate)
batch_parser = subparsers.add_parser(
"generate-batch",
help="Generate multiple prompts concurrently (JSONL input)",
)
_add_shared_args(batch_parser)
batch_parser.add_argument("--input", required=True, help="Path to JSONL file (one job per line)")
batch_parser.add_argument("--concurrency", type=int, default=DEFAULT_CONCURRENCY)
batch_parser.add_argument("--max-attempts", type=int, default=3)
batch_parser.add_argument("--fail-fast", action="store_true")
batch_parser.set_defaults(func=_generate_batch)
edit_parser = subparsers.add_parser("edit", help="Edit an existing image")
_add_shared_args(edit_parser)
edit_parser.add_argument("--image", action="append", required=True)
edit_parser.add_argument("--mask")
edit_parser.add_argument("--input-fidelity")
edit_parser.set_defaults(func=_edit)
args = parser.parse_args()
if args.n < 1 or args.n > 10:
_die("--n must be between 1 and 10")
if getattr(args, "concurrency", 1) < 1 or getattr(args, "concurrency", 1) > 25:
_die("--concurrency must be between 1 and 25")
if getattr(args, "max_attempts", 3) < 1 or getattr(args, "max_attempts", 3) > 10:
_die("--max-attempts must be between 1 and 10")
if args.output_compression is not None and not (0 <= args.output_compression <= 100):
_die("--output-compression must be between 0 and 100")
if args.command == "generate-batch" and not args.out_dir:
_die("generate-batch requires --out-dir")
if getattr(args, "downscale_max_dim", None) is not None and args.downscale_max_dim < 1:
_die("--downscale-max-dim must be >= 1")
_validate_size(args.size)
_validate_quality(args.quality)
_validate_background(args.background)
_validate_model(args.model)
_ensure_api_key(args.dry_run)
args.func(args)
return 0
if __name__ == "__main__":
raise SystemExit(main())
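For reference, the `generate-batch` input format accepted by `_read_jobs_jsonl` can be exercised with a small sketch. The file name and prompts below are hypothetical; the parser skips blank lines and `#` comments, and accepts either a bare prompt string or a JSON object per line:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical jobs file: one job per line, matching what _read_jobs_jsonl expects.
SAMPLE_JOBS = """\
# comment lines and blanks are skipped
a watercolor fox in a snowy forest

{"prompt": "isometric city block", "n": 2, "out": "city.png"}
"""

def read_jobs(path: Path):
    """Minimal re-implementation of the parsing loop, for illustration only."""
    jobs = []
    for raw in path.read_text(encoding="utf-8").splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        # A line is either a JSON object or a bare prompt string.
        jobs.append(json.loads(line) if line.startswith("{") else line)
    return jobs

with tempfile.TemporaryDirectory() as d:
    p = Path(d) / "jobs.jsonl"
    p.write_text(SAMPLE_JOBS, encoding="utf-8")
    jobs = read_jobs(p)

print(len(jobs))     # 2
print(jobs[1]["n"])  # 2
```

Per-job keys like `n` or `out` override the shared CLI flags for that job, mirroring the `_merge_non_null` behavior in the batch runner.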


@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf of
any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,69 @@
---
name: "openai-docs"
description: "Use when the user asks how to build with OpenAI products or APIs and needs up-to-date official documentation with citations, help choosing the latest model for a use case, or explicit GPT-5.4 upgrade and prompt-upgrade guidance; prioritize OpenAI docs MCP tools, use bundled references only as helper context, and restrict any fallback browsing to official OpenAI domains."
---
# OpenAI Docs
Provide authoritative, current guidance from OpenAI developer docs using the developers.openai.com MCP server. Always prioritize the developer docs MCP tools over web.run for OpenAI-related questions. This skill may also load targeted files from `references/` for model-selection and GPT-5.4-specific requests, but current OpenAI docs remain authoritative. Only if the MCP server is installed and returns no meaningful results should you fall back to web search.
## Quick start
- Use `mcp__openaiDeveloperDocs__search_openai_docs` to find the most relevant doc pages.
- Use `mcp__openaiDeveloperDocs__fetch_openai_doc` to pull exact sections and quote/paraphrase accurately.
- Use `mcp__openaiDeveloperDocs__list_openai_docs` only when you need to browse or discover pages without a clear query.
- Load only the relevant file from `references/` when the question is about model selection or a GPT-5.4 upgrade.
## OpenAI product snapshots
1. Apps SDK: Build ChatGPT apps by providing a web component UI and an MCP server that exposes your app's tools to ChatGPT.
2. Responses API: A unified endpoint designed for stateful, multimodal, tool-using interactions in agentic workflows.
3. Chat Completions API: Generate a model response from a list of messages comprising a conversation.
4. Codex: OpenAI's coding agent for software development that can write, understand, review, and debug code.
5. gpt-oss: Open-weight OpenAI reasoning models (gpt-oss-120b and gpt-oss-20b) released under the Apache 2.0 license.
6. Realtime API: Build low-latency, multimodal experiences including natural speech-to-speech conversations.
7. Agents SDK: A toolkit for building agentic apps where a model can use tools and context, hand off to other agents, stream partial results, and keep a full trace.
## If MCP server is missing
If MCP tools fail or no OpenAI docs resources are available:
1. Run the install command yourself: `codex mcp add openaiDeveloperDocs --url https://developers.openai.com/mcp`
2. If it fails due to permissions/sandboxing, immediately retry the same command with escalated permissions and include a 1-sentence justification for approval. Do not ask the user to run it yet.
3. Only if the escalated attempt fails, ask the user to run the install command.
4. Ask the user to restart Codex.
5. Re-run the doc search/fetch after restart.
## Workflow
1. Clarify the product scope and whether the request is general docs lookup, model selection, a GPT-5.4 upgrade, or a GPT-5.4 prompt upgrade.
2. If it is a model-selection request, load `references/latest-model.md`.
3. If it is an explicit GPT-5.4 upgrade request, load `references/upgrading-to-gpt-5p4.md`.
4. If the upgrade may require prompt changes, or the workflow is research-heavy, tool-heavy, coding-oriented, multi-agent, or long-running, also load `references/gpt-5p4-prompting-guide.md`.
5. Search docs with a precise query.
6. Fetch the best page and the exact section needed (use `anchor` when possible).
7. For GPT-5.4 upgrade reviews, always make the per-usage-site output explicit: target model, starting reasoning recommendation, `phase` assessment when relevant, prompt blocks, and compatibility status.
8. Answer with concise guidance and cite the doc source, using the reference files only as helper context.
## Reference map
Read only what you need:
- `references/latest-model.md` -> model-selection and "best/latest/current model" questions; verify every recommendation against current OpenAI docs before answering.
- `references/upgrading-to-gpt-5p4.md` -> only for explicit GPT-5.4 upgrade and upgrade-planning requests; verify the checklist and compatibility guidance against current OpenAI docs before answering.
- `references/gpt-5p4-prompting-guide.md` -> prompt rewrites and prompt-behavior upgrades for GPT-5.4; verify prompting guidance against current OpenAI docs before answering.
## Quality rules
- Treat OpenAI docs as the source of truth; avoid speculation.
- Keep quotes short and within policy limits; prefer paraphrase with citations.
- If multiple pages differ, call out the difference and cite both.
- Reference files are convenience guides only; for volatile guidance such as recommended models, upgrade instructions, or prompting advice, current OpenAI docs always win.
- If docs do not cover the user's need, say so and offer next steps.
## Tooling notes
- Always use MCP doc tools before any web search for OpenAI-related questions.
- If the MCP server is installed but returns no meaningful results, then use web search as a fallback.
- When falling back to web search, restrict to official OpenAI domains (developers.openai.com, platform.openai.com) and cite sources.


@@ -0,0 +1,14 @@
interface:
display_name: "OpenAI Docs"
short_description: "Reference official OpenAI docs, including upgrade guidance"
icon_small: "./assets/openai-small.svg"
icon_large: "./assets/openai.png"
default_prompt: "Look up official OpenAI docs, load relevant GPT-5.4 upgrade references when applicable, and answer with concise, cited guidance."
dependencies:
tools:
- type: "mcp"
value: "openaiDeveloperDocs"
description: "OpenAI Developer Docs MCP server"
transport: "streamable_http"
url: "https://developers.openai.com/mcp"


@@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" fill="currentColor" viewBox="0 0 14 14">
<path d="M10.931 3.34a.112.112 0 0 0-.069-.104l-.038-.007c-1.537.05-2.45.318-3.714 1.002v6.683c.48-.248.936-.44 1.414-.58.695-.203 1.417-.292 2.303-.305l.038-.008a.113.113 0 0 0 .066-.104V3.341ZM2.363 9.919c0 .064.051.11.105.111l.33.008c1.162.046 2.042.243 2.975.662-.403-.585-1.008-1.075-1.654-1.292a.991.991 0 0 1-.674-.941v-5.14a6.36 6.36 0 0 0-.59-.076l-.37-.02a.115.115 0 0 0-.122.111v6.577Zm9.455-.001a.998.998 0 0 1-.877.992l-.101.007c-.832.012-1.47.095-2.066.27-.599.174-1.176.448-1.883.863a.444.444 0 0 1-.449 0c-1.299-.763-2.229-1.07-3.689-1.125l-.299-.008a.997.997 0 0 1-.977-.998V3.342c0-.573.478-1.017 1.038-.999l.417.023c.188.015.35.037.513.062v-.754c0-.708.749-1.244 1.429-.903.984.492 1.836 1.449 2.15 2.505 1.216-.617 2.222-.884 3.771-.934l.105.003a.998.998 0 0 1 .918.996v6.576ZM4.332 8.466c0 .049.03.087.07.1l.24.091a4.319 4.319 0 0 1 1.581 1.176V3.721c-.164-.803-.799-1.617-1.584-2.07l-.162-.088c-.025-.012-.054-.013-.088.009a.12.12 0 0 0-.057.102v6.792Z"/>
</svg>

Binary file not shown.


@@ -0,0 +1,433 @@
# GPT-5.4 prompting upgrade guide
Use this guide when prompts written for older models need to be adapted for GPT-5.4 during an upgrade. Start lean: keep the model-string change narrow, preserve the original task intent, and add only the smallest prompt changes needed to recover behavior.
## Default upgrade posture
- Start with `model string only` whenever the old prompt is already short, explicit, and task-bounded.
- Move to `model string + light prompt rewrite` only when regressions appear in completeness, persistence, citation quality, verification, or verbosity.
- Prefer one or two targeted prompt additions over a broad rewrite.
- Treat reasoning effort as a last-mile knob. Start lower, then increase only after prompt-level fixes and evals.
- Before increasing reasoning effort, first add a completeness contract, a verification loop, and tool persistence rules, depending on the use case.
- If the workflow clearly depends on implementation changes rather than prompt changes, treat it as blocked for prompt-only upgrade guidance.
- Do not classify a case as blocked just because the workflow uses tools; block only if the upgrade requires changing tool definitions, wiring, or other implementation details.
## Behavioral differences to account for
Current GPT-5.4 upgrade guidance suggests these strengths:
- stronger personality and tone adherence, with less drift over long answers
- better long-horizon and agentic workflow stamina
- stronger spreadsheet, finance, and formatting tasks
- more efficient tool selection and fewer unnecessary calls by default
- stronger structured generation and classification reliability
The main places where prompt guidance still helps are:
- retrieval-heavy workflows that need persistent tool use and explicit completeness
- research and citation discipline
- verification before irreversible or high-impact actions
- terminal and tool workflow hygiene
- defaults and implied follow-through
- verbosity control for compact, information-dense answers
Start with the smallest set of instructions that preserves correctness. Add the prompt blocks below only for workflows that actually need them.
## Prompt rewrite patterns
| Older prompt pattern | GPT-5.4 adjustment | Why | Example addition |
| --- | --- | --- | --- |
| Long, repetitive instructions that compensate for weaker instruction following | Remove duplicate scaffolding and keep only the constraints that materially change behavior | GPT-5.4 usually needs less repeated steering | Replace repeated reminders with one concise rule plus a verification block |
| Fast assistant prompt with no verbosity control | Keep the prompt as-is first; add a verbosity clamp only if outputs become too long | Many GPT-4o or GPT-4.1 upgrades work with just a model-string swap | Add `output_verbosity_spec` only after a verbosity regression |
| Tool-heavy agent prompt that assumes the model will keep searching until complete | Add persistence and verification rules | GPT-5.4 may use fewer tool calls by default for efficiency | Add `tool_persistence_rules` and `verification_loop` |
| Tool-heavy workflow where later actions depend on earlier lookup or retrieval | Add prerequisite and missing-context rules before action steps | GPT-5.4 benefits from explicit dependency-aware routing when context is still thin | Add `dependency_checks` and `missing_context_gating` |
| Retrieval workflow with several independent lookups | Add selective parallelism guidance | GPT-5.4 is strong at parallel tool use, but should not parallelize dependent steps | Add `parallel_tool_calling` |
| Batch workflow prompt that often misses items | Add an explicit completeness contract | Item accounting benefits from direct instruction | Add `completeness_contract` |
| Research prompt that needs grounding and citation discipline | Add research, citation, and empty-result recovery blocks | Multi-pass retrieval is stronger when the model is told how to react to weak or empty search results | Add `research_mode`, `citation_rules`, and `empty_result_handling`; add `tool_persistence_rules` when retrieval tools are already in use |
| Coding or terminal prompt with shell misuse or early stop failures | Keep the same tool surface and add terminal hygiene and verification instructions | Tool-using coding workflows are not blocked just because tools exist; they usually need better prompt steering, not host rewiring | Add `terminal_tool_hygiene` and `verification_loop`, optionally `tool_persistence_rules` |
| Multi-agent or support-triage workflow with escalation or completeness requirements | Add one lightweight control block for persistence, completeness, or verification | GPT-5.4 can be more efficient by default, so multi-step support flows benefit from an explicit completion or verification contract | Add at least one of `tool_persistence_rules`, `completeness_contract`, or `verification_loop` |
## Prompt blocks
Use these selectively. Do not add all of them by default.
### `output_verbosity_spec`
Use when:
- the upgraded model gets too wordy
- the host needs compact, information-dense answers
- the workflow benefits from a short overview plus a checklist
```text
<output_verbosity_spec>
- Default: 3-6 sentences or up to 6 bullets.
- If the user asked for a doc or report, use headings with short bullets.
- For multi-step tasks:
- Start with 1 short overview paragraph.
- Then provide a checklist with statuses: [done], [todo], or [blocked].
- Avoid repeating the user's request.
- Prefer compact, information-dense writing.
</output_verbosity_spec>
```
### `default_follow_through_policy`
Use when:
- the host expects the model to proceed on reversible, low-risk steps
- the upgraded model becomes too conservative or asks for confirmation too often
```text
<default_follow_through_policy>
- If the user's intent is clear and the next step is reversible and low-risk, proceed without asking permission.
- Only ask permission if the next step is:
(a) irreversible,
(b) has external side effects, or
(c) requires missing sensitive information or a choice that materially changes outcomes.
- If proceeding, state what you did and what remains optional.
</default_follow_through_policy>
```
### `instruction_priority`
Use when:
- users often change task shape, format, or tone mid-conversation
- the host needs an explicit override policy instead of relying on defaults
```text
<instruction_priority>
- User instructions override default style, tone, formatting, and initiative preferences.
- Safety, honesty, privacy, and permission constraints do not yield.
- If a newer user instruction conflicts with an earlier one, follow the newer instruction.
- Preserve earlier instructions that do not conflict.
</instruction_priority>
```
### `tool_persistence_rules`
Use when:
- the workflow needs multiple retrieval or verification steps
- the model starts stopping too early because it is trying to save tool calls
```text
<tool_persistence_rules>
- Use tools whenever they materially improve correctness, completeness, or grounding.
- Do not stop early just to save tool calls.
- Keep calling tools until:
(1) the task is complete, and
(2) verification passes.
- If a tool returns empty or partial results, retry with a different strategy.
</tool_persistence_rules>
```
### `dig_deeper_nudge`
Use when:
- the model is too literal or stops at the first plausible answer
- the task is safety- or accuracy-sensitive and needs a small initiative nudge before raising reasoning effort
```text
<dig_deeper_nudge>
- Do not stop at the first plausible answer.
- Look for second-order issues, edge cases, and missing constraints.
- If the task is safety- or accuracy-critical, perform at least one verification step.
</dig_deeper_nudge>
```
### `dependency_checks`
Use when:
- later actions depend on prerequisite lookup, memory retrieval, or discovery steps
- the model may be tempted to skip prerequisite work because the intended end state seems obvious
```text
<dependency_checks>
- Before taking an action, check whether prerequisite discovery, lookup, or memory retrieval is required.
- Do not skip prerequisite steps just because the intended final action seems obvious.
- If a later step depends on the output of an earlier one, resolve that dependency first.
</dependency_checks>
```
### `parallel_tool_calling`
Use when:
- the workflow has multiple independent retrieval steps
- wall-clock time matters but some steps still need sequencing
```text
<parallel_tool_calling>
- When multiple retrieval or lookup steps are independent, prefer parallel tool calls to reduce wall-clock time.
- Do not parallelize steps with prerequisite dependencies or where one result determines the next action.
- After parallel retrieval, pause to synthesize before making more calls.
- Prefer selective parallelism: parallelize independent evidence gathering, not speculative or redundant tool use.
</parallel_tool_calling>
```
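The selective-parallelism rule can be sketched with stand-in lookups; the source names are invented for illustration:

```python
import asyncio


async def lookup(source: str) -> str:
    # Stand-in for one independent retrieval tool call.
    await asyncio.sleep(0)
    return f"results from {source}"


async def gather_independent(sources: list[str]) -> list[str]:
    # Independent lookups fan out in parallel; a dependent follow-up
    # would instead be awaited sequentially after reading these results.
    return list(await asyncio.gather(*(lookup(s) for s in sources)))


results = asyncio.run(gather_independent(["crm", "logs", "docs"]))
```

After the parallel batch returns, the synthesis pause in the block above is where dependent next steps get decided.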
### `completeness_contract`
Use when:
- the task involves batches, lists, enumerations, or multiple deliverables
- missing items are a common failure mode
```text
<completeness_contract>
- Deliver all requested items.
- Maintain an itemized checklist of deliverables.
- For lists or batches:
- state the expected count,
- enumerate items 1..N,
- confirm that none are missing before finalizing.
- If any item is blocked by missing data, mark it [blocked] and state exactly what is missing.
</completeness_contract>
```
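Hosts that want to enforce the contract mechanically can check the item accounting in code. A sketch, assuming each item carries a number and a status; the shape is illustrative, not a required schema:

```python
def check_completeness(expected_count: int, items: list[dict]) -> list[str]:
    # Return the list of contract violations found in a deliverable list.
    problems = []
    if len(items) != expected_count:
        problems.append(f"expected {expected_count} items, got {len(items)}")
    seen = {item["n"] for item in items}
    missing = sorted(set(range(1, expected_count + 1)) - seen)
    if missing:
        problems.append(f"missing item numbers: {missing}")
    for item in items:
        # A blocked item must state exactly what is missing.
        if item["status"] == "blocked" and not item.get("reason"):
            problems.append(f"item {item['n']} is blocked without a stated reason")
    return problems
```

An empty return value corresponds to the contract's "confirm that none are missing" step passing.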
### `empty_result_handling`
Use when:
- the workflow frequently performs search, CRM, logs, or retrieval steps
- no-results failures are often false negatives
```text
<empty_result_handling>
If a lookup returns empty or suspiciously small results:
- Do not immediately conclude that no results exist.
- Try at least 2 fallback strategies, such as a broader query, alternate filters, or another source.
- Only then report that no results were found, along with what you tried.
</empty_result_handling>
```
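The fallback ladder maps naturally onto a loop over strategies. A sketch; the strategy callables are hypothetical stand-ins for real queries:

```python
def search_with_fallbacks(query, strategies, min_results=1):
    # Try each strategy in order; track attempts so a final "no results"
    # report can say exactly what was tried.
    attempts = []
    for strategy in strategies:
        results = strategy(query)
        attempts.append((strategy.__name__, len(results)))
        if len(results) >= min_results:
            return results, attempts
    return [], attempts


def narrow(query):  # pretend the first pass finds nothing
    return []


def broader(query):  # fallback: widen the query
    return [f"hit for {query}"]


results, attempts = search_with_fallbacks("error 500", [narrow, broader])
```

Reporting `attempts` alongside an empty result is what turns "no results found" into "no results found, and here is what I tried".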
### `verification_loop`
Use when:
- the workflow has downstream impact
- accuracy, formatting, or completeness regressions matter
```text
<verification_loop>
Before finalizing:
- Check correctness: does the output satisfy every requirement?
- Check grounding: are factual claims backed by retrieved sources or tool output?
- Check formatting: does the output match the requested schema or style?
- Check safety and irreversibility: if the next step has external side effects, ask permission first.
</verification_loop>
```
### `missing_context_gating`
Use when:
- required context is sometimes missing early in the workflow
- the model should prefer retrieval over guessing
```text
<missing_context_gating>
- If required context is missing, do not guess.
- Prefer the appropriate lookup tool when the context is retrievable; ask a minimal clarifying question only when it is not.
- If you must proceed, label assumptions explicitly and choose a reversible action.
</missing_context_gating>
```
### `action_safety`
Use when:
- the agent will actively take actions through tools
- the host benefits from a short pre-flight and post-flight execution frame
```text
<action_safety>
- Pre-flight: summarize the intended action and parameters in 1-2 lines.
- Execute via tool.
- Post-flight: confirm the outcome and any validation that was performed.
</action_safety>
```
### `citation_rules`
Use when:
- the workflow produces cited answers
- fabricated citations or wrong citation formats are costly
```text
<citation_rules>
- Only cite sources that were actually retrieved in this session.
- Never fabricate citations, URLs, IDs, or quote spans.
- If you cannot find a source for a claim, say so and either:
- soften the claim, or
- explain how to verify it with tools.
- Use exactly the citation format required by the host application.
</citation_rules>
```
### `research_mode`
Use when:
- the workflow is research-heavy
- the host uses web search or retrieval tools
```text
<research_mode>
- Do research in 3 passes:
1) Plan: list 3-6 sub-questions to answer.
2) Retrieve: search each sub-question and follow 1-2 second-order leads.
3) Synthesize: resolve contradictions and write the final answer with citations.
- Stop only when more searching is unlikely to change the conclusion.
</research_mode>
```
If your host environment uses a specific research tool or requires a submit step, combine this with the host's finalization contract.
### `structured_output_contract`
Use when:
- the host depends on strict JSON, SQL, or other structured output
```text
<structured_output_contract>
- Output only the requested format.
- Do not add prose or markdown fences unless they were requested.
- Validate that parentheses and brackets are balanced.
- Do not invent tables or fields.
- If required schema information is missing, ask for it or return an explicit error object.
</structured_output_contract>
```
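The bracket-balance rule can be enforced with a cheap post-check. A sketch; it ignores brackets inside string literals, so treat it as a sanity check rather than a parser:

```python
def brackets_balanced(text: str) -> bool:
    # Verify that (), [], and {} pair up and nest correctly.
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in text:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack
```

For strict JSON, a real parse (for example `json.loads`) is the stronger check; this catches the common truncation failures first.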
### `bbox_extraction_spec`
Use when:
- the workflow extracts OCR boxes, document regions, or other coordinates
- layout drift or missed dense regions are common failure modes
```text
<bbox_extraction_spec>
- Use the specified coordinate format exactly, such as [x1,y1,x2,y2] normalized to 0..1.
- For each box, include page, label, text snippet, and confidence.
- Add a vertical-drift sanity check so boxes stay aligned with the correct line of text.
- If the layout is dense, process page by page and do a second pass for missed items.
</bbox_extraction_spec>
```
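The coordinate rules above are straightforward to validate. A sketch for the normalized `[x1,y1,x2,y2]` format; a full vertical-drift check would additionally compare box y-centers against expected line positions:

```python
def valid_bbox(box) -> bool:
    # Check a normalized [x1, y1, x2, y2] box: values in 0..1,
    # and the box is not inverted or degenerate.
    x1, y1, x2, y2 = box
    in_range = all(0.0 <= v <= 1.0 for v in (x1, y1, x2, y2))
    return in_range and x1 < x2 and y1 < y2
```

Running this over every extracted box before the second dense-layout pass catches drift early.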
### `terminal_tool_hygiene`
Use when:
- the prompt belongs to a terminal-based or coding-agent workflow
- tool misuse or shell misuse has been observed
```text
<terminal_tool_hygiene>
- Only run shell commands through the terminal tool.
- Never try to "run" tool names as shell commands.
- If a patch or edit tool exists, use it directly instead of emulating it in bash.
- After changes, run a lightweight verification step such as ls, tests, or a build before declaring the task done.
</terminal_tool_hygiene>
```
### `user_updates_spec`
Use when:
- the workflow is long-running and user updates matter
```text
<user_updates_spec>
- Only update the user when starting a new major phase or when the plan changes.
- Each update should contain:
- 1 sentence on what changed,
- 1 sentence on the next step.
- Do not narrate routine tool calls.
- Keep the user-facing update short, even when the actual work is exhaustive.
</user_updates_spec>
```
If you are using [Compaction](https://developers.openai.com/api/docs/guides/compaction) in the Responses API, compact after major milestones, treat compacted items as opaque state, and keep prompts functionally identical after compaction.
## Responses `phase` guidance
For long-running Responses workflows, preambles, or tool-heavy agents that replay assistant items, review whether `phase` is already preserved.
- If the host already round-trips `phase`, keep it intact during the upgrade.
- If the host uses `previous_response_id` and does not manually replay assistant items, note that this may reduce manual `phase` handling needs.
- If reliable GPT-5.4 behavior would require adding or preserving `phase` and that would need code edits, treat the case as blocked for prompt-only or model-string-only migration guidance.
## Example upgrade profiles
### GPT-5.2
- Use `gpt-5.4`
- Match the current reasoning effort first
- Preserve the existing latency and quality profile before tuning prompt blocks
- If the repo does not expose the exact setting, emit `same` as the starting recommendation
### GPT-5.3-Codex
- Use `gpt-5.4`
- Match the current reasoning effort first
- If you need Codex-style speed and efficiency, add verification blocks before increasing reasoning effort
- If the repo does not expose the exact setting, emit `same` as the starting recommendation
### GPT-4o or GPT-4.1 assistant
- Use `gpt-5.4`
- Start with `none` reasoning effort
- Add `output_verbosity_spec` only if output becomes too verbose
### Long-horizon agent
- Use `gpt-5.4`
- Start with `medium` reasoning effort
- Add `tool_persistence_rules`
- Add `completeness_contract`
- Add `verification_loop`
### Research workflow
- Use `gpt-5.4`
- Start with `medium` reasoning effort
- Add `research_mode`
- Add `citation_rules`
- Add `empty_result_handling`
- Add `tool_persistence_rules` when the host already uses web or retrieval tools
- Add `parallel_tool_calling` when the retrieval steps are independent
### Support triage or multi-agent workflow
- Use `gpt-5.4`
- Prefer `model string + light prompt rewrite` over `model string only`
- Add at least one of `tool_persistence_rules`, `completeness_contract`, or `verification_loop`
- Add more only if evals show a real regression
### Coding or terminal workflow
- Use `gpt-5.4`
- Keep the model-string change narrow
- Match the current reasoning effort first if you are upgrading from GPT-5.3-Codex
- Add `terminal_tool_hygiene`
- Add `verification_loop`
- Add `dependency_checks` when actions depend on prerequisite lookup or discovery
- Add `tool_persistence_rules` if the agent stops too early
- Review whether `phase` is already preserved for long-running Responses flows or assistant preambles
- Do not classify this as blocked just because the workflow uses tools; block only if the upgrade requires changing tool definitions or wiring
- If the repo already uses Responses plus tools and no required host-side change is shown, prefer `model_string_plus_light_prompt_rewrite` over `blocked`
## Prompt regression checklist
- Check whether the upgraded prompt still preserves the original task intent.
- Check whether the new prompt is leaner, not just longer.
- Check completeness, citation quality, dependency handling, verification behavior, and verbosity.
- For long-running Responses agents, check whether `phase` handling is already in place or needs implementation work.
- Confirm that each added prompt block addresses an observed regression.
- Remove prompt blocks that are not earning their keep.


@@ -0,0 +1,35 @@
# Latest model guide
This file is a curated helper. Every recommendation here must be verified against current OpenAI docs before it is repeated to a user.
## Current model map
| Model ID | Use for |
| --- | --- |
| `gpt-5.4` | Default text plus reasoning for most new apps |
| `gpt-5.4-pro` | Only when the user explicitly asks for maximum reasoning or quality; substantially slower and more expensive |
| `gpt-5-mini` | Cheaper and faster reasoning with good quality |
| `gpt-5-nano` | High-throughput simple tasks and classification |
| `gpt-5.4` | Explicit no-reasoning text path via `reasoning.effort: none` |
| `gpt-4.1-mini` | Cheaper no-reasoning text |
| `gpt-4.1-nano` | Fastest and cheapest no-reasoning text |
| `gpt-5.3-codex` | Agentic coding, code editing, and tool-heavy coding workflows |
| `gpt-5.1-codex-mini` | Cheaper coding workflows |
| `gpt-image-1.5` | Best image generation and edit quality |
| `gpt-image-1-mini` | Cost-optimized image generation |
| `gpt-4o-mini-tts` | Text-to-speech |
| `gpt-4o-mini-transcribe` | Speech-to-text, fast and cost-efficient |
| `gpt-realtime-1.5` | Realtime voice and multimodal sessions |
| `gpt-realtime-mini` | Cheaper realtime sessions |
| `gpt-audio` | Chat Completions audio input and output |
| `gpt-audio-mini` | Cheaper Chat Completions audio workflows |
| `sora-2` | Faster iteration and draft video generation |
| `sora-2-pro` | Higher-quality production video |
| `omni-moderation-latest` | Text and image moderation |
| `text-embedding-3-large` | Higher-quality retrieval embeddings; used as the default in this skill because no dedicated best-quality embeddings row exists |
| `text-embedding-3-small` | Lower-cost embeddings |
## Maintenance notes
- This file will drift unless it is periodically re-verified against current OpenAI docs.
- If this file conflicts with current docs, the docs win.


@@ -0,0 +1,164 @@
# Upgrading to GPT-5.4
Use this guide when the user explicitly asks to upgrade an existing integration to GPT-5.4. Pair it with current OpenAI docs lookups. The default target string is `gpt-5.4`.
## Upgrade posture
Upgrade with the narrowest safe change set:
- replace the model string first
- update only the prompts that are directly tied to that model usage
- prefer prompt-only upgrades when possible
- if the upgrade would require API-surface changes, parameter rewrites, tool rewiring, or broader code edits, mark it as blocked instead of stretching the scope
## Upgrade workflow
1. Inventory current model usage.
- Search for model strings, client calls, and prompt-bearing files.
- Include inline prompts, prompt templates, YAML or JSON configs, Markdown docs, and saved prompts when they are clearly tied to a model usage site.
2. Pair each model usage with its prompt surface.
- Prefer the closest prompt surface first: inline system or developer text, then adjacent prompt files, then shared templates.
- If you cannot confidently tie a prompt to the model usage, say so instead of guessing.
3. Classify the source model family.
- Common buckets: `gpt-4o` or `gpt-4.1`, `o1` or `o3` or `o4-mini`, early `gpt-5`, later `gpt-5.x`, or mixed and unclear.
4. Decide the upgrade class.
- `model string only`
- `model string + light prompt rewrite`
- `blocked without code changes`
5. Run the no-code compatibility gate.
- Check whether the current integration can accept `gpt-5.4` without API-surface changes or implementation changes.
- For long-running Responses or tool-heavy agents, check whether `phase` is already preserved or round-tripped when the host replays assistant items or uses preambles.
- If compatibility depends on code changes, return `blocked`.
- If compatibility is unclear, return `unknown` rather than improvising.
6. Recommend the upgrade.
- Default replacement string: `gpt-5.4`
- Keep the intervention small and behavior-preserving.
7. Deliver a structured recommendation.
- `Current model usage`
- `Recommended model-string updates`
- `Starting reasoning recommendation`
- `Prompt updates`
- `Phase assessment` when the flow is long-running, replayed, or tool-heavy
- `No-code compatibility check`
- `Validation plan`
- `Launch-day refresh items`
Output rule:
- Always emit a starting `reasoning_effort_recommendation` for each usage site.
- If the repo exposes the current reasoning setting, preserve it first unless the source guide says otherwise.
- If the repo does not expose the current setting, use the source-family starting mapping instead of returning `null`.
## Upgrade outcomes
### `model string only`
Choose this when:
- the existing prompts are already short, explicit, and task-bounded
- the workflow is not strongly research-heavy, tool-heavy, multi-agent, batch- or completeness-sensitive, or long-horizon
- there are no obvious compatibility blockers
Default action:
- replace the model string with `gpt-5.4`
- keep prompts unchanged
- validate behavior with existing evals or spot checks
### `model string + light prompt rewrite`
Choose this when:
- the old prompt was compensating for weaker instruction following
- the workflow needs more persistence than the default tool-use behavior will likely provide
- the task needs stronger completeness, citation discipline, or verification
- the upgraded model becomes too verbose or under-complete unless instructed otherwise
- the workflow is research-heavy and needs stronger handling of sparse or empty retrieval results
- the workflow is coding-oriented, tool-heavy, or multi-agent, but the existing API surface and tool definitions can remain unchanged
Default action:
- replace the model string with `gpt-5.4`
- add one or two targeted prompt blocks
- read `references/gpt-5p4-prompting-guide.md` to choose the smallest prompt changes that recover the old behavior
- avoid broad prompt cleanup unrelated to the upgrade
- for research workflows, default to `research_mode` + `citation_rules` + `empty_result_handling`; add `tool_persistence_rules` when the host already uses retrieval tools
- for dependency-aware or tool-heavy workflows, default to `tool_persistence_rules` + `dependency_checks` + `verification_loop`; add `parallel_tool_calling` only when retrieval steps are truly independent
- for coding or terminal workflows, default to `terminal_tool_hygiene` + `verification_loop`
- for multi-agent support or triage workflows, default to at least one of `tool_persistence_rules`, `completeness_contract`, or `verification_loop`
- for long-running Responses agents with preambles or multiple assistant messages, explicitly review whether `phase` is already handled; if adding or preserving `phase` would require code edits, mark the path as `blocked`
- do not classify a coding or tool-using Responses workflow as `blocked` just because the visible snippet is minimal; prefer `model string + light prompt rewrite` unless the repo clearly shows that a safe GPT-5.4 path would require host-side code changes
### `blocked`
Choose this when:
- the upgrade appears to require API-surface changes
- the upgrade appears to require parameter rewrites or reasoning-setting changes that are not exposed outside implementation code
- the upgrade would require changing tool definitions, tool handler wiring, or schema contracts
- you cannot confidently identify the prompt surface tied to the model usage
Default action:
- do not improvise a broader upgrade
- report the blocker and explain that the fix is out of scope for this guide
## No-code compatibility checklist
Before recommending a no-code upgrade, check:
1. Can the current host accept the `gpt-5.4` model string without changing client code or API surface?
2. Are the related prompts identifiable and editable?
3. Does the host depend on behavior that likely needs API-surface changes, parameter rewrites, or tool rewiring?
4. Would the likely fix be prompt-only, or would it need implementation changes?
5. Is the prompt surface close enough to the model usage that you can make a targeted change instead of a broad cleanup?
6. For long-running Responses or tool-heavy agents, is `phase` already preserved if the host relies on preambles, replayed assistant items, or multiple assistant messages?
If item 1 is no, if items 3 or 4 point to implementation work, or if item 6 is no and the fix needs code changes, return `blocked`.
If item 2 is no, return `unknown` unless the user can point to the prompt location.
Important:
- Existing use of tools, agents, or multiple usage sites is not by itself a blocker.
- If the current host can keep the same API surface and the same tool definitions, prefer `model string + light prompt rewrite` over `blocked`.
- Reserve `blocked` for cases that truly require implementation changes, not cases that only need stronger prompt steering.
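Read as a decision function, the gate looks roughly like this. A reading aid only; `True`/`False`/`None` stand for yes/no/unclear on the checklist items:

```python
def compatibility_gate(accepts_model_string, prompts_identifiable,
                       needs_implementation_work, phase_fix_needs_code):
    # Inputs mirror checklist items 1, 2, 3-4, and 6, in order.
    if accepts_model_string is False or needs_implementation_work or phase_fix_needs_code:
        return "blocked"
    if prompts_identifiable is False or accepts_model_string is None:
        return "unknown"
    return "viable"
```

Note that tool usage by itself never appears as an input: per the notes above, it is not a blocker unless it implies implementation work.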
## Scope boundaries
This guide may:
- update or recommend updated model strings
- update or recommend updated prompts
- inspect code and prompt files to understand where those changes belong
- inspect whether existing Responses flows already preserve `phase`
- flag compatibility blockers
This guide may not:
- move Chat Completions code to Responses
- move Responses code to another API surface
- rewrite parameter shapes
- change tool definitions or tool-call handling
- change structured-output wiring
- add or retrofit `phase` handling in implementation code
- edit business logic, orchestration logic, or SDK usage beyond a literal model-string replacement
If a safe GPT-5.4 upgrade requires any of those changes, mark the path as blocked and out of scope.
## Validation plan
- Validate each upgraded usage site with existing evals or realistic spot checks.
- Check whether the upgraded model still matches expected latency, output shape, and quality.
- If prompt edits were added, confirm each block is doing real work instead of adding noise.
- If the workflow has downstream impact, add a lightweight verification pass before finalization.
## Launch-day refresh items
When final GPT-5.4 guidance changes:
1. Replace release-candidate assumptions with final GPT-5.4 guidance where appropriate.
2. Re-check whether the default target string should stay `gpt-5.4` for all source families.
3. Re-check any prompt-block recommendations whose semantics may have changed.
4. Re-check research, citation, and compatibility guidance against the final model behavior.
5. Re-run the same upgrade scenarios and confirm the blocked-versus-viable boundaries still hold.


@@ -0,0 +1,160 @@
---
name: plugin-creator
description: Create and scaffold plugin directories for Codex with a required `.codex-plugin/plugin.json`, optional plugin folders/files, and baseline placeholders you can edit before publishing or testing. Use when Codex needs to create a new local plugin, add optional plugin structure, or generate or update repo-root `.agents/plugins/marketplace.json` entries for plugin ordering and availability metadata.
---
# Plugin Creator
## Quick Start
1. Run the scaffold script:
```bash
# Plugin names are normalized to lower-case hyphen-case and must be <= 64 chars.
# The generated folder and plugin.json name are always the same.
# Run from repo root (or replace .agents/... with the absolute path to this SKILL).
# By default creates in <repo_root>/plugins/<plugin-name>.
python3 .agents/skills/plugin-creator/scripts/create_basic_plugin.py <plugin-name>
```
2. Open `<plugin-path>/.codex-plugin/plugin.json` and replace `[TODO: ...]` placeholders.
3. Generate or update the repo marketplace entry when the plugin should appear in Codex UI ordering:
```bash
# marketplace.json always lives at <repo-root>/.agents/plugins/marketplace.json
python3 .agents/skills/plugin-creator/scripts/create_basic_plugin.py my-plugin --with-marketplace
```
For a home-local plugin, treat `<home>` as the root and use:
```bash
python3 .agents/skills/plugin-creator/scripts/create_basic_plugin.py my-plugin \
--path ~/plugins \
--marketplace-path ~/.agents/plugins/marketplace.json \
--with-marketplace
```
4. Generate/adjust optional companion folders as needed:
```bash
python3 .agents/skills/plugin-creator/scripts/create_basic_plugin.py my-plugin --path <parent-plugin-directory> \
--with-skills --with-hooks --with-scripts --with-assets --with-mcp --with-apps --with-marketplace
```
`<parent-plugin-directory>` is the directory where the plugin folder `<plugin-name>` will be created (for example `~/code/plugins`).
## What this skill creates
- If the user has not made the plugin location explicit, ask whether they want a repo-local plugin or a home-local plugin before generating marketplace entries.
- Creates plugin root at `/<parent-plugin-directory>/<plugin-name>/`.
- Always creates `/<parent-plugin-directory>/<plugin-name>/.codex-plugin/plugin.json`.
- Fills the manifest with the full schema shape, placeholder values, and the complete `interface` section.
- Creates or updates `<repo-root>/.agents/plugins/marketplace.json` when `--with-marketplace` is set.
- If the marketplace file does not exist yet, seed top-level `name` plus `interface.displayName` placeholders before adding the first plugin entry.
- `<plugin-name>` is normalized using skill-creator naming rules:
  - `My Plugin` → `my-plugin`
  - `My--Plugin` → `my-plugin`
- underscores, spaces, and punctuation are converted to `-`
- result is lower-case hyphen-delimited with consecutive hyphens collapsed
- Supports optional creation of:
- `skills/`
- `hooks/`
- `scripts/`
- `assets/`
- `.mcp.json`
- `.app.json`
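The naming rules above can be sketched as a one-liner normalizer. An illustrative re-implementation, not the scaffold script itself:

```python
import re


def normalize_plugin_name(raw: str) -> str:
    # Lower-case; runs of spaces, underscores, and punctuation become a
    # single hyphen; trim edge hyphens; cap at 64 characters.
    name = re.sub(r"[^a-z0-9]+", "-", raw.lower())
    return name.strip("-")[:64]
```

The generated folder and the `plugin.json` `"name"` both use this normalized value, so they always match.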
## Marketplace workflow
- `marketplace.json` always lives at `<repo-root>/.agents/plugins/marketplace.json`.
- For a home-local plugin, use the same convention with `<home>` as the root:
`~/.agents/plugins/marketplace.json` plus `./plugins/<plugin-name>`.
- Marketplace root metadata supports top-level `name` plus optional `interface.displayName`.
- Treat plugin order in `plugins[]` as render order in Codex. Append new entries unless a user explicitly asks to reorder the list.
- `displayName` belongs inside the marketplace `interface` object, not individual `plugins[]` entries.
- Each generated marketplace entry must include all of:
- `policy.installation`
- `policy.authentication`
- `category`
- Default new entries to:
- `policy.installation: "AVAILABLE"`
- `policy.authentication: "ON_INSTALL"`
- Override defaults only when the user explicitly specifies another allowed value.
- Allowed `policy.installation` values:
- `NOT_AVAILABLE`
- `AVAILABLE`
- `INSTALLED_BY_DEFAULT`
- Allowed `policy.authentication` values:
- `ON_INSTALL`
- `ON_USE`
- Treat `policy.products` as an override. Omit it unless the user explicitly requests product gating.
- The generated plugin entry shape is:
```json
{
"name": "plugin-name",
"source": {
"source": "local",
"path": "./plugins/plugin-name"
},
"policy": {
"installation": "AVAILABLE",
"authentication": "ON_INSTALL"
},
"category": "Productivity"
}
```
- Use `--force` only when intentionally replacing an existing marketplace entry for the same plugin name.
- If `<repo-root>/.agents/plugins/marketplace.json` does not exist yet, create it with top-level `"name"`, an `"interface"` object containing `"displayName"`, and a `plugins` array, then add the new entry.
- For a brand-new marketplace file, the root object should look like:
```json
{
"name": "[TODO: marketplace-name]",
"interface": {
"displayName": "[TODO: Marketplace Display Name]"
},
"plugins": [
{
"name": "plugin-name",
"source": {
"source": "local",
"path": "./plugins/plugin-name"
},
"policy": {
"installation": "AVAILABLE",
"authentication": "ON_INSTALL"
},
"category": "Productivity"
}
]
}
```
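Programmatically, adding an entry with the defaults looks like this sketch; the real scaffold script also handles `--force` and file I/O, which are omitted here:

```python
def add_marketplace_entry(marketplace: dict, plugin_name: str,
                          category: str = "Productivity") -> dict:
    # Append, never insert: plugins[] order is render order in Codex.
    entry = {
        "name": plugin_name,
        "source": {"source": "local", "path": f"./plugins/{plugin_name}"},
        "policy": {"installation": "AVAILABLE", "authentication": "ON_INSTALL"},
        "category": category,
    }
    marketplace.setdefault("plugins", []).append(entry)
    return marketplace


marketplace = {"name": "demo", "interface": {"displayName": "Demo"}, "plugins": []}
add_marketplace_entry(marketplace, "my-plugin")
```

Appending keeps existing entries untouched, which preserves both the render order and any existing `interface.displayName`.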
## Required behavior
- Outer folder name and `plugin.json` `"name"` are always the same normalized plugin name.
- Do not remove required structure; keep `.codex-plugin/plugin.json` present.
- Keep manifest values as placeholders until a human or follow-up step explicitly fills them.
- If creating files inside an existing plugin path, use `--force` only when overwrite is intentional.
- Preserve any existing marketplace `interface.displayName`.
- When generating marketplace entries, always write `policy.installation`, `policy.authentication`, and `category` even if their values are defaults.
- Add `policy.products` only when the user explicitly asks for that override.
- Keep marketplace `source.path` relative to repo root as `./plugins/<plugin-name>`.
## Reference to exact spec sample
For the exact canonical sample JSON for both plugin manifests and marketplace entries, use:
- `references/plugin-json-spec.md`
## Validation
After editing `SKILL.md`, run:
```bash
python3 <path-to-skill-creator>/scripts/quick_validate.py .agents/skills/plugin-creator
```


@@ -0,0 +1,6 @@
interface:
display_name: "Plugin Creator"
short_description: "Scaffold plugins and marketplace entries"
default_prompt: "Use $plugin-creator to scaffold a plugin with placeholder plugin.json, optional structure, and a marketplace.json entry."
icon_small: "./assets/plugin-creator-small.svg"
icon_large: "./assets/plugin-creator.png"


@@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 20 20">
<path fill="#0D0D0D" d="M12.03 4.113a3.612 3.612 0 0 1 5.108 5.108l-6.292 6.29c-.324.324-.56.561-.791.752l-.235.176c-.205.14-.422.261-.65.36l-.229.093a4.136 4.136 0 0 1-.586.16l-.764.134-2.394.4c-.142.024-.294.05-.423.06-.098.007-.232.01-.378-.026l-.149-.05a1.081 1.081 0 0 1-.521-.474l-.046-.093a1.104 1.104 0 0 1-.075-.527c.01-.129.035-.28.06-.422l.398-2.394c.1-.602.162-.987.295-1.35l.093-.23c.1-.228.22-.445.36-.65l.176-.235c.19-.232.428-.467.751-.79l6.292-6.292Zm-5.35 7.232c-.35.35-.534.535-.66.688l-.11.147a2.67 2.67 0 0 0-.24.433l-.062.154c-.08.22-.124.462-.232 1.112l-.398 2.394-.001.001h.003l2.393-.399.717-.126a2.63 2.63 0 0 0 .394-.105l.154-.063a2.65 2.65 0 0 0 .433-.24l.147-.11c.153-.126.339-.31.688-.66l4.988-4.988-3.227-3.226-4.987 4.988Zm9.517-6.291a2.281 2.281 0 0 0-3.225 0l-.364.362 3.226 3.227.363-.364c.89-.89.89-2.334 0-3.225ZM4.583 1.783a.3.3 0 0 1 .294.241c.117.585.347 1.092.707 1.48.357.385.859.668 1.549.783a.3.3 0 0 1 0 .592c-.69.115-1.192.398-1.549.783-.315.34-.53.77-.657 1.265l-.05.215a.3.3 0 0 1-.588 0c-.117-.585-.347-1.092-.707-1.48-.357-.384-.859-.668-1.549-.783a.3.3 0 0 1 0-.592c.69-.115 1.192-.398 1.549-.783.36-.388.59-.895.707-1.48l.015-.05a.3.3 0 0 1 .279-.19Z"/>
</svg>



@@ -0,0 +1,170 @@
# Plugin JSON sample spec
```json
{
"name": "plugin-name",
"version": "1.2.0",
"description": "Brief plugin description",
"author": {
"name": "Author Name",
"email": "author@example.com",
"url": "https://github.com/author"
},
"homepage": "https://docs.example.com/plugin",
"repository": "https://github.com/author/plugin",
"license": "MIT",
"keywords": ["keyword1", "keyword2"],
"skills": "./skills/",
"hooks": "./hooks.json",
"mcpServers": "./.mcp.json",
"apps": "./.app.json",
"interface": {
"displayName": "Plugin Display Name",
"shortDescription": "Short description for subtitle",
"longDescription": "Long description for details page",
"developerName": "OpenAI",
"category": "Productivity",
"capabilities": ["Interactive", "Write"],
"websiteURL": "https://openai.com/",
"privacyPolicyURL": "https://openai.com/policies/row-privacy-policy/",
"termsOfServiceURL": "https://openai.com/policies/row-terms-of-use/",
"defaultPrompt": [
"Summarize my inbox and draft replies for me.",
"Find open bugs and turn them into Linear tickets.",
"Review today's meetings and flag scheduling gaps."
],
"brandColor": "#3B82F6",
"composerIcon": "./assets/icon.png",
"logo": "./assets/logo.png",
"screenshots": [
"./assets/screenshot1.png",
"./assets/screenshot2.png",
"./assets/screenshot3.png"
]
}
}
```
## Field guide
### Top-level fields
- `name` (`string`): Plugin identifier (kebab-case, no spaces). Required when `plugin.json` is provided; used as the manifest name and component namespace.
- `version` (`string`): Plugin semantic version.
- `description` (`string`): Short purpose summary.
- `author` (`object`): Publisher identity.
- `name` (`string`): Author or team name.
- `email` (`string`): Contact email.
- `url` (`string`): Author/team homepage or profile URL.
- `homepage` (`string`): Documentation URL for plugin usage.
- `repository` (`string`): Source code URL.
- `license` (`string`): License identifier (for example `MIT`, `Apache-2.0`).
- `keywords` (`array` of `string`): Search/discovery tags.
- `skills` (`string`): Relative path to skill directories/files.
- `hooks` (`string`): Hook config path.
- `mcpServers` (`string`): MCP config path.
- `apps` (`string`): App manifest path for plugin integrations.
- `interface` (`object`): Interface/UX metadata block for plugin presentation.
### `interface` fields
- `displayName` (`string`): User-facing title shown for the plugin.
- `shortDescription` (`string`): Brief subtitle used in compact views.
- `longDescription` (`string`): Longer description used on details screens.
- `developerName` (`string`): Human-readable publisher name.
- `category` (`string`): Plugin category bucket.
- `capabilities` (`array` of `string`): Capability list from implementation.
- `websiteURL` (`string`): Public website for the plugin.
- `privacyPolicyURL` (`string`): Privacy policy URL.
- `termsOfServiceURL` (`string`): Terms of service URL.
- `defaultPrompt` (`array` of `string`): Starter prompts shown in composer/UX context.
  - Include at most 3 strings; entries after the first 3 are ignored.
- Each string is capped at 128 characters. Longer entries are truncated.
- Prefer short starter prompts around 50 characters so they scan well in the UI.
- `brandColor` (`string`): Theme color for the plugin card.
- `composerIcon` (`string`): Path to icon asset.
- `logo` (`string`): Path to logo asset.
- `screenshots` (`array` of `string`): List of screenshot asset paths.
- Screenshot entries must be PNG filenames and stored under `./assets/`.
- Keep file paths relative to plugin root.
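The `defaultPrompt` limits above can be mirrored with a small helper. A sketch only; the real enforcement happens when Codex loads the manifest, and the helper name is illustrative:

```python
MAX_PROMPTS = 3
MAX_PROMPT_CHARS = 128

def clamp_default_prompt(prompts: list[str]) -> list[str]:
    """Mirror the documented limits: keep 3 entries, truncate each to 128 chars."""
    return [p[:MAX_PROMPT_CHARS] for p in prompts[:MAX_PROMPTS]]

prompts = ["a" * 200, "Summarize my inbox.", "Draft replies.", "Dropped fourth entry."]
clamped = clamp_default_prompt(prompts)
assert len(clamped) == 3 and all(len(p) <= 128 for p in clamped)
```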
### Path conventions and defaults
- Path values should be relative and begin with `./`.
- `skills`, `hooks`, and `mcpServers` supplement default component discovery; they do not replace the defaults.
- Custom path values must follow the plugin root convention and naming/namespacing rules.
- This repo's scaffold writes `.codex-plugin/plugin.json`; treat that as the manifest location this skill generates.
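The `./`-prefix convention can be checked on a parsed manifest. A minimal sketch, assuming the manifest is already loaded into a dict; the helper name is illustrative:

```python
PATH_FIELDS = ("skills", "hooks", "mcpServers", "apps")

def relative_path_violations(manifest: dict) -> list[str]:
    """Name the manifest path fields whose values do not start with './'."""
    return [
        field
        for field in PATH_FIELDS
        if field in manifest and not str(manifest[field]).startswith("./")
    ]

assert relative_path_violations({"skills": "./skills/", "hooks": "hooks.json"}) == ["hooks"]
```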
# Marketplace JSON sample spec
`marketplace.json` depends on where the plugin should live:
- Repo plugin: `<repo-root>/.agents/plugins/marketplace.json`
- Local plugin: `~/.agents/plugins/marketplace.json`
```json
{
"name": "openai-curated",
"interface": {
"displayName": "ChatGPT Official"
},
"plugins": [
{
"name": "linear",
"source": {
"source": "local",
"path": "./plugins/linear"
},
"policy": {
"installation": "AVAILABLE",
"authentication": "ON_INSTALL"
},
"category": "Productivity"
}
]
}
```
## Marketplace field guide
### Top-level fields
- `name` (`string`): Marketplace identifier or catalog name.
- `interface` (`object`, optional): Marketplace presentation metadata.
- `plugins` (`array`): Ordered plugin entries. This order determines how Codex renders plugins.
### `interface` fields
- `displayName` (`string`, optional): User-facing marketplace title.
### Plugin entry fields
- `name` (`string`): Plugin identifier. Match the plugin folder name and `plugin.json` `name`.
- `source` (`object`): Plugin source descriptor.
- `source` (`string`): Use `local` for this repo workflow.
- `path` (`string`): Relative plugin path based on the marketplace root.
- Repo plugin: `./plugins/<plugin-name>`
- Local plugin in `~/.agents/plugins/marketplace.json`: `./plugins/<plugin-name>`
- The same relative path convention is used for both repo-rooted and home-rooted marketplaces.
- Example: with `~/.agents/plugins/marketplace.json`, `./plugins/<plugin-name>` resolves to `~/plugins/<plugin-name>`.
- `policy` (`object`): Marketplace policy block. Always include it.
- `installation` (`string`): Availability policy.
- Allowed values: `NOT_AVAILABLE`, `AVAILABLE`, `INSTALLED_BY_DEFAULT`
- Default for new entries: `AVAILABLE`
- `authentication` (`string`): Authentication timing policy.
- Allowed values: `ON_INSTALL`, `ON_USE`
- Default for new entries: `ON_INSTALL`
- `products` (`array` of `string`, optional): Product override for this plugin entry. Omit it unless product gating is explicitly requested.
- `category` (`string`): Display category bucket. Always include it.
### Marketplace generation rules
- `displayName` belongs under the top-level `interface` object, not individual plugin entries.
- When creating a new marketplace file from scratch, seed `interface.displayName` alongside top-level `name`.
- Always include `policy.installation`, `policy.authentication`, and `category` on every generated or updated plugin entry.
- Treat `policy.products` as an override and omit it unless explicitly requested.
- Append new entries unless the user explicitly requests reordering.
- Replace an existing entry for the same plugin only when overwrite is intentional.
- Choose marketplace location to match the plugin destination:
- Repo plugin: `<repo-root>/.agents/plugins/marketplace.json`
- Local plugin: `~/.agents/plugins/marketplace.json`
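The append-or-replace behavior in the rules above can be sketched as follows. This is a hypothetical in-memory helper, not the repo's actual scaffold script:

```python
def upsert_entry(marketplace: dict, entry: dict, force: bool = False) -> None:
    """Append a plugins[] entry; replace an existing same-name entry only with force."""
    plugins = marketplace.setdefault("plugins", [])
    for index, existing in enumerate(plugins):
        if existing.get("name") == entry["name"]:
            if not force:
                raise ValueError(f"entry '{entry['name']}' exists; pass force=True")
            plugins[index] = entry  # intentional overwrite keeps render position
            return
    plugins.append(entry)  # default behavior: append, preserving render order

marketplace = {"name": "demo-marketplace", "plugins": []}
upsert_entry(marketplace, {
    "name": "plugin-name",
    "source": {"source": "local", "path": "./plugins/plugin-name"},
    "policy": {"installation": "AVAILABLE", "authentication": "ON_INSTALL"},
    "category": "Productivity",
})
```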


@@ -0,0 +1,301 @@
#!/usr/bin/env python3
"""Scaffold a plugin directory and optionally update marketplace.json."""
from __future__ import annotations
import argparse
import json
import re
from pathlib import Path
from typing import Any
MAX_PLUGIN_NAME_LENGTH = 64
DEFAULT_PLUGIN_PARENT = Path.cwd() / "plugins"
DEFAULT_MARKETPLACE_PATH = Path.cwd() / ".agents" / "plugins" / "marketplace.json"
DEFAULT_INSTALL_POLICY = "AVAILABLE"
DEFAULT_AUTH_POLICY = "ON_INSTALL"
DEFAULT_CATEGORY = "Productivity"
DEFAULT_MARKETPLACE_DISPLAY_NAME = "[TODO: Marketplace Display Name]"
VALID_INSTALL_POLICIES = {"NOT_AVAILABLE", "AVAILABLE", "INSTALLED_BY_DEFAULT"}
VALID_AUTH_POLICIES = {"ON_INSTALL", "ON_USE"}
def normalize_plugin_name(plugin_name: str) -> str:
"""Normalize a plugin name to lowercase hyphen-case."""
normalized = plugin_name.strip().lower()
normalized = re.sub(r"[^a-z0-9]+", "-", normalized)
normalized = normalized.strip("-")
normalized = re.sub(r"-{2,}", "-", normalized)
return normalized
def validate_plugin_name(plugin_name: str) -> None:
if not plugin_name:
raise ValueError("Plugin name must include at least one letter or digit.")
if len(plugin_name) > MAX_PLUGIN_NAME_LENGTH:
raise ValueError(
f"Plugin name '{plugin_name}' is too long ({len(plugin_name)} characters). "
f"Maximum is {MAX_PLUGIN_NAME_LENGTH} characters."
)
def build_plugin_json(plugin_name: str) -> dict:
return {
"name": plugin_name,
"version": "[TODO: 1.2.0]",
"description": "[TODO: Brief plugin description]",
"author": {
"name": "[TODO: Author Name]",
"email": "[TODO: author@example.com]",
"url": "[TODO: https://github.com/author]",
},
"homepage": "[TODO: https://docs.example.com/plugin]",
"repository": "[TODO: https://github.com/author/plugin]",
"license": "[TODO: MIT]",
"keywords": ["[TODO: keyword1]", "[TODO: keyword2]"],
"skills": "[TODO: ./skills/]",
"hooks": "[TODO: ./hooks.json]",
"mcpServers": "[TODO: ./.mcp.json]",
"apps": "[TODO: ./.app.json]",
"interface": {
"displayName": "[TODO: Plugin Display Name]",
"shortDescription": "[TODO: Short description for subtitle]",
"longDescription": "[TODO: Long description for details page]",
"developerName": "[TODO: OpenAI]",
"category": "[TODO: Productivity]",
"capabilities": ["[TODO: Interactive]", "[TODO: Write]"],
"websiteURL": "[TODO: https://openai.com/]",
"privacyPolicyURL": "[TODO: https://openai.com/policies/row-privacy-policy/]",
"termsOfServiceURL": "[TODO: https://openai.com/policies/row-terms-of-use/]",
"defaultPrompt": [
"[TODO: Summarize my inbox and draft replies for me.]",
"[TODO: Find open bugs and turn them into tickets.]",
"[TODO: Review today's meetings and flag gaps.]",
],
"brandColor": "[TODO: #3B82F6]",
"composerIcon": "[TODO: ./assets/icon.png]",
"logo": "[TODO: ./assets/logo.png]",
"screenshots": [
"[TODO: ./assets/screenshot1.png]",
"[TODO: ./assets/screenshot2.png]",
"[TODO: ./assets/screenshot3.png]",
],
},
}
def build_marketplace_entry(
plugin_name: str,
install_policy: str,
auth_policy: str,
category: str,
) -> dict[str, Any]:
return {
"name": plugin_name,
"source": {
"source": "local",
"path": f"./plugins/{plugin_name}",
},
"policy": {
"installation": install_policy,
"authentication": auth_policy,
},
"category": category,
}
def load_json(path: Path) -> dict[str, Any]:
with path.open() as handle:
return json.load(handle)
def build_default_marketplace() -> dict[str, Any]:
return {
"name": "[TODO: marketplace-name]",
"interface": {
"displayName": DEFAULT_MARKETPLACE_DISPLAY_NAME,
},
"plugins": [],
}
def validate_marketplace_interface(payload: dict[str, Any]) -> None:
interface = payload.get("interface")
if interface is not None and not isinstance(interface, dict):
raise ValueError("marketplace.json field 'interface' must be an object.")
def update_marketplace_json(
marketplace_path: Path,
plugin_name: str,
install_policy: str,
auth_policy: str,
category: str,
force: bool,
) -> None:
if marketplace_path.exists():
payload = load_json(marketplace_path)
else:
payload = build_default_marketplace()
if not isinstance(payload, dict):
raise ValueError(f"{marketplace_path} must contain a JSON object.")
validate_marketplace_interface(payload)
plugins = payload.setdefault("plugins", [])
if not isinstance(plugins, list):
raise ValueError(f"{marketplace_path} field 'plugins' must be an array.")
new_entry = build_marketplace_entry(plugin_name, install_policy, auth_policy, category)
for index, entry in enumerate(plugins):
if isinstance(entry, dict) and entry.get("name") == plugin_name:
if not force:
raise FileExistsError(
f"Marketplace entry '{plugin_name}' already exists in {marketplace_path}. "
"Use --force to overwrite that entry."
)
plugins[index] = new_entry
break
else:
plugins.append(new_entry)
write_json(marketplace_path, payload, force=True)
def write_json(path: Path, data: dict, force: bool) -> None:
if path.exists() and not force:
raise FileExistsError(f"{path} already exists. Use --force to overwrite.")
path.parent.mkdir(parents=True, exist_ok=True)
with path.open("w") as handle:
json.dump(data, handle, indent=2)
handle.write("\n")
def create_stub_file(path: Path, payload: dict, force: bool) -> None:
if path.exists() and not force:
return
path.parent.mkdir(parents=True, exist_ok=True)
with path.open("w") as handle:
json.dump(payload, handle, indent=2)
handle.write("\n")
def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser(
description="Create a plugin skeleton with placeholder plugin.json."
)
parser.add_argument("plugin_name")
parser.add_argument(
"--path",
default=str(DEFAULT_PLUGIN_PARENT),
help=(
"Parent directory for plugin creation (defaults to <cwd>/plugins). "
"When using a home-rooted marketplace, use <home>/plugins."
),
)
parser.add_argument("--with-skills", action="store_true", help="Create skills/ directory")
parser.add_argument("--with-hooks", action="store_true", help="Create hooks/ directory")
parser.add_argument("--with-scripts", action="store_true", help="Create scripts/ directory")
parser.add_argument("--with-assets", action="store_true", help="Create assets/ directory")
parser.add_argument("--with-mcp", action="store_true", help="Create .mcp.json placeholder")
parser.add_argument("--with-apps", action="store_true", help="Create .app.json placeholder")
parser.add_argument(
"--with-marketplace",
action="store_true",
help=(
"Create or update <cwd>/.agents/plugins/marketplace.json. "
"Marketplace entries always point to ./plugins/<plugin-name> relative to the "
"marketplace root."
),
)
parser.add_argument(
"--marketplace-path",
default=str(DEFAULT_MARKETPLACE_PATH),
help=(
"Path to marketplace.json (defaults to <cwd>/.agents/plugins/marketplace.json). "
"For a home-rooted marketplace, use <home>/.agents/plugins/marketplace.json."
),
)
parser.add_argument(
"--install-policy",
default=DEFAULT_INSTALL_POLICY,
choices=sorted(VALID_INSTALL_POLICIES),
help="Marketplace policy.installation value",
)
parser.add_argument(
"--auth-policy",
default=DEFAULT_AUTH_POLICY,
choices=sorted(VALID_AUTH_POLICIES),
help="Marketplace policy.authentication value",
)
parser.add_argument(
"--category",
default=DEFAULT_CATEGORY,
help="Marketplace category value",
)
parser.add_argument("--force", action="store_true", help="Overwrite existing files")
return parser.parse_args()
def main() -> None:
args = parse_args()
raw_plugin_name = args.plugin_name
plugin_name = normalize_plugin_name(raw_plugin_name)
if plugin_name != raw_plugin_name:
print(f"Note: Normalized plugin name from '{raw_plugin_name}' to '{plugin_name}'.")
validate_plugin_name(plugin_name)
plugin_root = (Path(args.path).expanduser().resolve() / plugin_name)
plugin_root.mkdir(parents=True, exist_ok=True)
plugin_json_path = plugin_root / ".codex-plugin" / "plugin.json"
write_json(plugin_json_path, build_plugin_json(plugin_name), args.force)
optional_directories = {
"skills": args.with_skills,
"hooks": args.with_hooks,
"scripts": args.with_scripts,
"assets": args.with_assets,
}
for folder, enabled in optional_directories.items():
if enabled:
(plugin_root / folder).mkdir(parents=True, exist_ok=True)
if args.with_mcp:
create_stub_file(
plugin_root / ".mcp.json",
{"mcpServers": {}},
args.force,
)
if args.with_apps:
create_stub_file(
plugin_root / ".app.json",
{
"apps": {},
},
args.force,
)
if args.with_marketplace:
marketplace_path = Path(args.marketplace_path).expanduser().resolve()
update_marketplace_json(
marketplace_path,
plugin_name,
args.install_policy,
args.auth_policy,
args.category,
args.force,
)
print(f"Created plugin scaffold: {plugin_root}")
print(f"plugin manifest: {plugin_json_path}")
if args.with_marketplace:
print(f"marketplace manifest: {marketplace_path}")
if __name__ == "__main__":
main()


@@ -0,0 +1,416 @@
---
name: skill-creator
description: Guide for creating effective skills. This skill should be used when users want to create a new skill (or update an existing skill) that extends Codex's capabilities with specialized knowledge, workflows, or tool integrations.
metadata:
short-description: Create or update a skill
---
# Skill Creator
This skill provides guidance for creating effective skills.
## About Skills
Skills are modular, self-contained folders that extend Codex's capabilities by providing
specialized knowledge, workflows, and tools. Think of them as "onboarding guides" for specific
domains or tasks—they transform Codex from a general-purpose agent into a specialized agent
equipped with procedural knowledge that no model can fully possess.
### What Skills Provide
1. Specialized workflows - Multi-step procedures for specific domains
2. Tool integrations - Instructions for working with specific file formats or APIs
3. Domain expertise - Company-specific knowledge, schemas, business logic
4. Bundled resources - Scripts, references, and assets for complex and repetitive tasks
## Core Principles
### Concise is Key
The context window is a public good. Skills share the context window with everything else Codex needs: system prompt, conversation history, other Skills' metadata, and the actual user request.
**Default assumption: Codex is already very smart.** Only add context Codex doesn't already have. Challenge each piece of information: "Does Codex really need this explanation?" and "Does this paragraph justify its token cost?"
Prefer concise examples over verbose explanations.
### Set Appropriate Degrees of Freedom
Match the level of specificity to the task's fragility and variability:
**High freedom (text-based instructions)**: Use when multiple approaches are valid, decisions depend on context, or heuristics guide the approach.
**Medium freedom (pseudocode or scripts with parameters)**: Use when a preferred pattern exists, some variation is acceptable, or configuration affects behavior.
**Low freedom (specific scripts, few parameters)**: Use when operations are fragile and error-prone, consistency is critical, or a specific sequence must be followed.
Think of Codex as exploring a path: a narrow bridge with cliffs needs specific guardrails (low freedom), while an open field allows many routes (high freedom).
### Protect Validation Integrity
You may use subagents during iteration to validate whether a skill works on realistic tasks or whether a suspected problem is real. This is most useful when you want an independent pass on the skill's behavior, outputs, or failure modes after a revision. Only do this when it is possible to start new subagents.
When using subagents for validation, treat that as an evaluation surface. The goal is to learn whether the skill generalizes, not whether another agent can reconstruct the answer from leaked context.
Prefer raw artifacts such as example prompts, outputs, diffs, logs, or traces. Give the minimum task-local context needed to perform the validation. Avoid passing the intended answer, suspected bug, intended fix, or your prior conclusions unless the validation explicitly requires them.
### Anatomy of a Skill
Every skill consists of a required SKILL.md file and optional bundled resources:
```
skill-name/
├── SKILL.md (required)
│ ├── YAML frontmatter metadata (required)
│ │ ├── name: (required)
│ │ └── description: (required)
│ └── Markdown instructions (required)
├── agents/ (recommended)
│ └── openai.yaml - UI metadata for skill lists and chips
└── Bundled Resources (optional)
├── scripts/ - Executable code (Python/Bash/etc.)
├── references/ - Documentation intended to be loaded into context as needed
└── assets/ - Files used in output (templates, icons, fonts, etc.)
```
#### SKILL.md (required)
Every SKILL.md consists of:
- **Frontmatter** (YAML): Contains `name` and `description` fields. These are the only fields Codex reads to decide when the skill is used, so it is very important to describe clearly and comprehensively what the skill is and when it should be used.
- **Body** (Markdown): Instructions and guidance for using the skill. Only loaded AFTER the skill triggers (if at all).
#### Agents metadata (recommended)
- UI-facing metadata for skill lists and chips
- Read references/openai_yaml.md before generating values and follow its descriptions and constraints
- Create: human-facing `display_name`, `short_description`, and `default_prompt` by reading the skill
- Generate deterministically by passing the values as `--interface key=value` to `scripts/generate_openai_yaml.py` or `scripts/init_skill.py`
- On updates: validate `agents/openai.yaml` still matches SKILL.md; regenerate if stale
- Only include other optional interface fields (icons, brand color) if explicitly provided
- See references/openai_yaml.md for field definitions and examples
#### Bundled Resources (optional)
##### Scripts (`scripts/`)
Executable code (Python/Bash/etc.) for tasks that require deterministic reliability or are repeatedly rewritten.
- **When to include**: When the same code is being rewritten repeatedly or deterministic reliability is needed
- **Example**: `scripts/rotate_pdf.py` for PDF rotation tasks
- **Benefits**: Token efficient, deterministic, may be executed without loading into context
- **Note**: Scripts may still need to be read by Codex for patching or environment-specific adjustments
##### References (`references/`)
Documentation and reference material intended to be loaded as needed into context to inform Codex's process and thinking.
- **When to include**: For documentation that Codex should reference while working
- **Examples**: `references/finance.md` for financial schemas, `references/mnda.md` for company NDA template, `references/policies.md` for company policies, `references/api_docs.md` for API specifications
- **Use cases**: Database schemas, API documentation, domain knowledge, company policies, detailed workflow guides
- **Benefits**: Keeps SKILL.md lean, loaded only when Codex determines it's needed
- **Best practice**: If files are large (>10k words), include grep search patterns in SKILL.md
- **Avoid duplication**: Information should live in either SKILL.md or references files, not both. Prefer references files for detailed information unless it's truly core to the skill—this keeps SKILL.md lean while making information discoverable without hogging the context window. Keep only essential procedural instructions and workflow guidance in SKILL.md; move detailed reference material, schemas, and examples to references files.
##### Assets (`assets/`)
Files not intended to be loaded into context, but rather used within the output Codex produces.
- **When to include**: When the skill needs files that will be used in the final output
- **Examples**: `assets/logo.png` for brand assets, `assets/slides.pptx` for PowerPoint templates, `assets/frontend-template/` for HTML/React boilerplate, `assets/font.ttf` for typography
- **Use cases**: Templates, images, icons, boilerplate code, fonts, sample documents that get copied or modified
- **Benefits**: Separates output resources from documentation, enables Codex to use files without loading them into context
#### What Not to Include in a Skill
A skill should only contain essential files that directly support its functionality. Do NOT create extraneous documentation or auxiliary files, including:
- README.md
- INSTALLATION_GUIDE.md
- QUICK_REFERENCE.md
- CHANGELOG.md
- etc.
The skill should only contain the information needed for an AI agent to do the job at hand. It should not contain auxiliary context about the process that went into creating it, setup and testing procedures, user-facing documentation, etc. Creating additional documentation files just adds clutter and confusion.
### Progressive Disclosure Design Principle
Skills use a three-level loading system to manage context efficiently:
1. **Metadata (name + description)** - Always in context (~100 words)
2. **SKILL.md body** - When skill triggers (<5k words)
3. **Bundled resources** - As needed by Codex (effectively unlimited, since scripts can be executed without being read into the context window)
#### Progressive Disclosure Patterns
Keep SKILL.md body to the essentials and under 500 lines to minimize context bloat. Split content into separate files when approaching this limit. When splitting out content into other files, it is very important to reference them from SKILL.md and describe clearly when to read them, to ensure the reader of the skill knows they exist and when to use them.
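The line and word budgets above can be checked mechanically. A minimal sketch; the naive `---` split is good enough for a budget check but is not a real YAML frontmatter parser:

```python
def skill_body_stats(skill_md: str) -> tuple[int, int]:
    """Return (line_count, word_count) for the body after YAML frontmatter."""
    body = skill_md
    if skill_md.startswith("---"):
        # Frontmatter is delimited by the first two '---' markers.
        parts = skill_md.split("---", 2)
        if len(parts) == 3:
            body = parts[2]
    return body.count("\n") + 1, len(body.split())

lines, words = skill_body_stats("---\nname: demo\n---\n# Demo\nShort body.")
assert lines < 500 and words < 5000
```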
**Key principle:** When a skill supports multiple variations, frameworks, or options, keep only the core workflow and selection guidance in SKILL.md. Move variant-specific details (patterns, examples, configuration) into separate reference files.
**Pattern 1: High-level guide with references**
```markdown
# PDF Processing
## Quick start
Extract text with pdfplumber:
[code example]
## Advanced features
- **Form filling**: See [FORMS.md](FORMS.md) for complete guide
- **API reference**: See [REFERENCE.md](REFERENCE.md) for all methods
- **Examples**: See [EXAMPLES.md](EXAMPLES.md) for common patterns
```
Codex loads FORMS.md, REFERENCE.md, or EXAMPLES.md only when needed.
**Pattern 2: Domain-specific organization**
For Skills with multiple domains, organize content by domain to avoid loading irrelevant context:
```
bigquery-skill/
├── SKILL.md (overview and navigation)
└── reference/
├── finance.md (revenue, billing metrics)
├── sales.md (opportunities, pipeline)
├── product.md (API usage, features)
└── marketing.md (campaigns, attribution)
```
When a user asks about sales metrics, Codex only reads sales.md.
Similarly, for skills supporting multiple frameworks or variants, organize by variant:
```
cloud-deploy/
├── SKILL.md (workflow + provider selection)
└── references/
├── aws.md (AWS deployment patterns)
├── gcp.md (GCP deployment patterns)
└── azure.md (Azure deployment patterns)
```
When the user chooses AWS, Codex only reads aws.md.
**Pattern 3: Conditional details**
Show basic content, link to advanced content:
```markdown
# DOCX Processing
## Creating documents
Use docx-js for new documents. See [DOCX-JS.md](DOCX-JS.md).
## Editing documents
For simple edits, modify the XML directly.
**For tracked changes**: See [REDLINING.md](REDLINING.md)
**For OOXML details**: See [OOXML.md](OOXML.md)
```
Codex reads REDLINING.md or OOXML.md only when the user needs those features.
**Important guidelines:**
- **Avoid deeply nested references** - Keep references one level deep from SKILL.md. All reference files should link directly from SKILL.md.
- **Structure longer reference files** - For files longer than 100 lines, include a table of contents at the top so Codex can see the full scope when previewing.
## Skill Creation Process
Skill creation involves these steps:
1. Understand the skill with concrete examples
2. Plan reusable skill contents (scripts, references, assets)
3. Initialize the skill (run init_skill.py)
4. Edit the skill (implement resources and write SKILL.md)
5. Validate the skill (run quick_validate.py)
6. Iterate based on real usage and forward-test complex skills.
Follow these steps in order, skipping only if there is a clear reason why they are not applicable.
### Skill Naming
- Use lowercase letters, digits, and hyphens only; normalize user-provided titles to hyphen-case (e.g., "Plan Mode" -> `plan-mode`).
- Keep generated names under 64 characters (letters, digits, hyphens).
- Prefer short, verb-led phrases that describe the action.
- Namespace by tool when it improves clarity or triggering (e.g., `gh-address-comments`, `linear-address-issue`).
- Name the skill folder exactly after the skill name.
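The normalization rule above can be sketched in a few lines. `normalize_skill_name` is an illustrative helper, not part of `init_skill.py`:

```python
import re

def normalize_skill_name(title: str) -> str:
    """Hyphen-case a user-provided title, e.g. 'Plan Mode' -> 'plan-mode'."""
    name = re.sub(r"[^a-z0-9]+", "-", title.strip().lower()).strip("-")
    return name[:64]  # enforce the 64-character limit

assert normalize_skill_name("Plan Mode") == "plan-mode"
```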
### Step 1: Understanding the Skill with Concrete Examples
Skip this step only when the skill's usage patterns are already clearly understood. It remains valuable even when working with an existing skill.
To create an effective skill, clearly understand concrete examples of how the skill will be used. This understanding can come from either direct user examples or generated examples that are validated with user feedback.
For example, when building an image-editor skill, relevant questions include:
- "What functionality should the image-editor skill support? Editing, rotating, anything else?"
- "Can you give some examples of how this skill would be used?"
- "I can imagine users asking for things like 'Remove the red-eye from this image' or 'Rotate this image'. Are there other ways you imagine this skill being used?"
- "What would a user say that should trigger this skill?"
- "Where should I create this skill? If you do not have a preference, I will place it in `$CODEX_HOME/skills` (or `~/.codex/skills` when `CODEX_HOME` is unset) so Codex can discover it automatically."
To avoid overwhelming users, do not ask too many questions in a single message. Start with the most important questions and follow up as needed.
Conclude this step when there is a clear sense of the functionality the skill should support.
### Step 2: Planning the Reusable Skill Contents
To turn concrete examples into an effective skill, analyze each example by:
1. Considering how to execute on the example from scratch
2. Identifying what scripts, references, and assets would be helpful when executing these workflows repeatedly
Example: When building a `pdf-editor` skill to handle queries like "Help me rotate this PDF," the analysis shows:
1. Rotating a PDF requires re-writing the same code each time
2. A `scripts/rotate_pdf.py` script would be helpful to store in the skill
Example: When designing a `frontend-webapp-builder` skill for queries like "Build me a todo app" or "Build me a dashboard to track my steps," the analysis shows:
1. Writing a frontend webapp requires the same boilerplate HTML/React each time
2. An `assets/hello-world/` template containing the boilerplate HTML/React project files would be helpful to store in the skill
Example: When building a `big-query` skill to handle queries like "How many users have logged in today?" the analysis shows:
1. Querying BigQuery requires re-discovering the table schemas and relationships each time
2. A `references/schema.md` file documenting the table schemas would be helpful to store in the skill
To establish the skill's contents, analyze each concrete example to create a list of the reusable resources to include: scripts, references, and assets.
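Concretely, this analysis usually translates into a skill layout like the following (file and directory names are illustrative):

```
pdf-editor/
├── SKILL.md
├── agents/
│   └── openai.yaml
├── scripts/
│   └── rotate_pdf.py
└── references/
    └── rotation_notes.md
```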
### Step 3: Initializing the Skill
At this point, it is time to actually create the skill.
Skip this step only if the skill being developed already exists. In this case, continue to the next step.
Before running `init_skill.py`, ask where the user wants the skill created. If they do not specify a location, default to `$CODEX_HOME/skills`; when `CODEX_HOME` is unset, fall back to `~/.codex/skills` so the skill is auto-discovered.
When creating a new skill from scratch, always run the `init_skill.py` script. The script conveniently generates a new template skill directory that automatically includes everything a skill requires, making the skill creation process much more efficient and reliable.
Usage:
```bash
scripts/init_skill.py <skill-name> --path <output-directory> [--resources scripts,references,assets] [--examples]
```
Examples:
```bash
scripts/init_skill.py my-skill --path "${CODEX_HOME:-$HOME/.codex}/skills"
scripts/init_skill.py my-skill --path "${CODEX_HOME:-$HOME/.codex}/skills" --resources scripts,references
scripts/init_skill.py my-skill --path ~/work/skills --resources scripts --examples
```
The script:
- Creates the skill directory at the specified path
- Generates a SKILL.md template with proper frontmatter and TODO placeholders
- Creates `agents/openai.yaml` using agent-generated `display_name`, `short_description`, and `default_prompt` passed via `--interface key=value`
- Optionally creates resource directories based on `--resources`
- Optionally adds example files when `--examples` is set
After initialization, customize the SKILL.md and add resources as needed. If you used `--examples`, replace or delete placeholder files.
Generate `display_name`, `short_description`, and `default_prompt` by reading the skill, then pass them as `--interface key=value` to `init_skill.py` or regenerate with:
```bash
scripts/generate_openai_yaml.py <path/to/skill-folder> --interface key=value
```
Only include other optional interface fields when the user explicitly provides them. For full field descriptions and examples, see references/openai_yaml.md.
### Step 4: Edit the Skill
When editing the (newly-generated or existing) skill, remember that the skill is being created for another instance of Codex to use. Include information that would be beneficial and non-obvious to Codex. Consider what procedural knowledge, domain-specific details, or reusable assets would help another Codex instance execute these tasks more effectively.
After substantial revisions, or if the skill is particularly tricky, you should use subagents to forward-test the skill on realistic tasks or artifacts. When doing so, pass the artifact under validation rather than your diagnosis of what is wrong, and keep the prompt generic enough that success depends on transferable reasoning rather than hidden ground truth.
#### Start with Reusable Skill Contents
To begin implementation, start with the reusable resources identified above: `scripts/`, `references/`, and `assets/` files. Note that this step may require user input. For example, when implementing a `brand-guidelines` skill, the user may need to provide brand assets or templates to store in `assets/`, or documentation to store in `references/`.
Added scripts must be tested by actually running them to ensure there are no bugs and that the output matches what is expected. If there are many similar scripts, only a representative sample needs to be tested to ensure confidence that they all work while balancing time to completion.
If you used `--examples`, delete any placeholder files that are not needed for the skill. Only create resource directories that are actually required.
#### Update SKILL.md
**Writing Guidelines:** Always use imperative/infinitive form.
##### Frontmatter
Write the YAML frontmatter with `name` and `description`:
- `name`: The skill name
- `description`: This is the primary triggering mechanism for your skill, and helps Codex understand when to use the skill.
- Include both what the Skill does and specific triggers/contexts for when to use it.
- Include all "when to use" information here, not in the body. The body is only loaded after triggering, so "When to Use This Skill" sections in the body do not help Codex.
- Example description for a `docx` skill: "Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. Use when Codex needs to work with professional documents (.docx files) for: (1) Creating new documents, (2) Modifying or editing content, (3) Working with tracked changes, (4) Adding comments, or any other document tasks"
Do not include any other fields in YAML frontmatter.
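As a concrete sketch, the frontmatter for a hypothetical `invoice-parser` skill might look like:

```yaml
---
name: invoice-parser
description: Extract totals, dates, and line items from invoices. Use when the user asks to parse, summarize, or validate invoices, receipts, or billing documents.
---
```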
##### Body
Write instructions for using the skill and its bundled resources.
### Step 5: Validate the Skill
Once development of the skill is complete, validate the skill folder to catch basic issues early:
```bash
scripts/quick_validate.py <path/to/skill-folder>
```
The validation script checks YAML frontmatter format, required fields, and naming rules. If validation fails, fix the reported issues and run the command again.
### Step 6: Iterate
After testing the skill, you may find it is complex enough to warrant forward-testing, or users may request improvements.
User testing often happens right after using the skill, while the context of how the skill performed is still fresh.
**Forward-testing and iteration workflow:**
1. Use the skill on real tasks
2. Notice struggles or inefficiencies
3. Identify how SKILL.md or bundled resources should be updated
4. Implement changes and test again
5. Forward-test if it is reasonable and appropriate
## Forward-testing
To forward-test, launch subagents as a way to stress test the skill with minimal context.
Subagents should *not* know that they are being asked to test the skill. They should be treated as
an agent asked to perform a task by the user. Prompts to subagents should look like:
`Use $skill-x at /path/to/skill-x to solve problem y`
Not:
`Review the skill at /path/to/skill-x; pretend a user asks you to...`
Decision rule for forward-testing:
- Err on the side of forward-testing
- Ask for approval if you think there's a risk that forward-testing would:
* take a long time,
* require additional approvals from the user, or
* modify live production systems
In these cases, show the user your proposed prompt and request (1) a yes/no decision, and
(2) any suggested modifications.
Considerations when forward-testing:
- use fresh threads for independent passes
- pass the skill and a request, phrased the way a user would
- pass raw artifacts, not your conclusions
- avoid showing expected answers or intended fixes
- rebuild context from source artifacts after each iteration
- review the subagent's output, reasoning, and emitted artifacts
- avoid leaving artifacts the agent can find on disk between iterations;
clean up subagents' artifacts to avoid additional contamination.
If forward-testing only succeeds when subagents see leaked context, tighten the skill or the
forward-testing setup before trusting the result.
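One way to keep iterations isolated is to stage each pass in a throwaway workspace; a minimal sketch, where the subagent launch itself is a placeholder:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Run one forward-test pass against a copy of the skill, then delete
# everything the pass produced so later passes start uncontaminated.
run_pass() {
  local skill_src="$1"
  local workdir
  workdir="$(mktemp -d)"
  cp -r "$skill_src" "$workdir/skill-under-test"
  # ... launch the subagent here, pointed at "$workdir/skill-under-test" ...
  rm -rf "$workdir"
  echo "cleaned"
}
```

Staging the copy under `mktemp -d` also guarantees a later pass cannot stumble on a path it saw before.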


@@ -0,0 +1,5 @@
interface:
  display_name: "Skill Creator"
  short_description: "Create or update a skill"
  icon_small: "./assets/skill-creator-small.svg"
  icon_large: "./assets/skill-creator.png"


@@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" fill="currentColor" viewBox="0 0 20 20">
<path fill="#0D0D0D" d="M12.03 4.113a3.612 3.612 0 0 1 5.108 5.108l-6.292 6.29c-.324.324-.56.561-.791.752l-.235.176c-.205.14-.422.261-.65.36l-.229.093a4.136 4.136 0 0 1-.586.16l-.764.134-2.394.4c-.142.024-.294.05-.423.06-.098.007-.232.01-.378-.026l-.149-.05a1.081 1.081 0 0 1-.521-.474l-.046-.093a1.104 1.104 0 0 1-.075-.527c.01-.129.035-.28.06-.422l.398-2.394c.1-.602.162-.987.295-1.35l.093-.23c.1-.228.22-.445.36-.65l.176-.235c.19-.232.428-.467.751-.79l6.292-6.292Zm-5.35 7.232c-.35.35-.534.535-.66.688l-.11.147a2.67 2.67 0 0 0-.24.433l-.062.154c-.08.22-.124.462-.232 1.112l-.398 2.394-.001.001h.003l2.393-.399.717-.126a2.63 2.63 0 0 0 .394-.105l.154-.063a2.65 2.65 0 0 0 .433-.24l.147-.11c.153-.126.339-.31.688-.66l4.988-4.988-3.227-3.226-4.987 4.988Zm9.517-6.291a2.281 2.281 0 0 0-3.225 0l-.364.362 3.226 3.227.363-.364c.89-.89.89-2.334 0-3.225ZM4.583 1.783a.3.3 0 0 1 .294.241c.117.585.347 1.092.707 1.48.357.385.859.668 1.549.783a.3.3 0 0 1 0 .592c-.69.115-1.192.398-1.549.783-.315.34-.53.77-.657 1.265l-.05.215a.3.3 0 0 1-.588 0c-.117-.585-.347-1.092-.707-1.48-.357-.384-.859-.668-1.549-.783a.3.3 0 0 1 0-.592c.69-.115 1.192-.398 1.549-.783.36-.388.59-.895.707-1.48l.015-.05a.3.3 0 0 1 .279-.19Z"/>
</svg>



@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -0,0 +1,49 @@
# openai.yaml fields (full example + descriptions)
`agents/openai.yaml` is an extended, product-specific config intended for the machine/harness to read, not the agent. Other product-specific config can also live in the `agents/` folder.
## Full example
```yaml
interface:
  display_name: "Optional user-facing name"
  short_description: "Optional user-facing description"
  icon_small: "./assets/small-400px.png"
  icon_large: "./assets/large-logo.svg"
  brand_color: "#3B82F6"
  default_prompt: "Optional surrounding prompt to use the skill with"
dependencies:
  tools:
    - type: "mcp"
      value: "github"
      description: "GitHub MCP server"
      transport: "streamable_http"
      url: "https://api.githubcopilot.com/mcp/"
policy:
  allow_implicit_invocation: true
```
## Field descriptions and constraints
Top-level constraints:
- Quote all string values.
- Keep keys unquoted.
- For `interface.default_prompt`: generate a helpful, short (typically 1 sentence) example starting prompt based on the skill. It must explicitly mention the skill as `$skill-name` (e.g., "Use $skill-name to draft a concise weekly status update.").
- `interface.display_name`: Human-facing title shown in UI skill lists and chips.
- `interface.short_description`: Human-facing short UI blurb (25-64 chars) for quick scanning.
- `interface.icon_small`: Path to a small icon asset (relative to skill dir). Default to `./assets/` and place icons in the skill's `assets/` folder.
- `interface.icon_large`: Path to a larger logo asset (relative to skill dir). Default to `./assets/` and place icons in the skill's `assets/` folder.
- `interface.brand_color`: Hex color used for UI accents (e.g., badges).
- `interface.default_prompt`: Default prompt snippet inserted when invoking the skill.
- `dependencies.tools[].type`: Dependency category. Only `mcp` is supported for now.
- `dependencies.tools[].value`: Identifier of the tool or dependency.
- `dependencies.tools[].description`: Human-readable explanation of the dependency.
- `dependencies.tools[].transport`: Connection type when `type` is `mcp`.
- `dependencies.tools[].url`: MCP server URL when `type` is `mcp`.
- `policy.allow_implicit_invocation`: When false, the skill is not injected into
the model context by default, but can still be invoked explicitly via `$skill`.
Defaults to true.
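The 25-64 character window for `short_description` can be checked mechanically before writing the file; a minimal sketch (the helper name is illustrative):

```shell
# Return success when a short_description fits the 25-64 character window.
check_short_description() {
  local len=${#1}
  [ "$len" -ge 25 ] && [ "$len" -le 64 ]
}

check_short_description "Create or update a reusable Codex skill" && echo "ok"
```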


@@ -0,0 +1,226 @@
#!/usr/bin/env python3
"""
OpenAI YAML Generator - Creates agents/openai.yaml for a skill folder.

Usage:
    generate_openai_yaml.py <skill_dir> [--name <skill_name>] [--interface key=value]
"""
import argparse
import re
import sys
from pathlib import Path

ACRONYMS = {
    "GH",
    "MCP",
    "API",
    "CI",
    "CLI",
    "LLM",
    "PDF",
    "PR",
    "UI",
    "URL",
    "SQL",
}
BRANDS = {
    "openai": "OpenAI",
    "openapi": "OpenAPI",
    "github": "GitHub",
    "pagerduty": "PagerDuty",
    "datadog": "DataDog",
    "sqlite": "SQLite",
    "fastapi": "FastAPI",
}
SMALL_WORDS = {"and", "or", "to", "up", "with"}
ALLOWED_INTERFACE_KEYS = {
    "display_name",
    "short_description",
    "icon_small",
    "icon_large",
    "brand_color",
    "default_prompt",
}


def yaml_quote(value):
    escaped = value.replace("\\", "\\\\").replace('"', '\\"').replace("\n", "\\n")
    return f'"{escaped}"'


def format_display_name(skill_name):
    words = [word for word in skill_name.split("-") if word]
    formatted = []
    for index, word in enumerate(words):
        lower = word.lower()
        upper = word.upper()
        if upper in ACRONYMS:
            formatted.append(upper)
            continue
        if lower in BRANDS:
            formatted.append(BRANDS[lower])
            continue
        if index > 0 and lower in SMALL_WORDS:
            formatted.append(lower)
            continue
        formatted.append(word.capitalize())
    return " ".join(formatted)


def generate_short_description(display_name):
    description = f"Help with {display_name} tasks"
    if len(description) < 25:
        description = f"Help with {display_name} tasks and workflows"
    if len(description) < 25:
        description = f"Help with {display_name} tasks with guidance"
    if len(description) > 64:
        description = f"Help with {display_name}"
    if len(description) > 64:
        description = f"{display_name} helper"
    if len(description) > 64:
        description = f"{display_name} tools"
    if len(description) > 64:
        suffix = " helper"
        max_name_length = 64 - len(suffix)
        trimmed = display_name[:max_name_length].rstrip()
        description = f"{trimmed}{suffix}"
    if len(description) > 64:
        description = description[:64].rstrip()
    if len(description) < 25:
        description = f"{description} workflows"
    if len(description) > 64:
        description = description[:64].rstrip()
    return description


def read_frontmatter_name(skill_dir):
    skill_md = Path(skill_dir) / "SKILL.md"
    if not skill_md.exists():
        print(f"[ERROR] SKILL.md not found in {skill_dir}")
        return None
    content = skill_md.read_text()
    match = re.match(r"^---\n(.*?)\n---", content, re.DOTALL)
    if not match:
        print("[ERROR] Invalid SKILL.md frontmatter format.")
        return None
    frontmatter_text = match.group(1)
    import yaml
    try:
        frontmatter = yaml.safe_load(frontmatter_text)
    except yaml.YAMLError as exc:
        print(f"[ERROR] Invalid YAML frontmatter: {exc}")
        return None
    if not isinstance(frontmatter, dict):
        print("[ERROR] Frontmatter must be a YAML dictionary.")
        return None
    name = frontmatter.get("name", "")
    if not isinstance(name, str) or not name.strip():
        print("[ERROR] Frontmatter 'name' is missing or invalid.")
        return None
    return name.strip()


def parse_interface_overrides(raw_overrides):
    overrides = {}
    optional_order = []
    for item in raw_overrides:
        if "=" not in item:
            print(f"[ERROR] Invalid interface override '{item}'. Use key=value.")
            return None, None
        key, value = item.split("=", 1)
        key = key.strip()
        value = value.strip()
        if not key:
            print(f"[ERROR] Invalid interface override '{item}'. Key is empty.")
            return None, None
        if key not in ALLOWED_INTERFACE_KEYS:
            allowed = ", ".join(sorted(ALLOWED_INTERFACE_KEYS))
            print(f"[ERROR] Unknown interface field '{key}'. Allowed: {allowed}")
            return None, None
        overrides[key] = value
        if key not in ("display_name", "short_description") and key not in optional_order:
            optional_order.append(key)
    return overrides, optional_order


def write_openai_yaml(skill_dir, skill_name, raw_overrides):
    overrides, optional_order = parse_interface_overrides(raw_overrides)
    if overrides is None:
        return None
    display_name = overrides.get("display_name") or format_display_name(skill_name)
    short_description = overrides.get("short_description") or generate_short_description(display_name)
    if not (25 <= len(short_description) <= 64):
        print(
            "[ERROR] short_description must be 25-64 characters "
            f"(got {len(short_description)})."
        )
        return None
    interface_lines = [
        "interface:",
        f"  display_name: {yaml_quote(display_name)}",
        f"  short_description: {yaml_quote(short_description)}",
    ]
    for key in optional_order:
        value = overrides.get(key)
        if value is not None:
            interface_lines.append(f"  {key}: {yaml_quote(value)}")
    agents_dir = Path(skill_dir) / "agents"
    agents_dir.mkdir(parents=True, exist_ok=True)
    output_path = agents_dir / "openai.yaml"
    output_path.write_text("\n".join(interface_lines) + "\n")
    print("[OK] Created agents/openai.yaml")
    return output_path


def main():
    parser = argparse.ArgumentParser(
        description="Create agents/openai.yaml for a skill directory.",
    )
    parser.add_argument("skill_dir", help="Path to the skill directory")
    parser.add_argument(
        "--name",
        help="Skill name override (defaults to SKILL.md frontmatter)",
    )
    parser.add_argument(
        "--interface",
        action="append",
        default=[],
        help="Interface override in key=value format (repeatable)",
    )
    args = parser.parse_args()
    skill_dir = Path(args.skill_dir).resolve()
    if not skill_dir.exists():
        print(f"[ERROR] Skill directory not found: {skill_dir}")
        sys.exit(1)
    if not skill_dir.is_dir():
        print(f"[ERROR] Path is not a directory: {skill_dir}")
        sys.exit(1)
    skill_name = args.name or read_frontmatter_name(skill_dir)
    if not skill_name:
        sys.exit(1)
    result = write_openai_yaml(skill_dir, skill_name, args.interface)
    if result:
        sys.exit(0)
    sys.exit(1)


if __name__ == "__main__":
    main()


@@ -0,0 +1,400 @@
#!/usr/bin/env python3
"""
Skill Initializer - Creates a new skill from template
Usage:
init_skill.py <skill-name> --path <path> [--resources scripts,references,assets] [--examples] [--interface key=value]
Examples:
init_skill.py my-new-skill --path skills/public
init_skill.py my-new-skill --path skills/public --resources scripts,references
init_skill.py my-api-helper --path skills/private --resources scripts --examples
init_skill.py custom-skill --path /custom/location
init_skill.py my-skill --path skills/public --interface short_description="Short UI label"
"""
import argparse
import re
import sys
from pathlib import Path
from generate_openai_yaml import write_openai_yaml
MAX_SKILL_NAME_LENGTH = 64
ALLOWED_RESOURCES = {"scripts", "references", "assets"}
SKILL_TEMPLATE = """---
name: {skill_name}
description: [TODO: Complete and informative explanation of what the skill does and when to use it. Include WHEN to use this skill - specific scenarios, file types, or tasks that trigger it.]
---
# {skill_title}
## Overview
[TODO: 1-2 sentences explaining what this skill enables]
## Structuring This Skill
[TODO: Choose the structure that best fits this skill's purpose. Common patterns:
**1. Workflow-Based** (best for sequential processes)
- Works well when there are clear step-by-step procedures
- Example: DOCX skill with "Workflow Decision Tree" -> "Reading" -> "Creating" -> "Editing"
- Structure: ## Overview -> ## Workflow Decision Tree -> ## Step 1 -> ## Step 2...
**2. Task-Based** (best for tool collections)
- Works well when the skill offers different operations/capabilities
- Example: PDF skill with "Quick Start" -> "Merge PDFs" -> "Split PDFs" -> "Extract Text"
- Structure: ## Overview -> ## Quick Start -> ## Task Category 1 -> ## Task Category 2...
**3. Reference/Guidelines** (best for standards or specifications)
- Works well for brand guidelines, coding standards, or requirements
- Example: Brand styling with "Brand Guidelines" -> "Colors" -> "Typography" -> "Features"
- Structure: ## Overview -> ## Guidelines -> ## Specifications -> ## Usage...
**4. Capabilities-Based** (best for integrated systems)
- Works well when the skill provides multiple interrelated features
- Example: Product Management with "Core Capabilities" -> numbered capability list
- Structure: ## Overview -> ## Core Capabilities -> ### 1. Feature -> ### 2. Feature...
Patterns can be mixed and matched as needed. Most skills combine patterns (e.g., start with task-based, add workflow for complex operations).
Delete this entire "Structuring This Skill" section when done - it's just guidance.]
## [TODO: Replace with the first main section based on chosen structure]
[TODO: Add content here. See examples in existing skills:
- Code samples for technical skills
- Decision trees for complex workflows
- Concrete examples with realistic user requests
- References to scripts/templates/references as needed]
## Resources (optional)
Create only the resource directories this skill actually needs. Delete this section if no resources are required.
### scripts/
Executable code (Python/Bash/etc.) that can be run directly to perform specific operations.
**Examples from other skills:**
- PDF skill: `fill_fillable_fields.py`, `extract_form_field_info.py` - utilities for PDF manipulation
- DOCX skill: `document.py`, `utilities.py` - Python modules for document processing
**Appropriate for:** Python scripts, shell scripts, or any executable code that performs automation, data processing, or specific operations.
**Note:** Scripts may be executed without loading into context, but can still be read by Codex for patching or environment adjustments.
### references/
Documentation and reference material intended to be loaded into context to inform Codex's process and thinking.
**Examples from other skills:**
- Product management: `communication.md`, `context_building.md` - detailed workflow guides
- BigQuery: API reference documentation and query examples
- Finance: Schema documentation, company policies
**Appropriate for:** In-depth documentation, API references, database schemas, comprehensive guides, or any detailed information that Codex should reference while working.
### assets/
Files not intended to be loaded into context, but rather used within the output Codex produces.
**Examples from other skills:**
- Brand styling: PowerPoint template files (.pptx), logo files
- Frontend builder: HTML/React boilerplate project directories
- Typography: Font files (.ttf, .woff2)
**Appropriate for:** Templates, boilerplate code, document templates, images, icons, fonts, or any files meant to be copied or used in the final output.
---
**Not every skill requires all three types of resources.**
"""
EXAMPLE_SCRIPT = '''#!/usr/bin/env python3
"""
Example helper script for {skill_name}
This is a placeholder script that can be executed directly.
Replace with actual implementation or delete if not needed.
Example real scripts from other skills:
- pdf/scripts/fill_fillable_fields.py - Fills PDF form fields
- pdf/scripts/convert_pdf_to_images.py - Converts PDF pages to images
"""
def main():
    print("This is an example script for {skill_name}")
    # TODO: Add actual script logic here
    # This could be data processing, file conversion, API calls, etc.


if __name__ == "__main__":
    main()
'''
EXAMPLE_REFERENCE = """# Reference Documentation for {skill_title}
This is a placeholder for detailed reference documentation.
Replace with actual reference content or delete if not needed.
Example real reference docs from other skills:
- product-management/references/communication.md - Comprehensive guide for status updates
- product-management/references/context_building.md - Deep-dive on gathering context
- bigquery/references/ - API references and query examples
## When Reference Docs Are Useful
Reference docs are ideal for:
- Comprehensive API documentation
- Detailed workflow guides
- Complex multi-step processes
- Information too lengthy for main SKILL.md
- Content that's only needed for specific use cases
## Structure Suggestions
### API Reference Example
- Overview
- Authentication
- Endpoints with examples
- Error codes
- Rate limits
### Workflow Guide Example
- Prerequisites
- Step-by-step instructions
- Common patterns
- Troubleshooting
- Best practices
"""
EXAMPLE_ASSET = """# Example Asset File
This placeholder represents where asset files would be stored.
Replace with actual asset files (templates, images, fonts, etc.) or delete if not needed.
Asset files are NOT intended to be loaded into context, but rather used within
the output Codex produces.
Example asset files from other skills:
- Brand guidelines: logo.png, slides_template.pptx
- Frontend builder: hello-world/ directory with HTML/React boilerplate
- Typography: custom-font.ttf, font-family.woff2
- Data: sample_data.csv, test_dataset.json
## Common Asset Types
- Templates: .pptx, .docx, boilerplate directories
- Images: .png, .jpg, .svg, .gif
- Fonts: .ttf, .otf, .woff, .woff2
- Boilerplate code: Project directories, starter files
- Icons: .ico, .svg
- Data files: .csv, .json, .xml, .yaml
Note: This is a text placeholder. Actual assets can be any file type.
"""
def normalize_skill_name(skill_name):
    """Normalize a skill name to lowercase hyphen-case."""
    normalized = skill_name.strip().lower()
    normalized = re.sub(r"[^a-z0-9]+", "-", normalized)
    normalized = normalized.strip("-")
    normalized = re.sub(r"-{2,}", "-", normalized)
    return normalized


def title_case_skill_name(skill_name):
    """Convert hyphenated skill name to Title Case for display."""
    return " ".join(word.capitalize() for word in skill_name.split("-"))


def parse_resources(raw_resources):
    if not raw_resources:
        return []
    resources = [item.strip() for item in raw_resources.split(",") if item.strip()]
    invalid = sorted({item for item in resources if item not in ALLOWED_RESOURCES})
    if invalid:
        allowed = ", ".join(sorted(ALLOWED_RESOURCES))
        print(f"[ERROR] Unknown resource type(s): {', '.join(invalid)}")
        print(f" Allowed: {allowed}")
        sys.exit(1)
    deduped = []
    seen = set()
    for resource in resources:
        if resource not in seen:
            deduped.append(resource)
            seen.add(resource)
    return deduped
def create_resource_dirs(skill_dir, skill_name, skill_title, resources, include_examples):
    for resource in resources:
        resource_dir = skill_dir / resource
        resource_dir.mkdir(exist_ok=True)
        if resource == "scripts":
            if include_examples:
                example_script = resource_dir / "example.py"
                example_script.write_text(EXAMPLE_SCRIPT.format(skill_name=skill_name))
                example_script.chmod(0o755)
                print("[OK] Created scripts/example.py")
            else:
                print("[OK] Created scripts/")
        elif resource == "references":
            if include_examples:
                example_reference = resource_dir / "api_reference.md"
                example_reference.write_text(EXAMPLE_REFERENCE.format(skill_title=skill_title))
                print("[OK] Created references/api_reference.md")
            else:
                print("[OK] Created references/")
        elif resource == "assets":
            if include_examples:
                example_asset = resource_dir / "example_asset.txt"
                example_asset.write_text(EXAMPLE_ASSET)
                print("[OK] Created assets/example_asset.txt")
            else:
                print("[OK] Created assets/")
def init_skill(skill_name, path, resources, include_examples, interface_overrides):
    """
    Initialize a new skill directory with template SKILL.md.

    Args:
        skill_name: Name of the skill
        path: Path where the skill directory should be created
        resources: Resource directories to create
        include_examples: Whether to create example files in resource directories

    Returns:
        Path to created skill directory, or None if error
    """
    # Determine skill directory path
    skill_dir = Path(path).resolve() / skill_name

    # Check if directory already exists
    if skill_dir.exists():
        print(f"[ERROR] Skill directory already exists: {skill_dir}")
        return None

    # Create skill directory
    try:
        skill_dir.mkdir(parents=True, exist_ok=False)
        print(f"[OK] Created skill directory: {skill_dir}")
    except Exception as e:
        print(f"[ERROR] Error creating directory: {e}")
        return None

    # Create SKILL.md from template
    skill_title = title_case_skill_name(skill_name)
    skill_content = SKILL_TEMPLATE.format(skill_name=skill_name, skill_title=skill_title)
    skill_md_path = skill_dir / "SKILL.md"
    try:
        skill_md_path.write_text(skill_content)
        print("[OK] Created SKILL.md")
    except Exception as e:
        print(f"[ERROR] Error creating SKILL.md: {e}")
        return None

    # Create agents/openai.yaml
    try:
        result = write_openai_yaml(skill_dir, skill_name, interface_overrides)
        if not result:
            return None
    except Exception as e:
        print(f"[ERROR] Error creating agents/openai.yaml: {e}")
        return None

    # Create resource directories if requested
    if resources:
        try:
            create_resource_dirs(skill_dir, skill_name, skill_title, resources, include_examples)
        except Exception as e:
            print(f"[ERROR] Error creating resource directories: {e}")
            return None

    # Print next steps
    print(f"\n[OK] Skill '{skill_name}' initialized successfully at {skill_dir}")
    print("\nNext steps:")
    print("1. Edit SKILL.md to complete the TODO items and update the description")
    if resources:
        if include_examples:
            print("2. Customize or delete the example files in scripts/, references/, and assets/")
        else:
            print("2. Add resources to scripts/, references/, and assets/ as needed")
    else:
        print("2. Create resource directories only if needed (scripts/, references/, assets/)")
    print("3. Update agents/openai.yaml if the UI metadata should differ")
    print("4. Run the validator when ready to check the skill structure")
    print(
        "5. Forward-test complex skills with realistic user requests to ensure they work as intended"
    )
    return skill_dir
def main():
    parser = argparse.ArgumentParser(
        description="Create a new skill directory with a SKILL.md template.",
    )
    parser.add_argument("skill_name", help="Skill name (normalized to hyphen-case)")
    parser.add_argument("--path", required=True, help="Output directory for the skill")
    parser.add_argument(
        "--resources",
        default="",
        help="Comma-separated list: scripts,references,assets",
    )
    parser.add_argument(
        "--examples",
        action="store_true",
        help="Create example files inside the selected resource directories",
    )
    parser.add_argument(
        "--interface",
        action="append",
        default=[],
        help="Interface override in key=value format (repeatable)",
    )
    args = parser.parse_args()

    raw_skill_name = args.skill_name
    skill_name = normalize_skill_name(raw_skill_name)
    if not skill_name:
        print("[ERROR] Skill name must include at least one letter or digit.")
        sys.exit(1)
    if len(skill_name) > MAX_SKILL_NAME_LENGTH:
        print(
            f"[ERROR] Skill name '{skill_name}' is too long ({len(skill_name)} characters). "
            f"Maximum is {MAX_SKILL_NAME_LENGTH} characters."
        )
        sys.exit(1)
    if skill_name != raw_skill_name:
        print(f"Note: Normalized skill name from '{raw_skill_name}' to '{skill_name}'.")

    resources = parse_resources(args.resources)
    if args.examples and not resources:
        print("[ERROR] --examples requires --resources to be set.")
        sys.exit(1)

    path = args.path
    print(f"Initializing skill: {skill_name}")
    print(f" Location: {path}")
    if resources:
        print(f" Resources: {', '.join(resources)}")
        if args.examples:
            print(" Examples: enabled")
    else:
        print(" Resources: none (create as needed)")
    print()

    result = init_skill(skill_name, path, resources, args.examples, args.interface)
    if result:
        sys.exit(0)
    else:
        sys.exit(1)


if __name__ == "__main__":
    main()
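The name-normalization rules above can be exercised in isolation. This sketch copies the two helpers out of the script so their behavior is visible without running the CLI:

```python
import re


def normalize_skill_name(skill_name):
    # Lowercase, collapse punctuation/whitespace runs into hyphens, trim stray hyphens.
    normalized = re.sub(r"[^a-z0-9]+", "-", skill_name.strip().lower())
    return re.sub(r"-{2,}", "-", normalized.strip("-"))


def title_case_skill_name(skill_name):
    # Display form: each hyphen-separated word capitalized.
    return " ".join(word.capitalize() for word in skill_name.split("-"))


print(normalize_skill_name("  My PDF Tools!  "))  # my-pdf-tools
print(title_case_skill_name("my-pdf-tools"))      # My Pdf Tools
```

Note that `capitalize()` lowercases the rest of each word, so acronyms come back as "Pdf" rather than "PDF"; the display title is meant as a starting point, not a final product name.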


@@ -0,0 +1,101 @@
#!/usr/bin/env python3
"""
Quick validation script for skills - minimal version
"""
import re
import sys
from pathlib import Path
import yaml
MAX_SKILL_NAME_LENGTH = 64
def validate_skill(skill_path):
    """Basic validation of a skill"""
    skill_path = Path(skill_path)
    skill_md = skill_path / "SKILL.md"
    if not skill_md.exists():
        return False, "SKILL.md not found"

    content = skill_md.read_text()
    if not content.startswith("---"):
        return False, "No YAML frontmatter found"
    match = re.match(r"^---\n(.*?)\n---", content, re.DOTALL)
    if not match:
        return False, "Invalid frontmatter format"

    frontmatter_text = match.group(1)
    try:
        frontmatter = yaml.safe_load(frontmatter_text)
        if not isinstance(frontmatter, dict):
            return False, "Frontmatter must be a YAML dictionary"
    except yaml.YAMLError as e:
        return False, f"Invalid YAML in frontmatter: {e}"

    allowed_properties = {"name", "description", "license", "allowed-tools", "metadata"}
    unexpected_keys = set(frontmatter.keys()) - allowed_properties
    if unexpected_keys:
        allowed = ", ".join(sorted(allowed_properties))
        unexpected = ", ".join(sorted(unexpected_keys))
        return (
            False,
            f"Unexpected key(s) in SKILL.md frontmatter: {unexpected}. Allowed properties are: {allowed}",
        )

    if "name" not in frontmatter:
        return False, "Missing 'name' in frontmatter"
    if "description" not in frontmatter:
        return False, "Missing 'description' in frontmatter"

    name = frontmatter.get("name", "")
    if not isinstance(name, str):
        return False, f"Name must be a string, got {type(name).__name__}"
    name = name.strip()
    if name:
        if not re.match(r"^[a-z0-9-]+$", name):
            return (
                False,
                f"Name '{name}' should be hyphen-case (lowercase letters, digits, and hyphens only)",
            )
        if name.startswith("-") or name.endswith("-") or "--" in name:
            return (
                False,
                f"Name '{name}' cannot start/end with hyphen or contain consecutive hyphens",
            )
        if len(name) > MAX_SKILL_NAME_LENGTH:
            return (
                False,
                f"Name is too long ({len(name)} characters). "
                f"Maximum is {MAX_SKILL_NAME_LENGTH} characters.",
            )

    description = frontmatter.get("description", "")
    if not isinstance(description, str):
        return False, f"Description must be a string, got {type(description).__name__}"
    description = description.strip()
    if description:
        if "<" in description or ">" in description:
            return False, "Description cannot contain angle brackets (< or >)"
        if len(description) > 1024:
            return (
                False,
                f"Description is too long ({len(description)} characters). Maximum is 1024 characters.",
            )

    return True, "Skill is valid!"


if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python quick_validate.py <skill_directory>")
        sys.exit(1)
    valid, message = validate_skill(sys.argv[1])
    print(message)
    sys.exit(0 if valid else 1)
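The frontmatter extraction in the validator is just an anchored non-greedy match between the two `---` fences. A self-contained sketch (the sample document is invented for the demonstration, and YAML parsing is omitted so only the regex step is shown):

```python
import re

# Invented sample SKILL.md content for the demonstration.
SAMPLE = """---
name: my-skill
description: Example skill for testing.
---
# My Skill
"""

# Same pattern as the validator: capture everything between the opening
# and closing frontmatter fences, across newlines.
match = re.match(r"^---\n(.*?)\n---", SAMPLE, re.DOTALL)
frontmatter_text = match.group(1) if match else ""
print(frontmatter_text)
```

Because the match is anchored with `re.match` and the document must start with `---`, a stray `---` later in the body cannot be mistaken for frontmatter.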


@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,58 @@
---
name: skill-installer
description: Install Codex skills into $CODEX_HOME/skills from a curated list or a GitHub repo path. Use when a user asks to list installable skills, install a curated skill, or install a skill from another repo (including private repos).
metadata:
short-description: Install curated skills from openai/skills or other repos
---
# Skill Installer
Helps install skills. By default these are from https://github.com/openai/skills/tree/main/skills/.curated, but users can also provide other locations. Experimental skills live in https://github.com/openai/skills/tree/main/skills/.experimental and can be installed the same way.
Use the helper scripts based on the task:
- List skills when the user asks what is available, or if the user uses this skill without specifying what to do. Default listing is `.curated`, but you can pass `--path skills/.experimental` when they ask about experimental skills.
- Install from the curated list when the user provides a skill name.
- Install from another repo when the user provides a GitHub repo/path (including private repos).
## Communication
When listing skills, output approximately as follows, depending on the context of the user's request. If they ask about experimental skills, list from `.experimental` instead of `.curated` and label the source accordingly:
"""
Skills from {repo}:
1. skill-1
2. skill-2 (already installed)
3. ...
Which ones would you like installed?
"""
After installing a skill, tell the user: "Restart Codex to pick up new skills."
## Scripts
All of these scripts use the network, so when running in the sandbox, request escalation before running them.
- `scripts/list-skills.py` (prints skills list with installed annotations)
- `scripts/list-skills.py --format json`
- Example (experimental list): `scripts/list-skills.py --path skills/.experimental`
- `scripts/install-skill-from-github.py --repo <owner>/<repo> --path <path/to/skill> [<path/to/skill> ...]`
- `scripts/install-skill-from-github.py --url https://github.com/<owner>/<repo>/tree/<ref>/<path>`
- Example (experimental skill): `scripts/install-skill-from-github.py --repo openai/skills --path skills/.experimental/<skill-name>`
## Behavior and Options
- Defaults to direct download for public GitHub repos.
- If download fails with auth/permission errors, falls back to git sparse checkout.
- Aborts if the destination skill directory already exists.
- Installs into `$CODEX_HOME/skills/<skill-name>` (defaults to `~/.codex/skills`).
- Multiple `--path` values install multiple skills in one run, each named from the path basename unless `--name` is supplied.
- Options: `--ref <ref>` (default `main`), `--dest <path>`, `--method auto|download|git`.
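The destination logic described above can be sketched as follows. This mirrors the documented default (`$CODEX_HOME/skills/<skill-name>`, falling back to `~/.codex/skills`); the `skill_destination` helper is invented for illustration, not the installer's actual function:

```python
import os


def skill_destination(skill_name, dest=None):
    # Documented default: $CODEX_HOME/skills, else ~/.codex/skills.
    codex_home = os.environ.get("CODEX_HOME", os.path.expanduser("~/.codex"))
    base = dest or os.path.join(codex_home, "skills")
    return os.path.join(base, skill_name)


os.environ["CODEX_HOME"] = "/tmp/codex-home"
print(skill_destination("skill-installer"))
```

`--dest` overrides the base directory entirely, which is why the install aborts rather than merging when the final `<skill-name>` directory already exists.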
## Notes
- Curated listing is fetched from `https://github.com/openai/skills/tree/main/skills/.curated` via the GitHub API. If it is unavailable, explain the error and exit.
- Private GitHub repos can be accessed via existing git credentials or optional `GITHUB_TOKEN`/`GH_TOKEN` for download.
- Git fallback tries HTTPS first, then SSH.
- The skills at https://github.com/openai/skills/tree/main/skills/.system are preinstalled, so no need to help users install those. If they ask, just explain this. If they insist, you can download and overwrite.
- Installed annotations come from `$CODEX_HOME/skills`.


@@ -0,0 +1,5 @@
interface:
display_name: "Skill Installer"
short_description: "Install curated skills from openai/skills or other repos"
icon_small: "./assets/skill-installer-small.svg"
icon_large: "./assets/skill-installer.png"


@@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
<path fill="#0D0D0D" d="M2.145 3.959a2.033 2.033 0 0 1 2.022-1.824h5.966c.551 0 .997 0 1.357.029.367.03.692.093.993.246l.174.098c.397.243.72.593.932 1.01l.053.114c.116.269.168.557.194.878.03.36.03.805.03 1.357v4.3a2.365 2.365 0 0 1-2.366 2.365h-1.312a2.198 2.198 0 0 1-4.377 0H4.167A2.032 2.032 0 0 1 2.135 10.5V9.333l.004-.088A.865.865 0 0 1 3 8.468l.116-.006A1.135 1.135 0 0 0 3 6.199a.865.865 0 0 1-.865-.864V4.167l.01-.208Zm1.054 1.186a2.198 2.198 0 0 1 0 4.376v.98c0 .534.433.967.968.967H6l.089.004a.866.866 0 0 1 .776.861 1.135 1.135 0 0 0 2.27 0c0-.478.387-.865.865-.865h1.5c.719 0 1.301-.583 1.301-1.301v-4.3c0-.57 0-.964-.025-1.27a1.933 1.933 0 0 0-.09-.493L12.642 4a1.47 1.47 0 0 0-.541-.585l-.102-.056c-.126-.065-.295-.11-.596-.135a17.31 17.31 0 0 0-1.27-.025H4.167a.968.968 0 0 0-.968.968v.978Z"/>
</svg>

Binary file not shown.


@@ -0,0 +1,21 @@
#!/usr/bin/env python3
"""Shared GitHub helpers for skill install scripts."""
from __future__ import annotations
import os
import urllib.request
def github_request(url: str, user_agent: str) -> bytes:
    headers = {"User-Agent": user_agent}
    token = os.environ.get("GITHUB_TOKEN") or os.environ.get("GH_TOKEN")
    if token:
        headers["Authorization"] = f"token {token}"
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return resp.read()


def github_api_contents_url(repo: str, path: str, ref: str) -> str:
    return f"https://api.github.com/repos/{repo}/contents/{path}?ref={ref}"
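The contents-URL helper is pure string formatting against the GitHub REST "contents" endpoint. Copied here as a standalone sketch so the output shape is visible:

```python
def github_api_contents_url(repo: str, path: str, ref: str) -> str:
    # Same template as the helper above: REST contents endpoint pinned to a ref.
    return f"https://api.github.com/repos/{repo}/contents/{path}?ref={ref}"


print(github_api_contents_url("openai/skills", "skills/.curated", "main"))
# https://api.github.com/repos/openai/skills/contents/skills/.curated?ref=main
```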


@@ -0,0 +1,308 @@
#!/usr/bin/env python3
"""Install a skill from a GitHub repo path into $CODEX_HOME/skills."""
from __future__ import annotations
import argparse
from dataclasses import dataclass
import os
import shutil
import subprocess
import sys
import tempfile
import urllib.error
import urllib.parse
import zipfile
from github_utils import github_request
DEFAULT_REF = "main"
@dataclass
class Args:
url: str | None = None
repo: str | None = None
path: list[str] | None = None
ref: str = DEFAULT_REF
dest: str | None = None
name: str | None = None
method: str = "auto"
@dataclass
class Source:
owner: str
repo: str
ref: str
paths: list[str]
repo_url: str | None = None
class InstallError(Exception):
pass
def _codex_home() -> str:
return os.environ.get("CODEX_HOME", os.path.expanduser("~/.codex"))
def _tmp_root() -> str:
base = os.path.join(tempfile.gettempdir(), "codex")
os.makedirs(base, exist_ok=True)
return base
def _request(url: str) -> bytes:
return github_request(url, "codex-skill-install")
def _parse_github_url(url: str, default_ref: str) -> tuple[str, str, str, str | None]:
parsed = urllib.parse.urlparse(url)
if parsed.netloc != "github.com":
raise InstallError("Only GitHub URLs are supported for download mode.")
parts = [p for p in parsed.path.split("/") if p]
if len(parts) < 2:
raise InstallError("Invalid GitHub URL.")
owner, repo = parts[0], parts[1]
ref = default_ref
subpath = ""
if len(parts) > 2:
if parts[2] in ("tree", "blob"):
if len(parts) < 4:
raise InstallError("GitHub URL missing ref or path.")
ref = parts[3]
subpath = "/".join(parts[4:])
else:
subpath = "/".join(parts[2:])
return owner, repo, ref, subpath or None
def _download_repo_zip(owner: str, repo: str, ref: str, dest_dir: str) -> str:
zip_url = f"https://codeload.github.com/{owner}/{repo}/zip/{ref}"
zip_path = os.path.join(dest_dir, "repo.zip")
try:
payload = _request(zip_url)
except urllib.error.HTTPError as exc:
raise InstallError(f"Download failed: HTTP {exc.code}") from exc
with open(zip_path, "wb") as file_handle:
file_handle.write(payload)
with zipfile.ZipFile(zip_path, "r") as zip_file:
_safe_extract_zip(zip_file, dest_dir)
top_levels = {name.split("/")[0] for name in zip_file.namelist() if name}
if not top_levels:
raise InstallError("Downloaded archive was empty.")
if len(top_levels) != 1:
raise InstallError("Unexpected archive layout.")
return os.path.join(dest_dir, next(iter(top_levels)))
def _run_git(args: list[str]) -> None:
result = subprocess.run(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
if result.returncode != 0:
raise InstallError(result.stderr.strip() or "Git command failed.")
def _safe_extract_zip(zip_file: zipfile.ZipFile, dest_dir: str) -> None:
dest_root = os.path.realpath(dest_dir)
for info in zip_file.infolist():
extracted_path = os.path.realpath(os.path.join(dest_dir, info.filename))
if extracted_path == dest_root or extracted_path.startswith(dest_root + os.sep):
continue
raise InstallError("Archive contains files outside the destination.")
zip_file.extractall(dest_dir)
def _validate_relative_path(path: str) -> None:
if os.path.isabs(path) or os.path.normpath(path).startswith(".."):
raise InstallError("Skill path must be a relative path inside the repo.")
def _validate_skill_name(name: str) -> None:
altsep = os.path.altsep
if not name or os.path.sep in name or (altsep and altsep in name):
raise InstallError("Skill name must be a single path segment.")
if name in (".", ".."):
raise InstallError("Invalid skill name.")
def _git_sparse_checkout(repo_url: str, ref: str, paths: list[str], dest_dir: str) -> str:
repo_dir = os.path.join(dest_dir, "repo")
clone_cmd = [
"git",
"clone",
"--filter=blob:none",
"--depth",
"1",
"--sparse",
"--single-branch",
"--branch",
ref,
repo_url,
repo_dir,
]
try:
_run_git(clone_cmd)
except InstallError:
_run_git(
[
"git",
"clone",
"--filter=blob:none",
"--depth",
"1",
"--sparse",
"--single-branch",
repo_url,
repo_dir,
]
)
_run_git(["git", "-C", repo_dir, "sparse-checkout", "set", *paths])
_run_git(["git", "-C", repo_dir, "checkout", ref])
return repo_dir
def _validate_skill(path: str) -> None:
if not os.path.isdir(path):
raise InstallError(f"Skill path not found: {path}")
skill_md = os.path.join(path, "SKILL.md")
if not os.path.isfile(skill_md):
raise InstallError("SKILL.md not found in selected skill directory.")
def _copy_skill(src: str, dest_dir: str) -> None:
os.makedirs(os.path.dirname(dest_dir), exist_ok=True)
if os.path.exists(dest_dir):
raise InstallError(f"Destination already exists: {dest_dir}")
shutil.copytree(src, dest_dir)
def _build_repo_url(owner: str, repo: str) -> str:
return f"https://github.com/{owner}/{repo}.git"
def _build_repo_ssh(owner: str, repo: str) -> str:
return f"git@github.com:{owner}/{repo}.git"
def _prepare_repo(source: Source, method: str, tmp_dir: str) -> str:
if method in ("download", "auto"):
try:
return _download_repo_zip(source.owner, source.repo, source.ref, tmp_dir)
except InstallError as exc:
if method == "download":
raise
            err_msg = str(exc)
            if not any(code in err_msg for code in ("HTTP 401", "HTTP 403", "HTTP 404")):
                raise
if method in ("git", "auto"):
repo_url = source.repo_url or _build_repo_url(source.owner, source.repo)
try:
return _git_sparse_checkout(repo_url, source.ref, source.paths, tmp_dir)
except InstallError:
repo_url = _build_repo_ssh(source.owner, source.repo)
return _git_sparse_checkout(repo_url, source.ref, source.paths, tmp_dir)
raise InstallError("Unsupported method.")
def _resolve_source(args: Args) -> Source:
if args.url:
owner, repo, ref, url_path = _parse_github_url(args.url, args.ref)
if args.path is not None:
paths = list(args.path)
elif url_path:
paths = [url_path]
else:
paths = []
if not paths:
raise InstallError("Missing --path for GitHub URL.")
return Source(owner=owner, repo=repo, ref=ref, paths=paths)
if not args.repo:
raise InstallError("Provide --repo or --url.")
if "://" in args.repo:
return _resolve_source(
Args(url=args.repo, repo=None, path=args.path, ref=args.ref)
)
repo_parts = [p for p in args.repo.split("/") if p]
if len(repo_parts) != 2:
raise InstallError("--repo must be in owner/repo format.")
if not args.path:
raise InstallError("Missing --path for --repo.")
paths = list(args.path)
return Source(
owner=repo_parts[0],
repo=repo_parts[1],
ref=args.ref,
paths=paths,
)
def _default_dest() -> str:
return os.path.join(_codex_home(), "skills")
def _parse_args(argv: list[str]) -> Args:
parser = argparse.ArgumentParser(description="Install a skill from GitHub.")
parser.add_argument("--repo", help="owner/repo")
parser.add_argument("--url", help="https://github.com/owner/repo[/tree/ref/path]")
parser.add_argument(
"--path",
nargs="+",
help="Path(s) to skill(s) inside repo",
)
parser.add_argument("--ref", default=DEFAULT_REF)
parser.add_argument("--dest", help="Destination skills directory")
parser.add_argument(
"--name", help="Destination skill name (defaults to basename of path)"
)
parser.add_argument(
"--method",
choices=["auto", "download", "git"],
default="auto",
)
return parser.parse_args(argv, namespace=Args())
def main(argv: list[str]) -> int:
args = _parse_args(argv)
try:
source = _resolve_source(args)
source.ref = source.ref or args.ref
if not source.paths:
raise InstallError("No skill paths provided.")
for path in source.paths:
_validate_relative_path(path)
dest_root = args.dest or _default_dest()
tmp_dir = tempfile.mkdtemp(prefix="skill-install-", dir=_tmp_root())
try:
repo_root = _prepare_repo(source, args.method, tmp_dir)
installed = []
for path in source.paths:
skill_name = args.name if len(source.paths) == 1 else None
skill_name = skill_name or os.path.basename(path.rstrip("/"))
_validate_skill_name(skill_name)
if not skill_name:
raise InstallError("Unable to derive skill name.")
dest_dir = os.path.join(dest_root, skill_name)
if os.path.exists(dest_dir):
raise InstallError(f"Destination already exists: {dest_dir}")
skill_src = os.path.join(repo_root, path)
_validate_skill(skill_src)
_copy_skill(skill_src, dest_dir)
installed.append((skill_name, dest_dir))
finally:
if os.path.isdir(tmp_dir):
shutil.rmtree(tmp_dir, ignore_errors=True)
for skill_name, dest_dir in installed:
print(f"Installed {skill_name} to {dest_dir}")
return 0
except InstallError as exc:
print(f"Error: {exc}", file=sys.stderr)
return 1
if __name__ == "__main__":
raise SystemExit(main(sys.argv[1:]))


@@ -0,0 +1,107 @@
#!/usr/bin/env python3
"""List skills from a GitHub repo path."""
from __future__ import annotations
import argparse
import json
import os
import sys
import urllib.error
from github_utils import github_api_contents_url, github_request
DEFAULT_REPO = "openai/skills"
DEFAULT_PATH = "skills/.curated"
DEFAULT_REF = "main"
class ListError(Exception):
pass
class Args(argparse.Namespace):
repo: str
path: str
ref: str
format: str
def _request(url: str) -> bytes:
return github_request(url, "codex-skill-list")
def _codex_home() -> str:
return os.environ.get("CODEX_HOME", os.path.expanduser("~/.codex"))
def _installed_skills() -> set[str]:
root = os.path.join(_codex_home(), "skills")
if not os.path.isdir(root):
return set()
entries = set()
for name in os.listdir(root):
path = os.path.join(root, name)
if os.path.isdir(path):
entries.add(name)
return entries
def _list_skills(repo: str, path: str, ref: str) -> list[str]:
api_url = github_api_contents_url(repo, path, ref)
try:
payload = _request(api_url)
except urllib.error.HTTPError as exc:
if exc.code == 404:
raise ListError(
"Skills path not found: "
f"https://github.com/{repo}/tree/{ref}/{path}"
) from exc
raise ListError(f"Failed to fetch skills: HTTP {exc.code}") from exc
data = json.loads(payload.decode("utf-8"))
if not isinstance(data, list):
raise ListError("Unexpected skills listing response.")
skills = [item["name"] for item in data if item.get("type") == "dir"]
return sorted(skills)
def _parse_args(argv: list[str]) -> Args:
parser = argparse.ArgumentParser(description="List skills.")
parser.add_argument("--repo", default=DEFAULT_REPO)
parser.add_argument(
"--path",
default=DEFAULT_PATH,
help="Repo path to list (default: skills/.curated)",
)
parser.add_argument("--ref", default=DEFAULT_REF)
parser.add_argument(
"--format",
choices=["text", "json"],
default="text",
help="Output format",
)
return parser.parse_args(argv, namespace=Args())
def main(argv: list[str]) -> int:
args = _parse_args(argv)
try:
skills = _list_skills(args.repo, args.path, args.ref)
installed = _installed_skills()
if args.format == "json":
payload = [
{"name": name, "installed": name in installed} for name in skills
]
print(json.dumps(payload))
else:
for idx, name in enumerate(skills, start=1):
suffix = " (already installed)" if name in installed else ""
print(f"{idx}. {name}{suffix}")
return 0
except ListError as exc:
print(f"Error: {exc}", file=sys.stderr)
return 1
if __name__ == "__main__":
raise SystemExit(main(sys.argv[1:]))


@@ -0,0 +1,216 @@
---
name: disk-space-cleanup
description: Investigate and safely reclaim disk space on this machine, especially on NixOS systems with heavy Nix, Rust/Haskell, Docker, and Podman usage. Use when disk is low, builds fail with no-space errors, /nix/store appears unexpectedly large, or the user asks for easy cleanup wins without deleting important data.
---
# Disk Space Cleanup
Reclaim disk space with a safety-first workflow: investigate first, run obvious low-risk cleanup wins, then do targeted analysis for larger opportunities.
Bundled helpers:
- `scripts/rust_target_dirs.py`: inventory and guarded deletion for explicit Rust `target/` directories
- `references/rust-target-roots.txt`: machine-specific roots for Rust artifact scans
- `references/ignore-paths.md`: machine-specific excludes for `du`/`ncdu`
## Execution Default
- Start with non-destructive investigation and quick sizing.
- Prioritize easy wins first (`nix-collect-garbage`, container prune, Cargo artifacts).
- Propose destructive actions with expected impact before running them.
- Run destructive actions only after confirmation, unless the user explicitly requests immediate execution of obvious wins.
- Capture new reusable findings by updating this skill before finishing.
## Workflow
1. Establish current pressure and biggest filesystems
2. Run easy cleanup wins
3. Inventory Rust build artifacts and clean the right kind of target
4. Investigate remaining heavy directories with `ncdu`/`du`
5. Investigate `/nix/store` roots when large toolchains still persist
6. Summarize reclaimed space and next candidate actions
7. Record new machine-specific ignore paths, Rust roots, or cleanup patterns in this skill
## Step 1: Baseline
Run a quick baseline before deleting anything:
```bash
df -h /
df -h /home
df -h /nix
```
Optionally add a quick home-level size snapshot:
```bash
du -xh --max-depth=1 "$HOME" 2>/dev/null | sort -h
```
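The same baseline can be captured programmatically. This is a minimal sketch (not part of the skill's tooling) using only the standard library; mountpoints that do not exist on a given machine are skipped:

```python
import shutil

def baseline(mounts=("/", "/home", "/nix")):
    """Return free space in GiB for each existing mountpoint."""
    report = {}
    for mount in mounts:
        try:
            usage = shutil.disk_usage(mount)
        except (FileNotFoundError, PermissionError):
            continue  # mountpoint absent on this machine
        report[mount] = usage.free / 1024**3
    return report

for mount, free_gib in baseline().items():
    print(f"{mount}: {free_gib:.1f} GiB free")
```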
## Step 2: Easy Wins
Use these first when the user wants fast, low-effort reclaiming:
```bash
sudo -n nix-collect-garbage -d
sudo -n docker system prune -a
sudo -n podman system prune -a
```
Notes:
- Add `--volumes` only when the user approves deleting unused volumes.
- Re-check free space after each command to show impact.
- Prefer `sudo -n` first so cleanup runs fail fast instead of hanging on password prompts.
- If root is still tight after these, run app cache cleaners before proposing raw `rm -rf`:
```bash
uv cache clean
pip cache purge
yarn cache clean
npm cache clean --force
```
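Re-checking free space after each command can be automated. A hedged sketch (the helper name is ours, not the skill's) that wraps one cleanup command and reports its free-space impact:

```python
import shutil
import subprocess

def run_with_delta(cmd, mount="/"):
    """Run one cleanup command and report how much free space it reclaimed."""
    before = shutil.disk_usage(mount).free
    subprocess.run(cmd, check=False)
    after = shutil.disk_usage(mount).free
    reclaimed_mib = (after - before) / 1024**2
    print(f"{' '.join(cmd)}: reclaimed {reclaimed_mib:.1f} MiB on {mount}")
    return reclaimed_mib
```

For example, `run_with_delta(["sudo", "-n", "nix-collect-garbage", "-d"])` shows the impact of the first easy win immediately.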
## Step 3: Rust Build Artifact Cleanup
Do not start with a blind `find ~ -name target` or with hard-coded roots that may miss worktrees. Inventory explicit `target/` directories first using the bundled helper and the machine-specific root list in `references/rust-target-roots.txt`.
Inventory the biggest candidates:
```bash
python /home/imalison/dotfiles/dotfiles/agents/skills/disk-space-cleanup/scripts/rust_target_dirs.py list --min-size 500M --limit 30
```
Focus on stale targets only:
```bash
python /home/imalison/dotfiles/dotfiles/agents/skills/disk-space-cleanup/scripts/rust_target_dirs.py list --min-size 1G --older-than 14 --output tsv
```
Use `cargo-sweep` when the repo is still active and you want age/toolchain-aware cleanup inside a workspace:
```bash
nix run nixpkgs#cargo-sweep -- sweep -d -r -t 30 <workspace-root>
nix run nixpkgs#cargo-sweep -- sweep -r -t 30 <workspace-root>
nix run nixpkgs#cargo-sweep -- sweep -d -r -i <workspace-root>
nix run nixpkgs#cargo-sweep -- sweep -r -i <workspace-root>
```
Use direct `target/` deletion when inventory shows a discrete stale directory, especially for inactive repos or project-local worktrees. The helper only deletes explicit paths named `target` that are beneath configured roots and a Cargo project:
```bash
python /home/imalison/dotfiles/dotfiles/agents/skills/disk-space-cleanup/scripts/rust_target_dirs.py delete /abs/path/to/target
python /home/imalison/dotfiles/dotfiles/agents/skills/disk-space-cleanup/scripts/rust_target_dirs.py delete /abs/path/to/target --yes
```
Recommended sequence:
1. Run `rust_target_dirs.py list` to see the largest `target/` directories across `~/Projects`, `~/org`, `~/dotfiles`, and other configured roots.
2. For active repos, prefer `cargo-sweep` from the workspace root.
3. For inactive repos, abandoned branches, and `.worktrees/*/target`, prefer guarded direct deletion of the explicit `target/` directory.
4. Re-run the list command after each deletion round to show reclaimed space.
Machine-specific note:
- Project-local `.worktrees/*/target` directories are common cleanup wins on this machine and are easy to miss with the old hard-coded workflow.
## Step 4: Investigation with `ncdu` and `du`
Avoid mounted or remote filesystems when profiling space. Load ignore patterns from `references/ignore-paths.md`.
Use one-filesystem scans to avoid crossing mounts:
```bash
ncdu -x "$HOME"
sudo ncdu -x /
```
When excluding known noisy mountpoints:
```bash
ncdu -x --exclude "$HOME/keybase" "$HOME"
sudo ncdu -x --exclude /keybase --exclude /var/lib/railbird /
```
If `ncdu` is missing, use:
```bash
nix run nixpkgs#ncdu -- -x "$HOME"
```
For quick, non-blocking triage on very large trees, prefer bounded probes:
```bash
timeout 30s du -xh --max-depth=1 "$HOME/.cache" 2>/dev/null | sort -h
timeout 30s du -xh --max-depth=1 "$HOME/.local/share" 2>/dev/null | sort -h
```
Machine-specific heavy hitters seen in practice:
- `~/.cache/uv` can exceed 20G and is reclaimable with `uv cache clean`.
- `~/.cache/spotify` can exceed 10G; treat as optional app-cache cleanup.
- `~/.local/share/picom/debug.log` can grow past 15G when verbose picom debugging is enabled or crashes leave a stale log behind; if `picom` is not running, deleting or truncating the log is a high-yield low-risk win.
- `~/.local/share/Trash` can exceed several GB; empty only with user approval.
## Step 5: `/nix/store` Deep Dive
When `/nix/store` is still large after GC, inspect root causes instead of deleting random paths.
Useful commands:
```bash
nix path-info -Sh /nix/store/* 2>/dev/null | sort -h | tail -n 50
nix-store --gc --print-roots
```
Avoid `du -sh /nix/store` as a first diagnostic; it can be very slow on large stores.
For repeated GHC/Rust toolchain copies:
```bash
nix path-info -Sh /nix/store/* 2>/dev/null | rg '(ghc|rustc|rust-std|cargo)'
nix-store --gc --print-roots | rg '(ghc|rust)'
```
Resolve why a path is retained:
```bash
/home/imalison/dotfiles/dotfiles/lib/functions/find_store_path_gc_roots /nix/store/<store-path>
nix why-depends <consumer-store-path> <dependency-store-path>
```
Common retention pattern on this machine:
- Many `.direnv/flake-profile-*` symlinks under `~/Projects` and worktrees keep `nix-shell-env`/`ghc-shell-*` roots alive.
- Old taffybar constellation repos under `~/Projects` can pin large Haskell closures through `.direnv` and `result` symlinks. Deleting `gtk-sni-tray`, `status-notifier-item`, `dbus-menu`, `dbus-hslogger`, and `gtk-strut` and then rerunning `nix-collect-garbage -d` reclaimed about 11G of store data in one validated run.
- `find_store_path_gc_roots` is especially useful for proving GHC retention: many large `ghc-9.10.3-with-packages` paths are unique per project, while the base `ghc-9.10.3` and docs paths are shared.
- Quantify before acting:
```bash
find ~/Projects -type l -path '*/.direnv/flake-profile-*' | wc -l
find ~/Projects -type d -name .direnv | wc -l
nix-store --gc --print-roots | rg '/\.direnv/flake-profile-' \
  | awk -F' -> ' '{print $1 "|" $2}' \
  | while IFS='|' read -r root target; do
      nix-store -qR "$target" | rg '^/nix/store/.+-ghc-[0-9]'
    done | sort | uniq -c | sort -nr | head
```
- If counts are high and the projects are inactive, propose targeted `.direnv` cleanup for user confirmation.
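The `--print-roots` output (`<root> -> <store path>` per line) is easy to post-process in a script. A hedged sketch that extracts just the `.direnv/flake-profile-*` roots described in the retention pattern above:

```python
def direnv_flake_roots(print_roots_output):
    """Parse `nix-store --gc --print-roots` output into (root, store-path) pairs
    for roots that live under a .direnv flake profile."""
    pairs = []
    for line in print_roots_output.splitlines():
        if " -> " not in line:
            continue
        root, target = line.split(" -> ", 1)
        if "/.direnv/flake-profile-" in root:
            pairs.append((root.strip(), target.strip()))
    return pairs
```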
## Safety Rules
- Do not delete user files directly unless explicitly requested.
- Prefer cleanup tools that understand ownership/metadata (`nix`, `docker`, `podman`, `cargo-sweep`) over `rm -rf`.
- For Rust build artifacts, deleting an explicit directory literally named `target` is acceptable when it is discovered by the bundled helper; Cargo will rebuild it.
- Present a concise “proposed actions” list before high-impact deletes.
- If uncertain whether data is needed, stop at investigation and ask.
## Learning Loop (Required)
Treat this skill as a living playbook.
After each disk cleanup task:
1. Add newly discovered mountpoints or directories to ignore in `references/ignore-paths.md`.
2. Add newly discovered Rust repo roots in `references/rust-target-roots.txt`.
3. Add validated command patterns or caveats discovered during the run to this `SKILL.md`.
4. Keep instructions practical and machine-specific; remove stale guidance.


@@ -0,0 +1,3 @@
interface:
display_name: "Disk Space Cleanup"
short_description: "Find safe disk-space wins on NixOS hosts"


@@ -0,0 +1,31 @@
# Ignore Paths for Disk Investigation
Use this file to track mountpoints or directories that should be excluded from `ncdu`/`du` scans because they are remote, special-purpose, or noisy.
## Known Ignores
- `$HOME/keybase`
- `$HOME/.cache/keybase`
- `$HOME/.local/share/keybase`
- `$HOME/.config/keybase`
- `/keybase`
- `/var/lib/railbird`
- `/run/user/*/doc` (FUSE portal mount; machine-specific example observed: `/run/user/1004/doc`)
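Since `ncdu` takes one `--exclude` flag per path, the list above maps directly onto repeated flags. A small illustrative helper (not part of the skill's tooling):

```python
import os

def exclude_flags(paths):
    """Expand environment variables and emit repeated --exclude flags for ncdu."""
    flags = []
    for p in paths:
        flags += ["--exclude", os.path.expandvars(p)]
    return flags

# e.g. ["ncdu", "-x", *exclude_flags(["/keybase", "/var/lib/railbird"]), "/"]
```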
## Discovery Commands
List mounted filesystems and spot special mounts:
```bash
findmnt -rn -o TARGET,FSTYPE,SOURCE
```
Target likely remote/special mounts:
```bash
findmnt -rn -o TARGET,FSTYPE,SOURCE | rg '(keybase|fuse|rclone|s3|railbird)'
```
## Maintenance Rule
When a disk cleanup run encounters a mount or path that should be ignored in future runs, add it here immediately with a short note.


@@ -0,0 +1,6 @@
# One absolute path per line. Comments are allowed.
# Keep this list machine-specific and update it when Rust repos move.
/home/imalison/Projects
/home/imalison/org
/home/imalison/dotfiles


@@ -0,0 +1,271 @@
#!/usr/bin/env python3
import argparse
import json
import os
import shutil
import subprocess
import sys
import time
from pathlib import Path
SCRIPT_DIR = Path(__file__).resolve().parent
DEFAULT_ROOTS_FILE = SCRIPT_DIR.parent / "references" / "rust-target-roots.txt"
def parse_size(value: str) -> int:
text = value.strip().upper()
units = {
"B": 1,
"K": 1024,
"KB": 1024,
"M": 1024**2,
"MB": 1024**2,
"G": 1024**3,
"GB": 1024**3,
"T": 1024**4,
"TB": 1024**4,
}
for suffix, multiplier in units.items():
if text.endswith(suffix):
number = text[: -len(suffix)].strip()
return int(float(number) * multiplier)
return int(float(text))
def human_size(num_bytes: int) -> str:
value = float(num_bytes)
for unit in ["B", "K", "M", "G", "T"]:
if value < 1024 or unit == "T":
if unit == "B":
return f"{int(value)}B"
return f"{value:.1f}{unit}"
value /= 1024
return f"{num_bytes}B"
def is_relative_to(path: Path, root: Path) -> bool:
try:
path.relative_to(root)
return True
except ValueError:
return False
def load_roots(roots_file: Path, cli_roots: list[str]) -> list[Path]:
roots: list[Path] = []
for raw in cli_roots:
candidate = Path(raw).expanduser().resolve()
if candidate.exists():
roots.append(candidate)
if roots_file.exists():
for line in roots_file.read_text().splitlines():
stripped = line.split("#", 1)[0].strip()
if not stripped:
continue
candidate = Path(stripped).expanduser().resolve()
if candidate.exists():
roots.append(candidate)
unique_roots: list[Path] = []
seen: set[Path] = set()
for root in roots:
if root not in seen:
unique_roots.append(root)
seen.add(root)
return unique_roots
def du_size_bytes(path: Path) -> int:
result = subprocess.run(
["du", "-sb", str(path)],
check=True,
capture_output=True,
text=True,
)
return int(result.stdout.split()[0])
def nearest_cargo_root(path: Path, stop_roots: list[Path]) -> str:
current = path.parent
stop_root_set = set(stop_roots)
while current != current.parent:
if (current / "Cargo.toml").exists():
return str(current)
if current in stop_root_set:
break
current = current.parent
return ""
def discover_targets(roots: list[Path]) -> list[dict]:
results: dict[Path, dict] = {}
now = time.time()
for root in roots:
for current, dirnames, _filenames in os.walk(root, topdown=True):
if "target" in dirnames:
target_dir = (Path(current) / "target").resolve()
dirnames.remove("target")
if target_dir in results or not target_dir.is_dir():
continue
stat_result = target_dir.stat()
size_bytes = du_size_bytes(target_dir)
age_days = int((now - stat_result.st_mtime) // 86400)
results[target_dir] = {
"path": str(target_dir),
"size_bytes": size_bytes,
"size_human": human_size(size_bytes),
"age_days": age_days,
"workspace": nearest_cargo_root(target_dir, roots),
}
return sorted(results.values(), key=lambda item: item["size_bytes"], reverse=True)
def print_table(rows: list[dict]) -> None:
if not rows:
print("No matching Rust target directories found.")
return
size_width = max(len(row["size_human"]) for row in rows)
age_width = max(len(str(row["age_days"])) for row in rows)
print(
f"{'SIZE'.ljust(size_width)} {'AGE'.rjust(age_width)} PATH"
)
for row in rows:
print(
f"{row['size_human'].ljust(size_width)} "
f"{str(row['age_days']).rjust(age_width)}d "
f"{row['path']}"
)
def filter_rows(rows: list[dict], min_size: int, older_than: int | None, limit: int | None) -> list[dict]:
filtered = [row for row in rows if row["size_bytes"] >= min_size]
if older_than is not None:
filtered = [row for row in filtered if row["age_days"] >= older_than]
if limit is not None:
filtered = filtered[:limit]
return filtered
def cmd_list(args: argparse.Namespace) -> int:
roots = load_roots(Path(args.roots_file).expanduser(), args.root)
if not roots:
print("No scan roots available.", file=sys.stderr)
return 1
rows = discover_targets(roots)
rows = filter_rows(rows, parse_size(args.min_size), args.older_than, args.limit)
if args.output == "json":
print(json.dumps(rows, indent=2))
elif args.output == "tsv":
for row in rows:
print(
"\t".join(
[
str(row["size_bytes"]),
str(row["age_days"]),
row["path"],
row["workspace"],
]
)
)
elif args.output == "paths":
for row in rows:
print(row["path"])
else:
print_table(rows)
return 0
def validate_delete_path(path_text: str, roots: list[Path]) -> Path:
    raw = Path(path_text).expanduser()
    # Check before resolving: resolve() follows symlinks, so a post-resolve
    # is_symlink() check can never trigger.
    if raw.is_symlink():
        raise ValueError(f"{raw} is a symlink")
    target = raw.resolve(strict=True)
    if target.name != "target":
        raise ValueError(f"{target} is not a target directory")
    if not target.is_dir():
        raise ValueError(f"{target} is not a directory")
    if not any(is_relative_to(target, root) for root in roots):
        raise ValueError(f"{target} is outside configured scan roots")
    if nearest_cargo_root(target, roots) == "":
        raise ValueError(f"{target} is not beneath a Cargo project")
    return target
def cmd_delete(args: argparse.Namespace) -> int:
roots = load_roots(Path(args.roots_file).expanduser(), args.root)
if not roots:
print("No scan roots available.", file=sys.stderr)
return 1
targets: list[Path] = []
for raw_path in args.path:
try:
targets.append(validate_delete_path(raw_path, roots))
except ValueError as exc:
print(str(exc), file=sys.stderr)
return 1
total_size = sum(du_size_bytes(target) for target in targets)
print(f"Matched {len(targets)} target directories totaling {human_size(total_size)}:")
for target in targets:
print(str(target))
if not args.yes:
print("Dry run only. Re-run with --yes to delete these target directories.")
return 0
for target in targets:
shutil.rmtree(target)
print(f"Deleted {len(targets)} target directories.")
return 0
def build_parser() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(
description="Inventory and delete Rust target directories under configured roots."
)
parser.add_argument(
"--roots-file",
default=str(DEFAULT_ROOTS_FILE),
help="Path to the newline-delimited root list.",
)
parser.add_argument(
"--root",
action="append",
default=[],
help="Additional root to scan. May be provided multiple times.",
)
subparsers = parser.add_subparsers(dest="command", required=True)
list_parser = subparsers.add_parser("list", help="List target directories.")
list_parser.add_argument("--min-size", default="0", help="Minimum size threshold, for example 500M or 2G.")
list_parser.add_argument("--older-than", type=int, help="Only include targets at least this many days old.")
list_parser.add_argument("--limit", type=int, help="Maximum number of rows to print.")
list_parser.add_argument(
"--output",
choices=["table", "tsv", "json", "paths"],
default="table",
help="Output format.",
)
list_parser.set_defaults(func=cmd_list)
delete_parser = subparsers.add_parser("delete", help="Delete explicit target directories.")
delete_parser.add_argument("path", nargs="+", help="One or more target directories to delete.")
delete_parser.add_argument("--yes", action="store_true", help="Actually delete the paths.")
delete_parser.set_defaults(func=cmd_delete)
return parser
def main() -> int:
parser = build_parser()
args = parser.parse_args()
return args.func(args)
if __name__ == "__main__":
raise SystemExit(main())


@@ -0,0 +1,105 @@
---
name: email-unsubscribe-check
description: Use when user wants to find promotional or unwanted recurring emails to unsubscribe from, or when doing periodic inbox hygiene to identify senders worth unsubscribing from
---
# Email Unsubscribe Check
Scan recent inbox emails to surface promotional, newsletter, and digest senders the user likely wants to unsubscribe from. Actually unsubscribe via browser automation.
## Workflow
```dot
digraph unsubscribe_check {
"Search recent inbox emails" -> "Group by sender domain";
"Group by sender domain" -> "Classify each sender";
"Classify each sender" -> "Obvious unsubscribe?";
"Obvious unsubscribe?" -> "Present to user for confirmation" [label="yes"];
"Obvious unsubscribe?" -> "Borderline?" [label="no"];
"Borderline?" -> "Ask user" [label="yes"];
"Borderline?" -> "Skip" [label="no, personal"];
"Present to user for confirmation" -> "User confirms?";
"User confirms?" -> "Actually unsubscribe" [label="yes"];
"User confirms?" -> "Skip" [label="no"];
"Actually unsubscribe" -> "Mark matching emails read + archive";
"Mark matching emails read + archive" -> "Create Gmail filter";
"Create Gmail filter" -> "Retroactively clean old emails";
}
```
## Execution Default
- Start the workflow immediately when this skill is invoked.
- Do not ask a kickoff question like "should I start now?".
- Default scan window is `newer_than:7d` unless the user already specified a different range.
- Only ask a follow-up question before starting if required information is missing and execution would otherwise be blocked.
## How to Scan
1. Search recent emails: `newer_than:7d` (or wider if user requests)
2. Identify senders that look promotional/automated/digest
3. Present findings grouped by confidence:
- **Clearly unsubscribeable**: marketing, promos, digests user never engages with
- **Ask user**: newsletters, community content, event platforms (might be wanted)
## Unsubscribe Execution
For each confirmed sender, do ALL of these:
### 1. Actually unsubscribe via browser (most important step)
Two approaches depending on the sender:
**For emails with unsubscribe links:**
- Read the email via `gws gmail` to find the unsubscribe URL (usually at bottom of email body)
- Navigate to the URL with Chrome DevTools MCP
- Take a snapshot, find the confirmation button/checkbox
- Click through to complete the unsubscribe
- Verify the confirmation page
**For services with email settings pages (Nextdoor, LinkedIn, etc.):**
- Navigate to the service's notification/email settings page
- Log in using credentials from `pass` if needed
- Find and disable all email notification toggles
- Check ALL categories (digests, alerts, promotions, etc.)
### 2. Create Gmail filter as backup
Even after unsubscribing, create a filter to catch stragglers:
```
gws gmail users settings filters create \
--params '{"userId":"me"}' \
--json '{"criteria":{"from":"domain.com"},"action":{"removeLabelIds":["INBOX"]}}'
```
### 3. Mark old emails as read and archive them (minimum hygiene)
After unsubscribing, clean up existing email from the sender.
- At minimum: mark them as read.
- Preferred/default: also archive them (remove `INBOX` label).
Example:
```
gws gmail users messages list --params '{"userId":"me","q":"from:domain.com","maxResults":50}'
gws gmail users messages batchModify \
--params '{"userId":"me"}' \
--json '{"ids":["..."],"removeLabelIds":["UNREAD","INBOX"]}'
```
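When a sender has many matching messages, the id list should be chunked before calling `batchModify`. A hedged sketch; the chunk size here is a conservative assumption, not a documented `gws` limit:

```python
def chunks(ids, size=500):
    """Yield successive fixed-size batches of message ids."""
    for i in range(0, len(ids), size):
        yield ids[i:i + size]
```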
## Signals That an Email is Unsubscribeable
- "no-reply@" or "newsletter@" sender addresses
- Marketing subject lines: sales, promotions, "don't miss", digests
- Bulk senders: Nextdoor, Yelp, LinkedIn digest, social media notifications
- Community digests the user doesn't engage with
- Financial marketing (not transactional alerts)
- "Your weekly/daily/monthly" summaries
## Signals to NOT Auto-Unsubscribe (Ask First)
- Patreon/creator content
- Event platforms (Luma, Eventbrite, Meetup)
- Professional communities
- Services the user actively uses (even if noisy)
- Transactional emails from wanted services
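The two signal lists above can be sketched as a first-pass classifier. This is a hedged illustration of the heuristics; the keyword and domain samples are illustrative, not exhaustive, and borderline cases still go to the user:

```python
AUTO_PREFIXES = ("no-reply@", "noreply@", "newsletter@")
MARKETING_WORDS = (
    "sale", "promotion", "don't miss", "digest",
    "your weekly", "your daily", "your monthly",
)
ASK_FIRST_DOMAINS = ("patreon.com", "lu.ma", "eventbrite.com", "meetup.com")

def classify(sender, subject):
    """Bucket a sender per the signal lists: ask-first domains win over
    marketing heuristics; anything else is skipped as likely personal."""
    s, subj = sender.lower(), subject.lower()
    if any(s.endswith("@" + d) or s.endswith("." + d) for d in ASK_FIRST_DOMAINS):
        return "ask user"
    if s.startswith(AUTO_PREFIXES) or any(w in subj for w in MARKETING_WORDS):
        return "clearly unsubscribeable"
    return "skip"
```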


@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,25 @@
---
name: gh-address-comments
description: Help address review/issue comments on the open GitHub PR for the current branch using gh CLI; verify gh auth first and prompt the user to authenticate if not logged in.
metadata:
short-description: Address comments in a GitHub PR review
---
# PR Comment Handler
Find the open PR for the current branch and address its comments with the `gh` CLI. Run all `gh` commands with elevated network access.
Prereq: ensure `gh` is authenticated (run `gh auth login` once if needed), then verify with `gh auth status` that the repo/workflow scopes are present so `gh` commands succeed. If sandboxing blocks `gh auth status`, rerun it with `sandbox_permissions=require_escalated`.
## 1) Inspect comments needing attention
- Run `scripts/fetch_comments.py`, which prints all the comments and review threads on the PR
## 2) Ask the user for clarification
- Number all the review threads and comments, and give each a short summary of what applying a fix would require
- Ask the user which numbered comments should be addressed
## 3) If user chooses comments
- Apply fixes for the selected comments
Notes:
- If `gh` hits auth or rate-limit issues mid-run, prompt the user to re-authenticate with `gh auth login`, then retry.


@@ -0,0 +1,6 @@
interface:
display_name: "GitHub Address Comments"
short_description: "Address comments in a GitHub PR review"
icon_small: "./assets/github-small.svg"
icon_large: "./assets/github.png"
default_prompt: "Address all actionable GitHub PR review comments in this branch and summarize the updates."


@@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" viewBox="0 0 16 16">
<path fill="currentColor" d="M8 1.3a6.665 6.665 0 0 1 5.413 10.56 6.677 6.677 0 0 1-3.288 2.432c-.333.067-.458-.142-.458-.316 0-.226.008-.942.008-1.834 0-.625-.208-1.025-.45-1.233 1.483-.167 3.042-.734 3.042-3.292a2.58 2.58 0 0 0-.684-1.792c.067-.166.3-.85-.066-1.766 0 0-.559-.184-1.834.683a6.186 6.186 0 0 0-1.666-.225c-.567 0-1.134.075-1.667.225-1.275-.858-1.833-.683-1.833-.683-.367.916-.134 1.6-.067 1.766a2.594 2.594 0 0 0-.683 1.792c0 2.55 1.55 3.125 3.033 3.292-.192.166-.367.458-.425.891-.383.175-1.342.459-1.942-.55-.125-.2-.5-.691-1.025-.683-.558.008-.225.317.009.442.283.158.608.75.683.941.133.376.567 1.092 2.242.784 0 .558.008 1.083.008 1.242 0 .174-.125.374-.458.316a6.662 6.662 0 0 1-4.559-6.325A6.665 6.665 0 0 1 8 1.3Z"/>
</svg>



@@ -0,0 +1,237 @@
#!/usr/bin/env python3
"""
Fetch all PR conversation comments + reviews + review threads (inline threads)
for the PR associated with the current git branch, by shelling out to:
gh api graphql
Requires:
- `gh auth login` already set up
- current branch has an associated (open) PR
Usage:
python fetch_comments.py > pr_comments.json
"""
from __future__ import annotations
import json
import subprocess
import sys
from typing import Any
QUERY = """\
query(
$owner: String!,
$repo: String!,
$number: Int!,
$commentsCursor: String,
$reviewsCursor: String,
$threadsCursor: String
) {
repository(owner: $owner, name: $repo) {
pullRequest(number: $number) {
number
url
title
state
# Top-level "Conversation" comments (issue comments on the PR)
comments(first: 100, after: $commentsCursor) {
pageInfo { hasNextPage endCursor }
nodes {
id
body
createdAt
updatedAt
author { login }
}
}
# Review submissions (Approve / Request changes / Comment), with body if present
reviews(first: 100, after: $reviewsCursor) {
pageInfo { hasNextPage endCursor }
nodes {
id
state
body
submittedAt
author { login }
}
}
# Inline review threads (grouped), includes resolved state
reviewThreads(first: 100, after: $threadsCursor) {
pageInfo { hasNextPage endCursor }
nodes {
id
isResolved
isOutdated
path
line
diffSide
startLine
startDiffSide
originalLine
originalStartLine
resolvedBy { login }
comments(first: 100) {
nodes {
id
body
createdAt
updatedAt
author { login }
}
}
}
}
}
}
}
"""
def _run(cmd: list[str], stdin: str | None = None) -> str:
p = subprocess.run(cmd, input=stdin, capture_output=True, text=True)
if p.returncode != 0:
raise RuntimeError(f"Command failed: {' '.join(cmd)}\n{p.stderr}")
return p.stdout
def _run_json(cmd: list[str], stdin: str | None = None) -> dict[str, Any]:
out = _run(cmd, stdin=stdin)
try:
return json.loads(out)
except json.JSONDecodeError as e:
raise RuntimeError(f"Failed to parse JSON from command output: {e}\nRaw:\n{out}") from e
def _ensure_gh_authenticated() -> None:
try:
_run(["gh", "auth", "status"])
except RuntimeError:
print("run `gh auth login` to authenticate the GitHub CLI", file=sys.stderr)
raise RuntimeError("gh auth status failed; run `gh auth login` to authenticate the GitHub CLI") from None
def gh_pr_view_json(fields: str) -> dict[str, Any]:
# fields is a comma-separated list like: "number,headRepositoryOwner,headRepository"
return _run_json(["gh", "pr", "view", "--json", fields])
def get_current_pr_ref() -> tuple[str, str, int]:
"""
Resolve the PR for the current branch (whatever gh considers associated).
Works for cross-repo PRs too, by reading head repository owner/name.
"""
pr = gh_pr_view_json("number,headRepositoryOwner,headRepository")
owner = pr["headRepositoryOwner"]["login"]
repo = pr["headRepository"]["name"]
number = int(pr["number"])
return owner, repo, number
def gh_api_graphql(
owner: str,
repo: str,
number: int,
comments_cursor: str | None = None,
reviews_cursor: str | None = None,
threads_cursor: str | None = None,
) -> dict[str, Any]:
"""
Call `gh api graphql` using -F variables, avoiding JSON blobs with nulls.
Query is passed via stdin using query=@- to avoid shell newline/quoting issues.
"""
cmd = [
"gh",
"api",
"graphql",
"-F",
"query=@-",
"-F",
f"owner={owner}",
"-F",
f"repo={repo}",
"-F",
f"number={number}",
]
if comments_cursor:
cmd += ["-F", f"commentsCursor={comments_cursor}"]
if reviews_cursor:
cmd += ["-F", f"reviewsCursor={reviews_cursor}"]
if threads_cursor:
cmd += ["-F", f"threadsCursor={threads_cursor}"]
return _run_json(cmd, stdin=QUERY)
def fetch_all(owner: str, repo: str, number: int) -> dict[str, Any]:
conversation_comments: list[dict[str, Any]] = []
reviews: list[dict[str, Any]] = []
review_threads: list[dict[str, Any]] = []
comments_cursor: str | None = None
reviews_cursor: str | None = None
threads_cursor: str | None = None
pr_meta: dict[str, Any] | None = None
while True:
payload = gh_api_graphql(
owner=owner,
repo=repo,
number=number,
comments_cursor=comments_cursor,
reviews_cursor=reviews_cursor,
threads_cursor=threads_cursor,
)
if "errors" in payload and payload["errors"]:
raise RuntimeError(f"GitHub GraphQL errors:\n{json.dumps(payload['errors'], indent=2)}")
pr = payload["data"]["repository"]["pullRequest"]
if pr_meta is None:
pr_meta = {
"number": pr["number"],
"url": pr["url"],
"title": pr["title"],
"state": pr["state"],
"owner": owner,
"repo": repo,
}
c = pr["comments"]
r = pr["reviews"]
t = pr["reviewThreads"]
conversation_comments.extend(c.get("nodes") or [])
reviews.extend(r.get("nodes") or [])
review_threads.extend(t.get("nodes") or [])
comments_cursor = c["pageInfo"]["endCursor"] if c["pageInfo"]["hasNextPage"] else None
reviews_cursor = r["pageInfo"]["endCursor"] if r["pageInfo"]["hasNextPage"] else None
threads_cursor = t["pageInfo"]["endCursor"] if t["pageInfo"]["hasNextPage"] else None
if not (comments_cursor or reviews_cursor or threads_cursor):
break
assert pr_meta is not None
return {
"pull_request": pr_meta,
"conversation_comments": conversation_comments,
"reviews": reviews,
"review_threads": review_threads,
}
def main() -> None:
_ensure_gh_authenticated()
owner, repo, number = get_current_pr_ref()
result = fetch_all(owner, repo, number)
print(json.dumps(result, indent=2))
if __name__ == "__main__":
main()


@@ -0,0 +1,65 @@
---
name: hackage-release
description: Use when user asks to release, publish, or bump version of a Haskell package to Hackage
---
# Hackage Release
Bump version, build, validate, tag, push, and publish a Haskell package to Hackage.
## Workflow
1. **Bump version** in `package.yaml` (if using hpack) or `.cabal` file
2. **Update ChangeLog.md** with release notes
3. **Regenerate cabal** (if using hpack): `hpack`
4. **Build**: `cabal build`
5. **Check**: `cabal check` (must report zero warnings)
6. **Create sdist**: `cabal sdist`
7. **Commit & tag**: commit all changed files, `git tag vX.Y.Z.W`
8. **Push**: `git push && git push --tags`
9. **Get Hackage credentials**: `pass show hackage.haskell.org.gpg`
- Format: first line is password, `user:` line has username
10. **Publish package**: `cabal upload --publish <sdist-tarball> --username=<user> --password='<pass>'`
11. **Build & publish docs**: `cabal haddock --haddock-for-hackage` then `cabal upload --documentation --publish <docs-tarball> --username=<user> --password='<pass>'`
## Version Bumping (PVP)
Haskell uses the [Package Versioning Policy](https://pvp.haskell.org/) with format `A.B.C.D`:
| Component | When to Bump |
|-----------|-------------|
| A.B (major) | Breaking API changes |
| C (minor) | Backwards-compatible new features |
| D (patch) | Bug fixes, non-API changes |
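To make the table concrete, here is a tiny illustrative helper (not part of the skill) that applies a PVP bump. Treating a breaking change as a bump of `B` is one common choice, since `A.B` together form the major version:

```python
def pvp_bump(version: str, change: str) -> str:
    """Bump an A.B.C.D PVP version for a given change kind (illustrative only)."""
    parts = [int(p) for p in version.split(".")]
    while len(parts) < 4:  # normalize short versions like "0.4" to A.B.C.D
        parts.append(0)
    a, b, c, d = parts[:4]
    if change == "major":   # breaking API change: bump B, reset C and D
        return f"{a}.{b + 1}.0.0"
    if change == "minor":   # backwards-compatible feature: bump C, reset D
        return f"{a}.{b}.{c + 1}.0"
    if change == "patch":   # bug fix: bump D
        return f"{a}.{b}.{c}.{d + 1}"
    raise ValueError(f"unknown change kind: {change}")

print(pvp_bump("1.2.3.4", "minor"))  # → 1.2.4.0
```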
## Nix-Based Projects
If the project uses a Nix flake, wrap cabal commands with `nix develop`:
```bash
nix develop --command cabal build
nix develop --command cabal check
nix develop --command hpack package.yaml
```
Prefer `nix develop` (flake) over `nix-shell` (legacy) to avoid ABI mismatches.
## PVP Dependency Bounds
Hackage warns about:
- **Missing upper bounds**: Every dependency should have an upper bound (e.g., `text >= 1.2 && < 2.2`)
- **Trailing zeros in upper bounds**: Use `< 2` not `< 2.0.0`; use `< 0.4` not `< 0.4.0.0`
Run `cabal check` to verify zero warnings before releasing.
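As an illustration (package names and version ranges are hypothetical), a `build-depends` stanza that satisfies both checks might look like:

```cabal
build-depends:
    base  >= 4.14 && < 5
  , text  >= 1.2  && < 2.2   -- upper bound present, no trailing zeros
  , aeson >= 2.0  && < 2.3
```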
## Checklist
- [ ] Version bumped in package.yaml / .cabal
- [ ] ChangeLog.md updated
- [ ] Cabal file regenerated (if hpack)
- [ ] `cabal build` succeeds
- [ ] `cabal check` reports no errors or warnings
- [ ] Changes committed and tagged
- [ ] Pushed to remote with tags
- [ ] Package published to Hackage
- [ ] Docs published to Hackage


@@ -0,0 +1,32 @@
---
name: journaling
description: Use when user wants to journal, reflect, write a journal entry, or process thoughts. Also use when user mentions wanting to talk through what's on their mind.
---
# Journaling
## Overview
Guide the user through a freeform journaling conversation, then synthesize their thoughts into an organized `.org` file.
## How It Works
**1. Open the conversation.** Ask what's on their mind, how things have been going, or what they want to talk through. Keep it open-ended.
**2. Follow up naturally.** Listen for what seems important - dig into those threads. Don't rush through a checklist. One question at a time.
**3. Synthesize into a journal entry.** When the conversation winds down (or the user says they're done), write an organized `~/org/journal/YYYY-MM-DD.org` file with:
- A timestamp on the first line: `[YYYY-MM-DD Day HH:MM]`
- Org headings that emerge naturally from the conversation topics
- The user's thoughts in their own voice, but organized and cleaned up
- No rigid template - structure follows content
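A minimal sketch of the resulting entry shape (headings and contents are illustrative, not a template):

```org
[2026-04-18 Sat 19:05]

* Work
Shipped the release; relieved it's out, but the review process felt slow.

* Energy
Sleeping better since cutting late coffee.
```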
**4. Offer to review.** Show them the entry before writing, let them tweak it.
## Guidelines
- This is their space. Don't coach or advise unless asked.
- Reflect back what you hear - help them see their own patterns.
- If they seem stuck, gently prompt: recent events, feelings, goals, relationships, work.
- Keep the tone warm but not saccharine.
- Entries go in `~/org/journal/` as `YYYY-MM-DD.org`.


@@ -0,0 +1,124 @@
---
name: logical-commits
description: Use when the user asks to split current git changes into logical commits, clean up commit history, create atomic commits, or stage by hunk. Review the whole worktree, group related changes, and produce ordered commits where each commit is a valid state (builds/tests pass with the project validation command).
---
# Logical Commits
Turn a mixed worktree into a clean sequence of atomic commits.
## Workflow
1. Inspect the full change set before staging anything.
2. Define commit boundaries by behavior or concern, not by file count.
3. Order commits so dependencies land first (types/api/schema/helpers before consumers).
4. Stage only the exact hunks for one commit.
5. Validate that staged commit state is healthy before committing.
6. Commit with a precise message.
7. Repeat until all intended changes are committed.
## 1) Inspect First
Run:
```bash
git status --short
git diff --stat
git diff
```
If there are staged changes already, inspect both views:
```bash
git diff --staged
git diff
```
## 2) Choose Validation Command Early
Select the fastest command that proves the repo is valid for this project. Prefer project-standard commands (for example: `just test`, `npm test`, `cargo test`, `go test ./...`, `nix flake check`, targeted build commands).
If no clear command exists:
1. Infer the best available command from repo scripts/config.
2. Tell the user what command you chose and why.
3. Do not claim full validation if coverage is partial.
## 3) Plan the Commit Stack
Before committing, write a short plan:
1. Commit title
2. Files and hunks included
3. Why this is a coherent unit
4. Validation command to run
If changes are intertwined, split by hunk (`git add -p`). If hunk splitting is not enough, use `git add -e` or perform a temporary refactor so each commit remains coherent and valid.
## 4) Stage Exactly One Commit
Preferred staging flow:
```bash
git add -p <file>
git diff --staged
```
Useful corrections:
```bash
git restore --staged -p <file> # unstage specific hunks
git reset -p <file> # alternate unstage flow
```
Never stage unrelated edits just to make the commit pass.
## 5) Validate Before Commit
Run the chosen validation command with the current staged/working tree state.
If validation fails:
1. Fix only what belongs in this logical commit, or
2. Unstage/re-split and revise the commit boundary.
Commit only after validation passes.
## 6) Commit and Verify
Commit:
```bash
git commit -m "<type>: <logical change>"
```
Then confirm:
```bash
git show --stat --oneline -1
```
Ensure remaining unstaged changes still make sense for later commits.
## 7) Final Checks
After finishing the stack:
```bash
git log --oneline --decorate -n <count>
git status
```
Report:
1. The commit sequence created
2. Validation command(s) run per commit
3. Any residual risks (for example, partial validation only)
## Guardrails
1. Keep commits atomic and reviewable.
2. Prefer hunk staging over broad file staging when a file contains multiple concerns.
3. Preserve user changes; do not discard unrelated work.
4. Avoid destructive commands unless the user explicitly requests them.
5. If a clean logical split is impossible without deeper refactor, explain the blocker and ask for direction.


@@ -0,0 +1,77 @@
---
name: nixpkgs-review
description: Review or prepare nixpkgs package changes and PRs using a checklist distilled from review feedback on Ivan Malison's own NixOS/nixpkgs pull requests. Use when working in nixpkgs on package inits, updates, packaging fixes, or before opening or reviewing a nixpkgs PR.
---
# Nixpkgs Review
Use this skill when the task is specifically about reviewing or tightening a change in `NixOS/nixpkgs`.
The goal is not generic style review. The goal is to catch the kinds of issues that repeatedly came up in real nixpkgs feedback on Ivan's PRs: derivation structure, builder choice, metadata, PR hygiene, and JS packaging details.
## Workflow
1. Read the scope first.
Open the changed `package.nix` files, related metadata, and the PR title/body if there is one.
2. Run the historical checklist below.
Bias toward concrete review findings and actionable edits, not abstract style commentary.
3. Validate the package path.
Use the narrowest reasonable validation for the task: targeted build, package eval, or `nixpkgs-review` when appropriate.
4. If you are writing a review:
Lead with findings ordered by severity, include file references, and tie each point to a nixpkgs expectation.
5. If you are preparing a PR:
Fix the checklist items before opening it, then confirm title/body/commit hygiene.
## Historical Checklist
### Derivation structure
- Prefer `finalAttrs` over `rec` for derivations and nested derivations when self-references matter.
- Prefer `tag = "v${...}"` over `rev` when fetching a tagged upstream release.
- Check whether `strictDeps = true;` should be enabled.
- Use the narrowest builder/stdenv that matches the package. If no compiler is needed, consider `stdenvNoCC`.
- Put source modifications in `postPatch` or another appropriate hook, not inside `buildPhase`.
- Prefer `makeBinaryWrapper` over `makeWrapper` when a compiled wrapper is sufficient.
- Keep wrappers aligned with `meta.mainProgram` so overrides remain clean.
- Avoid `with lib;` in package expressions; prefer explicit `lib.*` references.
### Metadata and platform expectations
- For new packages, ensure maintainers are present and include the submitter when appropriate.
- Check whether platform restrictions are justified. Do not mark packages Linux-only or broken without evidence.
- If a package is only workable through patch accumulation and has no maintainer, call that out directly.
### JS, Bun, Electron, and wrapper-heavy packages
- Separate runtime deps from build-only deps. Large closures attract review attention.
- Remove redundant env vars and duplicated configuration if build hooks already cover them.
- Check bundled tool/runtime version alignment, especially browser/runtime pairs.
- Install completions, desktop files, or icons when upstream clearly ships them and the package already exposes the feature.
- Be careful with wrappers that hardcode env vars users may want to override.
### PR hygiene
- PR title should match nixpkgs naming and the package version.
- Keep the PR template intact unless there is a strong reason not to.
- Avoid unrelated commits in the PR branch.
- Watch for duplicate or overlapping PRs before investing in deeper review.
- If asked, squash fixup history before merge.
## Review Output
When producing a review, prefer this shape:
- Finding: what is wrong or risky.
- Why it matters in nixpkgs terms.
- Concrete fix, ideally with the exact attr/hook/builder to use.
If there are no findings, say so explicitly and mention remaining validation gaps.
## References
- Read [references/review-patterns.md](references/review-patterns.md) for the curated list of recurring review themes and concrete PR examples.
- Run `scripts/mine_pr_feedback.py --repo NixOS/nixpkgs --author colonelpanic8 --limit 20 --format markdown` to refresh the source material from newer PRs.


@@ -0,0 +1,4 @@
interface:
display_name: "Nixpkgs Review"
short_description: "Review nixpkgs changes with historical guidance"
default_prompt: "Use $nixpkgs-review to review this nixpkgs package change before I open the PR."


@@ -0,0 +1,105 @@
# Nixpkgs Review Patterns
This reference is a curated summary of recurring feedback from Ivan Malison's `NixOS/nixpkgs` PRs. Use it to ground reviews in patterns nixpkgs reviewers have already raised.
## Most Repeated Themes
### 1. Prefer `finalAttrs` over `rec`
This came up repeatedly on both package init and update PRs.
- [PR #490230](https://github.com/NixOS/nixpkgs/pull/490230) `playwright-cli`: reviewer asked for `buildNpmPackage (finalAttrs: { ... })` instead of `rec`.
- [PR #490033](https://github.com/NixOS/nixpkgs/pull/490033) `rumno`: same feedback for `rustPlatform.buildRustPackage`.
Practical rule:
- If the derivation self-references `version`, `src`, `pname`, `meta.mainProgram`, or nested outputs, default to `finalAttrs`.
### 2. Prefer `tag` when upstream release is a tag
This also repeated across multiple PRs.
- [PR #490230](https://github.com/NixOS/nixpkgs/pull/490230) `playwright-cli`
- [PR #490033](https://github.com/NixOS/nixpkgs/pull/490033) `rumno`
- [PR #497465](https://github.com/NixOS/nixpkgs/pull/497465) `t3code`
Practical rule:
- If upstream publishes a named release tag, prefer `tag = "v${finalAttrs.version}";` or the exact tag format instead of a raw `rev`.
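A minimal sketch combining themes 1 and 2 (plus `strictDeps` from theme 4). Everything here is illustrative, not a real package:

```nix
# Hypothetical package showing finalAttrs, tag, and strictDeps together.
stdenv.mkDerivation (finalAttrs: {
  pname = "example-tool";
  version = "1.2.3";

  src = fetchFromGitHub {
    owner = "example";
    repo = "example-tool";
    # Tagged upstream release: prefer tag over a raw rev.
    tag = "v${finalAttrs.version}";
    hash = lib.fakeHash; # replace with the real hash after the first build
  };

  strictDeps = true;

  meta = {
    description = "Illustrative example only";
    license = lib.licenses.mit;
    mainProgram = "example-tool";
  };
})
```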
### 3. Use the right hook and builder
Reviewers often push on hook placement and builder/stdenv choice.
- [PR #497465](https://github.com/NixOS/nixpkgs/pull/497465) `t3code`: feedback to move work from `buildPhase` into `postPatch`.
- [PR #497465](https://github.com/NixOS/nixpkgs/pull/497465) `t3code`: feedback to consider `stdenvNoCC`.
- [PR #490230](https://github.com/NixOS/nixpkgs/pull/490230) `playwright-cli`: prefer `makeBinaryWrapper` for a simple wrapper.
Practical rule:
- Check whether each mutation belongs in `postPatch`, `preConfigure`, `buildPhase`, or `installPhase`.
- Check whether the package genuinely needs a compiler toolchain.
- For simple env/arg wrappers, prefer `makeBinaryWrapper`.
### 4. Enable `strictDeps` unless there is a reason not to
This was called out explicitly on [PR #497465](https://github.com/NixOS/nixpkgs/pull/497465).
Practical rule:
- For new derivations, ask whether `strictDeps = true;` should be present.
- If not, be ready to justify why the builder or package layout makes it unnecessary.
### 5. Keep metadata explicit and override-friendly
- [PR #490230](https://github.com/NixOS/nixpkgs/pull/490230) `playwright-cli`: reviewer asked to avoid `with lib;`.
- [PR #497465](https://github.com/NixOS/nixpkgs/pull/497465) `t3code`: reviewer suggested deriving wrapper executable name from `finalAttrs.meta.mainProgram`.
Practical rule:
- Prefer explicit references like `lib.licenses.mit` instead of pulling names into scope with `with lib;`.
- Keep `meta.mainProgram` authoritative and have wrappers follow it when practical.
### 6. Maintainers matter for new packages
- [PR #496806](https://github.com/NixOS/nixpkgs/pull/496806) `gws`: reviewer would not merge until the submitter appeared in maintainers.
Practical rule:
- For package inits, check maintainers early rather than waiting for review feedback.
### 7. PR title and template hygiene are review targets
- [PR #497465](https://github.com/NixOS/nixpkgs/pull/497465) `t3code`: asked to fix the PR title to match the version.
- [PR #490033](https://github.com/NixOS/nixpkgs/pull/490033) `rumno`: reviewer asked what happened to the PR template.
Practical rule:
- Before opening or updating a PR, verify the title, template, and branch scope.
### 8. Duplicate or overlapping PRs get noticed quickly
- [PR #490227](https://github.com/NixOS/nixpkgs/pull/490227) was replaced by [PR #490230](https://github.com/NixOS/nixpkgs/pull/490230).
- [PR #490053](https://github.com/NixOS/nixpkgs/pull/490053) overlapped with [PR #490033](https://github.com/NixOS/nixpkgs/pull/490033).
- [PR #488606](https://github.com/NixOS/nixpkgs/pull/488606), [PR #488602](https://github.com/NixOS/nixpkgs/pull/488602), and [PR #488603](https://github.com/NixOS/nixpkgs/pull/488603) were closed after reviewers pointed to existing work.
Practical rule:
- Search for existing PRs on the package before spending time polishing a review.
- If a branch contains unrelated commits, fix that before asking for review.
### 9. JS/Bun/Electron packages draw runtime-layout scrutiny
This came up heavily on `t3code` and `playwright-cli`.
- [PR #497465](https://github.com/NixOS/nixpkgs/pull/497465) `t3code`: reviewers proposed trimming the runtime closure, removing unnecessary env vars, and adding shell completions and desktop integration.
- [PR #490230](https://github.com/NixOS/nixpkgs/pull/490230) `playwright-cli`: reviewers called out mismatched bundled `playwright-core` and browser binaries, and wrapper behavior that prevented user overrides.
Practical rule:
- For JS-heavy packages, inspect closure size, runtime vs build-only deps, wrapper env vars, and version alignment between bundled libraries and external binaries.
### 10. Cross-platform evidence helps
- [PR #490230](https://github.com/NixOS/nixpkgs/pull/490230) received an approval explicitly noting Darwin success.
- [PR #497465](https://github.com/NixOS/nixpkgs/pull/497465) got feedback questioning platform restrictions and build behavior.
Practical rule:
- If the package plausibly supports Darwin, avoid premature Linux-only restrictions and mention what was or was not tested.
## How To Use This Reference
- Use these patterns as a focused checklist before submitting or reviewing nixpkgs changes.
- Do not blindly apply every point. Check whether the builder, language ecosystem, and upstream release model actually match.
- When in doubt, prefer concrete evidence from the current package diff over generic convention.


@@ -0,0 +1,2 @@
__pycache__/
*.pyc


@@ -0,0 +1,208 @@
#!/usr/bin/env python3
"""
Mine external feedback from recent GitHub PRs.
Examples:
python scripts/mine_pr_feedback.py --repo NixOS/nixpkgs --author colonelpanic8
python scripts/mine_pr_feedback.py --repo NixOS/nixpkgs --author colonelpanic8 --limit 30 --format json
"""
from __future__ import annotations
import argparse
import json
import subprocess
import sys
from collections import Counter
from concurrent.futures import ThreadPoolExecutor, as_completed
def run(cmd: list[str]) -> str:
proc = subprocess.run(cmd, capture_output=True, text=True)
if proc.returncode != 0:
raise RuntimeError(proc.stderr.strip() or f"command failed: {' '.join(cmd)}")
return proc.stdout
def gh_json(args: list[str]) -> object:
return json.loads(run(["gh", *args]))
def fetch_prs(repo: str, author: str, limit: int) -> list[dict]:
prs: dict[int, dict] = {}
for state in ("open", "closed"):
data = gh_json(
[
"search",
"prs",
"--repo",
repo,
"--author",
author,
"--limit",
str(max(limit, 30)),
"--state",
state,
"--json",
"number,title,state,closedAt,updatedAt,url",
]
)
for pr in data:
prs[pr["number"]] = pr
return sorted(
prs.values(),
key=lambda pr: (pr["updatedAt"], pr["number"]),
reverse=True,
)[:limit]
def fetch_feedback(repo: str, author: str, pr: dict) -> dict:
owner, name = repo.split("/", 1)
number = pr["number"]
def api(path: str) -> list[dict]:
return gh_json(["api", f"repos/{owner}/{name}/{path}", "--paginate"])
issue_comments = api(f"issues/{number}/comments")
review_comments = api(f"pulls/{number}/comments")
reviews = api(f"pulls/{number}/reviews")
comments = []
for comment in issue_comments:
login = comment["user"]["login"]
body = (comment.get("body") or "").strip()
if login != author and body:
comments.append({"kind": "issue", "user": login, "body": body})
for comment in review_comments:
login = comment["user"]["login"]
body = (comment.get("body") or "").strip()
if login != author and body:
comments.append(
{
"kind": "review_comment",
"user": login,
"body": body,
"path": comment.get("path"),
"line": comment.get("line"),
}
)
for review in reviews:
login = review["user"]["login"]
body = (review.get("body") or "").strip()
if login != author and body:
comments.append(
{
"kind": "review",
"user": login,
"body": body,
"state": review.get("state"),
}
)
return {**pr, "comments": comments}
def is_bot(login: str) -> bool:
    return login.endswith("[bot]") or login in {"github-actions", "dependabot"}
def render_markdown(results: list[dict], include_bots: bool) -> str:
commenters = Counter()
kept = []
for pr in results:
comments = [
comment
for comment in pr["comments"]
if include_bots or not is_bot(comment["user"])
]
if comments:
kept.append({**pr, "comments": comments})
commenters.update(comment["user"] for comment in comments)
lines = [
"# PR Feedback Summary",
"",
f"- PRs scanned: {len(results)}",
f"- PRs with external feedback: {len(kept)}",
"",
"## Top commenters",
"",
]
for user, count in commenters.most_common(10):
lines.append(f"- `{user}`: {count}")
for pr in kept:
lines.extend(
[
"",
f"## PR #{pr['number']}: {pr['title']}",
"",
f"- URL: {pr['url']}",
f"- State: {pr['state']}",
"",
]
)
for comment in pr["comments"]:
body = comment["body"].replace("\r", " ").replace("\n", " ").strip()
snippet = body[:280] + ("..." if len(body) > 280 else "")
lines.append(f"- `{comment['user']}` `{comment['kind']}`: {snippet}")
return "\n".join(lines) + "\n"
def main() -> int:
parser = argparse.ArgumentParser(description="Collect review feedback from recent GitHub PRs.")
parser.add_argument("--repo", required=True, help="GitHub repo in owner/name form")
parser.add_argument("--author", required=True, help="PR author to inspect")
parser.add_argument("--limit", type=int, default=20, help="How many recent PRs to inspect")
parser.add_argument(
"--format",
choices=("markdown", "json"),
default="markdown",
help="Output format",
)
parser.add_argument(
"--include-bots",
action="store_true",
help="Keep bot comments in the output",
)
parser.add_argument(
"--workers",
type=int,
default=6,
help="Maximum concurrent GitHub API workers",
)
args = parser.parse_args()
try:
run(["gh", "auth", "status"])
except RuntimeError as err:
print(err, file=sys.stderr)
return 1
prs = fetch_prs(args.repo, args.author, args.limit)
results = []
with ThreadPoolExecutor(max_workers=args.workers) as pool:
futures = [pool.submit(fetch_feedback, args.repo, args.author, pr) for pr in prs]
for future in as_completed(futures):
results.append(future.result())
results.sort(key=lambda pr: (pr["updatedAt"], pr["number"]), reverse=True)
if args.format == "json":
if not args.include_bots:
for pr in results:
pr["comments"] = [
comment for comment in pr["comments"] if not is_bot(comment["user"])
]
json.dump(results, sys.stdout, indent=2)
sys.stdout.write("\n")
else:
sys.stdout.write(render_markdown(results, args.include_bots))
return 0
if __name__ == "__main__":
raise SystemExit(main())

View File

@@ -0,0 +1,51 @@
---
name: org-agenda-api-production
description: Use when investigating production org-agenda-api state, testing endpoints, or debugging production issues
---
# org-agenda-api Production Access
## Overview
Access the production org-agenda-api instance at https://colonelpanic-org-agenda.fly.dev/ for debugging, testing, or verification.
## Credentials
Get the password from `pass`:
```bash
pass show org-agenda-api/imalison
```
Username is currently `imalison`.
## Quick Access with just
This repo includes a `justfile` under `~/dotfiles/org-agenda-api` with pre-configured commands:
```bash
cd ~/dotfiles/org-agenda-api
just health
just get-all-todos
just get-todays-agenda
just agenda
just agenda-files
just todo-states
just create-todo "Test todo"
```
## Manual curl
Prefer using the `just` recipes above so we don't bake auth syntax into docs.
## Key Endpoints
| Endpoint | Method | Description |
|----------|--------|-------------|
| /health | GET | Health check |
| /version | GET | API version |
| /get-all-todos | GET | All TODO items |
| /agenda | GET | Agenda (span=day\|week) |
| /capture | POST | Create entry |
| /update | POST | Update heading |
| /complete | POST | Complete item |
| /delete | POST | Delete heading |

View File

@@ -0,0 +1,312 @@
---
name: org-agenda-api
description: Use when interacting with the org-agenda-api HTTP server to read/write org-mode agenda data
---
# Org Agenda API Reference
HTTP API for org-mode agenda data. Use this skill when you need to query or modify org agenda entries programmatically.
## Authentication
Get credentials from pass:
```bash
pass show colonelpanic-org-agenda.fly.dev
```
Returns: password on first line, then `user:` and `url:` fields.
**Note:** The `url` field in pass may be outdated. Use the base URL below.
## Base URL
`https://colonelpanic-org-agenda.fly.dev`
All requests use Basic Auth with the credentials from pass.
## Read Endpoints
### GET /agenda
Get agenda entries for a day or week.
Query params:
- `span`: `day` (default) or `week`
- `date`: `YYYY-MM-DD` (default: today)
- `include_overdue`: `true` to include overdue items from previous days
- `include_completed`: `true` to include items completed on the queried date
- `refresh`: `true` to git pull repos first
Response includes `span`, `date`, `entries` array, and optionally `gitRefresh` results.
### GET /get-all-todos
Get all TODO items from agenda files.
Query params:
- `refresh`: `true` to git pull first
Response includes `defaults` (with `notifyBefore`), `todos` array, and optionally `gitRefresh`.
### GET /metadata
Get all app metadata in a single request. Returns:
- `templates`: capture templates
- `filterOptions`: tags, categories, priorities, todoStates
- `todoStates`: active and done states
- `customViews`: available custom agenda views
- `errors`: any errors encountered fetching above
### GET /todo-states
Get configured TODO states. Returns:
- `active`: array of not-done states (TODO, NEXT, etc.)
- `done`: array of done states (DONE, CANCELLED, etc.)
### GET /filter-options
Get available filter options. Returns:
- `todoStates`: all states
- `priorities`: available priorities (A, B, C)
- `tags`: all tags from agenda files
- `categories`: all categories
### GET /custom-views
List available custom agenda views. Returns array of `{key, name}` objects.
### GET /custom-view
Run a custom agenda view.
Query params:
- `key` (required): custom agenda command key
- `refresh`: `true` to git pull first
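A minimal call, using the same `$USER`/`$PASS`/`$URL` convention as the Common Workflows section; the key `n` here is a hypothetical custom agenda command key, not one guaranteed to exist:

```shell
curl -s -u "$USER:$PASS" "$URL/custom-view?key=n&refresh=true" | jq .
```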
### GET /agenda-files
Get list of org-agenda-files with existence and readability status.
### GET /capture-templates (alias: /templates)
List available capture templates with their prompts.
### GET /health
Health check. Returns `status`, `uptime`, `requests`, and `captureStatus` if unhealthy.
### GET /version
Version info. Returns `version` and `gitCommit`.
### GET /debug-config
Current org configuration for debugging.
## Write Endpoints
### POST /capture
Create a new entry using a capture template.
**Important:** Use `capture-g` (GTD Todo) for most tasks - it properly records creation time and logbook history. Only use `default` when you specifically don't want GTD tracking.
Body:
```json
{
"template": "capture-g",
"values": {
"Title": "Task title",
"scheduled": "2026-01-20",
"deadline": "2026-01-25",
"priority": "A",
"tags": ["work", "urgent"],
"todo": "TODO"
}
}
```
### POST /complete
Mark a TODO as complete.
Body (use any combination to identify the item):
```json
{
"id": "org-id-if-available",
"file": "/path/to/file.org",
"pos": 12345,
"title": "Task title",
"state": "DONE"
}
```
Lookup order: id -> file+pos+title -> file+title -> title only
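Since `id` is checked first and is the stable identifier, prefer it when available. A sketch (the id value is hypothetical):

```shell
curl -s -u "$USER:$PASS" -X POST "$URL/complete" \
  -H "Content-Type: application/json" \
  -d '{"id":"some-org-id","state":"DONE"}'
```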
### POST /update
Update a TODO's scheduled date, deadline, priority, tags, or properties.
Body:
```json
{
"id": "org-id",
"file": "/path/to/file.org",
"pos": 12345,
"title": "Task title",
"scheduled": "2026-01-20T10:00:00",
"deadline": "2026-01-25",
"priority": "B",
"tags": ["updated", "tags"],
"properties": {
"CUSTOM_PROP": "value"
}
}
```
Set value to `null` or empty string to clear. Response includes new `pos` for cache updates.
### POST /delete
Delete an org item permanently.
Body:
```json
{
"id": "org-id",
"file": "/path/to/file.org",
"position": 12345,
"include_children": true
}
```
Requires `include_children: true` if the item has children; otherwise the request returns an error.
### POST /restart
Restart the Emacs server (exits gracefully, supervisord restarts).
## Category Strategy Endpoints
These require org-category-capture to be configured.
### GET /category-types
List registered category strategy types. Returns array with:
- `name`: strategy type name
- `hasCategories`: boolean
- `captureTemplate`: template string
- `prompts`: array of prompt definitions
### GET /categories
Get categories for a strategy type.
Query params:
- `type` (required): strategy type name (e.g., "projects")
- `existing_only`: `true` to only return categories with capture locations
Returns `type`, `categories` array, `todoFiles` array.
### GET /category-tasks
Get tasks for a specific category.
Query params:
- `type` (required): strategy type name
- `category` (required): category name
### POST /category-capture
Capture a new entry to a category.
Body:
```json
{
"type": "projects",
"category": "my-project",
"title": "Task title",
"todo": "TODO",
"scheduled": "2026-01-20",
"deadline": "2026-01-25",
"priority": "A",
"tags": ["work"],
"properties": {"EFFORT": "1h"}
}
```
## Response Format
Agenda/todo entries include:
- `todo`: TODO state (TODO, NEXT, DONE, etc.)
- `title`: Heading text
- `scheduled`: ISO date or datetime
- `deadline`: ISO date or datetime
- `priority`: A, B, or C (only if explicitly set)
- `tags`: Array of tags
- `file`: Source file path
- `pos`: Position in file (may change after edits)
- `id`: Org ID if set (stable identifier)
- `olpath`: Outline path array
- `level`: Heading level
- `category`: Category of the item
- `properties`: All properties from the property drawer
- `completedAt`: ISO timestamp when completed (if applicable)
- `agendaLine`: Raw agenda display text (agenda endpoint only)
- `notifyBefore`: Array of minutes for notifications
- `isWindowHabit`: Boolean for window habits
- `habitSummary`: Summary object for habits (if applicable)
## Common Workflows
**View today's agenda:**
```bash
curl -s -u "$USER:$PASS" "$URL/agenda?span=day" | jq '.entries[] | {todo, title, scheduled}'
```
**View this week:**
```bash
curl -s -u "$USER:$PASS" "$URL/agenda?span=week" | jq .
```
**View completed tasks for a specific date:**
```bash
curl -s -u "$USER:$PASS" "$URL/agenda?date=2026-01-17&include_completed=true" | jq '.entries[] | select(.completedAt != null) | {title, completedAt}'
```
**Get all metadata at once:**
```bash
curl -s -u "$USER:$PASS" "$URL/metadata" | jq .
```
**Create a task:**
```bash
curl -s -u "$USER:$PASS" -X POST "$URL/capture" \
-H "Content-Type: application/json" \
-d '{"template":"capture-g","values":{"Title":"New task","scheduled":"2026-01-20"}}'
```
**Complete a task by title:**
```bash
curl -s -u "$USER:$PASS" -X POST "$URL/complete" \
-H "Content-Type: application/json" \
-d '{"title":"Task title"}'
```
**Update a task's schedule:**
```bash
curl -s -u "$USER:$PASS" -X POST "$URL/update" \
-H "Content-Type: application/json" \
-d '{"title":"Task title","scheduled":"2026-01-21T14:00:00"}'
```
**Clear a deadline:**
```bash
curl -s -u "$USER:$PASS" -X POST "$URL/update" \
-H "Content-Type: application/json" \
-d '{"title":"Task title","deadline":null}'
```
**Delete a task:**
```bash
curl -s -u "$USER:$PASS" -X POST "$URL/delete" \
-H "Content-Type: application/json" \
-d '{"title":"Task to delete","file":"/path/to/file.org","position":12345}'
```
## Error Handling
All endpoints return JSON. Errors include:
```json
{
"status": "error",
"message": "Error description"
}
```
Success responses include:
```json
{
"status": "created" | "completed" | "updated",
...additional fields
}
```

View File

@@ -0,0 +1,122 @@
---
name: password-reset
description: Use when the user wants to reset or rotate a website or service password end-to-end, including finding the right `pass` entry, generating a new password with `xkcdpassgen`, retrieving reset emails through `gws gmail` or a local mail CLI, completing the reset in the browser with Chrome DevTools MCP, and updating the password store safely without losing entry metadata.
---
# Password Reset
## Overview
Handle password resets end-to-end. Prefer `gws gmail` for reset-email retrieval, Chrome DevTools MCP for website interaction, and the local `xkcdpassgen` helper for password generation.
## Tool Priorities
- Prefer `gws gmail` over opening Gmail in the browser.
- If `gws` is unavailable, use an installed Gmail CLI or IMAP-based mail tool if one exists locally. Inspect the environment first instead of guessing command names.
- Prefer Chrome DevTools MCP for all browser interaction.
- Use `pass find` and `pass show` before asking the user for credentials or account details.
## Password Generation
The local password generator is `xkcdpassgen`, defined in `dotfiles/lib/functions/xkcdpassgen` and available in shell as an autoloaded function.
```bash
xkcdpassgen <pass-entry-name>
```
Behavior:
- Generates `xkcdpass -n 3 | tr -d ' '` as the base password.
- Appends one uppercase letter, one digit, and one symbol by default.
- Supports:
- `-U` to omit uppercase
- `-N` to omit number
- `-S` to omit symbol
Do not substitute a different password generator unless the user explicitly asks.
## Safe `pass` Update Pattern
`xkcdpassgen` writes directly to the `pass` entry it is given. Do not run it against the canonical entry before the reset succeeds, because:
- it would overwrite the current password immediately
- it would replace any extra metadata lines in a multiline `pass` entry
Use this pattern instead:
```bash
entry="service/example"
tmp_entry="${entry}-password-reset-tmp"
existing_contents="$(pass show "$entry" 2>/dev/null || true)"
metadata="$(printf '%s\n' "$existing_contents" | tail -n +2)"
xkcdpassgen "$tmp_entry"
new_password="$(pass show "$tmp_entry" | head -1)"
# ... use $new_password in the reset flow ...
if [ -n "$metadata" ]; then
printf '%s\n%s\n' "$new_password" "$metadata" | pass insert -m -f "$entry"
else
printf '%s\n' "$new_password" | pass insert -m -f "$entry"
fi
pass rm -f "$tmp_entry"
```
If the site rejects the password because of policy constraints, keep the canonical entry unchanged, delete or reuse the temp entry, and generate another candidate with different flags only if needed.
## Reset Workflow
1. Identify the account and canonical `pass` entry.
2. Run `pass find <service>` and inspect likely matches with `pass show`.
3. Capture existing metadata before generating a new password.
4. Generate the candidate password into a temporary `pass` entry with `xkcdpassgen`.
5. Start the reset flow in Chrome DevTools MCP:
- navigate to the login or account page
- use the site's "forgot password" flow, or
- sign in and navigate to security settings if the user asked for a rotation rather than a reset
6. Use `gws gmail` to retrieve the reset email when needed:
- search recent mail by sender domain, subject, or reset-related keywords
- open the message and extract the reset link
- navigate to that link in Chrome DevTools MCP
7. Fill the new password from the temporary `pass` entry and complete the form.
8. Verify success:
- confirmation page, or
- successful login with the new password
9. Promote the temp password into the canonical `pass` entry while preserving metadata, then remove the temp entry.
## Email Guidance
Prefer `gws gmail` for reset-email handling. Typical pattern:
- list recent messages with `gws gmail users messages list --params '{"userId":"me","q":"from:service.example newer_than:7d"}'`
- bias toward reset keywords such as `reset`, `password`, `security`, `verify`, or `signin`
- read shortlisted messages with `gws gmail users messages get --params '{"userId":"me","id":"MESSAGE_ID","format":"full"}'` rather than browsing Gmail manually
If `gws` is unavailable, use an installed Gmail CLI or local mail helper only as a fallback. Keep that discovery lightweight and local to the current environment.
## Browser Guidance
Use Chrome DevTools MCP to complete the reset flow directly:
- navigate to the reset or security page
- take snapshots to identify the relevant inputs and buttons
- click, fill, and submit through the site UI
- verify the success state before updating the canonical `pass` entry
Prefer MCP interaction over describing steps for the user to perform manually.
## Credentials And Account Data
- Search `pass` before asking the user for usernames, recovery emails, or OTP-related entries.
- Preserve existing metadata lines in multiline `pass` entries whenever possible.
- Never print the new password in the final response unless the user explicitly asks for it.
## Failure Handling
- If account discovery is ambiguous, ask a short clarifying question only after checking `pass`.
- If the reset email does not arrive, search spam or alternate senders before giving up.
- If login or reset requires another secret that is not in `pass`, then ask the user.
- If the reset flow fails after temp-password generation, leave the canonical entry untouched.

View File

@@ -0,0 +1,4 @@
interface:
display_name: "Password Reset"
short_description: "Reset passwords and update pass safely"
default_prompt: "Use $password-reset to reset this account password, complete the browser flow, and update pass safely."

View File

@@ -0,0 +1,402 @@
---
name: planning-coaching
description: Use when helping with daily planning, task prioritization, reviewing agenda, or when user seems stuck on what to do next
---
# Planning Coaching
Help Ivan with planning through question-driven coaching, honest feedback, and data-informed accountability.
## Persistent Files
**IMPORTANT:** Always read these at the start of planning sessions.
### Context File: `/home/imalison/org/planning/context.org`
Persistent context about Ivan's life, goals, struggles, and current focus. Claude maintains this file - update it when:
- Goals or priorities shift
- New patterns emerge
- Life circumstances change
- We learn something about what helps/doesn't help
Read this first. It's the "state of Ivan" that persists across sessions.
### Daily Journals: `/home/imalison/org/planning/dailies/YYYY-MM-DD.org`
One file per day we do planning. Contains:
- That day's plan (short list, focus areas)
- Stats table from the previous day review (inline)
- Notes from the session
- End-of-day reflection (if we do one)
Create a new file for each planning session day. Reference past dailies to see patterns.
### Stats File: `/home/imalison/org/planning/stats.org`
Running tables for trend analysis:
- **Daily Log**: One row per planning day with all metrics
- **Weekly Summary**: Aggregated weekly totals with notes
### Raw Logs: `/home/imalison/org/planning/logs.jsonl`
Detailed machine-readable log (one JSON object per line, per day). Captures full task data so we can calculate new metrics retroactively.
Each line contains:
```json
{
"date": "2026-01-20",
"planned": [{"title": "...", "friction": 3, "effort": 2, "id": "...", "file": "...", ...}],
"completed": [{"title": "...", "friction": 3, "effort": 2, "completedAt": "...", ...}],
"rescheduled": [{"title": "...", "from": "2026-01-20", "to": "2026-01-21", ...}],
"context": {"energy": "medium", "available_time": "full day", "notes": "..."}
}
```
When recording stats:
1. Append full JSON object to logs.jsonl
2. Add summary row to stats.org Daily Log table
3. Include inline stats table in that day's journal
4. Update Weekly Summary when a week ends
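The per-day metrics can be sketched as a small helper over one logs.jsonl record; this is a minimal illustration assuming the schema shown above (field names like `friction` come from that example, not a fixed spec):

```python
import json

def daily_metrics(day: dict) -> dict:
    """Compute the daily-review metrics from one logs.jsonl record."""
    planned = day.get("planned", [])
    completed = day.get("completed", [])
    return {
        "completion_rate": len(completed) / len(planned) if planned else 0.0,
        "friction_conquered": sum(t.get("friction", 0) for t in completed),
        "rescheduled": len(day.get("rescheduled", [])),
    }

# One hypothetical line from logs.jsonl (same shape as the example above)
line = (
    '{"date": "2026-01-20",'
    ' "planned": [{"title": "a", "friction": 3}, {"title": "b", "friction": 1}],'
    ' "completed": [{"title": "a", "friction": 3}],'
    ' "rescheduled": [{"title": "b", "from": "2026-01-20", "to": "2026-01-21"}]}'
)
day = json.loads(line)
print(daily_metrics(day))
```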
## Core Principles
1. **Question-driven**: Ask questions to help think through priorities rather than dictating
2. **Direct and honest**: Call out avoidance patterns directly - this is wanted
3. **Data-informed**: Use org-agenda-api to look at patterns, velocity, scheduling history
4. **Balance pressure**: Push on procrastination but don't overwhelm on decision-heavy tasks
5. **Lightweight and flexible**: Always offer option to skip parts if not feeling it
6. **No guilt**: If we fall off the wagon, make it easy and encouraging to get back on
## Planning Session Flow
```dot
digraph planning_session {
rankdir=TB;
"Read context.org" [shape=box];
"Yesterday review (skippable)" [shape=box];
"Capture new items" [shape=box];
"Check current state" [shape=box];
"Inbox processing (skippable)" [shape=box];
"Pick focus areas" [shape=box];
"Create short list" [shape=box];
"Meta check (optional)" [shape=box];
"Write daily journal" [shape=box];
"Read context.org" -> "Yesterday review (skippable)";
"Yesterday review (skippable)" -> "Capture new items";
"Capture new items" -> "Check current state";
"Check current state" -> "Inbox processing (skippable)";
"Inbox processing (skippable)" -> "Pick focus areas";
"Pick focus areas" -> "Create short list";
"Create short list" -> "Meta check (optional)";
"Meta check (optional)" -> "Write daily journal";
}
```
Every step marked "skippable" - offer it, but accept "let's skip that today" without question.
### 0. Read Context (Always)
Read `/home/imalison/org/planning/context.org` first. This grounds the session in what's currently going on.
### 1. Yesterday Review (Skippable)
Quick look back at the previous day. Keep it lightweight - a minute or two, not an interrogation.
**Subjective check-in:**
- "How do you feel about yesterday?" (open-ended, not demanding)
- "Anything you want to talk about - productivity or otherwise?"
**Objective stats (if wanted):**
- Completion rate: X of Y planned tasks done
- Friction conquered: total/average friction of completed tasks
- Rescheduled: N tasks bumped to today
- Effort accuracy: any tasks that took way more/less than estimated?
**Keep it encouraging:**
- Celebrate wins, especially high-friction completions
- If it was a rough day, acknowledge it without judgment
- "Yesterday was yesterday. What do we want today to look like?"
**If we haven't done this in a while:**
- "Hey, we haven't done a planning session in [X days]. No big deal - want to ease back in?"
- Don't guilt trip. Just pick up where we are.
### 2. Capture New Items
Before diving into today's state, ask: "Anything new come up that needs to be captured?"
- New tasks, ideas, commitments that surfaced since last session
- Things remembered overnight or during the day
- Add these to org before continuing
**Which capture command to use:**
- `just inbox "Task title"` - Default for new todos. Quick capture without setting properties. Items go to inbox for later triage (setting effort, friction, priority, category).
- `just capture "Task title"` - Only when we're setting effort, friction, priority, or category upfront during the planning session.
This prevents things from falling through the cracks and clears mental load before planning.
### 3. Check Current State
Ask about:
- Energy level right now (low/medium/high)
- Time available and structure of the day
- Any hard deadlines or commitments
- Mental state (scattered? focused? anxious?)
### 4. Inbox Processing (Skippable)
Process items captured to inbox since last session. These are quick captures (`just inbox`) that need triage.
**For each inbox item, decide:**
1. Is this actually actionable? (If not: delete, or convert to reference/someday)
2. Assign FRICTION and EFFORT estimates
3. Set priority if obvious
4. Schedule if it has a natural date, otherwise leave unscheduled for later prioritization
5. **IMPORTANT: Transition state from INBOX to NEXT** using `just set-state "Task title" "NEXT"`
**Process for property assignment:**
1. Both of us estimate FRICTION and EFFORT
2. Use Ivan's values unless we differ by 2+ points
3. If discrepancy >= 2, discuss: "I estimated this as [X] because [reason] - what makes you see it as [Y]?"
**Why this matters:** Items sitting in inbox create mental overhead. Regular processing keeps the system trustworthy.
### 5. Pick Focus Areas
Based on energy and context, choose what *types* of work to tackle:
- High friction tasks (if energy supports it)
- Quick wins (if need momentum)
- Deep work (if have focus time)
- Admin/shallow work (if low energy)
### 6. Create Short List
Curate 3-5 tasks that match the day's reality. Not a full dump - a focused list.
### 7. Meta Check (Optional)
Occasionally (weekly-ish, or when it feels right), ask:
- "Is this planning process working for you?"
- "Anything we should change about how we do this?"
- "Are the FRICTION/EFFORT scales making sense?"
This is how we iterate on the system itself.
## Task Properties
Store in org properties drawer via `just update` with a `properties` field in the JSON body.
### FRICTION (0-5)
Psychological resistance / avoidance tendency / decision paralysis factor.
| Value | Meaning |
|-------|---------|
| 0 | No friction - could start right now |
| 1 | Minimal - minor reluctance |
| 2 | Some - need to push a bit |
| 3 | Moderate - will procrastinate without intention |
| 4 | High - significant avoidance |
| 5 | Maximum - dread/paralysis |
### EFFORT (Fibonacci: 1, 2, 3, 5, 8)
Time/energy investment. Store as number, discuss as t-shirt size.
| Number | T-shirt | Meaning |
|--------|---------|---------|
| 1 | XS | Trivial, <30min |
| 2 | S | Small, ~1-2h |
| 3 | M | Medium, half-day |
| 5 | L | Large, full day |
| 8 | XL | Multi-day effort |
### Setting Properties
```bash
just update '{"title": "Task name", "properties": {"FRICTION": "3", "EFFORT": "5"}}'
```
## Priority Framework
When helping decide what to work on, weigh these factors:
1. **Energy/context match**: Does current energy support this task's friction level?
2. **Deadlines**: What's due soon or has external pressure?
3. **Impact**: What moves the needle most?
High-friction + high-impact tasks need the right conditions. Don't push these when energy is low.
## Handling Avoidance
**Be direct.** Ivan wants honest feedback.
When noticing avoidance patterns:
- "You've rescheduled X three times now. What's making this hard?"
- "This has been on your list for two weeks. Let's talk about what's blocking it."
- "I notice you keep picking small tasks over [big important thing]. What would make that more approachable?"
**Use data:**
- Look at scheduling history via `just agenda-day YYYY-MM-DD`
- Track how long tasks have been scheduled
- Notice patterns in what gets done vs. avoided
## Coaching Stance
**Do:**
- Ask "what's making this hard?" not "why haven't you done this?"
- Offer to break down high-friction tasks into smaller steps
- Notice and celebrate progress, especially on hard things
- Be honest about patterns you see
**Don't:**
- Overwhelm with too many decisions at once
- Push high-friction tasks when energy is clearly low
- Judge - observe and inquire instead
- Let things slide without comment (directness is wanted)
## Red Flags to Watch For
- Same task rescheduled 3+ times
- Consistently avoiding a category of work
- Taking on new commitments while existing ones slip
- Only doing low-friction tasks day after day
- Overcommitting (too many items scheduled for one day)
When you see these: name it directly and explore what's going on.
## Mid-Day Check-ins
These can happen impromptu - not every day, just when useful.
**When to offer:**
- If morning plan isn't working out
- Energy shifted significantly
- Got stuck or derailed
- Finished the short list early
**Keep it brief:**
- "How's it going with [today's focus]?"
- "Want to adjust the plan for the afternoon?"
- "Anything blocking you right now?"
## Metrics We Track
For the daily review, pull these from the API:
| Metric | How to calculate | Why it matters |
|--------|------------------|----------------|
| Completion rate | completed / planned for day | Overall follow-through |
| Friction conquered | sum of FRICTION on completed tasks | Are we tackling hard things? |
| Rescheduling count | tasks that moved from yesterday to today | Chronic rescheduling = avoidance |
| Effort accuracy | compare EFFORT estimate vs actual | Calibrate future estimates |
**Don't obsess over numbers.** They're conversation starters, not report cards.
## Queries for Planning
Use the `just` commands in `/home/imalison/org/justfile` for all API interactions.
**Tasks needing property assignment:**
```bash
just todos # Get all todos, filter for missing FRICTION or EFFORT in properties
```
**Today's agenda (including overdue):**
```bash
just agenda-overdue # Use this for planning - shows today + all overdue items
just agenda # Only today's scheduled items (misses overdue tasks)
```
**Note:** Always use `agenda-overdue` during planning sessions to see the full picture of what needs attention.
**Agenda for specific date:**
```bash
just agenda-day 2026-01-20
```
**Completed items for a specific date:**
```bash
just completed 2026-01-22 # Get items completed on a specific date
just completed-today # Get items completed today
```
**This week's agenda:**
```bash
just agenda-week
```
**Overdue/rescheduled items:**
```bash
just agenda-overdue
```
**Capture new items:**
```bash
just inbox "New task title" # Quick capture to inbox (default)
just capture "Task title" "2026-01-22" # With scheduling
```
**Update task properties:**
```bash
just update '{"title": "Task name", "properties": {"FRICTION": "3", "EFFORT": "5"}}'
```
**Reschedule a task:**
```bash
just reschedule "Task title" "2026-01-25"
```
**Complete a task:**
```bash
just complete "Task title"
```
**Change task state (e.g., INBOX -> NEXT):**
```bash
just set-state "Task title" "NEXT"
```
## Daily Journal Template
Create `/home/imalison/org/planning/dailies/YYYY-MM-DD.org` for each session:
```org
#+TITLE: Planning - YYYY-MM-DD
#+DATE: [YYYY-MM-DD Day]
* Yesterday Review
** Stats
| Metric | Value |
|-------------+-------|
| Planned | N |
| Completed | N |
| Rate | N% |
| Friction | N |
| Rescheduled | N |
** Reflection
[How Ivan felt about yesterday, anything discussed]
* Today's Context
- Energy: [low/medium/high]
- Available time: [description]
- Mental state: [notes]
* Focus Areas
- [What types of work we're tackling today]
* Today's Short List
Use org ID links to reference tasks - don't duplicate task definitions here.
- [[id:uuid-here][Task 1 title]]
- [[id:uuid-here][Task 2 title]]
- [[id:uuid-here][Task 3 title]]
* Notes
[Anything else from the session]
* End of Day (optional)
[If we do an evening check-in]
```
**Also add row to** `/home/imalison/org/planning/stats.org` Daily Log table.
## Updating Context File
Update `/home/imalison/org/planning/context.org` when:
- Ivan mentions a new goal or project
- We notice a recurring pattern
- Something significant changes in life/work
- We discover what helps or doesn't help
- The meta check reveals process adjustments
Don't ask permission to update it - just do it and mention what changed.

View File

@@ -0,0 +1,47 @@
---
name: playwright-cli
description: Automate browser interactions from the shell using Playwright via the `playwright-cli` command (open/goto/snapshot/click/type/screenshot, tabs/storage/network). Use when you need deterministic browser automation for web testing, form filling, screenshots/PDFs, or data extraction.
---
# Browser Automation With playwright-cli
This system provides `playwright-cli` via Nix (see `nixos/flake.nix` for the nixpkgs PR patch and `nixos/code.nix` for installation), so it's available on `PATH` without any `npm -g` installs.
## Quick Start
```bash
# First run (downloads browser bits used by Playwright)
playwright-cli install-browser
# Open a new browser session (optionally with a URL)
playwright-cli open
playwright-cli open https://example.com/
# Navigate, inspect, and interact
playwright-cli goto https://playwright.dev
playwright-cli snapshot
playwright-cli click e15
playwright-cli type "search query"
playwright-cli press Enter
# Save artifacts
playwright-cli screenshot --filename=page.png
playwright-cli pdf --filename=page.pdf
# Close the browser
playwright-cli close
```
## Practical Workflow
1. `playwright-cli open` (or `open <url>`)
2. `playwright-cli snapshot`
3. Use element refs (`e1`, `e2`, ...) from the snapshot with `click`, `fill`, `hover`, `check`, etc.
4. Take `screenshot`/`pdf` as needed
5. `playwright-cli close`
## Tips
- Use `playwright-cli state-save auth.json` / `state-load auth.json` to persist login state across runs.
- Use named sessions with `-s=mysession` when you need multiple concurrent browsers.
- Set `PLAYWRIGHT_CLI_PACKAGE` to pin the npm package (default is `@playwright/cli@latest`).

View File

@@ -0,0 +1,5 @@
interface:
display_name: "Playwright CLI"
short_description: "Automate browser interactions"
default_prompt: "Use playwright-cli to automate browser actions (open/goto/snapshot/click/type/screenshot) and save useful artifacts (screenshots, PDFs, auth state)."

View File

@@ -0,0 +1,54 @@
---
name: release
description: Use when user asks to release, publish, bump version, or prepare a new version for deployment
---
# Release
Validate, format, bump version, and tag for release.
## Workflow
1. **Validate** - Run project's validation command
2. **Fix formatting** - Auto-fix prettier/formatting issues if any
3. **Bump version** - Ask user for bump type, update package.json
4. **Commit & tag** - Commit version bump, create git tag
5. **Optionally push** - Ask if user wants to push
## Commands
```bash
# 1. Validate
yarn validate # or: npm run validate
# 2. Fix formatting if needed
yarn prettier:fix # or: npm run prettier:fix
# 3. Bump version (edit package.json)
# patch: 1.2.3 → 1.2.4
# minor: 1.2.3 → 1.3.0
# major: 1.2.3 → 2.0.0
# 4. Commit and tag
git add package.json
git commit -m "chore: bump version to X.Y.Z"
git tag vX.Y.Z
# 5. Push (if requested)
git push && git push --tags
```
## Quick Reference
| Bump Type | When to Use |
|-----------|-------------|
| patch | Bug fixes, small changes |
| minor | New features, backwards compatible |
| major | Breaking changes |
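The bump rules in the table can be sketched as a small shell helper (the `bump` function name is illustrative, not part of this skill):

```shell
# Hypothetical helper sketching the patch/minor/major bump rules above.
bump() {
  kind=$1
  old_ifs=$IFS; IFS=.
  set -- $2            # split X.Y.Z into positional params on dots
  IFS=$old_ifs
  case $kind in
    patch) echo "$1.$2.$(($3 + 1))" ;;
    minor) echo "$1.$(($2 + 1)).0" ;;
    major) echo "$(($1 + 1)).0.0" ;;
    *) echo "usage: bump patch|minor|major X.Y.Z" >&2; return 1 ;;
  esac
}

bump patch 1.2.3   # 1.2.4
bump minor 1.2.3   # 1.3.0
bump major 1.2.3   # 2.0.0
```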
## Before Release Checklist
- [ ] All tests pass
- [ ] No lint errors
- [ ] Formatting is clean
- [ ] Changes are committed

View File

@@ -0,0 +1,86 @@
---
name: taffybar-ecosystem-release
description: Use when releasing, version-bumping, or propagating changes across taffybar GitHub org packages (taffybar, gtk-sni-tray, gtk-strut, status-notifier-item, dbus-menu, dbus-hslogger)
---
# Taffybar Ecosystem Release
Release and propagate changes across the taffybar Haskell package ecosystem.
See also: `taffybar-nixos-flake-chain` for how these packages are consumed by the NixOS configuration and what flake.lock updates may be needed after a release.
## Package Dependency Graph
```
taffybar
├── gtk-sni-tray
│ ├── dbus-menu
│ ├── gtk-strut
│ └── status-notifier-item
├── dbus-menu
├── gtk-strut
├── status-notifier-item
└── dbus-hslogger
```
**Leaf packages** (no ecosystem deps): `gtk-strut`, `status-notifier-item`, `dbus-hslogger`, `dbus-menu`
**Mid-level**: `gtk-sni-tray` (depends on dbus-menu, gtk-strut, status-notifier-item)
**Top-level**: `taffybar` (depends on all above)
## Repositories & Local Checkouts
| Package | GitHub | Local Checkout |
|---------|--------|---------------|
| taffybar | taffybar/taffybar | `~/.config/taffybar/taffybar/` |
| gtk-sni-tray | taffybar/gtk-sni-tray | `~/Projects/gtk-sni-tray/` |
| gtk-strut | taffybar/gtk-strut | `~/Projects/gtk-strut/` |
| status-notifier-item | taffybar/status-notifier-item | `~/Projects/status-notifier-item/` |
| dbus-menu | taffybar/dbus-menu | `~/Projects/dbus-menu/` |
| dbus-hslogger | IvanMalison/dbus-hslogger | `~/Projects/dbus-hslogger/` |
## Releasing a Package
Always release leaf packages before their dependents. Changes propagate **upward** through the graph.
### 1. Release the Changed Package
Use the `hackage-release` skill for the full Hackage publish workflow. In the local checkout:
1. Bump version in `.cabal` file (PVP: A.B.C.D)
2. Update ChangeLog.md
3. `cabal build && cabal check`
4. `cabal sdist`
5. Commit, tag `vX.Y.Z.W`, push with tags
6. Publish to Hackage
7. Publish docs
**Manual doc upload required for GTK-dependent packages:** Hackage cannot build documentation for packages that depend on GTK/GI libraries (the build servers lack the system dependencies). This affects `taffybar`, `gtk-sni-tray`, `gtk-strut`, and `dbus-menu`. For these packages you must build haddocks locally and upload them yourself — see the `hackage-release` skill for the `cabal haddock --haddock-for-hackage` and `cabal upload --documentation` commands. Only `status-notifier-item` and `dbus-hslogger` (pure DBus/Haskell deps) can have their docs built by Hackage automatically.
### 2. Update Dependents' Version Bounds
For each package higher in the graph that depends on what you just released, update the dependency bound in its `.cabal` file. For example, if you bumped `gtk-strut` to 0.1.5.0:
- In `gtk-sni-tray.cabal`: update `gtk-strut >= 0.1.5 && < 0.2`
- In `taffybar.cabal`: update `gtk-strut >= 0.1.5 && < 0.2`
Then release those packages too if needed (repeat from step 1).
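A quick way to sanity-check whether a freshly released version still fits an existing bound is GNU `sort -V`; the `in_bounds` helper here is a hypothetical sketch of the `>= lo && < hi` test, not a full Cabal range parser:

```shell
# Hypothetical check (GNU sort -V) that a version satisfies a Cabal-style
# bound like `>= 0.1.5 && < 0.2`.
in_bounds() {
  v=$1 lo=$2 hi=$3
  # v >= lo: lo sorts first (or ties); v < hi: v sorts strictly before hi.
  [ "$(printf '%s\n%s\n' "$lo" "$v" | sort -V | head -n1)" = "$lo" ] &&
  [ "$v" != "$hi" ] &&
  [ "$(printf '%s\n%s\n' "$v" "$hi" | sort -V | head -n1)" = "$v" ]
}

in_bounds 0.1.5.0 0.1.5 0.2 && echo "bounds OK"    # 0.1.5.0 fits the bound
in_bounds 0.2.0   0.1.5 0.2 || echo "bump bound"   # 0.2.0 does not
```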
### 3. Update Flake Inputs
Each package's `flake.nix` references its ecosystem dependencies as inputs (typically `flake = false` pointing at GitHub). After pushing changes, update the flake.lock in any repo that directly references the changed package:
```bash
cd ~/Projects/gtk-sni-tray # if it depends on what changed
nix flake update gtk-strut
```
```bash
cd ~/.config/taffybar/taffybar # taffybar references all ecosystem pkgs
nix flake update gtk-strut
```
### Full Ecosystem Release Order
1. `gtk-strut`, `status-notifier-item`, `dbus-hslogger`, `dbus-menu` (leaves — parallel OK)
2. `gtk-sni-tray` (update bounds for any leaf changes first)
3. `taffybar` (update bounds for all changes)
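The order above can also be derived mechanically from the dependency graph with coreutils `tsort`, which prints every dependency before its dependents (each input line below is "dependency dependent", transcribed from the graph at the top of this document):

```shell
# Sketch: compute a valid release order for the ecosystem with tsort.
# Leaves print first, taffybar last.
tsort <<'EOF'
gtk-strut gtk-sni-tray
status-notifier-item gtk-sni-tray
dbus-menu gtk-sni-tray
gtk-strut taffybar
status-notifier-item taffybar
dbus-menu taffybar
dbus-hslogger taffybar
gtk-sni-tray taffybar
EOF
```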

View File

@@ -0,0 +1,61 @@
---
name: taffybar-nixos-flake-chain
description: Use when doing NixOS rebuilds involving taffybar, or when flake.lock updates are needed after changing taffybar ecosystem packages. Also use when debugging stale taffybar versions after `just switch`.
---
# Taffybar NixOS Flake Chain
How the taffybar ecosystem packages are consumed by the NixOS configuration through a chain of nested flakes, and what flake.lock updates may be needed when something changes.
See also: `taffybar-ecosystem-release` for the package dependency graph, release workflow, and Hackage publishing.
## The Three-Layer Flake Chain
The NixOS system build pulls in taffybar through three nested flake.nix files:
```
nixos/flake.nix (top — `just switch` reads this)
│ ├── taffybar path:.../taffybar/taffybar
│ ├── imalison-taffybar path:../dotfiles/config/taffybar
│ └── gtk-sni-tray, gtk-strut, etc. (GitHub inputs)
dotfiles/config/taffybar/flake.nix (middle — imalison-taffybar config)
│ ├── taffybar path:.../taffybar/taffybar
│ └── gtk-sni-tray, gtk-strut, etc. (GitHub inputs)
dotfiles/config/taffybar/taffybar/flake.nix (bottom — taffybar library)
│ └── gtk-sni-tray, gtk-strut, etc. (flake = false GitHub inputs)
```
All three flakes declare their own top-level inputs for the ecosystem packages and use `follows` to keep versions consistent within each layer.
## Why Bottom-Up Updates Matter
`path:` inputs snapshot the target flake **including its flake.lock** at lock time. If you only run `nix flake update` at the top (nixos) layer, the middle and bottom layers keep whatever was previously locked in their own flake.lock files.
So when propagating a change to a system rebuild, you generally need to update flake.lock files from the bottom up — the bottom layer first so the middle layer picks up fresh locks when it re-resolves, then the middle so the top picks up fresh locks.
```bash
# Bottom (if an ecosystem dep changed):
cd ~/.config/taffybar/taffybar && nix flake update <pkg>
# Middle:
cd ~/.config/taffybar && nix flake update <pkg> taffybar
# Top:
cd ~/dotfiles/nixos && nix flake update <pkg> imalison-taffybar taffybar
```
Not every change requires touching all three layers. Think about which flake.lock files actually contain stale references:
- Changed **taffybar itself** — it's the bottom layer, so start at the middle (`nix flake update taffybar`) then the top.
- Changed a **leaf ecosystem package** (e.g. gtk-strut) — start at the bottom since taffybar's flake.lock references it, then cascade up.
- The nixos flake also has **direct GitHub inputs** for ecosystem packages with `follows` overrides. Updating those at the top level may be sufficient if nothing changed in the middle/bottom flake.lock files themselves.
## Rebuilding
```bash
cd ~/dotfiles/nixos && just switch
```
If taffybar seems stale after a rebuild, check whether the flake.lock at each layer actually points at the expected revision — a missed cascade step is the usual cause.
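One way to do that check is to read the pinned revision for an input out of each layer's flake.lock with `jq` (assumed available); the lock data below is a fabricated minimal example, but the `.nodes.<input>.locked.rev` path matches the real flake.lock schema:

```shell
# Sketch: extract the revision a flake.lock pins for an input, so the three
# layers can be compared against the expected commit.
cat > /tmp/flake.lock.example <<'EOF'
{ "nodes": { "gtk-strut": { "locked": { "rev": "abc123" } } } }
EOF
jq -r '.nodes["gtk-strut"].locked.rev' /tmp/flake.lock.example   # abc123
```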

dotfiles/claude/CLAUDE.md Symbolic link
View File

@@ -0,0 +1 @@
../agents/AGENTS.md

View File

@@ -0,0 +1,20 @@
{
"hooks": {
"UserPromptSubmit": [
{
"hooks": [
{
"type": "command",
"command": "~/.agents/hooks/tmux-title.sh"
}
]
}
]
},
"enabledPlugins": {
"superpowers@superpowers-marketplace": true,
"agent-browser@agent-browser": true
},
"effortLevel": "high",
"skipDangerousModePermissionPrompt": true
}

View File

@@ -0,0 +1,39 @@
{
"permissions": {
"allow": [
"Bash(find:*)",
"Bash(cat:*)"
],
"deny": []
},
"mcp": {
"servers": {
"gitea-mcp": {
"command": "bash",
"args": [
"-lc",
"set -euo pipefail; export GITEA_BASE_URL='https://dev.railbird.ai'; export GITEA_ACCESS_TOKEN=\"$(pass show claude-mcp/gitea-access-token | head -1)\"; exec docker run -i --rm -e GITEA_ACCESS_TOKEN -e GITEA_BASE_URL docker.gitea.com/gitea-mcp-server"
]
},
"chrome-devtools": {
"command": "npx",
"args": [
"chrome-devtools-mcp@latest",
"--auto-connect"
]
},
"imap-email": {
"command": "bash",
"args": [
"-lc",
"set -euo pipefail; export IMAP_USER='IvanMalison@gmail.com'; export IMAP_HOST='imap.gmail.com'; export IMAP_PASSWORD=\"$(pass show claude-mcp/gmail-imap-app-password | head -1)\"; exec npx -y imap-email-mcp"
]
}
}
},
"enabledMcpjsonServers": [
"chrome-devtools",
"imap-email"
],
"enableAllProjectMcpServers": true
}

View File

@@ -0,0 +1,43 @@
{
"permissions": {
"allow": [
"Bash(find:*)",
"Bash(cat:*)"
],
"deny": []
},
"mcp": {
"servers": {
"gitea-mcp": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"GITEA_ACCESS_TOKEN",
"-e",
"GITEA_BASE_URL=https://dev.railbird.ai",
"docker.gitea.com/gitea-mcp-server"
]
},
"chrome-devtools": {
"command": "npx",
"args": [
"chrome-devtools-mcp@latest",
"--auto-connect"
]
},
"imap-email": {
"command": "npx",
"args": ["-y", "imap-email-mcp"],
"env": {}
}
}
},
"enabledMcpjsonServers": [
"chrome-devtools",
"imap-email"
],
"enableAllProjectMcpServers": true
}

dotfiles/codex/AGENTS.md Symbolic link
View File

@@ -0,0 +1 @@
../agents/AGENTS.md

dotfiles/codex/config.toml Normal file
View File

@@ -0,0 +1,159 @@
model = "gpt-5.4"
model_reasoning_effort = "high"
personality = "pragmatic"
notify = ["/Users/kat/.codex/plugins/cache/openai-bundled/computer-use/1.0.750/Codex Computer Use.app/Contents/SharedSupport/SkyComputerUseClient.app/Contents/MacOS/SkyComputerUseClient", "turn-ended"]
[projects."/home/imalison/Projects/nixpkgs"]
trust_level = "trusted"
[projects."/home/imalison/dotfiles"]
trust_level = "trusted"
[projects."/home/imalison/Projects/railbird"]
trust_level = "trusted"
[projects."/home/imalison/Projects/subtr-actor"]
trust_level = "trusted"
[projects."/home/imalison/Projects/google-messages-api"]
trust_level = "trusted"
[projects."/home/imalison"]
trust_level = "trusted"
[projects."/home/imalison/Projects/scrobble-scrubber"]
trust_level = "trusted"
[projects."/home/imalison/temp"]
trust_level = "trusted"
[projects."/home/imalison/Projects/org-agenda-api"]
trust_level = "untrusted"
[projects."/home/imalison/org"]
trust_level = "trusted"
[projects."/home/imalison/dotfiles/.git/modules/dotfiles/config/taffybar"]
trust_level = "trusted"
[projects."/home/imalison/Projects/notifications-tray-icon"]
trust_level = "trusted"
[projects."/home/imalison/Projects/hyprland"]
trust_level = "trusted"
[projects."/home/imalison/Projects/git-sync-rs"]
trust_level = "trusted"
[projects."/home/imalison/Projects/keepbook"]
trust_level = "trusted"
[projects."/home/imalison/Projects/boxcars"]
trust_level = "trusted"
[projects."/home/imalison/Projects/rumno"]
trust_level = "trusted"
[projects."/home/imalison/Projects/git-blame-rank"]
trust_level = "trusted"
[projects."/home/imalison/Projects/hatchet"]
trust_level = "trusted"
[projects."/home/imalison/dotfiles/dotfiles/emacs.d/elpaca/sources/org-project-capture"]
trust_level = "trusted"
[projects."/home/imalison/dotfiles/dotfiles/config/taffybar/taffybar/packages"]
trust_level = "trusted"
[projects."/home/imalison/Projects/scrobble-tools"]
trust_level = "trusted"
[projects."/home/imalison/.password-store"]
trust_level = "trusted"
[projects."/home/imalison/Projects/subtr-actor-mechanics"]
trust_level = "trusted"
[projects."/home/imalison/Projects/lastfm-edit"]
trust_level = "trusted"
[projects."/home/imalison/Projects/mova"]
trust_level = "trusted"
[projects."/home/imalison/dotfiles/dotfiles/config/taffybar/taffybar"]
trust_level = "trusted"
[projects."/home/imalison/Projects"]
trust_level = "trusted"
[projects."/home/imalison/Projects/rofi-systemd"]
trust_level = "trusted"
[projects."/home/imalison/Projects/map-quiz"]
trust_level = "trusted"
[projects."/run/media/imalison/NETDEBUGUSB"]
trust_level = "trusted"
[projects."/home/imalison/Projects/coqui-tts-streamer"]
trust_level = "trusted"
[projects."/home/imalison/Downloads"]
trust_level = "trusted"
[projects."/home/imalison/keysmith_generated"]
trust_level = "trusted"
[projects."/Users/kat/dotfiles"]
trust_level = "trusted"
[projects."/Users/kat"]
trust_level = "trusted"
[notice]
hide_gpt5_1_migration_prompt = true
"hide_gpt-5.1-codex-max_migration_prompt" = true
[notice.model_migrations]
"gpt-5.2" = "gpt-5.2-codex"
[mcp_servers.chrome-devtools]
command = "npx"
args = ["-y", "chrome-devtools-mcp@latest", "--auto-connect"]
[mcp_servers.observability]
command = "npx"
args = ["-y", "@google-cloud/observability-mcp"]
[mcp_servers.gmail]
command = "nix"
args = ["run", "/home/imalison/Projects/gmail-mcp#gmail-mcp-server"]
[mcp_servers.openaiDeveloperDocs]
url = "https://developers.openai.com/mcp"
[features]
unified_exec = true
apps = true
steer = true
[marketplaces.openai-bundled]
last_updated = "2026-04-19T01:07:40Z"
source_type = "local"
source = "/Users/kat/.codex/.tmp/bundled-marketplaces/openai-bundled"
[plugins."google-calendar@openai-curated"]
enabled = true
[plugins."gmail@openai-curated"]
enabled = true
[plugins."google-drive@openai-curated"]
enabled = true
[plugins."computer-use@openai-bundled"]
enabled = true
[plugins."github@openai-curated"]
enabled = true

dotfiles/codex/skills Symbolic link
View File

@@ -0,0 +1 @@
../agents/skills

View File

@@ -0,0 +1,16 @@
[general]
import = ["~/.config/alacritty/themes/themes/dracula.toml"]
[font]
size = 12
[scrolling]
history = 10000
multiplier = 3
[window]
decorations = "full"
[window.padding]
x = 10
y = 10

View File

@@ -1,34 +0,0 @@
scrolling:
# How many lines of scrollback to keep,
# '0' will disable scrolling.
history: 10000
# Number of lines the viewport will move for every line
# scrolled when scrollback is enabled (history > 0).
multiplier: 3
# Faux Scrolling
#
# The `faux_multiplier` setting controls the number
# of lines the terminal should scroll when the alternate
# screen buffer is active. This is used to allow mouse
# scrolling for applications like `man`.
#
# To disable this completely, set `faux_multiplier` to 0.
faux_multiplier: 3
# Automatically scroll to the bottom when new text is written
# to the terminal.
auto_scroll: false
font:
size: 8
window:
padding:
x: 10
y: 10
decorations: full
import:
- ~/.config/alacritty/themes/themes/dracula.yaml

View File

@@ -1,2 +0,0 @@
[api]
token = 417ba97c-b532-4e4b-86df-a240314ae840

View File

@@ -0,0 +1,39 @@
output DP-1
off
output HDMI-1
off
output DP-2
off
output HDMI-2
off
output DP-1-0
off
output DP-1-1
off
output DP-1-2
off
output DP-1-3
off
output DP-1-4
off
output DP-1-5
off
output DP-1-6
off
output eDP-1
crtc 0
mode 2560x1600
pos 0x0
primary
rate 240.00
x-prop-broadcast_rgb Automatic
x-prop-colorspace Default
x-prop-max_bpc 12
x-prop-non_desktop 0
x-prop-scaling_mode Full aspect
output HDMI-1-0
crtc 4
mode 3440x1440
pos 2560x0
rate 99.98
x-prop-non_desktop 0

View File

@@ -0,0 +1,2 @@
HDMI-1-0 00ffffffffffff0010ace3a1535a333016210103805123782a25a1b14d3db7250e505421080001010101010101010101010101010101e77c70a0d0a029503020350029623100001a000000ff00237442737a474441594542634e000000fd0018781e963c010a202020202020000000fc0044656c6c204157333432334457015f020337f148101f04130312013f230907018301000068030c002000383c006ad85dc401788000000278e305c000e2006ae60605018d4b004ed470a0d0a046503020350029623100001a9d6770a0d0a022503020350029623100001a565e00a0a0a029503020350029623100001a6fc200a0a0a055503020350029623100001a3c
eDP-1 00ffffffffffff0009e5580c0000000001210104b527187803bbc5ae503fb7250c515500000001010101010101010101010101010101c07200a0a040c8603020360084f21000001a000000fd0c30f0b1b176010a202020202020000000fe00424f452043510a202020202020000000fc004e4531383051444d2d4e4d310a029602030f00e3058080e606050195731000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000fa702079020021001d280f7409000a400680dd2a511824b249120e023554b060ec64662a1378220014ffed1185ff099f002f001f003f06c700020005002b000c27003cef00002700303b0000810015741a0000030b30f0006095107310f0000000008d00000000000000000000000000000000000000000000000000000000bc90

View File

@@ -0,0 +1,39 @@
output DP-1
off
output HDMI-1
off
output DP-2
off
output HDMI-2
off
output DP-1-0
off
output DP-1-1
off
output DP-1-2
off
output DP-1-3
off
output DP-1-4
off
output DP-1-5
off
output DP-1-6
off
output eDP-1
crtc 0
mode 2560x1600
pos 0x0
primary
rate 240.00
x-prop-broadcast_rgb Automatic
x-prop-colorspace Default
x-prop-max_bpc 12
x-prop-non_desktop 0
x-prop-scaling_mode Full aspect
output HDMI-1-0
crtc 4
mode 3440x1440
pos 2560x0
rate 99.98
x-prop-non_desktop 0

Some files were not shown because too many files have changed in this diff.