diff --git a/dotfiles/agents/skills/.gitignore b/dotfiles/agents/skills/.gitignore new file mode 100644 index 00000000..cb9264b0 --- /dev/null +++ b/dotfiles/agents/skills/.gitignore @@ -0,0 +1,2 @@ +.system/ +codex-primary-runtime/ diff --git a/dotfiles/agents/skills/.system/.codex-system-skills.marker b/dotfiles/agents/skills/.system/.codex-system-skills.marker deleted file mode 100644 index 1cc09bd7..00000000 --- a/dotfiles/agents/skills/.system/.codex-system-skills.marker +++ /dev/null @@ -1 +0,0 @@ -22c0ca9bd55ca4ff diff --git a/dotfiles/agents/skills/.system/imagegen/LICENSE.txt b/dotfiles/agents/skills/.system/imagegen/LICENSE.txt deleted file mode 100644 index 13e25df8..00000000 --- a/dotfiles/agents/skills/.system/imagegen/LICENSE.txt +++ /dev/null @@ -1,201 +0,0 @@ -Apache License -Version 2.0, January 2004 -http://www.apache.org/licenses/ - -TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - -1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. 
- - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." 
- - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - -2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - -3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - -4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. 
- - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - -5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - -6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - -7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - -8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - -9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf of - any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - -END OF TERMS AND CONDITIONS - -APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. 
- -Copyright [yyyy] [name of copyright owner] - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. diff --git a/dotfiles/agents/skills/.system/imagegen/SKILL.md b/dotfiles/agents/skills/.system/imagegen/SKILL.md deleted file mode 100644 index 4285e5e6..00000000 --- a/dotfiles/agents/skills/.system/imagegen/SKILL.md +++ /dev/null @@ -1,356 +0,0 @@ ---- -name: "imagegen" -description: "Generate or edit raster images when the task benefits from AI-created bitmap visuals such as photos, illustrations, textures, sprites, mockups, or transparent-background cutouts. Use when Codex should create a brand-new image, transform an existing image, or derive visual variants from references, and the output should be a bitmap asset rather than repo-native code or vector. Do not use when the task is better handled by editing existing SVG/vector/code-native assets, extending an established icon or logo system, or building the visual directly in HTML/CSS/canvas." ---- - -# Image Generation Skill - -Generates or edits images for the current project (for example website assets, game assets, UI mockups, product mockups, wireframes, logo design, photorealistic images, or infographics). - -## Top-level modes and rules - -This skill has exactly two top-level modes: - -- **Default built-in tool mode (preferred):** built-in `image_gen` tool for normal image generation, editing, and simple transparent-image requests. Does not require `OPENAI_API_KEY`. -- **Fallback CLI mode:** `scripts/image_gen.py` CLI. 
Use when the user explicitly asks for the CLI/API/model path, or after the user explicitly confirms a true model-native transparency fallback with `gpt-image-1.5`. Requires `OPENAI_API_KEY`. - -Within CLI fallback, the CLI exposes three subcommands: - -- `generate` -- `edit` -- `generate-batch` - -Rules: -- Use the built-in `image_gen` tool by default for normal image generation and editing requests. -- Do not switch to CLI fallback for ordinary quality, size, or file-path control. -- If the user explicitly asks for a transparent image/background, stay on built-in `image_gen` first: prompt for a flat removable chroma-key background, then remove it locally with the installed helper at `$CODEX_HOME/skills/.system/imagegen/scripts/remove_chroma_key.py`. -- Never silently switch from built-in `image_gen` or CLI `gpt-image-2` to CLI `gpt-image-1.5`. Treat this as a model/path downgrade and ask the user before doing it, unless the user has already explicitly requested `gpt-image-1.5`, `scripts/image_gen.py`, or CLI fallback. -- If a transparent request appears too complex for clean chroma-key removal, asks for true/native transparency, or local removal fails validation, explain that true transparency requires CLI `gpt-image-1.5 --background transparent --output-format png` because `gpt-image-2` does not support `background=transparent`, then ask whether to proceed. Run the CLI fallback only after the user confirms. -- The word `batch` by itself does not mean CLI fallback. If the user asks for many assets or says to batch-generate assets without explicitly asking for CLI/API/model controls, stay on the built-in path and issue one built-in call per requested asset or variant. -- If the built-in tool fails or is unavailable, tell the user the CLI fallback exists and that it requires `OPENAI_API_KEY`. Proceed only if the user explicitly asks for that fallback. -- If the user explicitly asks for CLI mode, use the bundled `scripts/image_gen.py` workflow. 
Do not create one-off SDK runners. -- Never modify `scripts/image_gen.py`. If something is missing, ask the user before doing anything else. - -Built-in save-path policy: -- In built-in tool mode, Codex saves generated images under `$CODEX_HOME/*` by default. -- Do not describe or rely on OS temp as the default built-in destination. -- Do not describe or rely on a destination-path argument (if any) on the built-in `image_gen` tool. If a specific location is needed, generate first and then move or copy the selected output from `$CODEX_HOME/generated_images/...`. -- Save-path precedence in built-in mode: - 1. If the user names a destination, move or copy the selected output there. - 2. If the image is meant for the current project, move or copy the final selected image into the workspace before finishing. - 3. If the image is only for preview or brainstorming, render it inline; the underlying file can remain at the default `$CODEX_HOME/*` path. -- Never leave a project-referenced asset only at the default `$CODEX_HOME/*` path. -- Do not overwrite an existing asset unless the user explicitly asked for replacement; otherwise create a sibling versioned filename such as `hero-v2.png` or `item-icon-edited.png`. - -Shared prompt guidance for both modes lives in `references/prompting.md` and `references/sample-prompts.md`. - -Fallback-only docs/resources for CLI mode: -- `references/cli.md` -- `references/image-api.md` -- `references/codex-network.md` -- `scripts/image_gen.py` - -Local post-processing helper: -- `$CODEX_HOME/skills/.system/imagegen/scripts/remove_chroma_key.py`: removes a flat chroma-key background from a generated image and writes a PNG/WebP with alpha. Prefer auto-key sampling, soft matte, and despill for antialiased edges. 
- -## When to use -- Generate a new image (concept art, product shot, cover, website hero) -- Generate a new image using one or more reference images for style, composition, or mood -- Edit an existing image (inpainting, lighting or weather transformations, background replacement, object removal, compositing, transparent background) -- Produce many assets or variants for one task - -## When not to use -- Extending or matching an existing SVG/vector icon set, logo system, or illustration library inside the repo -- Creating simple shapes, diagrams, wireframes, or icons that are better produced directly in SVG, HTML/CSS, or canvas -- Making a small project-local asset edit when the source file already exists in an editable native format -- Any task where the user clearly wants deterministic code-native output instead of a generated bitmap - -## Decision tree - -Think about two separate questions: - -1. **Intent:** is this a new image or an edit of an existing image? -2. **Execution strategy:** is this one asset or many assets/variants? - -Intent: -- If the user wants to modify an existing image while preserving parts of it, treat the request as **edit**. -- If the user provides images only as references for style, composition, mood, or subject guidance, treat the request as **generate**. -- If the user provides no images, treat the request as **generate**. - -Built-in edit semantics: -- Built-in edit mode is for images already visible in the conversation context, such as attached images or images generated earlier in the thread. -- If the user wants to edit a local image file with the built-in tool, first load it with built-in `view_image` tool so the image is visible in the conversation context, then proceed with the built-in edit flow. -- Do not promise arbitrary filesystem-path editing through the built-in tool. 
-- If a local file still needs direct file-path control, masks, or other explicit CLI-only parameters, use the explicit CLI fallback only when the user asks for it. -- For edits, preserve invariants aggressively and save non-destructively by default. - -Execution strategy: -- In the built-in default path, produce many assets or variants by issuing one `image_gen` call per requested asset or variant. -- In the CLI fallback path, use the CLI `generate-batch` subcommand only when the user explicitly chose CLI mode and needs many prompts/assets. -- For many distinct assets, do not use `n` as a substitute for separate prompts. `n` is for variants of one prompt; distinct assets need distinct built-in calls or distinct CLI `generate-batch` jobs. - -Assume the user wants a new image unless they clearly ask to change an existing one. - -## Workflow -1. Decide the top-level mode: built-in by default, including simple transparent-output requests; fallback CLI only if explicitly requested or after the user explicitly confirms a transparent-output fallback. -2. Decide the intent: `generate` or `edit`. -3. Decide whether the output is preview-only or meant to be consumed by the current project. -4. Decide the execution strategy: single asset vs repeated built-in calls vs CLI `generate-batch`. -5. Collect inputs up front: prompt(s), exact text (verbatim), constraints/avoid list, and any input images. -6. For every input image, label its role explicitly: - - reference image - - edit target - - supporting insert/style/compositing input -7. If the edit target is only on the local filesystem and you are staying on the built-in path, inspect it with `view_image` first so the image is available in conversation context. -8. If the user asked for a photo, illustration, sprite, product image, banner, or other explicitly raster-style asset, use `image_gen` rather than substituting SVG/HTML/CSS placeholders. 
If the request is for an icon, logo, or UI graphic that should match existing repo-native SVG/vector/code assets, prefer editing those directly instead. -9. Augment the prompt based on specificity: - - If the user's prompt is already specific and detailed, normalize it into a clear spec without adding creative requirements. - - If the user's prompt is generic, add tasteful augmentation only when it materially improves output quality. -10. Use the built-in `image_gen` tool by default. -11. For transparent-output requests, follow the transparent image guidance below: generate with built-in `image_gen` on a flat chroma-key background, copy the selected output into the workspace or `tmp/imagegen/`, run the installed `$CODEX_HOME/skills/.system/imagegen/scripts/remove_chroma_key.py` helper, and validate the alpha result before using it. If this path looks unsuitable or fails, ask before switching to CLI `gpt-image-1.5`. -12. Inspect outputs and validate: subject, style, composition, text accuracy, and invariants/avoid items. -13. Iterate with a single targeted change, then re-check. -14. For preview-only work, render the image inline; the underlying file may remain at the default `$CODEX_HOME/generated_images/...` path. -15. For project-bound work, move or copy the selected artifact into the workspace and update any consuming code or references. Never leave a project-referenced asset only at the default `$CODEX_HOME/generated_images/...` path. -16. For batches or multi-asset requests, persist every requested final deliverable in the workspace unless the user explicitly asked to keep outputs preview-only. Discarded variants do not need to be kept unless requested. -17. If the user explicitly chooses or confirms the CLI fallback, then use the fallback-only docs for model, quality, size, `input_fidelity`, masks, output format, output paths, and network setup. -18. 
Always report the final saved path(s) for any workspace-bound asset(s), plus the final prompt or prompt set and whether the built-in tool or fallback CLI mode was used. - -## Transparent image requests - -Transparent-image requests still use built-in `image_gen` first. Because the built-in tool does not expose a true transparent-background control, create a removable chroma-key source image and then convert the key color to alpha locally. - -Default sequence: -1. Use built-in `image_gen` to generate the requested subject on a perfectly flat solid chroma-key background. -2. Choose a key color that is unlikely to appear in the subject: default `#00ff00`, use `#ff00ff` for green subjects, and avoid `#0000ff` for blue subjects. -3. After generation, move or copy the selected source image from `$CODEX_HOME/generated_images/...` into the workspace or `tmp/imagegen/`. -4. Run the installed helper path, not a project-relative script path: - ```bash - python "${CODEX_HOME:-$HOME/.codex}/skills/.system/imagegen/scripts/remove_chroma_key.py" \ - --input \ - --out \ - --auto-key border \ - --soft-matte \ - --transparent-threshold 12 \ - --opaque-threshold 220 \ - --despill - ``` -5. Validate that the output has an alpha channel, transparent corners, plausible subject coverage, and no obvious key-color fringe. If a thin fringe remains, retry once with `--edge-contract 1`; use `--edge-feather 0.25` only when the edge is visibly stair-stepped and the subject is not shiny or reflective. -6. Save the final alpha PNG/WebP in the project if the asset is project-bound. Never leave a project-referenced transparent asset only under `$CODEX_HOME/*`. - -Prompt transparent requests like this: - -```text -Create the requested subject on a perfectly flat solid #00ff00 chroma-key background for background removal. -The background must be one uniform color with no shadows, gradients, texture, reflections, floor plane, or lighting variation. 
-Keep the subject fully separated from the background with crisp edges and generous padding. -Do not use #00ff00 anywhere in the subject. -No cast shadow, no contact shadow, no reflection, no watermark, and no text unless explicitly requested. -``` - -Do not automatically use CLI `gpt-image-1.5 --background transparent --output-format png` instead of chroma keying. Ask the user first when the user asks for true/native transparency, when local removal fails validation, or when the requested image is complex: hair, fur, feathers, smoke, glass, liquids, translucent materials, reflective objects, soft shadows, realistic product grounding, or subject colors that conflict with all practical key colors. - -Use a concise confirmation like: - -```text -This likely needs true native transparency. The default built-in path uses a chroma-key background plus local removal, but true transparency requires the CLI fallback with gpt-image-1.5 because gpt-image-2 does not support background=transparent. It also requires OPENAI_API_KEY. Should I proceed with that CLI fallback? -``` - -## Prompt augmentation - -Reformat user prompts into a structured, production-oriented spec. Make the user's goal clearer and more actionable, but do not blindly add detail. - -Treat this as prompt-shaping guidance, not a closed schema. Use only the lines that help, and add a short extra labeled line when it materially improves clarity. - -### Specificity policy - -Use the user's prompt specificity to decide how much augmentation is appropriate: - -- If the prompt is already specific and detailed, preserve that specificity and only normalize/structure it. -- If the prompt is generic, you may add tasteful augmentation when it will materially improve the result. 
- -Allowed augmentations: -- composition or framing hints -- polish level or intended-use hints -- practical layout guidance -- reasonable scene concreteness that supports the stated request - -Not allowed augmentations: -- extra characters or objects that are not implied by the request -- brand names, slogans, palettes, or narrative beats that are not implied -- arbitrary side-specific placement unless the surrounding layout supports it - -## Use-case taxonomy (exact slugs) - -Classify each request into one of these buckets and keep the slug consistent across prompts and references. - -Generate: -- photorealistic-natural — candid/editorial lifestyle scenes with real texture and natural lighting. -- product-mockup — product/packaging shots, catalog imagery, merch concepts. -- ui-mockup — app/web interface mockups and wireframes; specify the desired fidelity. -- infographic-diagram — diagrams/infographics with structured layout and text. -- scientific-educational — classroom explainers, scientific diagrams, and learning visuals with required labels and accuracy constraints. -- ads-marketing — campaign concepts and ad creatives with audience, brand position, scene, and exact tagline/copy. -- productivity-visual — slide, chart, workflow, and data-heavy business visuals. -- logo-brand — logo/mark exploration, vector-friendly. -- illustration-story — comics, children’s book art, narrative scenes. -- stylized-concept — style-driven concept art, 3D/stylized renders. -- historical-scene — period-accurate/world-knowledge scenes. - -Edit: -- text-localization — translate/replace in-image text, preserve layout. -- identity-preserve — try-on, person-in-scene; lock face/body/pose. -- precise-object-edit — remove/replace a specific element (including interior swaps). -- lighting-weather — time-of-day/season/atmosphere changes only. -- background-extraction — transparent background / clean cutout. 
Use built-in `image_gen` with chroma-key removal first for simple opaque subjects; ask before using CLI true transparency for complex subjects. -- style-transfer — apply reference style while changing subject/scene. -- compositing — multi-image insert/merge with matched lighting/perspective. -- sketch-to-render — drawing/line art to photoreal render. - -## Shared prompt schema - -Use the following labeled spec as shared prompt scaffolding for both top-level modes: - -```text -Use case: -Asset type: -Primary request: -Input images: (optional) -Scene/backdrop: -Subject:
-Style/medium: -Composition/framing: -Lighting/mood: -Color palette: -Materials/textures: -Text (verbatim): "" -Constraints: -Avoid: -``` - -Notes: -- `Asset type` and `Input images` are prompt scaffolding, not dedicated CLI flags. -- `Scene/backdrop` refers to the visual setting. It is not the same as the fallback CLI `background` parameter, which controls output transparency behavior. -- Fallback-only execution notes such as `Quality:`, `Input fidelity:`, masks, output format, and output paths belong in the CLI path only. Do not treat them as built-in `image_gen` tool arguments. - -Augmentation rules: -- Keep it short. -- Add only the details needed to improve the prompt materially. -- For edits, explicitly list invariants (`change only X; keep Y unchanged`). -- If any critical detail is missing and blocks success, ask a question; otherwise proceed. - -## Examples - -### Generation example (hero image) -```text -Use case: product-mockup -Asset type: landing page hero -Primary request: a minimal hero image of a ceramic coffee mug -Style/medium: clean product photography -Composition/framing: wide composition with usable negative space for page copy if needed -Lighting/mood: soft studio lighting -Constraints: no logos, no text, no watermark -``` - -### Edit example (invariants) -```text -Use case: precise-object-edit -Asset type: product photo background replacement -Primary request: replace only the background with a warm sunset gradient -Constraints: change only the background; keep the product and its edges unchanged; no text; no watermark -``` - -## Prompting best practices -- Structure prompt as scene/backdrop -> subject -> details -> constraints. -- Include intended use (ad, UI mock, infographic) to set the mode and polish level. -- Use camera/composition language for photorealism. -- Only use SVG/vector stand-ins when the user explicitly asked for vector output or a non-image placeholder. -- Quote exact text and specify typography + placement. 
-- For tricky words, spell them letter-by-letter and require verbatim rendering. -- For multi-image inputs, reference images by index and describe how they should be used. -- For edits, repeat invariants every iteration to reduce drift. -- Iterate with single-change follow-ups. -- If the prompt is generic, add only the extra detail that will materially help. -- If the prompt is already detailed, normalize it instead of expanding it. -- For CLI fallback only, see `references/cli.md` and `references/image-api.md` for model, `quality`, `input_fidelity`, masks, output format, and output-path guidance. -- For transparent images, use the built-in-first chroma-key workflow unless the request is complex enough to need true CLI transparency; ask before switching to CLI `gpt-image-1.5`. - -More principles shared by both modes: `references/prompting.md`. -Copy/paste specs shared by both modes: `references/sample-prompts.md`. - -## Guidance by asset type -Asset-type templates (website assets, game assets, wireframes, logo) are consolidated in `references/sample-prompts.md`. - -## gpt-image-2 guidance for CLI fallback - -The fallback CLI defaults to `gpt-image-2`. - -- Use `gpt-image-2` for new CLI/API workflows unless the request needs true model-native transparent output. -- If a transparent request may need CLI fallback, ask before using `gpt-image-1.5` unless the user already explicitly requested `gpt-image-1.5`, `scripts/image_gen.py`, or CLI fallback. Explain that the built-in chroma-key path is the default, but true transparency requires `gpt-image-1.5` because `gpt-image-2` does not support `background=transparent`. -- `gpt-image-2` always uses high fidelity for image inputs; do not set `input_fidelity` with this model. -- `gpt-image-2` supports `quality` values `low`, `medium`, `high`, and `auto`. -- Use `quality low` for fast drafts, thumbnails, and quick iterations. 
Use `medium`, `high`, or `auto` for final assets, dense text, diagrams, identity-sensitive edits, or high-resolution outputs. -- Square images are typically fastest to generate. Use `1024x1024` for fast square drafts. -- If the user asks for 4K-style output, use `3840x2160` for landscape or `2160x3840` for portrait. -- `gpt-image-2` size may be `auto` or `WIDTHxHEIGHT` if all constraints hold: max edge `<= 3840px`, both edges multiples of `16px`, long-to-short ratio `<= 3:1`, total pixels between `655,360` and `8,294,400`. - -Popular `gpt-image-2` sizes: -- `1024x1024` square -- `1536x1024` landscape -- `1024x1536` portrait -- `2048x2048` 2K square -- `2048x1152` 2K landscape -- `3840x2160` 4K landscape -- `2160x3840` 4K portrait -- `auto` - -## Fallback CLI mode only - -### Temp and output conventions -These conventions apply only to the CLI fallback. They do not describe built-in `image_gen` output behavior. -- Use `tmp/imagegen/` for intermediate files (for example JSONL batches); delete them when done. -- Write final artifacts under `output/imagegen/`. -- Use `--out` or `--out-dir` to control output paths; keep filenames stable and descriptive. - -### Dependencies -Prefer `uv` for dependency management in this repo. - -Required Python package: -```bash -uv pip install openai -``` - -Required for local chroma-key removal and optional downscaling: -```bash -uv pip install pillow -``` - -Portability note: -- If you are using the installed skill outside this repo, install dependencies into that environment with its package manager. -- In uv-managed environments, `uv pip install ...` remains the preferred path. - -### Environment -- `OPENAI_API_KEY` must be set for live API calls. -- Do not ask the user for `OPENAI_API_KEY` when using the built-in `image_gen` tool. -- Never ask the user to paste the full key in chat. Ask them to set it locally and confirm when ready. - -If the key is missing, give the user these steps: -1. 
Create an API key in the OpenAI platform UI: https://platform.openai.com/api-keys -2. Set `OPENAI_API_KEY` as an environment variable in their system. -3. Offer to guide them through setting the environment variable for their OS/shell if needed. - -If installation is not possible in this environment, tell the user which dependency is missing and how to install it into their active environment. - -### Script-mode notes -- CLI commands + examples: `references/cli.md` -- API parameter quick reference: `references/image-api.md` -- Network approvals / sandbox settings for CLI mode: `references/codex-network.md` - -## Reference map -- `references/prompting.md`: shared prompting principles for both modes. -- `references/sample-prompts.md`: shared copy/paste prompt recipes for both modes. -- `references/cli.md`: fallback-only CLI usage via `scripts/image_gen.py`. -- `references/image-api.md`: fallback-only API/CLI parameter reference. -- `references/codex-network.md`: fallback-only network/sandbox troubleshooting for CLI mode. -- `scripts/image_gen.py`: fallback-only CLI implementation. Do not load or use it unless the user explicitly chooses CLI mode or explicitly confirms a transparent request's true CLI transparency fallback. -- `$CODEX_HOME/skills/.system/imagegen/scripts/remove_chroma_key.py`: local post-processing helper for built-in transparent-image requests. diff --git a/dotfiles/agents/skills/.system/imagegen/agents/openai.yaml b/dotfiles/agents/skills/.system/imagegen/agents/openai.yaml deleted file mode 100644 index 5e01d441..00000000 --- a/dotfiles/agents/skills/.system/imagegen/agents/openai.yaml +++ /dev/null @@ -1,6 +0,0 @@ -interface: - display_name: "Image Gen" - short_description: "Generate or edit images for websites, games, and more" - icon_small: "./assets/imagegen-small.svg" - icon_large: "./assets/imagegen.png" - default_prompt: "Use $imagegen to make or edit an image for this project." 
diff --git a/dotfiles/agents/skills/.system/imagegen/assets/imagegen-small.svg b/dotfiles/agents/skills/.system/imagegen/assets/imagegen-small.svg deleted file mode 100644 index 20128b2d..00000000 --- a/dotfiles/agents/skills/.system/imagegen/assets/imagegen-small.svg +++ /dev/null @@ -1,5 +0,0 @@ - - - - - diff --git a/dotfiles/agents/skills/.system/imagegen/assets/imagegen.png b/dotfiles/agents/skills/.system/imagegen/assets/imagegen.png deleted file mode 100644 index 94b54541..00000000 Binary files a/dotfiles/agents/skills/.system/imagegen/assets/imagegen.png and /dev/null differ diff --git a/dotfiles/agents/skills/.system/imagegen/references/cli.md b/dotfiles/agents/skills/.system/imagegen/references/cli.md deleted file mode 100644 index f4a5a63d..00000000 --- a/dotfiles/agents/skills/.system/imagegen/references/cli.md +++ /dev/null @@ -1,242 +0,0 @@ -# CLI reference (`scripts/image_gen.py`) - -This file is for the fallback CLI mode only. Read it when the user explicitly asks to use `scripts/image_gen.py` / CLI / API / model controls, or after the user explicitly confirms that a transparent-output request should use the `gpt-image-1.5` true-transparency fallback path. - -`generate-batch` is a CLI subcommand in this fallback path. It is not a top-level mode of the skill. -The word `batch` in a user request is not CLI opt-in by itself. - -## What this CLI does -- `generate`: generate a new image from a prompt -- `edit`: edit one or more existing images -- `generate-batch`: run many generation jobs from a JSONL file after the user explicitly chooses CLI/API/model controls - -Real API calls require **network access** + `OPENAI_API_KEY`. `--dry-run` does not. 
- -## Quick start (works from any repo) -Set a stable path to the skill CLI (default `CODEX_HOME` is `~/.codex`): - -``` -export CODEX_HOME="${CODEX_HOME:-$HOME/.codex}" -export IMAGE_GEN="$CODEX_HOME/skills/.system/imagegen/scripts/image_gen.py" -``` - -Install dependencies into that environment with its package manager. In uv-managed environments, `uv pip install ...` remains the preferred path. - -## Quick start - -Dry-run (no API call; no network required; does not require the `openai` package): - -```bash -python "$IMAGE_GEN" generate \ - --prompt "Test" \ - --out output/imagegen/test.png \ - --dry-run -``` - -Notes: -- One-off dry-runs print the API payload and the computed output path(s). -- Repo-local finals should live under `output/imagegen/`. - -Generate (requires `OPENAI_API_KEY` + network): - -```bash -python "$IMAGE_GEN" generate \ - --prompt "A cozy alpine cabin at dawn" \ - --size 1024x1024 \ - --out output/imagegen/alpine-cabin.png -``` - -Edit: - -```bash -python "$IMAGE_GEN" edit \ - --image input.png \ - --prompt "Replace only the background with a warm sunset" \ - --out output/imagegen/sunset-edit.png -``` - -## Guardrails -- Use the bundled CLI directly (`python "$IMAGE_GEN" ...`) after activating the correct environment. -- Do **not** create one-off runners (for example `gen_images.py`) unless the user explicitly asks for a custom wrapper. -- **Never modify** `scripts/image_gen.py`. If something is missing, ask the user before doing anything else. -- Do not silently downgrade from CLI `gpt-image-2` or built-in `image_gen` to CLI `gpt-image-1.5`; ask first unless the user already explicitly requested `gpt-image-1.5`, `scripts/image_gen.py`, or CLI fallback. 
- -## Defaults -- Model: `gpt-image-2` -- Supported model family for this CLI: GPT Image models (`gpt-image-*`) -- Size: `auto` -- Quality: `medium` -- Output format: `png` -- Default one-off output path: `output/imagegen/output.png` -- Background: unspecified unless `--background` is set - -## gpt-image-2 size and model guidance - -`gpt-image-2` is the default model for new CLI fallback work. - -- Use `--quality low` for fast drafts, thumbnails, and quick iterations. -- Use `--quality medium`, `--quality high`, or `--quality auto` for final assets, dense text, diagrams, identity-sensitive edits, and high-resolution outputs. -- Square images are typically fastest. Use `--size 1024x1024` for quick square drafts. -- If the user asks for 4K-style output, use `--size 3840x2160` for landscape or `--size 2160x3840` for portrait. -- Do not pass `--input-fidelity` with `gpt-image-2`; this model always uses high fidelity for image inputs. -- Do not use `--background transparent` with `gpt-image-2`; the default transparent-image workflow uses built-in `image_gen` on a flat chroma-key background plus local removal. Use `gpt-image-1.5` only after the user explicitly confirms the true-transparent CLI fallback, unless they already requested `gpt-image-1.5`, `scripts/image_gen.py`, or CLI fallback. 
- -Popular `gpt-image-2` sizes: -- `1024x1024` -- `1536x1024` -- `1024x1536` -- `2048x2048` -- `2048x1152` -- `3840x2160` -- `2160x3840` -- `auto` - -`gpt-image-2` size constraints: -- max edge `<= 3840px` -- both edges multiples of `16px` -- long edge to short edge ratio `<= 3:1` -- total pixels between `655,360` and `8,294,400` -- outputs above `2560x1440` total pixels are experimental - -Fast draft: - -```bash -python "$IMAGE_GEN" generate \ - --prompt "A product thumbnail of a matte ceramic mug on a stone surface" \ - --quality low \ - --size 1024x1024 \ - --out output/imagegen/mug-draft.png -``` - -Final 2K landscape: - -```bash -python "$IMAGE_GEN" generate \ - --prompt "A polished landing-page hero image of a matte ceramic mug on a stone surface" \ - --quality high \ - --size 2048x1152 \ - --out output/imagegen/mug-hero.png -``` - -4K landscape: - -```bash -python "$IMAGE_GEN" generate \ - --prompt "A detailed architectural visualization at golden hour" \ - --size 3840x2160 \ - --quality high \ - --out output/imagegen/architecture-4k.png -``` - -True transparent fallback request: - -Ask for confirmation before using this command unless the user already explicitly requested `gpt-image-1.5`, `scripts/image_gen.py`, or CLI fallback. - -```bash -python "$IMAGE_GEN" generate \ - --model gpt-image-1.5 \ - --prompt "A clean product cutout on a transparent background" \ - --background transparent \ - --output-format png \ - --out output/imagegen/product-cutout.png -``` - -When using this path, explain briefly that built-in `image_gen` plus chroma-key removal is the default transparent-image path, but this request needs true model-native transparency. `gpt-image-2` does not support `background=transparent`, so `gpt-image-1.5` is required for this confirmed fallback. - -## Quality, input fidelity, and masks (CLI fallback only) -These are explicit CLI controls. They are not built-in `image_gen` tool arguments. 
- -- `--quality` works for `generate`, `edit`, and `generate-batch`: `low|medium|high|auto` -- `--input-fidelity` is **edit-only** and validated as `low|high`; it is not supported for `gpt-image-2` -- `--mask` is **edit-only** - -Example: - -```bash -python "$IMAGE_GEN" edit \ - --model gpt-image-1.5 \ - --image input.png \ - --prompt "Change only the background" \ - --quality high \ - --input-fidelity high \ - --out output/imagegen/background-edit.png -``` - -Mask notes: -- For multi-image edits, pass repeated `--image` flags. Their order is meaningful, so describe each image by index and role in the prompt. -- The CLI accepts a single `--mask`. -- Image and mask must be the same size and format and each under 50MB. -- Masks must include an alpha channel. -- If multiple input images are provided, the mask applies to the first image. -- Masking is prompt-guided; do not promise exact pixel-perfect mask boundaries. -- Use a PNG mask when possible; the script treats mask handling as best-effort and does not perform full preflight validation beyond file checks/warnings. -- In the edit prompt, repeat invariants (`change only the background; keep the subject unchanged`) to reduce drift. - -## Output handling -- Use `tmp/imagegen/` for temporary JSONL inputs or scratch files. -- Use `output/imagegen/` for final outputs. -- Reruns fail if a target file already exists unless you pass `--force`. -- `--out-dir` changes one-off naming to `image_1.`, `image_2.`, and so on. -- Downscaled copies use the default suffix `-web` unless you override it. 
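The `--out-dir` one-off naming scheme described above (`image_1.<ext>`, `image_2.<ext>`, and so on) can be sketched in a few lines. This is an illustrative helper, not the bundled CLI's implementation; the function name `next_output_path` is hypothetical and the real script's collision handling may differ:

```python
from pathlib import Path


def next_output_path(out_dir: str, ext: str = "png") -> Path:
    """Return the first free image_N.<ext> path under out_dir.

    Illustrative sketch of the --out-dir naming convention only;
    the bundled scripts/image_gen.py may implement this differently.
    """
    directory = Path(out_dir)
    directory.mkdir(parents=True, exist_ok=True)
    n = 1
    # Walk forward until we find an unused sequential filename.
    while (directory / f"image_{n}.{ext}").exists():
        n += 1
    return directory / f"image_{n}.{ext}"
```

This mirrors why reruns need `--force` for fixed `--out` paths: sequential naming never overwrites, but explicit filenames can collide.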
-
-## Common recipes
-
-Generate with augmentation fields:
-
-```bash
-python "$IMAGE_GEN" generate \
-  --prompt "A minimal hero image of a ceramic coffee mug" \
-  --use-case "product-mockup" \
-  --style "clean product photography" \
-  --composition "wide product shot with usable negative space for page copy" \
-  --constraints "no logos, no text" \
-  --out output/imagegen/mug-hero.png
-```
-
-Generate + also write a downscaled copy for fast web loading:
-
-```bash
-python "$IMAGE_GEN" generate \
-  --prompt "A cozy alpine cabin at dawn" \
-  --size 1024x1024 \
-  --downscale-max-dim 1024 \
-  --out output/imagegen/alpine-cabin.png
-```
-
-Generate multiple prompts concurrently (async batch):
-
-```bash
-mkdir -p tmp/imagegen output/imagegen/batch
-cat > tmp/imagegen/prompts.jsonl << 'EOF'
-{"prompt":"Cavernous hangar interior with a compact shuttle parked near the center","use_case":"stylized-concept","composition":"wide-angle, low-angle","lighting":"volumetric light rays through drifting fog","constraints":"no logos or trademarks; no watermark","size":"1536x1024"}
-{"prompt":"Gray wolf in profile in a snowy forest","use_case":"photorealistic-natural","composition":"eye-level","constraints":"no logos or trademarks; no watermark","size":"1024x1024"}
-EOF
-
-python "$IMAGE_GEN" generate-batch \
-  --input tmp/imagegen/prompts.jsonl \
-  --out-dir output/imagegen/batch \
-  --concurrency 5
-
-rm -f tmp/imagegen/prompts.jsonl
-```
-
-Notes:
-- `generate-batch` requires `--out-dir`.
-- Use `--concurrency` to control parallelism (default `5`).
-- Per-job overrides are supported in JSONL (for example `size`, `quality`, `background`, `output_format`, `output_compression`, `moderation`, `n`, `model`, `out`, and prompt-augmentation fields).
-- `--n` generates multiple variants for a single prompt; `generate-batch` is for many different prompts.
-- In batch mode, per-job `out` is treated as a filename under `--out-dir`.
-- For many requested deliverable assets, provide one prompt/job per distinct asset and use semantic filenames when possible. - -## CLI notes -- Supported sizes depend on the model. `gpt-image-2` supports flexible constrained sizes; older GPT Image models support `1024x1024`, `1536x1024`, `1024x1536`, or `auto`. -- True transparent CLI outputs require `output_format` to be `png` or `webp` and are not supported by `gpt-image-2`. -- `--prompt-file`, `--output-compression`, `--moderation`, `--max-attempts`, `--fail-fast`, `--force`, and `--no-augment` are supported. -- This CLI is intended for GPT Image models. Do not assume older non-GPT image-model behavior applies here. - -## See also -- API parameter quick reference for fallback CLI mode: `references/image-api.md` -- Prompt examples shared across both top-level modes: `references/sample-prompts.md` -- Network/sandbox notes for fallback CLI mode: `references/codex-network.md` -- Built-in-first transparent image workflow: `SKILL.md` and `$CODEX_HOME/skills/.system/imagegen/scripts/remove_chroma_key.py` diff --git a/dotfiles/agents/skills/.system/imagegen/references/codex-network.md b/dotfiles/agents/skills/.system/imagegen/references/codex-network.md deleted file mode 100644 index 5ce1fbc7..00000000 --- a/dotfiles/agents/skills/.system/imagegen/references/codex-network.md +++ /dev/null @@ -1,33 +0,0 @@ -# Codex network approvals / sandbox notes - -This file is for the fallback CLI mode only. Read it when the user explicitly asks to use `scripts/image_gen.py` / CLI / API / model controls, or after the user explicitly confirms that a transparent-output request should use the `gpt-image-1.5` true-transparency fallback path. - -This guidance is intentionally isolated from `SKILL.md` because it can vary by environment and may become stale. Prefer the defaults in your environment when in doubt. - -## Why am I asked to approve image generation calls? 
-The fallback CLI uses the OpenAI Image API, so it needs outbound network access. In many Codex setups, network access is disabled by default and/or the approval policy requires confirmation before networked commands run. - -## Important note about approvals vs network -- `--ask-for-approval never` suppresses approval prompts. -- It does **not** by itself enable network access. -- In `workspace-write`, network access still depends on your Codex configuration (for example `[sandbox_workspace_write] network_access = true`). - -## How do I reduce repeated approval prompts? -If you trust the repo and want fewer prompts, use a configuration or profile that both: -- enables network for the sandbox mode you plan to use -- sets an approval policy that matches your risk tolerance - -Example `~/.codex/config.toml` pattern: - -```toml -approval_policy = "on-request" -sandbox_mode = "workspace-write" - -[sandbox_workspace_write] -network_access = true -``` - -If you want quieter automation after network is enabled, you can choose a stricter approval policy, but do that intentionally and with care. - -## Safety note -Enabling network and reducing approvals lowers friction, but increases risk if you run untrusted code or work in an untrusted repository. diff --git a/dotfiles/agents/skills/.system/imagegen/references/image-api.md b/dotfiles/agents/skills/.system/imagegen/references/image-api.md deleted file mode 100644 index db8567de..00000000 --- a/dotfiles/agents/skills/.system/imagegen/references/image-api.md +++ /dev/null @@ -1,90 +0,0 @@ -# Image API quick reference - -This file is for the fallback CLI mode only. Use it when the user explicitly asks to use `scripts/image_gen.py` / CLI / API / model controls, or after the user explicitly confirms that a transparent-output request should use the `gpt-image-1.5` true-transparency fallback path. - -These parameters describe the Image API and bundled CLI fallback surface. 
Do not assume they are normal arguments on the built-in `image_gen` tool. - -## Scope -- This fallback CLI is intended for GPT Image models (`gpt-image-2`, `gpt-image-1.5`, `gpt-image-1`, and `gpt-image-1-mini`). -- The built-in `image_gen` tool and the fallback CLI do not expose the same controls. - -## Model summary - -| Model | Quality | Input fidelity | Resolutions | Recommended use | -| --- | --- | --- | --- | --- | -| `gpt-image-2` | `low`, `medium`, `high`, `auto` | Always high fidelity for image inputs; do not set `input_fidelity` | `auto` or flexible sizes that satisfy the constraints below | Default for new CLI/API workflows: high-quality generation and editing, text-heavy images, photorealism, compositing, identity-sensitive edits, and workflows where fewer retries matter | -| `gpt-image-1.5` | `low`, `medium`, `high`, `auto` | `low`, `high` | `1024x1024`, `1024x1536`, `1536x1024`, `auto` | True transparent-background fallback and backward-compatible workflows | -| `gpt-image-1` | `low`, `medium`, `high`, `auto` | `low`, `high` | `1024x1024`, `1024x1536`, `1536x1024`, `auto` | Legacy compatibility | -| `gpt-image-1-mini` | `low`, `medium`, `high`, `auto` | `low`, `high` | `1024x1024`, `1024x1536`, `1536x1024`, `auto` | Cost-sensitive draft batches and lower-stakes previews | - -## gpt-image-2 sizes - -`gpt-image-2` accepts `auto` or any `WIDTHxHEIGHT` size that satisfies all constraints: - -- Maximum edge length must be less than or equal to `3840px`. -- Both edges must be multiples of `16px`. -- Long edge to short edge ratio must not exceed `3:1`. -- Total pixels must be at least `655,360` and no more than `8,294,400`. 
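The size constraints above are mechanical, so they can be validated before submitting a request. A minimal sketch, assuming a `WIDTHxHEIGHT` string input; the function name is illustrative and not part of the API or the bundled CLI:

```python
def is_valid_gpt_image_2_size(size: str) -> bool:
    """Check a size string against the documented gpt-image-2 constraints.

    Illustrative helper only; `auto` is always accepted.
    """
    if size == "auto":
        return True
    try:
        width, height = (int(part) for part in size.lower().split("x"))
    except ValueError:
        return False
    if width <= 0 or height <= 0:
        return False
    long_edge, short_edge = max(width, height), min(width, height)
    return (
        long_edge <= 3840                         # max edge <= 3840px
        and width % 16 == 0 and height % 16 == 0  # both edges multiples of 16px
        and long_edge <= 3 * short_edge           # long:short ratio <= 3:1
        and 655_360 <= width * height <= 8_294_400  # total pixel bounds
    )
```

For example, `3840x2160` passes (8,294,400 pixels, ratio 16:9), while `3840x1024` fails the 3:1 ratio check.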
- -Popular sizes: - -| Label | Size | Notes | -| --- | --- | --- | -| Square | `1024x1024` | Typical fast default | -| Landscape | `1536x1024` | Standard landscape | -| Portrait | `1024x1536` | Standard portrait | -| 2K square | `2048x2048` | Larger square output | -| 2K landscape | `2048x1152` | Widescreen output | -| 4K landscape | `3840x2160` | Widescreen 4K output | -| 4K portrait | `2160x3840` | Vertical 4K output | -| Auto | `auto` | Default size | - -Square images are typically fastest to generate. For 4K-style output, use `3840x2160` or `2160x3840`. - -## Endpoints -- Generate: `POST /v1/images/generations` (`client.images.generate(...)`) -- Edit: `POST /v1/images/edits` (`client.images.edit(...)`) - -## Core parameters for GPT Image models -- `prompt`: text prompt -- `model`: image model -- `n`: number of images (1-10) -- `size`: `auto` by default for `gpt-image-2`; flexible `WIDTHxHEIGHT` sizes are allowed only for `gpt-image-2`; older GPT Image models use `1024x1024`, `1536x1024`, `1024x1536`, or `auto` -- `quality`: `low`, `medium`, `high`, or `auto` -- `background`: output transparency behavior (`transparent`, `opaque`, or `auto`) for generated output; this is not the same thing as the prompt's visual scene/backdrop -- `output_format`: `png` (default), `jpeg`, `webp` -- `output_compression`: 0-100 (jpeg/webp only) -- `moderation`: `auto` (default) or `low` - -## Edit-specific parameters -- `image`: one or more input images. For GPT Image models, you can provide up to 16 images. -- `mask`: optional mask image -- `input_fidelity`: `low` or `high` only for models that support it; do not set this for `gpt-image-2` - -Model-specific note for `input_fidelity`: -- `gpt-image-2` always uses high fidelity for image inputs and does not support setting `input_fidelity`. -- `gpt-image-1` and `gpt-image-1-mini` preserve all input images, but the first image gets richer textures and finer details. 
-- `gpt-image-1.5` preserves the first 5 input images with higher fidelity. - -## Transparent backgrounds - -`gpt-image-2` does not currently support the Image API `background=transparent` parameter. The skill's default transparent-image path is built-in `image_gen` with a flat chroma-key background, followed by local alpha extraction with `python "${CODEX_HOME:-$HOME/.codex}/skills/.system/imagegen/scripts/remove_chroma_key.py"`. - -Use CLI `gpt-image-1.5` with `background=transparent` and a transparent-capable output format such as `png` or `webp` only after the user explicitly confirms that fallback, unless they already requested `gpt-image-1.5`, `scripts/image_gen.py`, or CLI fallback. If the user asks for true/native transparency, the subject is too complex for clean chroma-key removal, or local background removal fails validation, explain the tradeoff and ask before switching. - -## Output -- `data[]` list with `b64_json` per image -- The bundled `scripts/image_gen.py` CLI decodes `b64_json` and writes output files for you. - -## Limits and notes -- Input images and masks must be under 50MB. -- Use the edits endpoint when the user requests changes to an existing image. -- Masking is prompt-guided; exact shapes are not guaranteed. -- Large sizes and high quality increase latency and cost. -- Use `quality=low` for fast drafts, thumbnails, and quick iterations. Use `medium` or `high` for final assets, dense text, diagrams, identity-sensitive edits, or high-resolution outputs. -- High `input_fidelity` can materially increase input token usage on models that support it. -- If a request fails because a specific option is unsupported by the selected GPT Image model, retry manually without that option only when the option is not required by the user. If true transparent CLI output is required, ask before switching to `gpt-image-1.5` instead of dropping `background=transparent`, unless the user already explicitly chose that fallback. 
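The chroma-key removal step described under "Transparent backgrounds" boils down to mapping pixels near the key color to zero alpha. A dependency-free, heavily simplified sketch of the idea, operating on a list of RGB tuples; the bundled `remove_chroma_key.py` does much more (auto key detection, soft matting, despill), and this function name is illustrative only:

```python
def chroma_key_to_alpha(pixels, key=(0, 255, 0), tolerance=40):
    """Convert RGB pixels to RGBA, keying out pixels near the key color.

    Simplified illustration of hard chroma-key removal; the bundled
    script adds auto key detection, soft matting, and despill.
    """
    result = []
    for r, g, b in pixels:
        # Chebyshev (max-channel) distance from the key color.
        distance = max(abs(r - key[0]), abs(g - key[1]), abs(b - key[2]))
        alpha = 0 if distance <= tolerance else 255
        result.append((r, g, b, alpha))
    return result
```

This hard-threshold form matches the "tolerance-only removal" case mentioned in `prompting.md`, which is why the real script's soft matte and despill options matter for antialiased edges.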
- -## Important boundary -- `quality`, `input_fidelity`, explicit masks, `background`, `output_format`, and related parameters are fallback-only execution controls. -- Do not assume they are built-in `image_gen` tool arguments. diff --git a/dotfiles/agents/skills/.system/imagegen/references/prompting.md b/dotfiles/agents/skills/.system/imagegen/references/prompting.md deleted file mode 100644 index 9d2da42f..00000000 --- a/dotfiles/agents/skills/.system/imagegen/references/prompting.md +++ /dev/null @@ -1,118 +0,0 @@ -# Prompting best practices - -These prompting principles are shared by both top-level modes of the skill: -- built-in `image_gen` tool (default) -- explicit `scripts/image_gen.py` CLI fallback - -This file is about prompt structure, specificity, and iteration. Fallback-only execution controls such as `quality`, `input_fidelity`, masks, output format, and output paths live in the fallback docs. - -## Contents -- [Structure](#structure) -- [Specificity policy](#specificity-policy) -- [Allowed and disallowed augmentation](#allowed-and-disallowed-augmentation) -- [Composition and layout](#composition-and-layout) -- [Constraints and invariants](#constraints-and-invariants) -- [Text in images](#text-in-images) -- [Input images and references](#input-images-and-references) -- [Iterate deliberately](#iterate-deliberately) -- [Transparent images](#transparent-images) -- [Fallback-only execution controls](#fallback-only-execution-controls) -- [Use-case tips](#use-case-tips) -- [Where to find copy/paste recipes](#where-to-find-copypaste-recipes) - -## Structure -- Use a consistent order: scene/backdrop -> subject -> key details -> constraints -> output intent. -- Include intended use (ad, UI mock, infographic) to set the level of polish. -- For complex requests, use short labeled lines instead of one long paragraph. 
- -## Specificity policy -- If the user prompt is already specific and detailed, normalize it into a clean spec without adding creative requirements. -- If the prompt is generic, you may add tasteful detail when it materially improves the output. -- Treat examples in `sample-prompts.md` as fully-authored recipes, not as the default amount of augmentation to add to every request. -- For photorealism, include `photorealistic` directly when that is the goal, plus concrete real-world texture such as pores, wrinkles, fabric wear, material grain, or imperfect everyday detail. - -## Allowed and disallowed augmentation - -Allowed augmentation for generic prompts: -- composition and framing cues -- intended-use or polish-level hints -- practical layout guidance -- reasonable scene concreteness that supports the request - -Do not add: -- extra characters, props, or objects that are not implied -- brand palettes, slogans, or story beats that are not implied -- arbitrary side-specific placement unless the surrounding layout supports it - -## Composition and layout -- Specify framing and viewpoint (close-up, wide, top-down) and placement only when it materially helps. -- Call out negative space if the asset clearly needs room for UI or copy. -- Avoid making left/right layout decisions unless the user or surrounding layout supports them. -- For people, describe body framing, scale, gaze, and object interactions when they matter (`full body visible`, `looking down at the book`, `hands naturally gripping the handlebars`). - -## Constraints and invariants -- State what must not change (`keep background unchanged`). -- For edits, say `change only X; keep Y unchanged` and repeat invariants on every iteration to reduce drift. - -## Text in images -- Put literal text in quotes or ALL CAPS and specify typography (font style, size, color, placement). -- Spell uncommon words letter-by-letter if accuracy matters. -- For in-image copy, require verbatim rendering and no extra characters. 
-
-- In CLI fallback mode, use `medium` or `high` quality for small text, dense infographics, data-heavy slides, multi-font layouts, legends, axes, and footnotes.
-
-## Input images and references
-- Do not assume that every provided image is an edit target.
-- Label each image by index and role (`Image 1: edit target`, `Image 2: style reference`).
-- If the user provides images for style, composition, or mood guidance and does not ask to modify them, treat the request as generation with references.
-- If the user asks to preserve an existing image while changing specific parts, treat the request as an edit.
-- For compositing, describe how the images interact (`place the subject from Image 2 into Image 1`).
-
-## Iterate deliberately
-- Start with a clean base prompt, then make small single-change edits.
-- Re-specify critical constraints when you iterate.
-- Prefer one targeted follow-up at a time over rewriting the whole prompt.
-
-## Transparent images
-- Use built-in `image_gen` first for transparent-image requests. If the subject is clearly too complex for chroma-key removal, explain the fallback and ask before switching to CLI.
-- Prompt for a perfectly flat solid chroma-key background, usually `#00ff00`; use `#ff00ff` when the subject is green, and avoid key colors that appear in the subject.
-- Explicitly prohibit shadows, gradients, floor planes, reflections, texture, and lighting variation in the background.
-- Ask for crisp edges, generous padding, and no use of the key color inside the subject.
-- After generation, remove the background locally with `python "${CODEX_HOME:-$HOME/.codex}/skills/.system/imagegen/scripts/remove_chroma_key.py" --input <input.png> --out <output.png> --auto-key border --soft-matte --transparent-threshold 12 --opaque-threshold 220 --despill` and validate the alpha result before shipping it.
-- Use soft matte and despill for antialiased edges; hard tolerance-only removal is mainly for flat pixel-art or exact-color fixtures.
-- Use CLI `gpt-image-1.5 --background transparent --output-format png` only after the user explicitly confirms the fallback, or when the user already explicitly requested `gpt-image-1.5`, `scripts/image_gen.py`, or CLI fallback. Ask first for true/native transparency requests, failed chroma-key validation, or complex transparent subjects such as hair, fur, glass, smoke, liquids, translucent materials, reflective objects, or soft shadows. - -## Fallback-only execution controls -- `quality`, `input_fidelity`, explicit masks, output format, and output paths are fallback-only execution controls. -- Do not assume they are built-in `image_gen` tool arguments. -- If the user explicitly chooses CLI fallback, see `references/cli.md` and `references/image-api.md` for those controls. -- In CLI fallback mode, `gpt-image-2` is the default. It supports `quality=low|medium|high|auto`; use `low` for fast drafts and thumbnails, and move to `medium`, `high`, or `auto` for final assets. -- `gpt-image-2` always uses high fidelity for image inputs, so do not set `input_fidelity` with that model. -- If a transparent request needs true CLI transparency, ask before using `gpt-image-1.5` unless the user already explicitly chose it. Explain that built-in chroma-key removal is the default path, but `gpt-image-2` does not support `background=transparent`. -- If the user asks for 4K-style output with `gpt-image-2`, use `3840x2160` for landscape or `2160x3840` for portrait. - -## Use-case tips -Generate: -- photorealistic-natural: Prompt as if a real photo is captured in the moment; use photography language (lens, lighting, framing); call for real texture; avoid over-stylized polish unless requested. -- product-mockup: Describe the product/packaging and materials; ensure clean silhouette and label clarity; if in-image text is needed, require verbatim rendering and specify typography. 
-- ui-mockup: Describe the target fidelity first (shippable mockup or low-fi wireframe), then focus on layout, hierarchy, and practical UI elements; avoid concept-art language. -- infographic-diagram: Define the audience and layout flow; label parts explicitly; require verbatim text; prefer higher quality in CLI mode for dense labels. -- logo-brand: Keep it simple and scalable; ask for a strong silhouette and balanced negative space; avoid decorative flourishes unless requested. -- ads-marketing: Write like a creative brief; include brand positioning, audience, desired vibe, scene, and exact tagline if text must appear. -- productivity-visual: Name the exact artifact (slide, chart, workflow diagram), define the canvas and hierarchy, provide real labels/data, and ask for readable typography and polished spacing. -- scientific-educational: Define audience, lesson objective, required labels, scientific constraints, arrows, and scan-friendly whitespace. -- illustration-story: Define panels or scene beats; keep each action concrete. -- stylized-concept: Specify style cues, material finish, and rendering approach (3D, painterly, clay) without inventing new story elements. -- historical-scene: State the location/date and required period accuracy; constrain clothing, props, and environment to match the era. - -Edit: -- text-localization: Change only the text; preserve layout, typography, spacing, and hierarchy; no extra words or reflow unless needed. -- identity-preserve: Lock identity (face, body, pose, hair, expression); change only the specified elements; match lighting and shadows. -- precise-object-edit: Specify exactly what to remove/replace; preserve surrounding texture and lighting; keep everything else unchanged. -- lighting-weather: Change only environmental conditions (light, shadows, atmosphere, precipitation); keep geometry, framing, and subject identity. 
-- background-extraction: For simple opaque subjects, request a clean cutout on a perfectly flat chroma-key background; crisp silhouette; generous padding; no shadows; no halos; preserve label text exactly; no restyling. Ask before using true CLI transparency for complex subjects. -- style-transfer: Specify style cues to preserve (palette, texture, brushwork) and what must change; add `no extra elements` to prevent drift. -- compositing: Reference inputs by index; specify what moves where; match lighting, perspective, and scale; keep the base framing unchanged. -- sketch-to-render: Preserve layout, proportions, and perspective; choose materials and lighting that support the supplied sketch without adding new elements. - -## Where to find copy/paste recipes -For copy/paste prompt specs (examples only), see `references/sample-prompts.md`. This file focuses on principles, specificity, and iteration patterns. diff --git a/dotfiles/agents/skills/.system/imagegen/references/sample-prompts.md b/dotfiles/agents/skills/.system/imagegen/references/sample-prompts.md deleted file mode 100644 index d9492955..00000000 --- a/dotfiles/agents/skills/.system/imagegen/references/sample-prompts.md +++ /dev/null @@ -1,433 +0,0 @@ -# Sample prompts (copy/paste) - -These prompt recipes are shared across both top-level modes of the skill: -- built-in `image_gen` tool (default) -- `scripts/image_gen.py` CLI fallback for explicit CLI/API/model requests or user-confirmed true-transparent-output fallback requests - -Use these as starting points. They are intentionally complete prompt recipes, not the default amount of augmentation to add to every user request. - -When adapting a user's prompt: -- keep user-provided requirements -- only add detail according to the specificity policy in `SKILL.md` -- do not treat every example below as permission to invent extra story elements - -The labeled lines are prompt scaffolding, not a closed schema. 
`Asset type` and `Input images` are prompt-only scaffolding; the CLI does not expose them as dedicated flags.
-
-Execution details such as explicit CLI flags, `quality`, `input_fidelity`, masks, output formats, and local output paths depend on mode. Use the built-in tool by default, including for simple transparent-image requests. For transparent images, prompt for a flat chroma-key background and remove it locally with `python "${CODEX_HOME:-$HOME/.codex}/skills/.system/imagegen/scripts/remove_chroma_key.py"`; only apply CLI-specific controls when the user explicitly opts into fallback mode or explicitly confirms that the transparent request should use true CLI transparency.
-
-CLI model notes:
-- `gpt-image-2` is the fallback CLI default for new workflows.
-- `gpt-image-2` supports `quality` values `low`, `medium`, `high`, and `auto`.
-- For 4K-style `gpt-image-2` output, use `3840x2160` or `2160x3840`.
-- If transparent output needs true CLI fallback, ask before using `gpt-image-1.5` unless the user already explicitly requested `gpt-image-1.5`, `scripts/image_gen.py`, or CLI fallback. Explain that built-in chroma-key removal is the default path, but `gpt-image-2` does not support `background=transparent`.
-- Do not set `input_fidelity` with `gpt-image-2`; image inputs already use high fidelity.
-
-For prompting principles (structure, specificity, invariants, iteration), see `references/prompting.md`.
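The default transparency path above (prompt for a flat chroma-key background, then remove it locally) can be sketched in a few lines. This is an illustrative sketch of the chroma-key idea only, not the actual `remove_chroma_key.py` implementation; the key color and the distance threshold are assumptions:

```python
def chroma_key_to_alpha(pixels, key=(0, 255, 0), threshold=60):
    """Convert RGB pixels to RGBA, making pixels near `key` transparent.

    pixels: iterable of (r, g, b) tuples.
    Pixels whose Euclidean distance to the key color is under
    `threshold` get alpha 0; everything else stays fully opaque.
    Hypothetical values: pure green key, threshold of 60.
    """
    out = []
    for r, g, b in pixels:
        dist = ((r - key[0]) ** 2 + (g - key[1]) ** 2 + (b - key[2]) ** 2) ** 0.5
        out.append((r, g, b, 0 if dist < threshold else 255))
    return out
```

A hard per-pixel cutoff like this is why complex subjects (hair, fur, glass, soft shadows) should go through the ask-first true-transparency path instead: partially transparent edge pixels blend with the key color and cannot be cleanly classified as either kept or removed.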
- -## Generate - -### photorealistic-natural -``` -Use case: photorealistic-natural -Primary request: candid photo of an elderly sailor on a small fishing boat adjusting a net -Scene/backdrop: coastal water with soft haze -Subject: weathered skin with wrinkles and sun texture -Style/medium: photorealistic candid photo -Composition/framing: medium close-up, eye-level -Lighting/mood: soft coastal daylight, shallow depth of field, subtle film grain -Materials/textures: real skin texture, worn fabric, salt-worn wood -Constraints: natural color balance; no heavy retouching; no glamorization; no watermark -Avoid: studio polish; staged look -``` - -### product-mockup -``` -Use case: product-mockup -Primary request: premium product photo of a matte black shampoo bottle with a minimal label -Scene/backdrop: clean studio gradient from light gray to white -Subject: single bottle centered with subtle reflection -Style/medium: premium product photography -Composition/framing: centered, slight three-quarter angle, generous padding -Lighting/mood: softbox lighting, clean highlights, controlled shadows -Materials/textures: matte plastic, crisp label printing -Constraints: no logos or trademarks; no watermark -``` - -### ui-mockup -``` -Use case: ui-mockup -Primary request: mobile app home screen for a local farmers market with vendors and daily specials -Asset type: mobile app screen -Style/medium: realistic product UI, not concept art -Composition/framing: clean vertical mobile layout with clear hierarchy -Constraints: practical layout, clear typography, no logos or trademarks, no watermark -``` - -### infographic-diagram -``` -Use case: infographic-diagram -Primary request: detailed infographic of an automatic coffee machine flow -Scene/backdrop: clean, light neutral background -Subject: bean hopper -> grinder -> brew group -> boiler -> water tank -> drip tray -Style/medium: clean vector-like infographic with clear callouts and arrows -Composition/framing: vertical poster 
layout, top-to-bottom flow -Text (verbatim): "Bean Hopper", "Grinder", "Brew Group", "Boiler", "Water Tank", "Drip Tray" -Constraints: clear labels, strong contrast, no logos or trademarks, no watermark -``` - -### scientific-educational -``` -Use case: scientific-educational -Primary request: biology diagram titled "Cellular Respiration at a Glance" for high school students -Scene/backdrop: clean white classroom handout background -Subject: glucose turns into energy inside a cell; include glycolysis, Krebs cycle, and electron transport chain -Style/medium: flat scientific diagram with consistent icons, arrows, and readable labels -Composition/framing: landscape slide-style layout with clear hierarchy and generous whitespace -Text (verbatim): "Cellular Respiration at a Glance", "Glucose", "Pyruvate", "ATP", "NADH", "FADH2", "CO2", "O2", "H2O" -Constraints: scientifically plausible; avoid tiny text; no extra decoration; no watermark -``` - -### logo-brand -``` -Use case: logo-brand -Primary request: original logo for "Field & Flour", a local bakery -Style/medium: vector logo mark; flat colors; minimal -Composition/framing: single centered logo on a plain background with generous padding -Constraints: strong silhouette, balanced negative space; original design only; no gradients unless essential; no trademarks; no watermark -``` - -### illustration-story -``` -Use case: illustration-story -Primary request: 4-panel comic about a pet left alone at home -Scene/backdrop: cozy living room across panels -Subject: pet reacting to the owner leaving, then relaxing, then returning to a composed pose -Style/medium: comic illustration with clear panels -Composition/framing: 4 equal-sized vertical panels, readable actions per panel -Constraints: no text; no logos or trademarks; no watermark -``` - -### stylized-concept -``` -Use case: stylized-concept -Primary request: cavernous hangar interior with tall support beams and drifting fog -Scene/backdrop: industrial hangar interior, 
deep scale, light haze -Subject: compact shuttle parked near the center -Style/medium: cinematic concept art, industrial realism -Composition/framing: wide-angle, low-angle -Lighting/mood: volumetric light rays cutting through fog -Constraints: no logos or trademarks; no watermark -``` - -### ads-marketing -``` -Use case: ads-marketing -Primary request: campaign image for a streetwear brand called Thread -Subject: group of friends hanging out together in a stylish urban setting -Style/medium: polished youth streetwear campaign photography -Composition/framing: vertical ad layout with natural poses and integrated headline space -Lighting/mood: contemporary, energetic, tasteful -Text (verbatim): "Yours to Create." -Constraints: render the tagline exactly once; clean legible typography; no extra text; no watermarks; no unrelated logos -``` - -### productivity-visual -``` -Use case: productivity-visual -Primary request: one pitch-deck slide titled "Market Opportunity" -Asset type: fundraising slide image -Style/medium: clean modern deck slide, white background, crisp sans-serif typography -Subject: TAM/SAM/SOM concentric-circle diagram plus a small growth bar chart from 2021 to 2026 -Composition/framing: 16:9 landscape slide, clear data hierarchy, polished spacing -Text (verbatim): "Market Opportunity", "TAM: $42B", "SAM: $8.7B", "SOM: $340M", "AGI Research, 2024", "Internal analysis" -Constraints: readable labels, no clip art, no stock photography, no decorative clutter, no watermark -``` - -### historical-scene -``` -Use case: historical-scene -Primary request: outdoor crowd scene in Bethel, New York on August 16, 1969 -Scene/backdrop: open field with period-appropriate staging -Subject: crowd in period-accurate clothing, authentic environment -Style/medium: photorealistic photo -Composition/framing: wide shot, eye-level -Constraints: period-accurate details; no modern objects; no logos or trademarks; no watermark -``` - -## Asset type templates (taxonomy-aligned) - 
-### Website assets template -``` -Use case: -Asset type: -Primary request: -Scene/backdrop: -Subject:
-Style/medium: -Composition/framing: -Lighting/mood: -Color palette: -Constraints: -``` - -### Website assets example: minimal hero background -``` -Use case: stylized-concept -Asset type: landing page hero background -Primary request: minimal abstract background with a soft gradient and subtle texture -Style/medium: matte illustration / soft-rendered abstract background -Composition/framing: wide composition with usable negative space for page copy -Lighting/mood: gentle studio glow -Color palette: restrained neutral palette -Constraints: no text; no logos; no watermark -``` - -### Website assets example: feature section illustration -``` -Use case: stylized-concept -Asset type: feature section illustration -Primary request: simple abstract shapes suggesting connection and flow -Scene/backdrop: subtle light-gray backdrop with faint texture -Style/medium: flat illustration; soft shadows; restrained contrast -Composition/framing: centered cluster; open margins for UI -Color palette: muted neutral palette -Constraints: no text; no logos; no watermark -``` - -### Website assets example: blog header image -``` -Use case: photorealistic-natural -Asset type: blog header image -Primary request: overhead desk scene with notebook, pen, and coffee cup -Scene/backdrop: warm wooden tabletop -Style/medium: photorealistic photo -Composition/framing: wide crop with clean room for page copy -Lighting/mood: soft morning light -Constraints: no text; no logos; no watermark -``` - -### Game assets template -``` -Use case: stylized-concept -Asset type: -Primary request: -Scene/backdrop: (if applicable) -Subject:
-Style/medium:
-Composition/framing:
-Lighting/mood: