docs: Clarify image editing tool usage and parameter descriptions in image generation documentation

Author: Danny Avila
Date: 2025-04-27 14:02:46 -04:00
parent 9602d4aaff
commit 3b0d59de41


@@ -60,14 +60,14 @@ The agent decides which tool to use based on the context:
- Image editing relies on image IDs, which are retained in the chat history.
- When files are uploaded to the current request, their image IDs are added to the context of the LLM before any tokens are generated.
- Previously referenced or generated image IDs can be used for editing, as long as they remain within the context window.
-- You can include any relevant image IDs in the `image_ids` array when calling the image editing tool.
+- The LLM can include any relevant image IDs in the `image_ids` array when calling the image editing tool.
- You can also attach previously uploaded images from the side panel without needing to upload them again.
- This also has the added benefit of providing a vision model with the image context, which can be useful for informing the `prompt` for the image editing tool.
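As an illustration, the arguments the LLM passes to the image editing tool might look like the following. The image IDs and prompt are made-up placeholders; only the `image_ids` and `prompt` field names come from the parameter list in this document.

```shell
# Hypothetical image editing tool arguments; the IDs below are placeholders.
payload='{
  "image_ids": ["img_abc123", "img_def456"],
  "prompt": "replace the background with a sunset sky"
}'
echo "$payload"
```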
### Parameters
#### Image Generation
-• **prompt** your description (required)
+• **prompt** text description (required)
• **size** `auto` (default), `1024x1024` (square), `1536x1024` (landscape), or `1024x1536` (portrait)
• **quality** `auto` (default), `high`, `medium`, or `low`
• **background** `auto` (default), `transparent`, or `opaque` (transparent requires PNG or WebP format)
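The `background` note above can be sketched as a small check: a transparent background only makes sense when the output format supports an alpha channel (PNG or WebP). The variable names here are illustrative, not part of the tool's API.

```shell
# Sketch of the constraint above: transparent backgrounds require an
# output format with an alpha channel (PNG or WebP). Illustrative only.
background="transparent"
format="png"
case "$format" in
  png|webp) result="ok" ;;
  *)
    if [ "$background" = "transparent" ]; then
      result="error: transparent requires PNG or WebP"
    else
      result="ok"
    fi
    ;;
esac
echo "$result"
```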
@@ -75,7 +75,7 @@ The agent decides which tool to use based on the context:
#### Image Editing
• **image_ids** array of image IDs to use as reference for editing (required)
-• **prompt** your description of the changes (required)
+• **prompt** text description of the changes (required)
• **size** `auto` (default), `1024x1024`, `1536x1024`, `1024x1536`, `256x256`, or `512x512`
• **quality** `auto` (default), `high`, `medium`, or `low`
@@ -165,8 +165,8 @@ Run images entirely on your own machine or server.
Point LibreChat at any Automatic1111 (or compatible) endpoint and you're set.
### Parameters
-• **prompt** Detailed keywords describing what you want in the image (required)
-• **negative_prompt** Keywords describing what you want to exclude from the image (required)
+• **prompt** Detailed keywords describing desired elements in the image (required)
+• **negative_prompt** Keywords describing elements to exclude from the image (required)
The Stable Diffusion implementation uses these default parameters:
- cfg_scale: 4.5
@@ -286,7 +286,7 @@ All generated images are:
All image generation tools support proxy configuration through the `PROXY` environment variable:
```bash
-PROXY=http://your-proxy-url:port
+PROXY=http://proxy-url:port
```
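For example, assuming a local HTTP proxy listening on port 8080, the entry in LibreChat's `.env` file might read (host and port here are placeholders):

```shell
# Placeholder values: substitute your proxy's actual host and port.
PROXY=http://127.0.0.1:8080
```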
## Error Handling
@@ -308,7 +308,7 @@ Though you can customize the prompts for [OpenAI Image Tools](#advanced-configur
2. Add **composition** and **camera / medium** ("wide-angle shot of…", "watercolour…").
3. Mention **lighting & mood** ("golden hour", "dramatic shadows").
4. Finish with **detail keywords** (textures, colours, expressions).
-5. Keep negatives positive—describe what you *want*, not what to avoid.
+5. Keep negatives positive—describe what should be included, not what to avoid.
Example: