♻️ refactor(docker-compose): restructure dev environment (#12132)

* 🔥 chore(docker-compose): remove Casdoor SSO dependency

Casdoor is no longer needed since BetterAuth now supports email/password registration natively.

LOBE-3907

* ♻️ refactor(docker-compose): restructure directories

- Rename local/ to dev/ for development dependencies
- Remove logto/ and zitadel/ from production/
- Restore Casdoor config in production/grafana/
- Simplify dev/ to core services only (postgresql, redis, rustfs, searxng)
- Update docker-compose.development.yml to use dev/
- Remove minio-bucket.config.json (switched to rustfs)

* ♻️ refactor(docker-compose): simplify dev environment setup

- Remove docker-compose.development.yml, use dev/docker-compose.yml directly
- Add npm scripts: dev:docker, dev:docker:down, dev:docker:reset
- Simplify .env.example.development (remove variable refs, redundant vars)
- Update docker-compose/dev/.env.example (consistent passwords)
- Add docker-compose/dev/data/ to .gitignore
- Update setup docs: use npm scripts, remove image generation section

* 🔧 chore: add SSRF_ALLOW_PRIVATE_IP_ADDRESS to dev env example

* 🔒 security: auto-generate KEY_VAULTS_SECRET and AUTH_SECRET in setup.sh

- Remove hardcoded secrets from docker-compose.yml
- Add placeholders to .env.example files
- Generate secrets dynamically in setup.sh using openssl rand -base64 32
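The generation step can be sketched as follows. The actual setup.sh logic is not part of this diff, so the placeholder names and the sed substitution are assumptions based on the commit message and the `.env.example` placeholders shown below:

```shell
set -eu
# Work in a temp dir so this sketch is self-contained
tmp="$(mktemp -d)"
cd "$tmp"

# Placeholders as they appear in the .env.example files
printf 'KEY_VAULTS_SECRET=YOUR_KEY_VAULTS_SECRET\nAUTH_SECRET=YOUR_AUTH_SECRET\n' > .env

# 32 random bytes, base64-encoded (always 44 characters)
KEY_VAULTS_SECRET="$(openssl rand -base64 32)"
AUTH_SECRET="$(openssl rand -base64 32)"

# Substitute the placeholders in place; '#' delimiter avoids clashing
# with '/' characters that base64 output may contain
sed -i.bak \
  -e "s#^KEY_VAULTS_SECRET=.*#KEY_VAULTS_SECRET=${KEY_VAULTS_SECRET}#" \
  -e "s#^AUTH_SECRET=.*#AUTH_SECRET=${AUTH_SECRET}#" .env
```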

* 🔧 chore(docker-compose): expose SearXNG port and improve dev scripts

- Add SearXNG port mapping (8180:8080) for host access
- Use --wait flag in dev:docker to ensure services are healthy
- Include db:migrate in dev:docker:reset for one-command reset
- Update MinIO reference to RustFS in zh-CN docs
- Add SearXNG to service URLs and port conflict docs
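Taken together, the npm scripts described above likely reduce to docker compose invocations along these lines. The exact script bodies are not shown in this diff, so the flags, paths, and the SearXNG check are assumptions inferred from the commit message:

```shell
# dev:docker -- start core services; --wait blocks until healthchecks pass
docker compose -f docker-compose/dev/docker-compose.yml up -d --wait

# dev:docker:down -- stop services, keeping the data directory
docker compose -f docker-compose/dev/docker-compose.yml down

# dev:docker:reset -- wipe state, restart, then re-run migrations in one step
docker compose -f docker-compose/dev/docker-compose.yml down -v
rm -rf docker-compose/dev/data
docker compose -f docker-compose/dev/docker-compose.yml up -d --wait
npm run db:migrate

# SearXNG is now reachable from the host via the remapped port (8180:8080)
curl -s 'http://localhost:8180/search?q=test&format=json'
```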
Author: YuTengjing
Date: 2026-02-06 12:21:30 +08:00
Committed by GitHub
Parent: 9e7825bef1
Commit: 7ba15cceba
46 changed files with 2872 additions and 2029 deletions


@@ -1,114 +1,48 @@
# LobeChat Development Server Configuration
# This file contains environment variables for both LobeChat server mode and Docker compose setup
# LobeChat Development Environment Configuration
# ⚠️ DO NOT USE THESE VALUES IN PRODUCTION!
COMPOSE_FILE="docker-compose.development.yml"
# Application
APP_URL=http://localhost:3010
# ⚠️⚠️⚠️ DO NOT USE THE SECRETS BELOW IN PRODUCTION!
UNSAFE_SECRET="ww+0igxjGRAAR/eTNFQ55VmhQB5KE5trFZseuntThJs="
UNSAFE_PASSWORD="CHANGE_THIS_PASSWORD_IN_PRODUCTION"
# Allow access to private IP addresses (localhost services) in development
# https://lobehub.com/docs/self-hosting/environment-variables/basic#ssrf-allow-private-ip-address
SSRF_ALLOW_PRIVATE_IP_ADDRESS=1
# Core Server Configuration
# Secrets (pre-generated for development only)
KEY_VAULTS_SECRET=ww+0igxjGRAAR/eTNFQ55VmhQB5KE5trFZseuntThJs=
AUTH_SECRET=ww+0igxjGRAAR/eTNFQ55VmhQB5KE5trFZseuntThJs=
# Service Ports Configuration
LOBE_PORT=3010
# Application URL - the base URL where LobeChat will be accessible
APP_URL=http://localhost:${LOBE_PORT}
# Secret key for encrypting vault data (generate with: openssl rand -base64 32)
KEY_VAULTS_SECRET=${UNSAFE_SECRET}
# Database Configuration
# Database name for LobeChat
LOBE_DB_NAME=lobechat
# PostgreSQL password
POSTGRES_PASSWORD=${UNSAFE_PASSWORD}
# PostgreSQL database connection URL
DATABASE_URL=postgresql://postgres:${POSTGRES_PASSWORD}@localhost:5432/${LOBE_DB_NAME}
# Database driver type
# Database (PostgreSQL)
DATABASE_URL=postgresql://postgres:change_this_password_on_production@localhost:5432/lobechat
DATABASE_DRIVER=node
# Redis Cache/Queue Configuration
# Redis
REDIS_URL=redis://localhost:6379
REDIS_PREFIX=lobechat
REDIS_TLS=0
# Authentication Configuration
# Auth secret for JWT signing (generate with: openssl rand -base64 32)
AUTH_SECRET=${UNSAFE_SECRET}
# SSO providers configuration - using Casdoor for development
AUTH_SSO_PROVIDERS=casdoor
# Casdoor Configuration
# Casdoor service port
CASDOOR_PORT=8000
# Casdoor OIDC issuer URL
AUTH_CASDOOR_ISSUER=http://localhost:${CASDOOR_PORT}
# Casdoor application client ID
AUTH_CASDOOR_ID=a387a4892ee19b1a2249 # DO NOT USE IN PROD
# Casdoor application client secret
AUTH_CASDOOR_SECRET=dbf205949d704de81b0b5b3603174e23fbecc354 # DO NOT USE IN PROD
# Origin URL for Casdoor internal configuration
origin=http://localhost:${CASDOOR_PORT}
# MinIO Storage Configuration
# MinIO service port
MINIO_PORT=9000
# MinIO root user (admin username)
MINIO_ROOT_USER=admin
# MinIO root password
MINIO_ROOT_PASSWORD=${UNSAFE_PASSWORD}
# MinIO bucket for LobeChat files
MINIO_LOBE_BUCKET=lobe
# S3/MinIO Configuration for LobeChat
# S3/MinIO access key ID
S3_ACCESS_KEY_ID=${MINIO_ROOT_USER}
# S3/MinIO secret access key
S3_SECRET_ACCESS_KEY=${MINIO_ROOT_PASSWORD}
# S3/MinIO endpoint URL
S3_ENDPOINT=http://localhost:${MINIO_PORT}
# S3 bucket name for storing files
S3_BUCKET=${MINIO_LOBE_BUCKET}
# Enable path-style S3 requests (required for MinIO)
# S3 Storage (RustFS)
S3_ACCESS_KEY_ID=admin
S3_SECRET_ACCESS_KEY=change_this_password_on_production
S3_ENDPOINT=http://localhost:9000
S3_BUCKET=lobe
S3_ENABLE_PATH_STYLE=1
# Disable S3 ACL setting (for MinIO compatibility)
S3_SET_ACL=0
# Use base64 encoding for LLM vision images
# LLM vision uses base64 to avoid S3 presigned URL issues in development
LLM_VISION_IMAGE_USE_BASE64=1
# Search Service Configuration
# SearXNG search engine URL
SEARXNG_URL=http://searxng:8080
# Search (SearXNG)
SEARXNG_URL=http://localhost:8180
# Development Options
# Uncomment to skip authentication during development
# Proxy Configuration (Optional)
# Uncomment if you need proxy support (e.g., for GitHub auth or API access)
# Proxy (Optional)
# HTTP_PROXY=http://localhost:7890
# HTTPS_PROXY=http://localhost:7890
# AI Model Configuration (Optional)
# Add your AI model API keys and configurations here
# ⚠️ WARNING: Never commit real API keys to version control!
# OPENAI_API_KEY=sk-NEVER_USE_REAL_API_KEYS_IN_CONFIG_FILES
# AI Model API Keys (Required for chat functionality)
# ANTHROPIC_API_KEY=sk-ant-xxx
# ANTHROPIC_PROXY_URL=https://api.anthropic.com
# OPENAI_API_KEY=sk-xxx
# OPENAI_PROXY_URL=https://api.openai.com/v1
# OPENAI_MODEL_LIST=...

.gitignore (vendored): 3 changes

@@ -108,6 +108,9 @@ CLAUDE.local.md
# MCP tools
.serena/**
# Docker development data
docker-compose/dev/data/
# Migration scripts data
scripts/clerk-to-betterauth/test/*.csv
scripts/clerk-to-betterauth/test/*.json


@@ -1,90 +0,0 @@
name: lobe-chat-development
services:
network-service:
image: alpine
container_name: lobe-network
restart: always
ports:
- '${MINIO_PORT}:${MINIO_PORT}' # MinIO API
- '9001:9001' # MinIO Console
- '${CASDOOR_PORT}:${CASDOOR_PORT}' # Casdoor
- '3000:3000' # Grafana
- '4318:4318' # otel-collector HTTP
- '4317:4317' # otel-collector gRPC
command: tail -f /dev/null
networks:
- lobe-network
postgresql:
extends:
file: docker-compose/local/docker-compose.yml
service: postgresql
redis:
extends:
file: docker-compose/local/docker-compose.yml
service: redis
minio:
extends:
file: docker-compose/local/docker-compose.yml
service: minio
casdoor:
extends:
file: docker-compose/local/docker-compose.yml
service: casdoor
searxng:
extends:
file: docker-compose/local/docker-compose.yml
service: searxng
grafana:
profiles:
- otel
extends:
file: docker-compose/local/grafana/docker-compose.yml
service: grafana
tempo:
profiles:
- otel
extends:
file: docker-compose/local/grafana/docker-compose.yml
service: tempo
prometheus:
profiles:
- otel
extends:
file: docker-compose/local/grafana/docker-compose.yml
service: prometheus
otel-collector:
profiles:
- otel
extends:
file: docker-compose/local/grafana/docker-compose.yml
service: otel-collector
otel-tracing-test:
profiles:
- otel-test
extends:
file: docker-compose/local/grafana/docker-compose.yml
service: otel-tracing-test
volumes:
data:
driver: local
s3_data:
driver: local
grafana_data:
driver: local
tempo_data:
driver: local
prometheus_data:
driver: local
redis_data:
driver: local
networks:
lobe-network:
driver: bridge


@@ -22,6 +22,10 @@ APP_URL=http://localhost:3210
# to bypass CDN/proxy. If not set, defaults to APP_URL.
# Example: INTERNAL_APP_URL=http://localhost:3210
# Secrets (auto-generated by setup.sh)
KEY_VAULTS_SECRET=YOUR_KEY_VAULTS_SECRET
AUTH_SECRET=YOUR_AUTH_SECRET
# Postgres related, which are the necessary environment variables for DB
LOBE_DB_NAME=lobechat
POSTGRES_PASSWORD=uWNZugjBqixf8dxC


@@ -19,8 +19,12 @@ LOBE_PORT=3210
RUSTFS_PORT=9000
APP_URL=http://localhost:3210
# 密钥配置(由 setup.sh 自动生成)
KEY_VAULTS_SECRET=YOUR_KEY_VAULTS_SECRET
AUTH_SECRET=YOUR_AUTH_SECRET
# Postgres 相关,也即 DB 必须的环境变量
LOBE_DB_NAME=lobehub
LOBE_DB_NAME=lobechat
POSTGRES_PASSWORD=uWNZugjBqixf8dxC
# RustFS S3 配置


@@ -117,8 +117,8 @@ services:
redis:
condition: service_healthy
environment:
- 'KEY_VAULTS_SECRET=Kix2wcUONd4CX51E/ZPAd36BqM4wzJgKjPtz2sGztqQ='
- 'AUTH_SECRET=NX2kaPE923dt6BL2U8e9oSre5RfoT7hg'
- 'KEY_VAULTS_SECRET=${KEY_VAULTS_SECRET}'
- 'AUTH_SECRET=${AUTH_SECRET}'
- 'DATABASE_URL=postgresql://postgres:${POSTGRES_PASSWORD}@postgresql:5432/${LOBE_DB_NAME}'
- 'S3_BUCKET=${RUSTFS_LOBE_BUCKET}'
- 'S3_ENABLE_PATH_STYLE=1'


@@ -1,17 +1,3 @@
# Logto secret
AUTH_LOGTO_ID=
AUTH_LOGTO_SECRET=
AUTH_LOGTO_ISSUER=
# MinIO S3 configuration
MINIO_ROOT_USER=YOUR_MINIO_USER
MINIO_ROOT_PASSWORD=YOUR_MINIO_PASSWORD
# Configure the bucket information of MinIO
MINIO_LOBE_BUCKET=lobe
S3_ACCESS_KEY_ID=
S3_SECRET_ACCESS_KEY=
# Proxy, if you need it
# HTTP_PROXY=http://localhost:7890
# HTTPS_PROXY=http://localhost:7890
@@ -23,12 +9,21 @@ S3_SECRET_ACCESS_KEY=
# OPENAI_MODEL_LIST=...
# ----- Other config -----
# ===========================
# ====== Preset config ======
# ===========================
# if no special requirements, no need to change
LOBE_PORT=3210
LOGTO_PORT=3001
MINIO_PORT=9000
RUSTFS_PORT=9000
# Postgres related, which are the necessary environment variables for DB
LOBE_DB_NAME=lobechat
POSTGRES_PASSWORD=uWNZugjBqixf8dxC
POSTGRES_PASSWORD=change_this_password_on_production
# RUSTFS S3 configuration
RUSTFS_ACCESS_KEY=admin
RUSTFS_SECRET_KEY=change_this_password_on_production
# Configure the bucket information of RUSTFS
S3_ENDPOINT=http://localhost:9000
RUSTFS_LOBE_BUCKET=lobe


@@ -1,34 +1,29 @@
# Logto 鉴权相关
AUTH_LOGTO_ID=
AUTH_LOGTO_SECRET=
AUTH_LOGTO_ISSUER=
# MinIO S3 配置
MINIO_ROOT_USER=YOUR_MINIO_USER
MINIO_ROOT_PASSWORD=YOUR_MINIO_PASSWORD
# 在下方配置 minio 中添加的桶
MINIO_LOBE_BUCKET=lobe
S3_ACCESS_KEY_ID=
S3_SECRET_ACCESS_KEY=
# Proxy(如果你需要的话,比如你使用 GitHub 作为鉴权服务提供商)
# HTTP_PROXY=http://localhost:7890
# HTTPS_PROXY=http://localhost:7890
# 其他环境变量,视需求而定,可以参照客户端版本的环境变量配置,注意不要有 ACCESS_CODE
# OPENAI_API_KEY=sk-xxxx
# OPENAI_PROXY_URL=https://api.openai.com/v1
# OPENAI_MODEL_LIST=...
# ----- 相关配置 start -----
# ===================
# ===== 预设配置 =====
# ===================
# 如没有特殊需要不用更改
LOBE_PORT=3210
LOGTO_PORT=3001
MINIO_PORT=9000
RUSTFS_PORT=9000
# Postgres 相关,也即 DB 必须的环境变量
LOBE_DB_NAME=lobechat
POSTGRES_PASSWORD=uWNZugjBqixf8dxC
POSTGRES_PASSWORD=change_this_password_on_production
# RustFS S3 配置
RUSTFS_ACCESS_KEY=admin
RUSTFS_SECRET_KEY=change_this_password_on_production
# RustFS bucket 配置
S3_ENDPOINT=http://localhost:9000
RUSTFS_LOBE_BUCKET=lobe


@@ -0,0 +1,18 @@
{
"ID": "",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": ["*"]
},
"Action": ["s3:GetObject"],
"NotAction": [],
"Resource": ["arn:aws:s3:::lobe/*"],
"NotResource": [],
"Condition": {}
}
],
"Version": "2012-10-17"
}


@@ -0,0 +1,122 @@
name: lobehub
services:
network-service:
image: alpine
container_name: lobe-network
restart: always
ports:
- '${RUSTFS_PORT}:9000' # RustFS API
- '9001:9001' # RustFS Console
- '${LOBE_PORT}:3210' # LobeChat
command: tail -f /dev/null
networks:
- lobe-network
env_file:
- .env
postgresql:
image: pgvector/pgvector:pg17
container_name: lobe-postgres
ports:
- '5432:5432'
volumes:
- './data:/var/lib/postgresql/data'
environment:
- 'POSTGRES_DB=${LOBE_DB_NAME}'
- 'POSTGRES_PASSWORD=${POSTGRES_PASSWORD}'
healthcheck:
test: ['CMD-SHELL', 'pg_isready -U postgres']
interval: 5s
timeout: 5s
retries: 5
restart: always
networks:
- lobe-network
env_file:
- .env
redis:
image: redis:7-alpine
container_name: lobe-redis
ports:
- '6379:6379'
command: redis-server --save 60 1000 --appendonly yes
volumes:
- 'redis_data:/data'
healthcheck:
test: ['CMD', 'redis-cli', 'ping']
interval: 5s
timeout: 3s
retries: 5
restart: always
networks:
- lobe-network
rustfs:
image: rustfs/rustfs:latest
container_name: lobe-rustfs
network_mode: 'service:network-service'
environment:
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_ACCESS_KEY=${RUSTFS_ACCESS_KEY}
- RUSTFS_SECRET_KEY=${RUSTFS_SECRET_KEY}
volumes:
- rustfs-data:/data
healthcheck:
test: ["CMD-SHELL", "wget -qO- http://localhost:9000/health >/dev/null 2>&1 || exit 1"]
interval: 5s
timeout: 3s
retries: 30
command: ["--access-key","${RUSTFS_ACCESS_KEY}","--secret-key","${RUSTFS_SECRET_KEY}","/data"]
env_file:
- .env
rustfs-init:
image: minio/mc:latest
container_name: lobe-rustfs-init
depends_on:
rustfs:
condition: service_healthy
volumes:
- ./bucket.config.json:/bucket.config.json:ro
entrypoint: /bin/sh
command: -c '
set -eux;
echo "S3_ACCESS_KEY=${RUSTFS_ACCESS_KEY}, S3_SECRET_KEY=${RUSTFS_SECRET_KEY}";
mc --version;
mc alias set rustfs "http://network-service:9000" "${RUSTFS_ACCESS_KEY}" "${RUSTFS_SECRET_KEY}";
mc ls rustfs || true;
mc mb "rustfs/lobe" --ignore-existing;
mc admin info rustfs || true;
mc anonymous set-json "/bucket.config.json" "rustfs/lobe";
'
restart: "no"
networks:
- lobe-network
env_file:
- .env
searxng:
image: searxng/searxng
container_name: lobe-searxng
ports:
- '8180:8080'
volumes:
- './searxng-settings.yml:/etc/searxng/settings.yml'
environment:
- 'SEARXNG_SETTINGS_FILE=/etc/searxng/settings.yml'
restart: always
networks:
- lobe-network
volumes:
data:
driver: local
redis_data:
driver: local
rustfs-data:
driver: local
networks:
lobe-network:
driver: bridge


@@ -1,46 +0,0 @@
# Proxy, if you need it
# HTTP_PROXY=http://localhost:7890
# HTTPS_PROXY=http://localhost:7890
# Other environment variables, as needed. You can refer to the environment variables configuration for the client version, making sure not to have ACCESS_CODE.
# OPENAI_API_KEY=sk-xxxx
# OPENAI_PROXY_URL=https://api.openai.com/v1
# OPENAI_MODEL_LIST=...
# ===========================
# ====== Preset config ======
# ===========================
# if no special requirements, no need to change
LOBE_PORT=3210
CASDOOR_PORT=8000
RUSTFS_PORT=9000
APP_URL=http://localhost:3210
# INTERNAL_APP_URL is optional, used for server-to-server calls
# to bypass CDN/proxy. If not set, defaults to APP_URL.
# Example: INTERNAL_APP_URL=http://localhost:3210
# Postgres related, which are the necessary environment variables for DB
LOBE_DB_NAME=lobechat
POSTGRES_PASSWORD=uWNZugjBqixf8dxC
AUTH_SSO_PROVIDERS=casdoor
AUTH_CASDOOR_ISSUER=http://localhost:8000
# Casdoor secret
AUTH_CASDOOR_ID=a387a4892ee19b1a2249
AUTH_CASDOOR_SECRET=dbf205949d704de81b0b5b3603174e23fbecc354
CASDOOR_WEBHOOK_SECRET=casdoor-secret
# RUSTFS S3 configuration
RUSTFS_ACCESS_KEY=admin
RUSTFS_SECRET_KEY=YOUR_RUSTFS_PASSWORD
# Configure the bucket information of RUSTFS
S3_ENDPOINT=http://localhost:9000
RUSTFS_LOBE_BUCKET=lobe
# Configure for casdoor
origin=http://localhost:8000
JWKS_KEY={"keys":[{"d":"PVoFyqyrGstB8wU52S7gqqQQdZLtin_thcEM0nrNtqp9U-NlKLlhgEcWp5t89ycgvhsAzmrRbezGj4JBTr3jn7eWdwQpPJNYiipnsgeJn0pwsB0H2dMqtavxinoPVXkMTOuGHMTFhhyguFBw2JbIL0PTQUcUlXjv40OoJpYHZeggSxgfV-TuxjwW8Ll4-n84M5IOi6A53RvioE-Hm1iyIc2XLBCfyOu-SbAQYi8HzrA64kCxobAB0peLQMiAzfZmwPKiGOhnhKrAlYmG02qFnbUYiJu_-AXwsAyGv9S9i6dwK7QXaGGWYyis8LlPpd_JmPrBnrWomwDlI045NUMWZQ","dp":"OSXI2NBBZl2r0Dpf4-1z44A_jC5lOyXtJhXQYnSXy5eIuxTJcEtkUYagGEwnREO4Q3t-4J-lT_6Y71M1ZlgKG1upwfw1O4aE3vGpHOik9iZYYCjA8fe5uBfOpX1ELmOtHNoHRhMtyjuPxSFXLlSp3bgcF1f3F40ClukdvXCx0Mc","dq":"m6hNdfj-F8E_7nUlX2nG95OffkFrhHTo67ML9aPgpvFwBlzg-hk5LwtxMfUzngqWF78TMl0JDm7vS1bz0xlWqXqu8pFPoTUnUoWgYfvuyHLBwR5TgccQkfoKbkSMzYNy8VJPXZeyIjVXsW98tZvj-NZF-M9Pke_EWJm-jjXCu_8","e":"AQAB","kty":"RSA","n":"piffosMS0HOSgsSr_zQkXYaQt1kOCD73VR0b2XJD6UdQCKPbnBOzTIuA_xowX61QVsl5pCZLTw8ERC3r2Nlxj5Rp_H6RuOT7ioUqlbnxSGnfuAn8dFupY3A-sf9HVDOvtJdlS-nO9yA4wWU-A50zZ1Mf0pPZlUZE6dUQfsJFi5yXaNAybyk3U4VpMO_SXAilWEHVhiO0F0ccpJMCkT47AeXmYH9MlWwIGcay0UiAsdrs8J-q1arZ7Mbq0oxHmUXJG0vwRvAL8KnCEi8cJ3e2kKCRcr-BQCujsHUyUl6f_ATwSVuTHdAR1IzIcW37v27h3WQK_v0ffQM1NstamDX5vQ","p":"4myVm2M5cZGvVXsOmWUTUG87VC1GlQcL5tmMNSGSpQCL8yWZ1vANkmCxSMptrKB4dU9DAB3On6_oMhW1pJ3uYNGSW49BcmJoLkiWKeg5zWFnKPQNuThQmY1sCCubtKhBQgaYUr7TVzN9smrDV3zCu9MlRl-XPwnEmWaDII3g-f8","q":"u9v4IOEsb4l2Y3eWKE2bwJh5fJRR4vivaYA7U-1-OpvDwB3A48Rey9IL1ucXqE5G1Du8BtijPm5oSAar5uzrjtg1bZ9gevif6DnBGaIRE7LnSrUsTPfZwzntJ1rTaGiVe_pAdnTKXXaH6DxygXxH4wvGgA44V3TTfBXQUcjzdEM","qi":"lDBnSPKkRnYqQvbqVD1LxzqBPEeqEA3GyCqMj6fIZNgoEaBSLi0TSsUyGZ5mahX3KO35vKAZa5jvGjhvUGUiXycq8KvRZdeGK45vJdwZT2TiXiDwo9IQgJcbFMpxaB9DhjX2x0yqxgUY5ca75jLqbMuKBKBN0PVqIr9jlHkR8_s","use":"sig","kid":"6823046760c5d460","alg":"RS256"}]}


@@ -1,43 +0,0 @@
# Proxy(如果你需要的话,比如你使用 GitHub 作为鉴权服务提供商)
# HTTP_PROXY=http://localhost:7890
# HTTPS_PROXY=http://localhost:7890
# 其他环境变量,视需求而定,可以参照客户端版本的环境变量配置,注意不要有 ACCESS_CODE
# OPENAI_API_KEY=sk-xxxx
# OPENAI_PROXY_URL=https://api.openai.com/v1
# OPENAI_MODEL_LIST=...
# ===================
# ===== 预设配置 =====
# ===================
# 如没有特殊需要不用更改
LOBE_PORT=3210
CASDOOR_PORT=8000
RUSTFS_PORT=9000
APP_URL=http://localhost:3210
# Postgres 相关,也即 DB 必须的环境变量
LOBE_DB_NAME=lobechat
POSTGRES_PASSWORD=uWNZugjBqixf8dxC
AUTH_SSO_PROVIDERS=casdoor
AUTH_CASDOOR_ISSUER=http://localhost:8000
# Casdoor secret
AUTH_CASDOOR_ID=a387a4892ee19b1a2249
AUTH_CASDOOR_SECRET=dbf205949d704de81b0b5b3603174e23fbecc354
CASDOOR_WEBHOOK_SECRET=casdoor-secret
# RustFS S3 配置
RUSTFS_ACCESS_KEY=admin
RUSTFS_SECRET_KEY=YOUR_RUSTFS_PASSWORD
# 在下方配置 rustfs 中添加的桶
S3_ENDPOINT=http://localhost:9000
RUSTFS_LOBE_BUCKET=lobe
# 为 casdoor 配置
origin=http://localhost:8000
JWKS_KEY={"keys":[{"d":"PVoFyqyrGstB8wU52S7gqqQQdZLtin_thcEM0nrNtqp9U-NlKLlhgEcWp5t89ycgvhsAzmrRbezGj4JBTr3jn7eWdwQpPJNYiipnsgeJn0pwsB0H2dMqtavxinoPVXkMTOuGHMTFhhyguFBw2JbIL0PTQUcUlXjv40OoJpYHZeggSxgfV-TuxjwW8Ll4-n84M5IOi6A53RvioE-Hm1iyIc2XLBCfyOu-SbAQYi8HzrA64kCxobAB0peLQMiAzfZmwPKiGOhnhKrAlYmG02qFnbUYiJu_-AXwsAyGv9S9i6dwK7QXaGGWYyis8LlPpd_JmPrBnrWomwDlI045NUMWZQ","dp":"OSXI2NBBZl2r0Dpf4-1z44A_jC5lOyXtJhXQYnSXy5eIuxTJcEtkUYagGEwnREO4Q3t-4J-lT_6Y71M1ZlgKG1upwfw1O4aE3vGpHOik9iZYYCjA8fe5uBfOpX1ELmOtHNoHRhMtyjuPxSFXLlSp3bgcF1f3F40ClukdvXCx0Mc","dq":"m6hNdfj-F8E_7nUlX2nG95OffkFrhHTo67ML9aPgpvFwBlzg-hk5LwtxMfUzngqWF78TMl0JDm7vS1bz0xlWqXqu8pFPoTUnUoWgYfvuyHLBwR5TgccQkfoKbkSMzYNy8VJPXZeyIjVXsW98tZvj-NZF-M9Pke_EWJm-jjXCu_8","e":"AQAB","kty":"RSA","n":"piffosMS0HOSgsSr_zQkXYaQt1kOCD73VR0b2XJD6UdQCKPbnBOzTIuA_xowX61QVsl5pCZLTw8ERC3r2Nlxj5Rp_H6RuOT7ioUqlbnxSGnfuAn8dFupY3A-sf9HVDOvtJdlS-nO9yA4wWU-A50zZ1Mf0pPZlUZE6dUQfsJFi5yXaNAybyk3U4VpMO_SXAilWEHVhiO0F0ccpJMCkT47AeXmYH9MlWwIGcay0UiAsdrs8J-q1arZ7Mbq0oxHmUXJG0vwRvAL8KnCEi8cJ3e2kKCRcr-BQCujsHUyUl6f_ATwSVuTHdAR1IzIcW37v27h3WQK_v0ffQM1NstamDX5vQ","p":"4myVm2M5cZGvVXsOmWUTUG87VC1GlQcL5tmMNSGSpQCL8yWZ1vANkmCxSMptrKB4dU9DAB3On6_oMhW1pJ3uYNGSW49BcmJoLkiWKeg5zWFnKPQNuThQmY1sCCubtKhBQgaYUr7TVzN9smrDV3zCu9MlRl-XPwnEmWaDII3g-f8","q":"u9v4IOEsb4l2Y3eWKE2bwJh5fJRR4vivaYA7U-1-OpvDwB3A48Rey9IL1ucXqE5G1Du8BtijPm5oSAar5uzrjtg1bZ9gevif6DnBGaIRE7LnSrUsTPfZwzntJ1rTaGiVe_pAdnTKXXaH6DxygXxH4wvGgA44V3TTfBXQUcjzdEM","qi":"lDBnSPKkRnYqQvbqVD1LxzqBPEeqEA3GyCqMj6fIZNgoEaBSLi0TSsUyGZ5mahX3KO35vKAZa5jvGjhvUGUiXycq8KvRZdeGK45vJdwZT2TiXiDwo9IQgJcbFMpxaB9DhjX2x0yqxgUY5ca75jLqbMuKBKBN0PVqIr9jlHkR8_s","use":"sig","kid":"6823046760c5d460","alg":"RS256"}]}


@@ -1,24 +0,0 @@
{
"ID": "",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": [
"*"
]
},
"Action": [
"s3:GetObject"
],
"NotAction": [],
"Resource": [
"arn:aws:s3:::lobe/*"
],
"NotResource": [],
"Condition": {}
}
]
}


@@ -1,297 +0,0 @@
name: lobehub
services:
network-service:
image: alpine
container_name: lobe-network
restart: always
ports:
- '${RUSTFS_PORT}:9000' # RustFS API
- '9001:9001' # RustFS Console
- '${CASDOOR_PORT}:${CASDOOR_PORT}' # Casdoor
- '${LOBE_PORT}:3210' # LobeChat
- '3000:3000' # Grafana
- '4318:4318' # otel-collector HTTP
- '4317:4317' # otel-collector gRPC
command: tail -f /dev/null
networks:
- lobe-network
postgresql:
image: pgvector/pgvector:pg17
container_name: lobe-postgres
ports:
- '5432:5432'
volumes:
- './data:/var/lib/postgresql/data'
environment:
- 'POSTGRES_DB=${LOBE_DB_NAME}'
- 'POSTGRES_PASSWORD=${POSTGRES_PASSWORD}'
healthcheck:
test: ['CMD-SHELL', 'pg_isready -U postgres']
interval: 5s
timeout: 5s
retries: 5
restart: always
networks:
- lobe-network
redis:
image: redis:7-alpine
container_name: lobe-redis
ports:
- '6379:6379'
command: redis-server --save 60 1000 --appendonly yes
volumes:
- 'redis_data:/data'
healthcheck:
test: ['CMD', 'redis-cli', 'ping']
interval: 5s
timeout: 3s
retries: 5
restart: always
networks:
- lobe-network
rustfs:
image: rustfs/rustfs:latest
container_name: lobe-rustfs
network_mode: 'service:network-service'
environment:
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_ACCESS_KEY=${RUSTFS_ACCESS_KEY}
- RUSTFS_SECRET_KEY=${RUSTFS_SECRET_KEY}
volumes:
- rustfs-data:/data
healthcheck:
test: ["CMD-SHELL", "wget -qO- http://localhost:9000/health >/dev/null 2>&1 || exit 1"]
interval: 5s
timeout: 3s
retries: 30
command: ["--access-key","${RUSTFS_ACCESS_KEY}","--secret-key","${RUSTFS_SECRET_KEY}","/data"]
rustfs-init:
image: minio/mc:latest
container_name: lobe-rustfs-init
depends_on:
rustfs:
condition: service_healthy
volumes:
- ./bucket.config.json:/bucket.config.json:ro
entrypoint: /bin/sh
command: -c '
set -eux;
echo "S3_ACCESS_KEY=${RUSTFS_ACCESS_KEY}, S3_SECRET_KEY=${RUSTFS_SECRET_KEY}";
mc --version;
mc alias set rustfs "http://network-service:9000" "${RUSTFS_ACCESS_KEY}" "${RUSTFS_SECRET_KEY}";
mc ls rustfs || true;
mc mb "rustfs/lobe" --ignore-existing;
mc admin info rustfs || true;
mc anonymous set-json "/bucket.config.json" "rustfs/lobe";
'
restart: "no"
networks:
- lobe-network
# version lock ref: https://github.com/lobehub/lobe-chat/pull/7331
casdoor:
image: casbin/casdoor:v2.13.0
container_name: lobe-casdoor
entrypoint: /bin/sh -c './server --createDatabase=true'
network_mode: 'service:network-service'
depends_on:
postgresql:
condition: service_healthy
environment:
httpport: ${CASDOOR_PORT}
RUNNING_IN_DOCKER: 'true'
driverName: 'postgres'
dataSourceName: 'user=postgres password=${POSTGRES_PASSWORD} host=postgresql port=5432 sslmode=disable dbname=casdoor'
runmode: 'dev'
volumes:
- ./init_data.json:/init_data.json
env_file:
- .env
searxng:
image: searxng/searxng
container_name: lobe-searxng
volumes:
- './searxng-settings.yml:/etc/searxng/settings.yml'
environment:
- 'SEARXNG_SETTINGS_FILE=/etc/searxng/settings.yml'
restart: always
networks:
- lobe-network
env_file:
- .env
lobe:
image: lobehub/lobehub
container_name: lobehub
network_mode: 'service:network-service'
depends_on:
postgresql:
condition: service_healthy
network-service:
condition: service_started
rustfs:
condition: service_healthy
rustfs-init:
condition: service_completed_successfully
casdoor:
condition: service_started
redis:
condition: service_healthy
environment:
- 'AUTH_SSO_PROVIDERS=casdoor'
- 'KEY_VAULTS_SECRET=Kix2wcUONd4CX51E/ZPAd36BqM4wzJgKjPtz2sGztqQ='
- 'AUTH_SECRET=NX2kaPE923dt6BL2U8e9oSre5RfoT7hg'
- 'DATABASE_URL=postgresql://postgres:${POSTGRES_PASSWORD}@postgresql:5432/${LOBE_DB_NAME}'
- 'S3_BUCKET=${RUSTFS_LOBE_BUCKET}'
- 'S3_ENABLE_PATH_STYLE=1'
- 'S3_ACCESS_KEY=${RUSTFS_ACCESS_KEY}'
- 'S3_ACCESS_KEY_ID=${RUSTFS_ACCESS_KEY}'
- 'S3_SECRET_ACCESS_KEY=${RUSTFS_SECRET_KEY}'
- 'LLM_VISION_IMAGE_USE_BASE64=1'
- 'S3_SET_ACL=0'
- 'SEARXNG_URL=http://searxng:8080'
- 'REDIS_URL=redis://redis:6379'
- 'REDIS_PREFIX=lobechat'
- 'REDIS_TLS=0'
env_file:
- .env
restart: always
entrypoint: >
/bin/sh -c "
/bin/node /app/startServer.js &
LOBE_PID=\$!
sleep 3
if [ $(wget --timeout=5 --spider --server-response ${AUTH_CASDOOR_ISSUER}/.well-known/openid-configuration 2>&1 | grep -c 'HTTP/1.1 200 OK') -eq 0 ]; then
echo '⚠Warning: Unable to fetch OIDC configuration from Casdoor'
echo 'Request URL: ${AUTH_CASDOOR_ISSUER}/.well-known/openid-configuration'
echo 'Read more at: https://lobehub.com/docs/self-hosting/server-database/docker-compose#necessary-configuration'
echo ''
echo '⚠️注意:无法从 Casdoor 获取 OIDC 配置'
echo '请求 URL: ${AUTH_CASDOOR_ISSUER}/.well-known/openid-configuration'
echo '了解更多https://lobehub.com/zh/docs/self-hosting/server-database/docker-compose#necessary-configuration'
echo ''
else
if ! wget -O - --timeout=5 ${AUTH_CASDOOR_ISSUER}/.well-known/openid-configuration 2>&1 | grep 'issuer' | grep ${AUTH_CASDOOR_ISSUER}; then
printf '❌Error: The Auth issuer is conflict, Issuer in OIDC configuration is: %s' \$(wget -O - --timeout=5 ${AUTH_CASDOOR_ISSUER}/.well-known/openid-configuration 2>&1 | grep -E 'issuer.*' | awk -F '\"' '{print \$4}')
echo ' , but the issuer in .env file is: ${AUTH_CASDOOR_ISSUER} '
echo 'Request URL: ${AUTH_CASDOOR_ISSUER}/.well-known/openid-configuration'
echo 'Read more at: https://lobehub.com/docs/self-hosting/server-database/docker-compose#necessary-configuration'
echo ''
printf '❌错误Auth 的 issuer 冲突OIDC 配置中的 issuer 是:%s' \$(wget -O - --timeout=5 ${AUTH_CASDOOR_ISSUER}/.well-known/openid-configuration 2>&1 | grep -E 'issuer.*' | awk -F '\"' '{print \$4}')
echo ' , 但 .env 文件中的 issuer 是:${AUTH_CASDOOR_ISSUER} '
echo '请求 URL: ${AUTH_CASDOOR_ISSUER}/.well-known/openid-configuration'
echo '了解更多https://lobehub.com/zh/docs/self-hosting/server-database/docker-compose#necessary-configuration'
echo ''
fi
fi
if [ $(wget --timeout=5 --spider --server-response ${S3_ENDPOINT}/health 2>&1 | grep -c 'HTTP/1.1 200 OK') -eq 0 ]; then
echo '⚠Warning: Unable to fetch RustFS health status'
echo 'Request URL: ${S3_ENDPOINT}/health'
echo 'Read more at: https://lobehub.com/docs/self-hosting/server-database/docker-compose#necessary-configuration'
echo ''
echo '⚠️注意:无法获取 RustFS 健康状态'
echo '请求 URL: ${S3_ENDPOINT}/health'
echo '了解更多https://lobehub.com/zh/docs/self-hosting/server-database/docker-compose#necessary-configuration'
echo ''
fi
wait \$LOBE_PID
"
grafana:
profiles:
- otel
image: grafana/grafana:12.2.0-17419259409
container_name: lobe-grafana
network_mode: 'service:network-service'
restart: always
environment:
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
- GF_AUTH_DISABLE_LOGIN_FORM=true
- GF_FEATURE_TOGGLES_ENABLE=traceqlEditor
volumes:
- grafana_data:/var/lib/grafana
- ./grafana/dashboards:/etc/grafana/provisioning/dashboards
- ./grafana/datasources:/etc/grafana/provisioning/datasources
depends_on:
- tempo
- prometheus
tempo:
profiles:
- otel
image: grafana/tempo:latest
container_name: lobe-tempo
network_mode: 'service:network-service'
restart: always
volumes:
- ./tempo/tempo.yaml:/etc/tempo.yaml
- tempo_data:/var/tempo
command: ['-config.file=/etc/tempo.yaml']
prometheus:
profiles:
- otel
image: prom/prometheus
container_name: lobe-prometheus
network_mode: 'service:network-service'
restart: always
volumes:
- ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--web.enable-otlp-receiver'
- '--web.enable-remote-write-receiver'
- '--enable-feature=exemplar-storage'
otel-collector:
profiles:
- otel
image: otel/opentelemetry-collector
container_name: lobe-otel-collector
network_mode: 'service:network-service'
restart: always
volumes:
- ./otel-collector/collector-config.yaml:/etc/otelcol/config.yaml
command: ['--config', '/etc/otelcol/config.yaml']
depends_on:
- tempo
- prometheus
otel-tracing-test:
profiles:
- otel-test
image: ghcr.io/grafana/xk6-client-tracing:v0.0.9
container_name: lobe-otel-tracing-test
network_mode: 'service:network-service'
restart: always
environment:
- ENDPOINT=127.0.0.1:4317
volumes:
data:
driver: local
s3_data:
driver: local
grafana_data:
driver: local
tempo_data:
driver: local
prometheus_data:
driver: local
redis_data:
driver: local
rustfs-data:
driver: local
networks:
lobe-network:
driver: bridge


@@ -1,41 +0,0 @@
# Proxy, if you need it
# HTTP_PROXY=http://localhost:7890
# HTTPS_PROXY=http://localhost:7890
# Other environment variables, as needed. You can refer to the environment variables configuration for the client version, making sure not to have ACCESS_CODE.
# OPENAI_API_KEY=sk-xxxx
# OPENAI_PROXY_URL=https://api.openai.com/v1
# OPENAI_MODEL_LIST=...
# ===========================
# ====== Preset config ======
# ===========================
# if no special requirements, no need to change
LOBE_PORT=3210
CASDOOR_PORT=8000
MINIO_PORT=9000
APP_URL=http://localhost:3210
AUTH_URL=http://localhost:3210/api/auth
# Postgres related, which are the necessary environment variables for DB
LOBE_DB_NAME=lobechat
POSTGRES_PASSWORD=uWNZugjBqixf8dxC
AUTH_CASDOOR_ISSUER=http://localhost:8000
# Casdoor secret
AUTH_CASDOOR_ID=a387a4892ee19b1a2249
AUTH_CASDOOR_SECRET=dbf205949d704de81b0b5b3603174e23fbecc354
CASDOOR_WEBHOOK_SECRET=casdoor-secret
# MinIO S3 configuration
MINIO_ROOT_USER=admin
MINIO_ROOT_PASSWORD=YOUR_MINIO_PASSWORD
# Configure the bucket information of MinIO
S3_ENDPOINT=http://localhost:9000
MINIO_LOBE_BUCKET=lobe
# Configure for casdoor
origin=http://localhost:8000


@@ -1,41 +0,0 @@
# Proxy(如果你需要的话,比如你使用 GitHub 作为鉴权服务提供商)
# HTTP_PROXY=http://localhost:7890
# HTTPS_PROXY=http://localhost:7890
# 其他环境变量,视需求而定,可以参照客户端版本的环境变量配置,注意不要有 ACCESS_CODE
# OPENAI_API_KEY=sk-xxxx
# OPENAI_PROXY_URL=https://api.openai.com/v1
# OPENAI_MODEL_LIST=...
# ===================
# ===== 预设配置 =====
# ===================
# 如没有特殊需要不用更改
LOBE_PORT=3210
CASDOOR_PORT=8000
MINIO_PORT=9000
APP_URL=http://localhost:3210
AUTH_URL=http://localhost:3210/api/auth
# Postgres 相关,也即 DB 必须的环境变量
LOBE_DB_NAME=lobechat
POSTGRES_PASSWORD=uWNZugjBqixf8dxC
AUTH_CASDOOR_ISSUER=http://localhost:8000
# Casdoor secret
AUTH_CASDOOR_ID=a387a4892ee19b1a2249
AUTH_CASDOOR_SECRET=dbf205949d704de81b0b5b3603174e23fbecc354
CASDOOR_WEBHOOK_SECRET=casdoor-secret
# MinIO S3 配置
MINIO_ROOT_USER=admin
MINIO_ROOT_PASSWORD=YOUR_MINIO_PASSWORD
# 在下方配置 minio 中添加的桶
S3_ENDPOINT=http://localhost:9000
MINIO_LOBE_BUCKET=lobe
# 为 casdoor 配置
origin=http://localhost:8000


@@ -1,251 +0,0 @@
name: lobehub
services:
network-service:
image: alpine
container_name: lobe-network
restart: always
ports:
- '${MINIO_PORT}:${MINIO_PORT}' # MinIO API
- '9001:9001' # MinIO Console
- '${CASDOOR_PORT}:${CASDOOR_PORT}' # Casdoor
- '${LOBE_PORT}:3210' # LobeChat
- '3000:3000' # Grafana
- '4318:4318' # otel-collector HTTP
- '4317:4317' # otel-collector gRPC
command: tail -f /dev/null
networks:
- lobe-network
postgresql:
image: pgvector/pgvector:pg17
container_name: lobe-postgres
ports:
- '5432:5432'
volumes:
- './data:/var/lib/postgresql/data'
environment:
- 'POSTGRES_DB=${LOBE_DB_NAME}'
- 'POSTGRES_PASSWORD=${POSTGRES_PASSWORD}'
healthcheck:
test: ['CMD-SHELL', 'pg_isready -U postgres']
interval: 5s
timeout: 5s
retries: 5
restart: always
networks:
- lobe-network
minio:
image: minio/minio:RELEASE.2025-04-22T22-12-26Z
container_name: lobe-minio
network_mode: 'service:network-service'
volumes:
- './s3_data:/etc/minio/data'
environment:
- 'MINIO_API_CORS_ALLOW_ORIGIN=*'
env_file:
- .env
restart: always
entrypoint: >
/bin/sh -c "
minio server /etc/minio/data --address ':${MINIO_PORT}' --console-address ':9001' &
MINIO_PID=\$!
while ! curl -s http://localhost:${MINIO_PORT}/minio/health/live; do
echo 'Waiting for MinIO to start...'
sleep 1
done
sleep 5
mc alias set myminio http://localhost:${MINIO_PORT} ${MINIO_ROOT_USER} ${MINIO_ROOT_PASSWORD}
echo 'Creating bucket ${MINIO_LOBE_BUCKET}'
mc mb myminio/${MINIO_LOBE_BUCKET}
wait \$MINIO_PID
"
# version lock ref: https://github.com/lobehub/lobe-chat/pull/7331
casdoor:
image: casbin/casdoor:v2.13.0
container_name: lobe-casdoor
entrypoint: /bin/sh -c './server --createDatabase=true'
network_mode: 'service:network-service'
depends_on:
postgresql:
condition: service_healthy
environment:
httpport: ${CASDOOR_PORT}
RUNNING_IN_DOCKER: 'true'
driverName: 'postgres'
dataSourceName: 'user=postgres password=${POSTGRES_PASSWORD} host=postgresql port=5432 sslmode=disable dbname=casdoor'
runmode: 'dev'
volumes:
- ./init_data.json:/init_data.json
env_file:
- .env
searxng:
image: searxng/searxng
container_name: lobe-searxng
volumes:
- './searxng-settings.yml:/etc/searxng/settings.yml'
environment:
- 'SEARXNG_SETTINGS_FILE=/etc/searxng/settings.yml'
restart: always
networks:
- lobe-network
env_file:
- .env
grafana:
image: grafana/grafana:12.2.0-17419259409
container_name: lobe-grafana
network_mode: 'service:network-service'
restart: always
environment:
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
- GF_AUTH_DISABLE_LOGIN_FORM=true
- GF_FEATURE_TOGGLES_ENABLE=traceqlEditor
volumes:
- grafana_data:/var/lib/grafana
- ./grafana/dashboards:/etc/grafana/provisioning/dashboards
- ./grafana/datasources:/etc/grafana/provisioning/datasources
depends_on:
- tempo
- prometheus
tempo:
image: grafana/tempo:latest
container_name: lobe-tempo
network_mode: 'service:network-service'
restart: always
volumes:
- ./tempo/tempo.yaml:/etc/tempo.yaml
- tempo_data:/var/tempo
command: ['-config.file=/etc/tempo.yaml']
prometheus:
image: prom/prometheus
container_name: lobe-prometheus
network_mode: 'service:network-service'
restart: always
volumes:
- ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--web.enable-otlp-receiver'
- '--web.enable-remote-write-receiver'
- '--enable-feature=exemplar-storage'
otel-collector:
image: otel/opentelemetry-collector
container_name: lobe-otel-collector
network_mode: 'service:network-service'
restart: always
volumes:
- ./otel-collector/collector-config.yaml:/etc/otelcol/config.yaml
command: ['--config', '/etc/otelcol/config.yaml']
depends_on:
- tempo
- prometheus
otel-tracing-test:
profiles:
- otel-test
image: ghcr.io/grafana/xk6-client-tracing:v0.0.9
container_name: lobe-otel-tracing-test
network_mode: 'service:network-service'
restart: always
environment:
- ENDPOINT=127.0.0.1:4317
lobe:
image: lobehub/lobehub
container_name: lobehub
network_mode: 'service:network-service'
depends_on:
postgresql:
condition: service_healthy
network-service:
condition: service_started
minio:
condition: service_started
casdoor:
condition: service_started
environment:
- 'AUTH_SSO_PROVIDERS=casdoor'
- 'KEY_VAULTS_SECRET=Kix2wcUONd4CX51E/ZPAd36BqM4wzJgKjPtz2sGztqQ='
- 'AUTH_SECRET=NX2kaPE923dt6BL2U8e9oSre5RfoT7hg'
- 'DATABASE_URL=postgresql://postgres:${POSTGRES_PASSWORD}@postgresql:5432/${LOBE_DB_NAME}'
- 'S3_BUCKET=${MINIO_LOBE_BUCKET}'
- 'S3_ENABLE_PATH_STYLE=1'
- 'S3_ACCESS_KEY=${MINIO_ROOT_USER}'
- 'S3_ACCESS_KEY_ID=${MINIO_ROOT_USER}'
- 'S3_SECRET_ACCESS_KEY=${MINIO_ROOT_PASSWORD}'
- 'LLM_VISION_IMAGE_USE_BASE64=1'
- 'S3_SET_ACL=0'
- 'SEARXNG_URL=http://searxng:8080'
- 'OTEL_EXPORTER_OTLP_METRICS_PROTOCOL=http/protobuf'
- 'OTEL_EXPORTER_OTLP_METRICS_ENDPOINT=http://localhost:4318/v1/metrics'
- 'OTEL_EXPORTER_OTLP_TRACES_PROTOCOL=http/protobuf'
- 'OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4318/v1/traces'
env_file:
- .env
restart: always
entrypoint: >
/bin/sh -c "
/bin/node /app/startServer.js &
LOBE_PID=\$!
sleep 3
if [ $(wget --timeout=5 --spider --server-response ${AUTH_CASDOOR_ISSUER}/.well-known/openid-configuration 2>&1 | grep -c 'HTTP/1.1 200 OK') -eq 0 ]; then
echo '⚠️ Warning: Unable to fetch OIDC configuration from Casdoor'
echo 'Request URL: ${AUTH_CASDOOR_ISSUER}/.well-known/openid-configuration'
echo 'Read more at: https://lobehub.com/docs/self-hosting/server-database/docker-compose#necessary-configuration'
echo ''
echo '⚠️注意:无法从 Casdoor 获取 OIDC 配置'
echo '请求 URL: ${AUTH_CASDOOR_ISSUER}/.well-known/openid-configuration'
echo '了解更多https://lobehub.com/zh/docs/self-hosting/server-database/docker-compose#necessary-configuration'
echo ''
else
if ! wget -O - --timeout=5 ${AUTH_CASDOOR_ISSUER}/.well-known/openid-configuration 2>&1 | grep 'issuer' | grep ${AUTH_CASDOOR_ISSUER}; then
printf '❌ Error: Auth issuer mismatch. The issuer in the OIDC configuration is: %s' \$(wget -O - --timeout=5 ${AUTH_CASDOOR_ISSUER}/.well-known/openid-configuration 2>&1 | grep -E 'issuer.*' | awk -F '\"' '{print \$4}')
echo ', but the issuer in the .env file is: ${AUTH_CASDOOR_ISSUER}'
echo 'Request URL: ${AUTH_CASDOOR_ISSUER}/.well-known/openid-configuration'
echo 'Read more at: https://lobehub.com/docs/self-hosting/server-database/docker-compose#necessary-configuration'
echo ''
printf '❌错误Auth 的 issuer 冲突OIDC 配置中的 issuer 是:%s' \$(wget -O - --timeout=5 ${AUTH_CASDOOR_ISSUER}/.well-known/openid-configuration 2>&1 | grep -E 'issuer.*' | awk -F '\"' '{print \$4}')
echo ' , 但 .env 文件中的 issuer 是:${AUTH_CASDOOR_ISSUER} '
echo '请求 URL: ${AUTH_CASDOOR_ISSUER}/.well-known/openid-configuration'
echo '了解更多https://lobehub.com/zh/docs/self-hosting/server-database/docker-compose#necessary-configuration'
echo ''
fi
fi
if [ $(wget --timeout=5 --spider --server-response ${S3_ENDPOINT}/minio/health/live 2>&1 | grep -c 'HTTP/1.1 200 OK') -eq 0 ]; then
echo '⚠️ Warning: Unable to fetch MinIO health status'
echo 'Request URL: ${S3_ENDPOINT}/minio/health/live'
echo 'Read more at: https://lobehub.com/docs/self-hosting/server-database/docker-compose#necessary-configuration'
echo ''
echo '⚠️注意:无法获取 MinIO 健康状态'
echo '请求 URL: ${S3_ENDPOINT}/minio/health/live'
echo '了解更多https://lobehub.com/zh/docs/self-hosting/server-database/docker-compose#necessary-configuration'
echo ''
fi
wait \$LOBE_PID
"
volumes:
data:
driver: local
s3_data:
driver: local
grafana_data:
driver: local
tempo_data:
driver: local
prometheus_data:
driver: local
networks:
lobe-network:
driver: bridge


@@ -1,15 +0,0 @@
apiVersion: 1
prune: true
datasources:
- name: Prometheus
type: prometheus
uid: prometheus
access: proxy
orgId: 1
url: http://127.0.0.1:9090
basicAuth: false
isDefault: false
version: 1
editable: false


@@ -1,16 +0,0 @@
apiVersion: 1
prune: true
datasources:
- name: Tempo
type: tempo
access: proxy
orgId: 1
url: http://127.0.0.1:3200
basicAuth: false
isDefault: true
version: 1
editable: false
apiVersion: 1
uid: tempo


@@ -1,45 +0,0 @@
extensions:
health_check:
endpoint: 127.0.0.1:13133
receivers:
prometheus:
config:
scrape_configs:
- job_name: otel-collector-metrics
scrape_interval: 10s
static_configs:
- targets: ["127.0.0.1:8888"]
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
exporters:
prometheusremotewrite:
endpoint: http://127.0.0.1:9090/api/v1/write
tls:
insecure: true
otlp:
endpoint: 127.0.0.1:14317
tls:
insecure: true
debug:
verbosity: detailed
service:
pipelines:
metrics:
receivers: [prometheus, otlp]
exporters: [prometheusremotewrite]
traces:
receivers: [otlp]
exporters: [otlp]
logs:
receivers: [otlp]
exporters: [debug]


@@ -1,11 +0,0 @@
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: "prometheus"
static_configs:
- targets: ["127.0.0.1:9090"]
- job_name: "tempo"
static_configs:
- targets: ["127.0.0.1:3200"]


@@ -1,58 +0,0 @@
stream_over_http_enabled: true
server:
http_listen_port: 3200
log_level: info
query_frontend:
search:
duration_slo: 5s
throughput_bytes_slo: 1.073741824e+09
metadata_slo:
duration_slo: 5s
throughput_bytes_slo: 1.073741824e+09
trace_by_id:
duration_slo: 5s
distributor:
max_attribute_bytes: 10485760
receivers:
otlp:
protocols:
grpc:
endpoint: 127.0.0.1:14317
http:
endpoint: 127.0.0.1:14318
ingester:
max_block_duration: 5m # cut the headblock when this much time passes. this is being set for demo purposes and should probably be left alone normally
compactor:
compaction:
block_retention: 1h # overall Tempo trace retention. set for demo purposes
metrics_generator:
registry:
external_labels:
source: tempo
cluster: docker-compose
storage:
path: /var/tempo/generator/wal
remote_write:
- url: http://127.0.0.1:9090/api/v1/write
send_exemplars: true
traces_storage:
path: /var/tempo/generator/traces
storage:
trace:
backend: local # backend configuration to use
wal:
path: /var/tempo/wal # where to store the wal locally
local:
path: /var/tempo/blocks
overrides:
defaults:
metrics_generator:
processors: [service-graphs, span-metrics, local-blocks] # enables metrics generator
generate_native_histograms: both


@@ -1,124 +0,0 @@
name: lobehub
services:
network-service:
image: alpine
container_name: lobe-network
ports:
- '${MINIO_PORT}:${MINIO_PORT}' # MinIO API
- '9001:9001' # MinIO Console
- '${LOGTO_PORT}:${LOGTO_PORT}' # Logto
- '3002:3002' # Logto Admin
- '${LOBE_PORT}:3210' # LobeChat
command: tail -f /dev/null
networks:
- lobe-network
postgresql:
image: pgvector/pgvector:pg16
container_name: lobe-postgres
ports:
- '5432:5432'
volumes:
- './data:/var/lib/postgresql/data'
environment:
- 'POSTGRES_DB=${LOBE_DB_NAME}'
- 'POSTGRES_PASSWORD=${POSTGRES_PASSWORD}'
healthcheck:
test: ['CMD-SHELL', 'pg_isready -U postgres']
interval: 5s
timeout: 5s
retries: 5
restart: always
networks:
- lobe-network
redis:
image: redis:7-alpine
container_name: lobe-redis
ports:
- '6379:6379'
command: redis-server --save 60 1000 --appendonly yes
volumes:
- 'redis_data:/data'
healthcheck:
test: ['CMD', 'redis-cli', 'ping']
interval: 5s
timeout: 3s
retries: 5
restart: always
networks:
- lobe-network
minio:
image: minio/minio:RELEASE.2025-04-22T22-12-26Z
container_name: lobe-minio
network_mode: "service:network-service"
volumes:
- './s3_data:/etc/minio/data'
environment:
- 'MINIO_ROOT_USER=${MINIO_ROOT_USER}'
- 'MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}'
- 'MINIO_API_CORS_ALLOW_ORIGIN=http://localhost:${LOBE_PORT}'
restart: always
command: >
server /etc/minio/data --address ":${MINIO_PORT}" --console-address ":9001"
logto:
image: svhd/logto
container_name: lobe-logto
network_mode: 'service:network-service'
depends_on:
postgresql:
condition: service_healthy
environment:
- 'TRUST_PROXY_HEADER=1'
- 'PORT=${LOGTO_PORT}'
- 'DB_URL=postgresql://postgres:${POSTGRES_PASSWORD}@postgresql:5432/logto'
- 'ENDPOINT=http://localhost:${LOGTO_PORT}'
- 'ADMIN_ENDPOINT=http://localhost:3002'
entrypoint: ['sh', '-c', 'npm run cli db seed -- --swe && npm start']
lobe:
image: lobehub/lobehub
container_name: lobehub
network_mode: 'service:network-service'
depends_on:
postgresql:
condition: service_healthy
network-service:
condition: service_started
minio:
condition: service_started
logto:
condition: service_started
redis:
condition: service_healthy
environment:
- 'APP_URL=http://localhost:3210'
- 'AUTH_SSO_PROVIDERS=logto'
- 'KEY_VAULTS_SECRET=Kix2wcUONd4CX51E/ZPAd36BqM4wzJgKjPtz2sGztqQ='
- 'AUTH_SECRET=NX2kaPE923dt6BL2U8e9oSre5RfoT7hg'
- 'AUTH_LOGTO_ISSUER=http://localhost:${LOGTO_PORT}/oidc'
- 'DATABASE_URL=postgresql://postgres:${POSTGRES_PASSWORD}@postgresql:5432/${LOBE_DB_NAME}'
- 'S3_ENDPOINT=http://localhost:${MINIO_PORT}'
- 'S3_BUCKET=${MINIO_LOBE_BUCKET}'
- 'S3_ENABLE_PATH_STYLE=1'
- 'REDIS_URL=redis://redis:6379'
- 'REDIS_PREFIX=lobechat'
- 'REDIS_TLS=0'
env_file:
- .env
restart: always
volumes:
data:
driver: local
s3_data:
driver: local
redis_data:
driver: local
networks:
lobe-network:
driver: bridge


@@ -1,31 +0,0 @@
# Required: LobeChat domain for tRPC calls
# Ensure this domain is whitelisted in your SSO providers and S3 service CORS settings
APP_URL=http://localhost:3210
# Postgres related environment variables
# Required: Secret key for encrypting sensitive information. Generate with: openssl rand -base64 32
KEY_VAULTS_SECRET=Kix2wcUONd4CX51E/ZPAd36BqM4wzJgKjPtz2sGztqQ=
# Required: Postgres database connection string
DATABASE_URL=postgresql://postgres:uWNZugjBqixf8dxC@postgresql:5432/lobechat
# Authentication related environment variables
AUTH_SECRET=NX2kaPE923dt6BL2U8e9oSre5RfoT7hg
AUTH_SSO_PROVIDERS=zitadel
# ZiTADEL provider configuration
# Please refer to: https://lobehub.com/docs/self-hosting/auth/providers/zitadel
AUTH_ZITADEL_ID=285945938244075523
AUTH_ZITADEL_SECRET=hkbtzHLaCEIeHeFThym14UcydpmQiEB5JtAX08HSqSoJxhAlVVkyovTuNUZ5TNrT
AUTH_ZITADEL_ISSUER=http://localhost:8080
# MinIO S3 configuration
S3_ACCESS_KEY_ID=
S3_SECRET_ACCESS_KEY=
S3_ENDPOINT=http://localhost:9000
S3_BUCKET=lobe
S3_ENABLE_PATH_STYLE=1
LLM_VISION_IMAGE_USE_BASE64=1
# Other environment variables, as needed. You can refer to the environment variables configuration for the client version, making sure not to have ACCESS_CODE.
# OPENAI_API_KEY=sk-xxxx
# OPENAI_PROXY_URL=https://api.openai.com/v1
# OPENAI_MODEL_LIST=...


@@ -1,30 +0,0 @@
# LobeChat 域名
APP_URL=http://localhost:3210
# Postgres 相关,也即 DB 必须的环境变量
# 用于加密敏感信息的密钥,可以使用 openssl rand -base64 32 生成
KEY_VAULTS_SECRET=Kix2wcUONd4CX51E/ZPAd36BqM4wzJgKjPtz2sGztqQ=
# Postgres 数据库连接字符串
DATABASE_URL=postgresql://postgres:uWNZugjBqixf8dxC@postgresql:5432/lobechat
# 鉴权相关
AUTH_SECRET=NX2kaPE923dt6BL2U8e9oSre5RfoT7hg
AUTH_SSO_PROVIDERS=zitadel
# ZiTADEL 鉴权服务提供商部分
# 请参考https://lobehub.com/zh/docs/self-hosting/auth/providers/zitadel
AUTH_ZITADEL_ID=285945938244075523
AUTH_ZITADEL_SECRET=hkbtzHLaCEIeHeFThym14UcydpmQiEB5JtAX08HSqSoJxhAlVVkyovTuNUZ5TNrT
AUTH_ZITADEL_ISSUER=http://localhost:8080
# MinIO S3 配置
S3_ACCESS_KEY_ID=
S3_SECRET_ACCESS_KEY=
S3_ENDPOINT=http://localhost:9000
S3_BUCKET=lobe
S3_ENABLE_PATH_STYLE=1
LLM_VISION_IMAGE_USE_BASE64=1
# 其他环境变量,视需求而定,可以参照客户端版本的环境变量配置,注意不要有 ACCESS_CODE
# OPENAI_API_KEY=sk-xxxx
# OPENAI_PROXY_URL=https://api.openai.com/v1
# OPENAI_MODEL_LIST=...


@@ -1,86 +0,0 @@
name: lobehub
services:
network-service:
image: alpine
container_name: lobe-network
ports:
- '9000:9000' # MinIO API
- '9001:9001' # MinIO Console
- '8080:8080' # Zitadel Console
- '3210:3210' # LobeChat
command: tail -f /dev/null
networks:
- lobe-network
postgresql:
image: pgvector/pgvector:pg16
container_name: lobe-postgres
ports:
- '5432:5432'
volumes:
- './data:/var/lib/postgresql/data'
environment:
- 'POSTGRES_DB=lobechat'
- 'POSTGRES_PASSWORD=uWNZugjBqixf8dxC'
healthcheck:
test: ['CMD-SHELL', 'pg_isready -U postgres']
interval: 5s
timeout: 5s
retries: 5
restart: always
networks:
- lobe-network
minio:
image: minio/minio:RELEASE.2025-04-22T22-12-26Z
container_name: lobe-minio
network_mode: 'service:network-service'
volumes:
- './s3_data:/etc/minio/data'
environment:
- 'MINIO_ROOT_USER=YOUR_MINIO_USER'
- 'MINIO_ROOT_PASSWORD=YOUR_MINIO_PASSWORD'
- 'MINIO_API_CORS_ALLOW_ORIGIN=http://localhost:3210'
restart: always
command: >
server /etc/minio/data --address ":9000" --console-address ":9001"
zitadel:
restart: 'always'
image: 'ghcr.io/zitadel/zitadel:latest'
container_name: lobe-zitadel
network_mode: 'service:network-service'
command: start-from-init --config /zitadel-config.yaml --steps /zitadel-init-steps.yaml --masterkey "cft3Tekr/rQBOqwoQSCPoncA9BHbn7QJ" --tlsMode disabled #MasterkeyNeedsToHave32Characters
volumes:
- ./zitadel-config.yaml:/zitadel-config.yaml:ro
- ./zitadel-init-steps.yaml:/zitadel-init-steps.yaml:ro
depends_on:
postgresql:
condition: service_healthy
lobe:
image: lobehub/lobehub
container_name: lobehub
network_mode: 'service:network-service'
depends_on:
postgresql:
condition: service_healthy
network-service:
condition: service_started
minio:
condition: service_started
zitadel:
condition: service_started
env_file:
- .env
restart: always
volumes:
data:
driver: local
s3_data:
driver: local
networks:
lobe-network:
driver: bridge


@@ -1,26 +0,0 @@
Log:
Level: 'info'
Port: 8080
ExternalPort: 8080
ExternalDomain: localhost
ExternalSecure: false
TLS:
Enabled: false
# If not using the docker compose example, adjust these values for connecting ZITADEL to your PostgreSQL
Database:
postgres:
Host: postgresql
Port: 5432
Database: zitadel
User:
Username: 'zitadel'
Password: 'zitadel'
SSL:
Mode: 'disable'
Admin:
Username: 'postgres'
Password: 'uWNZugjBqixf8dxC' #postgres password
SSL:
Mode: 'disable'


@@ -1,11 +0,0 @@
# All possible options and their defaults: https://github.com/zitadel/zitadel/blob/main/cmd/setup/steps.yaml
FirstInstance:
Org:
Human:
# use the loginname root@zitadel.localhost
Username: 'root'
# The password must be 8 characters or more and must contain uppercase letters, lowercase letters, symbols, and numbers. The first login will require a password change.
Password: 'Password1!'
Email:
# Optional, if set, can be used to log in with email.
Address: 'example@zitadel.com' # ZITADEL_FIRSTINSTANCE_ORG_HUMAN_EMAIL_ADDRESS


@@ -1,34 +0,0 @@
{
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": ["*"]
},
"Action": ["s3:GetBucketLocation"],
"Resource": ["arn:aws:s3:::lobe"]
},
{
"Effect": "Allow",
"Principal": {
"AWS": ["*"]
},
"Action": ["s3:ListBucket"],
"Resource": ["arn:aws:s3:::lobe"],
"Condition": {
"StringEquals": {
"s3:prefix": ["files/*"]
}
}
},
{
"Effect": "Allow",
"Principal": {
"AWS": ["*"]
},
"Action": ["s3:PutObject", "s3:DeleteObject", "s3:GetObject"],
"Resource": ["arn:aws:s3:::lobe/files/**"]
}
],
"Version": "2012-10-17"
}

File diff suppressed because it is too large


@@ -1,52 +0,0 @@
# Required: LobeChat domain for tRPC calls
# Ensure this domain is whitelisted in your SSO providers and S3 service CORS settings
APP_URL=https://lobe.example.com/
# Postgres related environment variables
# Required: Secret key for encrypting sensitive information. Generate with: openssl rand -base64 32
KEY_VAULTS_SECRET=Kix2wcUONd4CX51E/ZPAd36BqM4wzJgKjPtz2sGztqQ=
# Required: Postgres database connection string
# Format: postgresql://username:password@host:port/dbname
# If using Docker, you can use the container name as the host
DATABASE_URL=postgresql://postgres:uWNZugjBqixf8dxC@postgresql:5432/lobe
# Authentication related environment variables
# Supports Auth0, Azure AD, GitHub, Authentik, Zitadel, Logto, etc.
# For supported providers, see: https://lobehub.com/docs/self-hosting/auth
# If you have ACCESS_CODE, please remove it. We use Better Auth as the sole authentication source
# Required: Auth secret key. Generate with: openssl rand -base64 32
AUTH_SECRET=NX2kaPE923dt6BL2U8e9oSre5RfoT7hg
# Required: Specify the authentication provider (e.g., Logto)
AUTH_SSO_PROVIDERS=logto
# SSO providers configuration (example using Logto)
# For other providers, see: https://lobehub.com/docs/self-hosting/environment-variables/auth
AUTH_LOGTO_ID=YOUR_LOGTO_ID
AUTH_LOGTO_SECRET=YOUR_LOGTO_SECRET
AUTH_LOGTO_ISSUER=https://lobe-auth-api.example.com/oidc
# Proxy settings (if needed, e.g., when using GitHub as an auth provider)
# HTTP_PROXY=http://localhost:7890
# HTTPS_PROXY=http://localhost:7890
# S3 related environment variables (example using MinIO)
# Required: S3 Access Key ID (for MinIO, invalid until manually created in MinIO UI)
S3_ACCESS_KEY_ID=YOUR_S3_ACCESS_KEY_ID
# Required: S3 Secret Access Key (for MinIO, invalid until manually created in MinIO UI)
S3_SECRET_ACCESS_KEY=YOUR_S3_SECRET_ACCESS_KEY
# Required: S3 Endpoint for server/client connections to S3 API
S3_ENDPOINT=https://lobe-s3-api.example.com
# Required: S3 Bucket (invalid until manually created in MinIO UI)
S3_BUCKET=lobe
# Optional: S3 Enable Path Style
# Use 0 for mainstream S3 cloud providers; use 1 for self-hosted MinIO
# See: https://lobehub.com/docs/self-hosting/advanced/s3#s-3-enable-path-style
S3_ENABLE_PATH_STYLE=1
# Other basic environment variables (as needed)
# See: https://lobehub.com/docs/self-hosting/environment-variables/basic
# Note: For server versions, the API must support embedding models (OpenAI text-embedding-3-small) for file processing
# You don't need to specify this model in OPENAI_MODEL_LIST
# OPENAI_API_KEY=sk-xxxx
# OPENAI_PROXY_URL=https://api.openai.com/v1
# OPENAI_MODEL_LIST=...


@@ -1,51 +0,0 @@
# 必填LobeChat 域名,用于 tRPC 调用
# 请保证此域名在你的 SSO 鉴权服务提供商、S3 服务商的 CORS 白名单中
APP_URL=https://lobe.example.com/
# Postgres 相关,也即 DB 必需的环境变量
# 必填,用于加密敏感信息的密钥,可以使用 openssl rand -base64 32 生成
KEY_VAULTS_SECRET=Kix2wcUONd4CX51E/ZPAd36BqM4wzJgKjPtz2sGztqQ=
# 必填Postgres 数据库连接字符串,用于连接到数据库
# 格式postgresql://username:password@host:port/dbname如果你的 pg 实例为 Docker 容器且位于同一 docker-compose 文件中,亦可使用容器名作为 host
DATABASE_URL=postgresql://postgres:uWNZugjBqixf8dxC@postgresql:5432/lobe
# 鉴权服务必需的环境变量
# 可以使用 Auth0、Azure AD、GitHub、Authentik、Zitadel、Logto 等,如有其他接入诉求欢迎提 PR
# 目前支持的鉴权服务提供商请参考https://lobehub.com/zh/docs/self-hosting/auth
# 如果你有 ACCESS_CODE请务必清空我们以 Better Auth 作为唯一鉴权来源
# 必填,用于鉴权的密钥,可以使用 openssl rand -base64 32 生成
AUTH_SECRET=NX2kaPE923dt6BL2U8e9oSre5RfoT7hg
# 必填,指定鉴权服务提供商,这里以 Logto 为例
AUTH_SSO_PROVIDERS=logto
# SSO 鉴权服务提供商部分,以 Logto 为例
# 其他鉴权服务提供商所需的环境变量请参考https://lobehub.com/zh/docs/self-hosting/environment-variables/auth
AUTH_LOGTO_ID=YOUR_LOGTO_ID
AUTH_LOGTO_SECRET=YOUR_LOGTO_SECRET
AUTH_LOGTO_ISSUER=https://lobe-auth-api.example.com/oidc
# 代理相关,如果你需要的话(比如你使用 GitHub 作为鉴权服务提供商)
# HTTP_PROXY=http://localhost:7890
# HTTPS_PROXY=http://localhost:7890
# S3 相关,也即非结构化数据(文件、图片等)存储必需的环境变量
# 这里以 MinIO 为例
# 必填S3 的 Access Key ID对于 MinIO 来说,直到在 MinIO UI 中手动创建之前都是无效的
S3_ACCESS_KEY_ID=YOUR_S3_ACCESS_KEY_ID
# 必填S3 的 Secret Access Key对于 MinIO 来说,直到在 MinIO UI 中手动创建之前都是无效的
S3_SECRET_ACCESS_KEY=YOUR_S3_SECRET_ACCESS_KEY
# 必填S3 的 Endpoint用于服务端/客户端连接到 S3 API
S3_ENDPOINT=https://lobe-s3-api.example.com
# 必填S3 的 Bucket直到在 MinIO UI 中手动创建之前都是无效的
S3_BUCKET=lobe
# 选填S3 的 Enable Path Style
# 对于主流 S3 Cloud 服务商,一般填 0 即可;对于自部署的 MinIO请填 1
# 请参考https://lobehub.com/zh/docs/self-hosting/advanced/s3#s-3-enable-path-style
S3_ENABLE_PATH_STYLE=1
# 其他基础环境变量,视需求而定。注意不要有 ACCESS_CODE
# 请参考https://lobehub.com/zh/docs/self-hosting/environment-variables/basic
# 请注意,对于服务端版本,其 API 必须支持嵌入(即 OpenAI text-embedding-3-small模型否则无法对上传文件进行处理但你无需在 OPENAI_MODEL_LIST 中指定此模型
# OPENAI_API_KEY=sk-xxxx
# OPENAI_PROXY_URL=https://api.openai.com/v1
# OPENAI_MODEL_LIST=...


@@ -1,71 +0,0 @@
name: lobehub
services:
postgresql:
image: pgvector/pgvector:pg16
container_name: lobe-postgres
ports:
- '5432:5432'
volumes:
- './data:/var/lib/postgresql/data'
environment:
- 'POSTGRES_DB=lobe'
- 'POSTGRES_PASSWORD=uWNZugjBqixf8dxC'
healthcheck:
test: ['CMD-SHELL', 'pg_isready -U postgres']
interval: 5s
timeout: 5s
retries: 5
restart: always
minio:
image: minio/minio:RELEASE.2025-04-22T22-12-26Z
container_name: lobe-minio
ports:
- '9000:9000'
- '9001:9001'
volumes:
- './s3_data:/etc/minio/data'
environment:
- 'MINIO_ROOT_USER=YOUR_MINIO_USER'
- 'MINIO_ROOT_PASSWORD=YOUR_MINIO_PASSWORD'
- 'MINIO_DOMAIN=lobe-s3-api.example.com'
- 'MINIO_API_CORS_ALLOW_ORIGIN=https://lobe.example.com' # Your LobeChat's domain name.
restart: always
command: >
server /etc/minio/data --address ":9000" --console-address ":9001"
logto:
image: svhd/logto
container_name: lobe-logto
ports:
- '3001:3001'
- '3002:3002'
depends_on:
postgresql:
condition: service_healthy
environment:
- 'TRUST_PROXY_HEADER=1'
- 'DB_URL=postgresql://postgres:uWNZugjBqixf8dxC@postgresql:5432/logto'
- 'ENDPOINT=https://lobe-auth-api.example.com'
- 'ADMIN_ENDPOINT=https://lobe-auth-ui.example.com'
entrypoint: ['sh', '-c', 'npm run cli db seed -- --swe && npm start']
lobe:
image: lobehub/lobehub
container_name: lobehub
ports:
- '3210:3210'
depends_on:
- postgresql
- minio
- logto
env_file:
- .env
restart: always
volumes:
data:
driver: local
s3_data:
driver: local


@@ -1,49 +0,0 @@
# Required: LobeChat domain for tRPC calls
# Ensure this domain is whitelisted in your SSO providers and S3 service CORS settings
APP_URL=https://lobe.example.com/
# Postgres related environment variables
# Required: Secret key for encrypting sensitive information. Generate with: openssl rand -base64 32
KEY_VAULTS_SECRET=Kix2wcUONd4CX51E/ZPAd36BqM4wzJgKjPtz2sGztqQ=
# Required: Postgres database connection string
# Format: postgresql://username:password@host:port/dbname
# If using Docker, you can use the container name as the host
DATABASE_URL=postgresql://postgres:uWNZugjBqixf8dxC@postgresql:5432/lobe
# Authentication related environment variables
# Required: Auth secret key. Generate with: openssl rand -base64 32
AUTH_SECRET=NX2kaPE923dt6BL2U8e9oSre5RfoT7hg
# Required: Specify the authentication provider
AUTH_SSO_PROVIDERS=zitadel
# ZiTADEL provider configuration
# Please refer to: https://lobehub.com/docs/self-hosting/auth/providers/zitadel
AUTH_ZITADEL_ID=285934220675723622
AUTH_ZITADEL_SECRET=pe7Nh3lopXkZkfqh5YEDYI2xsbIz08eZKqInOUZxssd3refRia518Apbv3DZ
AUTH_ZITADEL_ISSUER=https://zitadel.example.com
# Proxy settings (if needed, e.g., when using GitHub as an auth provider)
# HTTP_PROXY=http://localhost:7890
# HTTPS_PROXY=http://localhost:7890
# S3 related environment variables (example using MinIO)
# Required: S3 Access Key ID (for MinIO, invalid until manually created in MinIO UI)
S3_ACCESS_KEY_ID=YOUR_S3_ACCESS_KEY_ID
# Required: S3 Secret Access Key (for MinIO, invalid until manually created in MinIO UI)
S3_SECRET_ACCESS_KEY=YOUR_S3_SECRET_ACCESS_KEY
# Required: S3 Endpoint for server/client connections to S3 API
S3_ENDPOINT=https://lobe-s3-api.example.com
# Required: S3 Bucket (invalid until manually created in MinIO UI)
S3_BUCKET=lobe
# Optional: S3 Enable Path Style
# Use 0 for mainstream S3 cloud providers; use 1 for self-hosted MinIO
# See: https://lobehub.com/docs/self-hosting/advanced/s3#s-3-enable-path-style
S3_ENABLE_PATH_STYLE=1
# Other basic environment variables (as needed)
# See: https://lobehub.com/docs/self-hosting/environment-variables/basic
# Note: For server versions, the API must support embedding models (OpenAI text-embedding-3-small) for file processing
# You don't need to specify this model in OPENAI_MODEL_LIST
# OPENAI_API_KEY=sk-xxxx
# OPENAI_PROXY_URL=https://api.openai.com/v1
# OPENAI_MODEL_LIST=...


@@ -1,44 +0,0 @@
# 必填LobeChat 域名,用于 tRPC 调用
# 请保证此域名在你的 SSO 鉴权服务提供商、S3 服务商的 CORS 白名单中
APP_URL=https://lobe.example.com/
# Postgres 相关,也即 DB 必需的环境变量
# 必填,用于加密敏感信息的密钥,可以使用 openssl rand -base64 32 生成
KEY_VAULTS_SECRET=Kix2wcUONd4CX51E/ZPAd36BqM4wzJgKjPtz2sGztqQ=
# 必填Postgres 数据库连接字符串,用于连接到数据库
# 格式postgresql://username:password@host:port/dbname如果你的 pg 实例为 Docker 容器且位于同一 docker-compose 文件中,亦可使用容器名作为 host
DATABASE_URL=postgresql://postgres:uWNZugjBqixf8dxC@postgresql:5432/lobe
# 鉴权服务必需的环境变量
# 必填,用于鉴权的密钥,可以使用 openssl rand -base64 32 生成
AUTH_SECRET=NX2kaPE923dt6BL2U8e9oSre5RfoT7hg
# 必填,指定鉴权服务提供商
AUTH_SSO_PROVIDERS=zitadel
# ZiTADEL 鉴权服务提供商部分
# 请参考https://lobehub.com/zh/docs/self-hosting/auth/providers/zitadel
AUTH_ZITADEL_ID=285934220675723622
AUTH_ZITADEL_SECRET=pe7Nh3lopXkZkfqh5YEDYI2xsbIz08eZKqInOUZxssd3refRia518Apbv3DZ
AUTH_ZITADEL_ISSUER=https://zitadel.example.com
# S3 相关,也即非结构化数据(文件、图片等)存储必需的环境变量
# 这里以 MinIO 为例
# 必填S3 的 Access Key ID对于 MinIO 来说,直到在 MinIO UI 中手动创建之前都是无效的
S3_ACCESS_KEY_ID=YOUR_S3_ACCESS_KEY_ID
# 必填S3 的 Secret Access Key对于 MinIO 来说,直到在 MinIO UI 中手动创建之前都是无效的
S3_SECRET_ACCESS_KEY=YOUR_S3_SECRET_ACCESS_KEY
# 必填S3 的 Endpoint用于服务端/客户端连接到 S3 API
S3_ENDPOINT=https://lobe-s3-api.example.com
# 必填S3 的 Bucket直到在 MinIO UI 中手动创建之前都是无效的
S3_BUCKET=lobe
# 选填S3 的 Enable Path Style
# 对于主流 S3 Cloud 服务商,一般填 0 即可;对于自部署的 MinIO请填 1
# 请参考https://lobehub.com/zh/docs/self-hosting/advanced/s3#s-3-enable-path-style
S3_ENABLE_PATH_STYLE=1
# 其他基础环境变量,视需求而定。注意不要有 ACCESS_CODE
# 请参考https://lobehub.com/zh/docs/self-hosting/environment-variables/basic
# 请注意,对于服务端版本,其 API 必须支持嵌入(即 OpenAI text-embedding-3-small模型否则无法对上传文件进行处理但你无需在 OPENAI_MODEL_LIST 中指定此模型
# OPENAI_API_KEY=sk-xxxx
# OPENAI_PROXY_URL=https://api.openai.com/v1
# OPENAI_MODEL_LIST=...


@@ -1,69 +0,0 @@
name: lobehub
services:
postgresql:
image: pgvector/pgvector:pg16
container_name: lobe-postgres
ports:
- '5432:5432'
volumes:
- './data:/var/lib/postgresql/data'
environment:
- 'POSTGRES_DB=lobe'
- 'POSTGRES_PASSWORD=uWNZugjBqixf8dxC'
healthcheck:
test: ['CMD-SHELL', 'pg_isready -U postgres']
interval: 5s
timeout: 5s
retries: 5
restart: always
minio:
image: minio/minio:RELEASE.2025-04-22T22-12-26Z
container_name: lobe-minio
ports:
- '9000:9000'
- '9001:9001'
volumes:
- './s3_data:/etc/minio/data'
environment:
- 'MINIO_ROOT_USER=YOUR_MINIO_USER'
- 'MINIO_ROOT_PASSWORD=YOUR_MINIO_PASSWORD'
- 'MINIO_DOMAIN=lobe-s3-api.example.com'
- 'MINIO_API_CORS_ALLOW_ORIGIN=https://lobe.example.com' # Your LobeChat's domain name.
restart: always
command: >
server /etc/minio/data --address ":9000" --console-address ":9001"
zitadel:
restart: always
image: ghcr.io/zitadel/zitadel:latest
container_name: lobe-zitadel
command: start-from-init --config /zitadel-config.yaml --steps /zitadel-init-steps.yaml --masterkey "cft3Tekr/rQBOqwoQSCPoncA9BHbn7QJ" --tlsMode external #MasterkeyNeedsToHave32Characters
ports:
- 8080:8080
volumes:
- ./zitadel-config.yaml:/zitadel-config.yaml:ro
- ./zitadel-init-steps.yaml:/zitadel-init-steps.yaml:ro
depends_on:
postgresql:
condition: service_healthy
lobe:
image: lobehub/lobehub
container_name: lobehub
ports:
- '3210:3210'
depends_on:
- postgresql
- minio
- zitadel
env_file:
- .env
restart: always
volumes:
data:
driver: local
s3_data:
driver: local


@@ -1,25 +0,0 @@
Log:
Level: 'info'
ExternalPort: 443
ExternalDomain: example.com #Your zitadel's domain name
ExternalSecure: true
TLS:
Enabled: false # ZITADEL_TLS_ENABLED
# If not using the docker compose example, adjust these values for connecting ZITADEL to your PostgreSQL
Database:
postgres:
Host: postgresql
Port: 5432
Database: zitadel
User:
Username: 'zitadel'
Password: 'zitadel'
SSL:
Mode: 'disable'
Admin:
Username: 'postgres'
Password: 'uWNZugjBqixf8dxC' #postgres password
SSL:
Mode: 'disable'


@@ -1,11 +0,0 @@
# All possible options and their defaults: https://github.com/zitadel/zitadel/blob/main/cmd/setup/steps.yaml
FirstInstance:
Org:
Human:
# use the loginname root@zitadel.localhost, replace localhost with your configured external domain
Username: 'root'
# The password must be 8 characters or more and must contain uppercase letters, lowercase letters, symbols, and numbers. The first login will require a password change.
Password: 'Password1!'
Email:
# Optional, if set, can be used to log in with email.
Address: 'example@zitadel.com' # ZITADEL_FIRSTINSTANCE_ORG_HUMAN_EMAIL_ADDRESS


@@ -668,12 +668,33 @@ section_regenerate_secrets() {
echo $(show_message "security_secrect_regenerate_failed") "RUSTFS_SECRET_KEY"
RUSTFS_SECRET_KEY="YOUR_RUSTFS_PASSWORD"
else
# Search and replace the value of RUSTFS_SECRET_KEY in .env
sed "${SED_INPLACE_ARGS[@]}" "s#^RUSTFS_SECRET_KEY=.*#RUSTFS_SECRET_KEY=${RUSTFS_SECRET_KEY}#" .env
if [ $? -ne 0 ]; then
echo $(show_message "security_secrect_regenerate_failed") "RUSTFS_SECRET_KEY in \`.env\`"
fi
fi
# Generate KEY_VAULTS_SECRET (base64 encoded 32 bytes)
KEY_VAULTS_SECRET=$(openssl rand -base64 32)
if [ $? -ne 0 ]; then
echo $(show_message "security_secrect_regenerate_failed") "KEY_VAULTS_SECRET"
else
sed "${SED_INPLACE_ARGS[@]}" "s#^KEY_VAULTS_SECRET=.*#KEY_VAULTS_SECRET=${KEY_VAULTS_SECRET}#" .env
if [ $? -ne 0 ]; then
echo $(show_message "security_secrect_regenerate_failed") "KEY_VAULTS_SECRET in \`.env\`"
fi
fi
# Generate AUTH_SECRET (base64 encoded 32 bytes)
AUTH_SECRET=$(openssl rand -base64 32)
if [ $? -ne 0 ]; then
echo $(show_message "security_secrect_regenerate_failed") "AUTH_SECRET"
else
sed "${SED_INPLACE_ARGS[@]}" "s#^AUTH_SECRET=.*#AUTH_SECRET=${AUTH_SECRET}#" .env
if [ $? -ne 0 ]; then
echo $(show_message "security_secrect_regenerate_failed") "AUTH_SECRET in \`.env\`"
fi
fi
}
show_message "ask_regenerate_secrets"
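
The generate-then-`sed` pattern used above can be sketched as follows; the temp directory and placeholder values are illustrative, not taken from the real script:

```shell
# Sketch of the setup.sh secret-generation pattern
# (paths and placeholder values below are assumptions for illustration).
workdir=$(mktemp -d)
printf 'AUTH_SECRET=REPLACE_ME\nKEY_VAULTS_SECRET=REPLACE_ME\n' > "$workdir/.env"

# `openssl rand -base64 32` emits 44 characters drawn from A-Za-z0-9+/=,
# so '#' is a safe sed delimiter, and '&' or '\' can never appear in the
# replacement text.
AUTH_SECRET=$(openssl rand -base64 32)
sed -i.bak "s#^AUTH_SECRET=.*#AUTH_SECRET=${AUTH_SECRET}#" "$workdir/.env"
```

The real script wraps the in-place flag in `SED_INPLACE_ARGS` because BSD sed (macOS) requires an explicit backup-suffix argument while GNU sed does not.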


@@ -36,7 +36,7 @@ First, you need to install the following software:
- PNPM: We use PNPM as the preferred package manager. You can download and install it from the [PNPM official website](https://pnpm.io/installation).
- Bun: We use Bun as the npm scripts runner. You can download and install it from the [Bun official website](https://bun.com/docs/installation).
- Git: We use Git for version control. You can download and install it from the [Git official website](https://git-scm.com).
- Docker: Required for running PostgreSQL, MinIO, and other services. You can download and install it from the [Docker official website](https://www.docker.com/get-started).
- Docker: Required for running PostgreSQL, RustFS, and other services. You can download and install it from the [Docker official website](https://www.docker.com/get-started).
- IDE: You can choose your preferred integrated development environment (IDE). We recommend using WebStorm/VSCode.
### VSCode Users
@@ -66,38 +66,40 @@ pnpm i
#### 3. Configure Environment
Copy the example environment file to create your Docker Compose configuration:
Copy the example environment files:
```bash
cp docker-compose/local/.env.example docker-compose/local/.env
# Docker services configuration
cp docker-compose/dev/.env.example docker-compose/dev/.env
# Next.js development server configuration
cp .env.example.development .env
```
Edit `docker-compose/local/.env` as needed for your development setup. This file contains all necessary environment variables for the Docker services and configures:
Edit these files as needed:
- **Database**: PostgreSQL with connection string
- **Authentication**: Better Auth with Casdoor SSO
- **Storage**: MinIO S3-compatible storage
- **Search**: SearXNG search engine
- `docker-compose/dev/.env` - Docker services (PostgreSQL, Redis, RustFS, SearXNG)
- `.env` - Next.js dev server (database connection, S3 storage, auth, etc.)
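Both files must exist before the Docker step, so a small guard can catch a missed `cp` early. A sketch (the helper name is ours, not part of the repo):

```shell
# Hypothetical helper: verify an env file was copied from its example.
check_env_file() {
  if [ -f "$1" ]; then
    return 0
  fi
  echo "missing $1 (copy it from its .env.example first)" >&2
  return 1
}

# Usage with the paths from the step above:
# check_env_file docker-compose/dev/.env && check_env_file .env
```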
#### 4. Start Docker Services
Start all required services using Docker Compose:
```bash
docker-compose -f docker-compose.development.yml up -d
bun run dev:docker
```
This will start the following services:
- PostgreSQL database (port 5432)
- MinIO storage (port 9000)
- Casdoor authentication (port 8000)
- SearXNG search (port 8080)
- Redis cache (port 6379)
- RustFS storage (API port 9000, Console port 9001)
- SearXNG search (port 8180)
You can check all Docker services are running by running:
```bash
docker-compose -f docker-compose.development.yml ps
docker compose -f docker-compose/dev/docker-compose.yml ps
```
#### 5. Run Database Migrations
@@ -122,83 +124,15 @@ Now, you can open `http://localhost:3010` in your browser, and you should see th
![](https://github-production-user-asset-6210df.s3.amazonaws.com/28616219/274655364-414bc31e-8511-47a3-af17-209b530effc7.png)
## Image Generation Development
When working with image generation features (text-to-image, image-to-image), the Docker Compose setup already includes all necessary storage services for handling generated images and user uploads.
### Image Generation Configuration
The existing Docker Compose configuration already includes MinIO storage service and all necessary environment variables in `docker-compose/local/.env.example`. No additional setup is required.
### Image Generation Architecture
The image generation feature requires:
- **PostgreSQL**: Stores metadata about generated images
- **MinIO/S3**: Stores the actual image files
### Storage Configuration
The `docker-compose/local/.env.example` file includes all necessary S3 environment variables:
```bash
# S3 Storage Configuration (MinIO for local development)
S3_ACCESS_KEY_ID=${MINIO_ROOT_USER}
S3_SECRET_ACCESS_KEY=${MINIO_ROOT_PASSWORD}
S3_ENDPOINT=http://localhost:${MINIO_PORT}
S3_BUCKET=${MINIO_LOBE_BUCKET}
S3_ENABLE_PATH_STYLE=1 # Required for MinIO
S3_SET_ACL=0 # MinIO compatibility
```
### File Storage Structure
Generated images and user uploads are organized in the MinIO bucket:
```
lobe/ # S3 Bucket (MINIO_LOBE_BUCKET)
├── generated/ # Generated images
│ └── {userId}/
│ └── {sessionId}/
│ └── {imageId}.png
└── uploads/ # User uploads for image-to-image
└── {userId}/
└── {fileId}.{ext}
```
### Development Workflow for Images
When developing image generation features, generated images will be:
1. Created by the AI model
2. Uploaded to S3/MinIO via presigned URLs
3. Metadata stored in PostgreSQL
4. Served via the public S3 URL
Example code for testing image upload:
```typescript
// Example: Upload generated image
const uploadUrl = await trpc.upload.createPresignedUrl.mutate({
filename: 'generated-image.png',
contentType: 'image/png',
});
// Upload to S3
await fetch(uploadUrl, {
method: 'PUT',
body: imageBlob,
headers: { 'Content-Type': 'image/png' },
});
```
### Service URLs
## Service URLs
When running with Docker Compose development setup:
- **PostgreSQL**: `postgres://postgres@localhost:5432/LobeHub`
- **MinIO API**: `http://localhost:9000`
- **MinIO Console**: `http://localhost:9001` (admin/CHANGE_THIS_PASSWORD_IN_PRODUCTION)
- **PostgreSQL**: `postgres://postgres@localhost:5432/lobechat`
- **Redis**: `redis://localhost:6379`
- **RustFS API**: `http://localhost:9000`
- **RustFS Console**: `http://localhost:9001`
- **SearXNG**: `http://localhost:8180`
- **Application**: `http://localhost:3010`
## Troubleshooting
@@ -208,15 +142,8 @@ When running with Docker Compose development setup:
If you encounter issues, you can reset the entire stack:
```bash
# Stop and remove all containers
docker-compose -f docker-compose.development.yml down
# Remove volumes (this will delete all data)
docker-compose -f docker-compose.development.yml down -v
# Start fresh
docker-compose -f docker-compose.development.yml up -d
pnpm db:migrate
# Completely reset Docker environment (delete all data and restart)
bun run dev:docker:reset
```
### Port Conflicts
@@ -226,20 +153,20 @@ If ports are already in use:
```bash
# Check what's using the ports
lsof -i :5432 # PostgreSQL
lsof -i :9000 # MinIO API
lsof -i :9001 # MinIO Console
lsof -i :6379 # Redis
lsof -i :9000 # RustFS API
lsof -i :9001 # RustFS Console
lsof -i :8180 # SearXNG
```
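If `lsof` is not available, bash can probe a port directly through its `/dev/tcp` virtual files. A rough sketch (port numbers from the list above, function name ours; run with bash, not `sh`):

```shell
# Rough check: does anything accept TCP connections on a local port?
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for port in 5432 6379 9000 9001 8180; do
  if port_in_use "$port"; then
    echo "port $port is in use"
  fi
done
```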
### Database Migrations
The setup script runs migrations automatically. If you need to run them manually:
Run migrations manually when needed:
```bash
pnpm db:migrate
```
Note: In development mode with `pnpm dev:desktop`, migrations also run automatically on startup.
---
During the development process, if you encounter any issues with environment setup or have any questions about LobeHub development, feel free to ask us at any time. We look forward to seeing your contributions!


@@ -33,7 +33,7 @@ tags:
- PNPM:我们使用 PNPM 作为包管理器。你可以从 [pnpm 的官方网站](https://pnpm.io/installation) 上下载并安装。
- Bun:我们使用 Bun 作为 npm scripts runner,你可以从 [Bun 的官方网站](https://bun.com/docs/installation) 上下载并安装。
- Git:我们使用 Git 进行版本控制。你可以从 Git 的官方网站上下载并安装。
- Docker:用于运行 PostgreSQL、MinIO 等服务。你可以从 [Docker 官方网站](https://www.docker.com/get-started) 下载并安装。
- Docker:用于运行 PostgreSQL、RustFS 等服务。你可以从 [Docker 官方网站](https://www.docker.com/get-started) 下载并安装。
- IDE:你可以选择你喜欢的集成开发环境(IDE),我们推荐使用 WebStorm/VSCode。
### VSCode 用户
@@ -63,38 +63,40 @@ pnpm i
#### 3. 配置环境
复制示例环境文件来创建你的 Docker Compose 配置:
复制示例环境文件:
```bash
cp docker-compose/local/.env.example docker-compose/local/.env
# Docker 服务配置
cp docker-compose/dev/.env.example docker-compose/dev/.env
# Next.js 开发服务器配置
cp .env.example.development .env
```
根据需要编辑 `docker-compose/local/.env` 文件以适应你的开发设置。此文件包含 Docker 服务所需的所有环境变量,配置了:
根据需要编辑这些文件:
- **数据库**:带连接字符串的 PostgreSQL
- **身份验证**:带 Casdoor SSO 的 Better Auth
- **存储**MinIO S3 兼容存储
- **搜索**SearXNG 搜索引擎
- `docker-compose/dev/.env` - Docker 服务配置(PostgreSQL、Redis、RustFS、SearXNG)
- `.env` - Next.js 开发服务器配置(数据库连接、S3 存储、认证等)
#### 4. 启动 Docker 服务
使用 Docker Compose 启动所有必需的服务:
```bash
docker-compose -f docker-compose.development.yml up -d
bun run dev:docker
```
这将启动以下服务:
- PostgreSQL 数据库(端口 5432)
- MinIO 存储(端口 9000)
- Casdoor 身份验证(端口 8000)
- SearXNG 搜索(端口 8080)
- Redis 缓存(端口 6379)
- RustFS 存储(API 端口 9000,控制台端口 9001)
- SearXNG 搜索(端口 8180)
可以通过运行以下命令检查所有 Docker 服务运行状态:
```bash
docker-compose -f docker-compose.development.yml ps
docker compose -f docker-compose/dev/docker-compose.yml ps
```
#### 5. 运行数据库迁移
@@ -119,83 +121,15 @@ bun run dev
![Chat Page](https://hub-apac-1.lobeobjects.space/docs/fc7b157a3bc016bc97719065f80c555c.png)
## 图像生成开发
在开发图像生成功能(文生图、图生图)时,Docker Compose 配置已经包含了处理生成图像和用户上传所需的所有存储服务。
### 图像生成配置
现有的 Docker Compose 配置已经包含了 MinIO 存储服务以及 `docker-compose/local/.env.example` 中的所有必要环境变量。无需额外配置。
### 图像生成架构
图像生成功能需要:
- **PostgreSQL**:存储生成图像的元数据
- **MinIO/S3**:存储实际的图像文件
### 存储配置
`docker-compose/local/.env.example` 文件包含所有必要的 S3 环境变量:
```bash
# S3 存储配置(本地开发使用 MinIO)
S3_ACCESS_KEY_ID=${MINIO_ROOT_USER}
S3_SECRET_ACCESS_KEY=${MINIO_ROOT_PASSWORD}
S3_ENDPOINT=http://localhost:${MINIO_PORT}
S3_BUCKET=${MINIO_LOBE_BUCKET}
S3_ENABLE_PATH_STYLE=1 # MinIO 必需
S3_SET_ACL=0 # MinIO 兼容性
```
### 文件存储结构
生成的图像和用户上传在 MinIO 存储桶中按以下方式组织:
```
lobe/ # S3 存储桶 (MINIO_LOBE_BUCKET)
├── generated/ # 生成的图像
│ └── {userId}/
│ └── {sessionId}/
│ └── {imageId}.png
└── uploads/ # 用户上传的图像处理文件
└── {userId}/
└── {fileId}.{ext}
```
### 图像开发工作流
在开发图像生成功能时,生成的图像将:
1. 由 AI 模型创建
2. 通过预签名 URL 上传到 S3/MinIO
3. 元数据存储在 PostgreSQL 中
4. 通过公共 S3 URL 提供服务
测试图像上传的示例代码:
```typescript
// 示例:上传生成的图像
const uploadUrl = await trpc.upload.createPresignedUrl.mutate({
filename: 'generated-image.png',
contentType: 'image/png',
});
// 上传到 S3
await fetch(uploadUrl, {
method: 'PUT',
body: imageBlob,
headers: { 'Content-Type': 'image/png' },
});
```
### 服务地址
## 服务地址
运行 Docker Compose 开发环境时:
- **PostgreSQL**:`postgres://postgres@localhost:5432/LobeHub`
- **MinIO API**:`http://localhost:9000`
- **MinIO 控制台**:`http://localhost:9001`(admin/CHANGE_THIS_PASSWORD_IN_PRODUCTION)
- **PostgreSQL**:`postgres://postgres@localhost:5432/lobechat`
- **Redis**:`redis://localhost:6379`
- **RustFS API**:`http://localhost:9000`
- **RustFS 控制台**:`http://localhost:9001`
- **SearXNG**:`http://localhost:8180`
- **应用程序**:`http://localhost:3010`
## 故障排除
@@ -205,15 +139,8 @@ await fetch(uploadUrl, {
如遇到问题,可以重置整个服务堆栈:
```bash
# 停止并删除所有容器
docker-compose -f docker-compose.development.yml down
# 删除卷(这将删除所有数据)
docker-compose -f docker-compose.development.yml down -v
# 重新启动
docker-compose -f docker-compose.development.yml up -d
pnpm db:migrate
# 完全重置 Docker 环境(删除所有数据并重新启动)
bun run dev:docker:reset
```
### 端口冲突
@@ -223,20 +150,20 @@ pnpm db:migrate
```bash
# 检查端口使用情况
lsof -i :5432 # PostgreSQL
lsof -i :9000 # MinIO API
lsof -i :9001 # MinIO 控制台
lsof -i :6379 # Redis
lsof -i :9000 # RustFS API
lsof -i :9001 # RustFS 控制台
lsof -i :8180 # SearXNG
```
### 数据库迁移
配置脚本会自动运行迁移。如需手动运行:
如需手动运行迁移:
```bash
pnpm db:migrate
```
注意:在使用 `pnpm dev:desktop` 的开发模式下,迁移也会在启动时自动运行。
---
在开发过程中,如果你在环境设置上遇到任何问题,或者有任何关于 LobeHub 开发的问题,欢迎随时向我们提问。我们期待看到你的贡献!


@@ -56,6 +56,9 @@
"dev:bun": "bun --bun next dev -p 3010",
"dev:desktop": "cross-env NEXT_PUBLIC_IS_DESKTOP_APP=1 tsx scripts/runNextDesktop.mts dev -p 3015",
"dev:desktop:static": "cross-env DESKTOP_RENDERER_STATIC=1 npm run electron:dev --prefix=./apps/desktop",
"dev:docker": "docker compose -f docker-compose/dev/docker-compose.yml up -d --wait postgresql redis rustfs searxng",
"dev:docker:down": "docker compose -f docker-compose/dev/docker-compose.yml down",
"dev:docker:reset": "docker compose -f docker-compose/dev/docker-compose.yml down -v && rm -rf docker-compose/dev/data && npm run dev:docker && pnpm db:migrate",
"dev:mobile": "next dev -p 3018",
"docs:cdn": "npm run workflow:docs-cdn && npm run lint:mdx",
"docs:i18n": "lobe-i18n md && npm run lint:mdx",