📝 docs: improve development guides to reflect current architecture (#12174)

* 🔧 chore(vscode): add typescript.tsdk and disable mdx server

Fix MDX extension crash caused by Cursor's bundled TypeScript version

* 🔧 chore(claude): add skills symlink to .claude directory

* 📝 docs: update development guides with current tech stack and architecture

- Update tech stack: Next.js 16 + React 19, hybrid routing (App Router + React Router DOM), tRPC, Drizzle ORM + PostgreSQL, react-i18next
- Update directory structure to reflect monorepo layout (apps/, packages/, e2e/, locales/)
- Expand src/server/ with detailed subdirectory descriptions
- Add complete SPA routing architecture with desktop and mobile route tables
- Add tRPC router grouping details (lambda, async, tools, mobile)
- Add data flow diagram
- Simplify dev setup section to link to setup-development guide
- Fix i18n default language description (English, not Chinese)
- Sync all changes between zh-CN and English versions

* 📝 docs: expand data flow diagram in folder structure guide

Replace the single-line data flow with a detailed layer-by-layer
flow diagram showing each layer's location and responsibility.

* 📝 docs: modernize feature development guide

- Remove outdated clientDB/pglite/indexDB references
- Update schema path to packages/database/src/schemas/
- Update types path to packages/types/src/
- Replace inline migration steps with link to db-migrations guide
- Add complete layered architecture table (Client Service, WebAPI,
  tRPC Router, Server Service, Server Module, Repository, DB Model)
- Clarify Client Service as frontend code
- Add i18n handling section with workflow and key naming convention
- Remove verbose CSS style code, keep core business logic only
- Expand testing section with commands, skill refs, and CI tip

* 🔥 docs: remove outdated frontend feature development guide

Content is superseded by the comprehensive feature-development guide
which covers the full chain from schema to testing.

* 📝 docs: add LobeHub ecosystem and community resources

Add official ecosystem packages (LobeUI, LobeIcons, LobeCharts,
LobeEditor, LobeTTS, LobeLint, Lobe i18n, MCP Mark) and community
platforms (Agent Market, MCP Market, YouTube, X, Discord).

* 📝 docs: improve contributing guidelines and resources

- Clarify semantic release triggers (feat/fix vs style/chore)
- Add testing section with Vitest/E2E/CI requirements
- Update contribution steps to include CI check
- Add LobeHub ecosystem packages and community platforms to resources

* 📝 docs: rewrite architecture guide to reflect current platform design

* 📝 docs: add code quality tools to architecture guide

* 📝 docs: rewrite chat-api guide to reflect current architecture

- Update sequence diagram with Agent Runtime loop as core execution engine
- Replace PluginGateway with ToolExecution layer (Builtin/MCP/Plugin)
- Update all path references (model-runtime, agent-runtime, fetch-sse packages)
- Split old AgentRuntime section into Model Runtime + Agent Runtime
- Add tool calling taxonomy: Builtin, MCP, and Plugin (deprecated)
- Add client-side vs server-side execution section
- Remove outdated adapter pseudo-code examples

* 📝 docs: update file paths in add-new-image-model guide

- src/libs/standard-parameters/ → packages/model-bank/src/standard-parameters/
- src/config/aiModels/ → packages/model-bank/src/aiModels/
- src/libs/model-runtime/ → packages/model-runtime/src/providers/

* 📝 docs: restore S3_PUBLIC_DOMAIN in deployment guides

The S3_PUBLIC_DOMAIN env var was incorrectly removed from all
documentation in commit 4a87b31. This variable is still required
by the code (src/server/services/file/impls/s3.ts) to generate
public URLs for uploaded files. Without it, image URLs sent to
vision models are just S3 keys instead of full URLs.

Closes #12161

* 📦 chore: pin @lobehub/ui to 4.33.4 to fix SortableList type errors

@lobehub/ui 4.34.0 introduced breaking type changes in SortableList
where SortableListItem became strict, causing type incompatibility
in onChange and renderItem callbacks across 6 files. Pin to 4.33.4
via pnpm overrides to enforce consistent version across monorepo.
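For reference, a root `package.json` override of this shape pins the version across a pnpm workspace (a sketch; only the package name and version come from the message above):

```json
{
  "pnpm": {
    "overrides": {
      "@lobehub/ui": "4.33.4"
    }
  }
}
```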

* 🐛 fix: correct ReadableStream type annotations and add dom.asynciterable

- Add dom.asynciterable to tsconfig lib for ReadableStream async iteration
- Fix createCallbacksTransformer return type: TransformStream<string, Uint8Array>
- Update stream function return types from ReadableStream<string> to
  ReadableStream<Uint8Array> (llama.ts, ollama.ts, claude.ts)
- Remove @ts-ignore from for-await loops in test files
- Add explicit string[] type for chunks arrays
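The corrected typings can be illustrated with a small self-contained sketch (assumes a runtime where `ReadableStream` is async-iterable, i.e. Node 18+ or a TS `lib` including `dom.asynciterable`; this is not code from the repository):

```typescript
// Sketch of the corrected typing: streams carry encoded bytes, not strings.
const encoder = new TextEncoder();

// Returns ReadableStream<Uint8Array> (previously mistyped as ReadableStream<string>).
function toByteStream(parts: string[]): ReadableStream<Uint8Array> {
  return new ReadableStream<Uint8Array>({
    start(controller) {
      for (const part of parts) controller.enqueue(encoder.encode(part));
      controller.close();
    },
  });
}

// The for-await loop over a ReadableStream is what requires the
// "dom.asynciterable" lib entry (no @ts-ignore needed once it is added).
async function collect(stream: ReadableStream<Uint8Array>): Promise<string> {
  const decoder = new TextDecoder();
  let text = '';
  for await (const chunk of stream) text += decoder.decode(chunk, { stream: true });
  return text + decoder.decode();
}
```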

* Revert "📝 docs: restore S3_PUBLIC_DOMAIN in deployment guides"

This reverts commit 24073f83d3.
Committed by YuTengjing on 2026-02-07 22:29:14 +08:00 (via GitHub)
parent 5cae71eda7 · commit 463d6c8762
28 changed files with 1604 additions and 1640 deletions

.claude/skills Symbolic link
View File

@@ -0,0 +1 @@
../.agents/skills

View File

@@ -24,6 +24,7 @@
// support mdx
"mdx"
],
"mdx.server.enable": false,
"npm.packageManager": "pnpm",
"search.exclude": {
"**/node_modules": true,
@@ -51,6 +52,7 @@
// make stylelint work with tsx antd-style css template string
"typescriptreact"
],
"typescript.tsdk": "node_modules/typescript/lib",
"vitest.maximumConfigs": 20,
"workbench.editor.customLabels.patterns": {
"**/app/**/[[]*[]]/[[]*[]]/page.tsx": "${dirname(2)}/${dirname(1)}/${dirname} • page component",

View File

@@ -15,7 +15,7 @@ tags:
## Parameter Standardization
All image generation models must use the standard parameters defined in `packages/model-bank/src/standard-parameters/index.ts`. This ensures parameter consistency across different Providers, creating a more unified user experience.
**Supported Standard Parameters**:
@@ -32,7 +32,7 @@ All image generation models must use the standard parameters defined in `src/lib
These models can be requested using the OpenAI SDK, with request parameters and return values consistent with DALL-E and GPT-Image-X series.
Taking Zhipu's CogView-4 as an example, which is an OpenAI-compatible model, you can add it by adding the model configuration in the corresponding AI models file `packages/model-bank/src/aiModels/zhipu.ts`:
```ts
const zhipuImageModels: AIImageModelCard[] = [
@@ -71,7 +71,7 @@ Most Providers use `openaiCompatibleFactory` for OpenAI compatibility. You can p
1. **Read Provider documentation and standard parameter definitions**
- Review the Provider's image generation API documentation to understand request and response formats
- Read `packages/model-bank/src/standard-parameters/index.ts` to understand supported parameters
- Add image model configuration in the corresponding AI models file
2. **Implement custom createImage method**
@@ -87,7 +87,7 @@ Most Providers use `openaiCompatibleFactory` for OpenAI compatibility. You can p
**Code Example**:
```ts
// packages/model-runtime/src/providers/<provider-name>/createImage.ts
export const createProviderImage = async (
payload: ImageGenerationPayload,
options: any,
@@ -112,7 +112,7 @@ export const createProviderImage = async (
```
```ts
// packages/model-runtime/src/providers/<provider-name>/index.ts
export const LobeProviderAI = openaiCompatibleFactory({
constructorOptions: {
// ... other configurations
@@ -130,7 +130,7 @@ If your Provider has an independent class implementation, you can directly add t
1. **Read Provider documentation and standard parameter definitions**
- Review the Provider's image generation API documentation
- Read `packages/model-bank/src/standard-parameters/index.ts`
- Add image model configuration in the corresponding AI models file
2. **Implement createImage method in Provider class**
@@ -144,7 +144,7 @@ If your Provider has an independent class implementation, you can directly add t
**Code Example**:
```ts
// packages/model-runtime/src/providers/<provider-name>/index.ts
export class LobeProviderAI {
async createImage(
payload: ImageGenerationPayload,

View File

@@ -13,7 +13,7 @@ tags:
## 参数标准化
所有图像生成模型都必须使用 `packages/model-bank/src/standard-parameters/index.ts` 中定义的标准参数。这确保了不同 Provider 之间的参数一致性,让用户体验更加统一。
**支持的标准参数**
@@ -30,7 +30,7 @@ tags:
指的是可以使用 openai SDK 进行请求,并且请求参数和返回值与 dall-e 以及 gpt-image-x 系列一致。
以智谱的 CogView-4 为例,它是一个兼容 openai 请求格式的模型。你只需要在对应的 ai models 文件 `packages/model-bank/src/aiModels/zhipu.ts` 中,添加模型配置,例如:
```ts
const zhipuImageModels: AIImageModelCard[] = [
@@ -69,7 +69,7 @@ const zhipuImageModels: AIImageModelCard[] = [
1. **阅读 Provider 官方文档和标准参数定义**
- 查看 Provider 的图像生成 API 文档,了解请求格式和响应格式
- 阅读 `packages/model-bank/src/standard-parameters/index.ts`,了解支持的参数
- 在对应的 ai models 文件中增加 image model 配置
2. **实现自定义的 createImage 方法**
@@ -85,7 +85,7 @@ const zhipuImageModels: AIImageModelCard[] = [
**代码示例**
```ts
// packages/model-runtime/src/providers/<provider-name>/createImage.ts
export const createProviderImage = async (
payload: ImageGenerationPayload,
options: any,
@@ -110,7 +110,7 @@ export const createProviderImage = async (
```
```ts
// packages/model-runtime/src/providers/<provider-name>/index.ts
export const LobeProviderAI = openaiCompatibleFactory({
constructorOptions: {
// ... 其他配置
@@ -128,7 +128,7 @@ export const LobeProviderAI = openaiCompatibleFactory({
1. **阅读 Provider 官方文档和标准参数定义**
- 查看 Provider 的图像生成 API 文档
- 阅读 `packages/model-bank/src/standard-parameters/index.ts`
- 在对应的 ai models 文件中增加 image model 配置
2. **在 Provider 类中实现 createImage 方法**
@@ -142,7 +142,7 @@ export const LobeProviderAI = openaiCompatibleFactory({
**代码示例**
```ts
// packages/model-runtime/src/providers/<provider-name>/index.ts
export class LobeProviderAI {
async createImage(
payload: ImageGenerationPayload,

View File

---
title: Architecture Design
description: >-
  Explore the architecture of LobeHub, an open-source AI Agent platform
  built on Next.js, covering frontend, backend, runtime, and data storage.
tags:
  - LobeHub
  - Architecture Design
  - Agent Platform
  - Next.js
---

# Architecture Design

LobeHub is an open-source AI Agent platform built on Next.js, enabling users to interact with AI through natural language, use tools, manage knowledge bases, and more. The following is an overview of LobeHub's architecture design.

## Application Architecture Overview

The overall architecture of LobeHub consists of the following core layers:

```plaintext
+---------------------+--------------------------------------------------+
| Layer               | Description                                      |
+---------------------+--------------------------------------------------+
| Frontend            | Next.js RSC + React Router DOM hybrid SPA        |
| Backend API         | RESTful WebAPI + tRPC Routers                    |
| Runtime             | Model Runtime + Agent Runtime                    |
| Auth                | Better Auth (email/password + SSO)               |
| Data Storage        | PostgreSQL + Redis + S3                          |
| Marketplace         | Agent Market + MCP Tool Market                   |
+---------------------+--------------------------------------------------+
```

## Frontend Architecture

The frontend uses the Next.js framework with a **Next.js RSC + React Router DOM hybrid routing** approach: Next.js App Router handles server-rendered pages (e.g., auth pages), while React Router DOM powers the main SPA.

Key tech stack:

- **UI Components**: `@lobehub/ui`, antd
- **CSS-in-JS**: antd-style
- **State Management**: zustand (slice pattern)
- **Data Fetching**: SWR + tRPC
- **i18n**: react-i18next
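The slice pattern can be illustrated with a minimal, self-contained sketch (the tiny `create` helper below stands in for zustand's real API, and all slice names here are hypothetical):

```typescript
// Each slice owns a cohesive piece of state plus its actions;
// the store is composed by spreading slices together.
type SetState<T> = (partial: Partial<T>) => void;

interface ChatSlice {
  messages: string[];
  addMessage: (m: string) => void;
}

interface SettingsSlice {
  locale: string;
  setLocale: (l: string) => void;
}

type Store = ChatSlice & SettingsSlice;

const createChatSlice = (set: SetState<Store>, get: () => Store): ChatSlice => ({
  addMessage: (m) => set({ messages: [...get().messages, m] }),
  messages: [],
});

const createSettingsSlice = (set: SetState<Store>): SettingsSlice => ({
  locale: 'en-US',
  setLocale: (l) => set({ locale: l }),
});

// Tiny store engine standing in for zustand's `create` (illustration only).
function create<T>(init: (set: SetState<T>, get: () => T) => T): { getState: () => T } {
  let state: T;
  const set: SetState<T> = (partial) => {
    state = { ...state, ...partial };
  };
  const get = () => state;
  state = init(set, get);
  return { getState: get };
}

const useStore = create<Store>((set, get) => ({
  ...createChatSlice(set, get),
  ...createSettingsSlice(set),
}));
```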
Frontend code is organized by responsibility under `src/`. See [Directory Structure](/docs/development/basic/folder-structure) for details.

## Backend API

The backend provides two API styles:

- **RESTful WebAPI** (`src/app/(backend)/webapi/`): Handles endpoints requiring special processing such as chat streaming, TTS, and file serving
- **tRPC Routers** (`src/server/routers/`): Type-safe main business routes, grouped by runtime:
  - `lambda/` — Main business (agent, session, message, topic, file, knowledge, settings, etc.)
  - `async/` — Long-running async operations (file processing, image generation, RAG evaluation)
  - `tools/` — Tool invocations (search, MCP, market)
  - `mobile/` — Mobile-specific routes

## Runtime

### Model Runtime

`@lobechat/model-runtime` (`packages/model-runtime/`) is the LLM API adapter layer that normalizes API differences across 30+ AI providers (OpenAI, Anthropic, Google, Bedrock, Ollama, etc.), providing a unified calling interface. Each provider has its own adapter implementation. It is stateless — each call is independent.
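The unified-interface idea behind the adapter layer can be sketched as follows (interface and class names are illustrative assumptions, not the package's actual API; the fake adapters return canned strings where real ones would call provider APIs):

```typescript
// A common payload shape and one interface that every provider adapter implements.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}
interface ChatPayload {
  model: string;
  messages: ChatMessage[];
}

interface ProviderRuntime {
  chat(payload: ChatPayload): Promise<string>;
}

// Each provider gets its own adapter that normalizes request/response shapes.
class FakeOpenAIRuntime implements ProviderRuntime {
  async chat(payload: ChatPayload): Promise<string> {
    return `openai:${payload.model}`; // a real adapter would call the OpenAI API
  }
}

class FakeOllamaRuntime implements ProviderRuntime {
  async chat(payload: ChatPayload): Promise<string> {
    return `ollama:${payload.model}`; // a real adapter would call a local Ollama server
  }
}

// Stateless factory: callers pick a provider, then use one uniform interface.
function initRuntime(provider: 'openai' | 'ollama'): ProviderRuntime {
  return provider === 'openai' ? new FakeOpenAIRuntime() : new FakeOllamaRuntime();
}
```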
### Agent Runtime

`@lobechat/agent-runtime` (`packages/agent-runtime/`) is the agent orchestration engine that sits above Model Runtime, driving the full lifecycle of multi-step AI agent behavior:

- **Plan-Execute Loop**: Core state machine cycling through LLM calls → tool execution → result processing
- **Tool Invocation & Batch Execution**: Supports single and batch tool calls
- **Human-in-the-Loop**: Security checks and human approval flows
- **Context Compression**: Manages context window limits
- **Usage & Cost Tracking**: Accumulates token usage and monetary costs
- **Multi-Agent Orchestration**: `GroupOrchestrationRuntime` supports the Supervisor + Executor pattern for multi-agent collaboration

In short: Model Runtime handles "how to communicate with an LLM provider"; Agent Runtime handles "how to run a complete agent using LLMs, tools, and human approvals."

## Authentication

LobeHub uses [Better Auth](https://www.better-auth.com/) as the authentication framework, supporting:

- Email + password registration and login
- SSO single sign-on (GitHub, Google, and various OAuth providers)

Auth configuration is in `src/auth.ts`, with related routes under `src/app/(backend)/api/`.

## Data Storage

```plaintext
+---------------+----------------------------------------------+
| Storage       | Usage                                        |
+---------------+----------------------------------------------+
| PostgreSQL    | Primary database for users, sessions,        |
|               | messages, agent configs, etc.                |
| Redis         | Caching, session state, rate limiting        |
| S3            | File storage (uploads, images, knowledge     |
|               | base files, etc.)                            |
+---------------+----------------------------------------------+
```

Database operations use Drizzle ORM, with schemas defined in `packages/database/src/schemas/`.

## Marketplace

- **Agent Market**: Provides AI agents for various scenarios; users can discover, use, and share agents
- **MCP Tool Market**: Discover and integrate MCP tools to extend agent capabilities

## Development and Deployment

- **Version Control**: Git + GitHub, gitmoji commit conventions
- **Code Quality**: ESLint, Stylelint, TypeScript type checking, circular dependency detection (dpdm), dead code detection (knip)
- **Testing**: Vitest unit tests + Cucumber/Playwright E2E tests
- **CI/CD**: GitHub Actions for automated testing, building, and releasing
- **Deployment**: Supports Vercel, Docker, and self-hosting on major cloud platforms

View File

---
title: 架构设计
description: 深入了解 LobeHub 的架构设计,包括前端、后端、运行时和数据存储
tags:
  - LobeHub
  - 架构设计
  - Agent 平台
  - Next.js
---

# 架构设计

LobeHub 是一个基于 Next.js 构建的开源 AI Agent 平台,使用户能够与 AI 进行自然语言交互、使用工具、管理知识库等。以下是 LobeHub 的架构设计概览。

## 应用架构概览

LobeHub 的整体架构由以下核心层组成:

```plaintext
+---------------------+--------------------------------------------------+
| Layer               | Description                                      |
+---------------------+--------------------------------------------------+
| Frontend            | Next.js RSC + React Router DOM 混合路由 SPA      |
| Backend API         | RESTful WebAPI + tRPC Routers                    |
| Runtime             | Model Runtime + Agent Runtime                    |
| Auth                | Better Auth(邮箱密码 + SSO)                    |
| Data Storage        | PostgreSQL + Redis + S3                          |
| Marketplace         | Agent 市场 + MCP 工具市场                        |
+---------------------+--------------------------------------------------+
```

## 前端架构

前端采用 Next.js 框架,使用 **Next.js RSC + React Router DOM 混合路由**方案:Next.js App Router 处理服务端渲染页面(如认证页),React Router DOM 承载主应用 SPA。

主要技术栈:

- **UI 组件**:`@lobehub/ui`、antd
- **CSS-in-JS**:antd-style
- **状态管理**:zustand(slice 模式)
- **数据请求**:SWR + tRPC
- **国际化**:react-i18next

前端代码按职责分层在 `src/` 目录下,详见 [目录架构](/zh/docs/development/basic/folder-structure)。

## 后端 API

后端提供两种 API 形式:

- **RESTful WebAPI**(`src/app/(backend)/webapi/`):处理 chat 流式响应、TTS、文件服务等需要特殊处理的端点
- **tRPC Routers**(`src/server/routers/`):类型安全的主要业务路由,按运行时分组:
  - `lambda/` — 主业务(agent、session、message、topic、file、knowledge、settings 等)
  - `async/` — 耗时异步操作(文件处理、图像生成、RAG 评估)
  - `tools/` — 工具调用(search、MCP、market)
  - `mobile/` — 移动端专用

## Runtime

### Model Runtime

`@lobechat/model-runtime`(`packages/model-runtime/`)是 LLM API 适配层,抹平了 30+ 不同 AI Provider 之间的 API 差异(OpenAI、Anthropic、Google、Bedrock、Ollama 等),提供统一的调用接口。每个 Provider 有独立的适配器实现。它是无状态的,每次调用独立。

### Agent Runtime

`@lobechat/agent-runtime`(`packages/agent-runtime/`)是 Agent 编排引擎,位于 Model Runtime 之上,负责驱动多步 AI Agent 行为的完整生命周期:

- **Plan-Execute 循环**:核心状态机,循环执行 LLM 调用 → 工具执行 → 结果处理
- **工具调用与批量执行**:支持单工具和批量工具调用
- **Human-in-the-Loop**:安全检查、人工审批流程
- **上下文压缩**:管理上下文窗口
- **用量与成本追踪**:累计 token 用量和费用
- **多 Agent 协作**:`GroupOrchestrationRuntime` 支持 Supervisor + Executor 模式的多 Agent 编排

简言之:Model Runtime 解决 "如何与 LLM Provider 通信";Agent Runtime 解决 "如何运行一个使用 LLM、工具、人工审批的完整 Agent"。

## 认证鉴权

LobeHub 使用 [Better Auth](https://www.better-auth.com/) 作为认证框架,支持:

- 邮箱 + 密码注册登录
- SSO 单点登录(GitHub、Google 等多种 OAuth Provider)

认证配置位于 `src/auth.ts`,相关路由在 `src/app/(backend)/api/` 下。

## 数据存储

```plaintext
+---------------+----------------------------------------------+
| Storage       | Usage                                        |
+---------------+----------------------------------------------+
| PostgreSQL    | 主数据库,存储用户、会话、消息、Agent 配置等 |
| Redis         | 缓存、会话状态、速率限制                     |
| S3            | 文件存储(用户上传、图片、知识库文件等)     |
+---------------+----------------------------------------------+
```

数据库使用 Drizzle ORM 操作,schema 定义在 `packages/database/src/schemas/`。

## 市场

- **Agent 市场**:提供各种场景的 AI Agent,用户可以发现、使用和分享 Agent
- **MCP 工具市场**:发现和集成 MCP 工具,扩展 Agent 的能力

## 开发和部署

- **版本控制**:Git + GitHub,gitmoji commit 规范
- **代码质量**:ESLint、Stylelint、TypeScript 类型检查、循环依赖检测(dpdm)、死代码检测(knip)
- **测试**:Vitest 单元测试 + Cucumber/Playwright E2E 测试
- **CI/CD**:GitHub Actions 自动化测试、构建和发布
- **部署**:支持 Vercel、Docker、各大云平台自托管

View File

---
title: Chat API Client-Server Interaction Logic
description: >-
  Explore the client-server interaction logic of LobeChat Chat API, including
  event sequences and core components.
tags:
  - Chat API
  - Client-Server Interaction
  - Event Sequences
  - Model Runtime
  - Agent Runtime
  - MCP
---

# Chat API Client-Server Interaction Logic

This document explains the implementation logic of the LobeChat Chat API in client-server interactions, including the event sequences and core components involved.
## Interaction Sequence Diagram
```mermaid
sequenceDiagram
    participant Client as Frontend Client
    participant AgentLoop as Agent Runtime Loop
    participant ChatService as ChatService
    participant ChatAPI as Backend Chat API
    participant ModelRuntime as Model Runtime
    participant ModelProvider as Model Provider API
    participant ToolExecution as Tool Execution Layer

    Client->>AgentLoop: sendMessage()
    Note over AgentLoop: Create GeneralChatAgent + AgentRuntime

    loop Agent Plan-Execute Loop
        AgentLoop->>AgentLoop: Agent decides next instruction

        alt call_llm instruction
            AgentLoop->>ChatService: getChatCompletion
            Note over ChatService: Context engineering
            ChatService->>ChatAPI: POST /webapi/chat/[provider]
            ChatAPI->>ModelRuntime: Initialize ModelRuntime
            ModelRuntime->>ModelProvider: Chat completion request
            ModelProvider-->>ChatService: Stream back SSE response
            ChatService-->>Client: onMessageHandle callback
        else call_tool instruction
            AgentLoop->>ToolExecution: Execute tool
            Note over ToolExecution: Builtin / MCP / Plugin
            ToolExecution-->>AgentLoop: Return tool result
        else request_human_* instruction
            AgentLoop-->>Client: Request user intervention
            Client->>AgentLoop: User feedback
        else finish instruction
            AgentLoop-->>Client: onFinish callback
        end
    end

    Note over Client,ModelProvider: Preset task scenario (bypasses Agent loop)
    Client->>ChatService: fetchPresetTaskResult
    ChatService->>ChatAPI: Send preset task request
    ChatAPI-->>ChatService: Return task result
    ChatService-->>Client: Return result via callback
```
## Main Process Steps

### 1. Client Initiates Request

After the user sends a message, `sendMessage()` (`src/store/chat/slices/aiChat/actions/conversationLifecycle.ts`) creates the user message and assistant message placeholder, then calls `internal_execAgentRuntime()`.
### 2. Agent Runtime Drives the Loop

Agent Runtime is the **core execution engine** of the entire chat flow. Every chat interaction (from simple Q\&A to complex multi-step tool calling) is driven by the `AgentRuntime.step()` loop.

**Initialization** (`src/store/chat/slices/aiChat/actions/streamingExecutor.ts`):

1. Resolve agent config (model, provider, plugin list, etc.)
2. Create the tool registry via `createAgentToolsEngine()`
3. Create `GeneralChatAgent` (the "brain" that decides what to do next) and `AgentRuntime` (the "engine" that executes instructions)
4. Inject custom executors via `createAgentExecutors()`
**Execution Loop**:
```ts
while (state.status !== 'done' && state.status !== 'error') {
result = await runtime.step(state, nextContext);
// GeneralChatAgent decides: call_llm → call_tool → call_llm → finish
}
```
At each step, `GeneralChatAgent` returns an `AgentInstruction` based on the current state, and `AgentRuntime` executes it via the corresponding executor:

- `call_llm`: Call the LLM (see steps 3-5 below)
- `call_tool`: Execute tool calls (see step 6 below)
- `finish`: End the loop
- `compress_context`: Context compression
- `request_human_approve` / `request_human_prompt` / `request_human_select`: Request user intervention
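The instruction dispatch can be sketched as a discriminated union (type and field names here are illustrative assumptions, not the actual `@lobechat/agent-runtime` API):

```typescript
// Hypothetical shapes; the real AgentInstruction/AgentRuntime types differ.
type AgentInstruction =
  | { type: 'call_llm'; prompt: string }
  | { type: 'call_tool'; tool: string; args?: unknown }
  | { type: 'compress_context' }
  | { type: 'request_human_approve'; reason: string }
  | { type: 'finish' };

interface LoopState {
  status: 'running' | 'done';
  transcript: string[];
}

// One runtime step: execute whatever instruction the agent decided on.
function step(state: LoopState, instruction: AgentInstruction): LoopState {
  switch (instruction.type) {
    case 'call_llm':
      return { ...state, transcript: [...state.transcript, `llm(${instruction.prompt})`] };
    case 'call_tool':
      return { ...state, transcript: [...state.transcript, `tool(${instruction.tool})`] };
    case 'finish':
      return { ...state, status: 'done' };
    default:
      return state; // compression and human-in-the-loop elided in this sketch
  }
}

// Drive the loop with a canned instruction sequence.
let state: LoopState = { status: 'running', transcript: [] };
for (const ins of [
  { type: 'call_llm', prompt: 'plan' },
  { type: 'call_tool', tool: 'search' },
  { type: 'call_llm', prompt: 'answer' },
  { type: 'finish' },
] as AgentInstruction[]) {
  state = step(state, ins);
}
```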
### 3. Frontend Processes LLM Request

When the Agent issues a `call_llm` instruction, the executor calls ChatService:

- `src/services/chat/index.ts` preprocesses messages, tools, and parameters
- Modules under `src/services/chat/mecha/` perform context engineering, including agent config resolution, model parameter resolution, MCP context injection, etc.
- Calls `getChatCompletion` to prepare request parameters
- Uses `fetchSSE` from the `@lobechat/fetch-sse` package to send the request to the backend API
### 4. Backend Processes Request

- `src/app/(backend)/webapi/chat/[provider]/route.ts` receives the request
- Calls `initModelRuntimeFromDB` to read the user's provider config from the database and initialize ModelRuntime
- A tRPC route `src/server/routers/lambda/aiChat.ts` also exists for server-side message sending and structured output scenarios
### 5. Model Call and Response Processing
- `ModelRuntime`
(`packages/model-runtime/src/core/ModelRuntime.ts`)
calls the respective model provider's API
and returns a streaming response
- Frontend processes the streaming response via
`fetchSSE` and
[fetchEventSource](https://github.com/Azure/fetch-event-source)
- Handles different types of events
(text, tool calls, reasoning, etc.)
- Passes results back to client through callback functions
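Conceptually, the stream handler fans out on each event's type and accumulates partial results. The sketch below is a hedged illustration; the `SSEEvent` shape is an assumption, not the real `fetchSSE` contract:

```typescript
// Sketch of accumulating streamed chunks by event type, as described above.
// The SSEEvent shape is an illustrative assumption.
interface SSEEvent {
  type: 'text' | 'reasoning' | 'tool_calls';
  data: string;
}

interface StreamResult {
  text: string;
  reasoning: string;
  toolCalls: string[];
}

function reduceStream(events: SSEEvent[]): StreamResult {
  const out: StreamResult = { text: '', reasoning: '', toolCalls: [] };
  for (const event of events) {
    if (event.type === 'text') out.text += event.data;
    else if (event.type === 'reasoning') out.reasoning += event.data;
    else out.toolCalls.push(event.data);
  }
  return out;
}

const result = reduceStream([
  { type: 'reasoning', data: 'planning…' },
  { type: 'text', data: 'Hello' },
  { type: 'text', data: ' world' },
  { type: 'tool_calls', data: 'web-search' },
]);
// result.text === 'Hello world'; result.toolCalls.length === 1
```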
### 6. Tool Calling Scenario
When the AI model returns a `tool_calls` field in its
response, the Agent issues a `call_tool` instruction.
LobeChat supports three types of tools:
**Builtin Tools**: Tools built into the application,
executed directly via local executors.
- Frontend executes them directly on the client
via the `invokeBuiltinTool` method
- Includes built-in features like search,
DALL-E image generation, etc.
**MCP Tools**: External tools connected via
[Model Context Protocol](https://modelcontextprotocol.io/).
- Frontend calls `MCPService` (`src/services/mcp.ts`)
via the `invokeMCPTypePlugin` method
- Supports three connection modes:
stdio, HTTP (streamable-http/SSE), and cloud (Klavis)
- MCP tool registration and discovery is managed
through MCP server configuration
**Plugin Tools**: Legacy plugin system,
invoked via API gateway.
This system is expected to be gradually deprecated
in favor of the MCP tool system.
- Frontend calls them via the
`invokeDefaultTypePlugin` method
- Retrieves plugin settings and manifest,
creates authentication headers,
and sends requests to the plugin gateway
After tool execution completes, results are written to
the message and returned to the Agent loop.
The Agent then calls the LLM again to generate
the final response based on tool results.
The tool dispatch logic is in
`src/store/chat/slices/plugin/actions/pluginTypes.ts`.
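The three-way dispatch can be pictured as a switch on tool type. The handler names mirror the store methods mentioned above, but the wiring is a simplified assumption:

```typescript
// Sketch of routing a tool call to the handler named in the text above.
// 'default' stands for the legacy plugin type; the wiring is illustrative.
type ToolType = 'builtin' | 'mcp' | 'default';

function resolveToolHandler(type: ToolType): string {
  switch (type) {
    case 'builtin':
      return 'invokeBuiltinTool'; // runs locally on the client
    case 'mcp':
      return 'invokeMCPTypePlugin'; // goes through MCPService
    default:
      return 'invokeDefaultTypePlugin'; // legacy plugin gateway
  }
}
```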
### 7. Preset Task Processing
Preset tasks are predefined functions typically triggered
when users perform specific actions
(bypassing the Agent Runtime loop, calling LLM directly).
These tasks use the `fetchPresetTaskResult` method,
which is similar to the normal chat flow
but uses specially designed prompt chains.
**Execution Timing**:
1. **Agent Information Auto-generation**: Triggered when
users create or edit an agent
- Agent avatar generation (via `autoPickEmoji` method)
- Agent description generation
(via `autocompleteAgentDescription` method)
- Agent tag generation
(via `autocompleteAgentTags` method)
- Agent title generation
(via `autocompleteAgentTitle` method)
2. **Message Translation**: Triggered when users manually
click the translate button
(via `translateMessage` method)
3. **Web Search**: When search is enabled but the model
doesn't support tool calling, search functionality
is implemented via `fetchPresetTaskResult`
**Code Examples**:
```ts
// autoPickEmoji (excerpt; leading lines elided in the source)
// ...
    },
    params: merge(
      get().internal_getSystemAgentForMeta(),
      chainPickEmoji(
        [meta.title, meta.description, systemRole]
          .filter(Boolean)
          .join(','),
      ),
    ),
    trace: get().getCurrentTracePayload({
      traceName: TraceNameMap.EmojiPicker,
    }),
  });
};
```
```ts
// translateMessage (excerpt; leading lines elided in the source)
// 1. Detect the source language
chatService.fetchPresetTaskResult({
  onFinish: async (data) => {
    if (data && supportLocales.includes(data)) from = data;
    await updateMessageTranslate(id, {
      content,
      from,
      to: targetLang,
    });
  },
  params: merge(
    translationSetting,
    chainLangDetect(message.content),
  ),
  trace: get().getCurrentTracePayload({
    traceName: TraceNameMap.LanguageDetect,
  }),
});

// 2. Perform translation
// ...
    }
  },
  onFinish: async () => {
    await updateMessageTranslate(id, {
      content,
      from,
      to: targetLang,
    });
    internal_toggleChatLoading(
      false,
      id,
      n('translateMessage(end)', { id }) as string,
    );
  },
  params: merge(
    translationSetting,
    chainTranslate(message.content, targetLang),
  ),
  trace: get().getCurrentTracePayload({
    traceName: TraceNameMap.Translation,
  }),
});
};
```
### 8. Completion
When the Agent issues a `finish` instruction,
the loop ends, and the `onFinish` callback is called
with the complete response result.
## Client-Side vs Server-Side Execution
The Agent Runtime loop execution location
depends on the scenario:
- **Client-side loop** (browser): Regular 1:1 chat,
continue generation, group orchestration decisions.
The loop runs in the browser, entry point is
`internal_execAgentRuntime()`
(`src/store/chat/slices/aiChat/actions/streamingExecutor.ts`)
- **Server-side loop** (queue/local):
Group chat supervisor agent, sub-agent tasks,
API/Cron triggers. The loop runs on the server,
streaming events to the client via SSE, entry point is
`AgentRuntimeService.executeStep()`
(`src/server/services/agentRuntime/AgentRuntimeService.ts`),
tRPC route at `src/server/routers/lambda/aiAgent.ts`
## Model Runtime
Model Runtime (`packages/model-runtime/`) is the core
abstraction layer in LobeChat for interacting with
LLM model providers, adapting different provider APIs
into a unified interface.
**Core Responsibilities**:
- **Unified Abstraction Layer**: Hides differences
between AI provider APIs through the `LobeRuntimeAI`
interface
(`packages/model-runtime/src/core/BaseAI.ts`)
- **Model Initialization**: Initializes the corresponding
runtime instance through the provider mapping table
(`packages/model-runtime/src/runtimeMap.ts`)
- **Capability Encapsulation**: `chat` (streaming chat),
`models` (model listing), `embeddings` (text embeddings),
`createImage` (image generation),
`textToSpeech` (speech synthesis),
`generateObject` (structured output)
**Core Interface**:
```ts
// packages/model-runtime/src/core/BaseAI.ts
export interface LobeRuntimeAI {
  baseURL?: string;
  chat?(
    payload: ChatStreamPayload,
    options?: ChatMethodOptions,
  ): Promise<Response>;
  generateObject?(
    payload: GenerateObjectPayload,
    options?: GenerateObjectOptions,
  ): Promise<any>;
  embeddings?(
    payload: EmbeddingsPayload,
    options?: EmbeddingsOptions,
  ): Promise<Embeddings[]>;
  models?(): Promise<any>;
  createImage?: (
    payload: CreateImagePayload,
  ) => Promise<CreateImageResponse>;
  textToSpeech?: (
    payload: TextToSpeechPayload,
    options?: TextToSpeechOptions,
  ) => Promise<any>;
}
```
**Adapter Architecture**: Through two factory functions —
`openaiCompatibleFactory` and
`anthropicCompatibleFactory` — most providers can be
integrated with minimal configuration. Currently supports
over 40 model providers (OpenAI, Anthropic, Google,
Azure, Bedrock, Ollama, etc.), with implementations
in `packages/model-runtime/src/providers/`.
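As a sketch of what such a factory buys: a provider becomes a small piece of configuration instead of a full adapter class. The option and field names below are illustrative assumptions, not the real `openaiCompatibleFactory` signature:

```typescript
// Hedged sketch of an OpenAI-compatible provider factory.
// Option names are illustrative, not the actual factory options.
interface FactoryOptions {
  baseURL: string;
}

interface RuntimeLike {
  baseURL: string;
  chatEndpoint(): string;
}

function compatibleFactory({ baseURL }: FactoryOptions): RuntimeLike {
  return {
    baseURL,
    // every OpenAI-compatible provider exposes the same chat route
    chatEndpoint: () => `${baseURL}/chat/completions`,
  };
}

const openrouter = compatibleFactory({ baseURL: 'https://openrouter.ai/api/v1' });
// openrouter.chatEndpoint() === 'https://openrouter.ai/api/v1/chat/completions'
```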
## Agent Runtime

Agent Runtime (`packages/agent-runtime/`) is LobeChat's
agent orchestration engine. As described above, it is the
core execution engine that drives the entire chat flow.

**Core Components**:
- **`AgentRuntime`**
(`packages/agent-runtime/src/core/runtime.ts`):
The "engine" that executes the Agent instruction loop,
supporting `call_llm`, `call_tool`, `finish`,
`compress_context`, `request_human_*`, etc.
- **`GeneralChatAgent`**
(`packages/agent-runtime/src/agents/GeneralChatAgent.ts`):
The "brain" that decides which instruction to execute
next based on current state
- **`GroupOrchestrationRuntime`**
(`packages/agent-runtime/src/groupOrchestration/`):
Multi-agent orchestration supporting speak / broadcast /
delegate / executeTask collaboration modes
- **`UsageCounter`**: Token usage and cost tracking
- **`InterventionChecker`**:
Security blacklist for managing agent behavior boundaries

---
title: Chat API 前后端交互逻辑
description: 深入了解 LobeChat Chat API 的前后端交互实现逻辑和核心组件。
tags:
- Chat API
- 前后端交互
- 事件序列
- 核心组件
- Model Runtime
- Agent Runtime
- MCP
---
# Chat API 前后端交互逻辑
本文档说明了 LobeChat Chat API 在前后端交互中的实现逻辑,
包括事件序列和涉及的核心组件。
## 交互时序图
```mermaid
sequenceDiagram
    participant Client as 前端客户端
    participant AgentLoop as Agent Runtime 循环
    participant ChatService as ChatService
    participant ChatAPI as 后端 Chat API
    participant ModelRuntime as Model Runtime
    participant ModelProvider as 模型提供商 API
    participant ToolExecution as 工具执行层

    Client->>AgentLoop: sendMessage()
    Note over AgentLoop: 创建 GeneralChatAgent + AgentRuntime

    loop Agent Plan-Execute 循环
        AgentLoop->>AgentLoop: Agent 决定下一步指令
        alt call_llm 指令
            AgentLoop->>ChatService: getChatCompletion
            Note over ChatService: 上下文工程 (context engineering)
            ChatService->>ChatAPI: POST /webapi/chat/[provider]
            ChatAPI->>ModelRuntime: 初始化 ModelRuntime
            ModelRuntime->>ModelProvider: chat completion 请求
            ModelProvider-->>ChatService: 流式返回 SSE 响应
            ChatService-->>Client: onMessageHandle 回调
        else call_tool 指令
            AgentLoop->>ToolExecution: 执行工具
            Note over ToolExecution: Builtin / MCP / Plugin
            ToolExecution-->>AgentLoop: 返回工具结果
        else request_human_* 指令
            AgentLoop-->>Client: 请求用户介入
            Client->>AgentLoop: 用户反馈
        else finish 指令
            AgentLoop-->>Client: onFinish 回调
        end
    end

    Note over Client,ModelProvider: 预设任务场景 (不经过 Agent 循环)
    Client->>ChatService: fetchPresetTaskResult
    ChatService->>ChatAPI: 发送预设任务请求
    ChatAPI-->>ChatService: 返回任务结果
    ChatService-->>Client: 通过回调函数返回结果
```
## 主要步骤说明
### 1. 客户端发起请求
用户发送消息后,`sendMessage()`
(`src/store/chat/slices/aiChat/actions/conversationLifecycle.ts`)
创建用户消息和助手消息占位,然后调用 `internal_execAgentRuntime()`。
### 2. Agent Runtime 驱动循环
Agent Runtime 是整个 chat 流程的**核心执行引擎**:
每次聊天交互(从简单问答到复杂多步工具调用)都通过
`AgentRuntime.step()` 循环驱动。

**初始化**
(`src/store/chat/slices/aiChat/actions/streamingExecutor.ts`):

1. 解析 agent 配置(模型、provider、插件列表等)
2. 通过 `createAgentToolsEngine()` 创建工具注册表
3. 创建 `GeneralChatAgent`("大脑",决定下一步做什么)
   和 `AgentRuntime`("引擎",执行指令)
4. 通过 `createAgentExecutors()` 注入自定义执行器
**执行循环**
```ts
while (state.status !== 'done' && state.status !== 'error') {
result = await runtime.step(state, nextContext);
// GeneralChatAgent 决定: call_llm → call_tool → call_llm → finish
}
```
每一步中,`GeneralChatAgent` 根据当前状态返回一条
`AgentInstruction`,`AgentRuntime` 通过对应的 executor 执行:

- `call_llm`:调用 LLM(见下方步骤 3-5)
- `call_tool`:执行工具调用(见下方步骤 6)
- `finish`:结束循环
- `compress_context`:上下文压缩
- `request_human_approve` / `request_human_prompt` /
  `request_human_select`:请求用户介入
### 3. 前端处理 LLM 请求
当 Agent 发出 `call_llm` 指令时,executor 调用 ChatService:

- `src/services/chat/index.ts` 对消息、工具和参数进行预处理
- `src/services/chat/mecha/` 下的模块完成上下文工程
  (context engineering),包括 agent 配置解析、
  模型参数解析、MCP 上下文注入等
- 调用 `getChatCompletion` 准备请求参数
- 使用 `@lobechat/fetch-sse` 包中的 `fetchSSE`
发送请求到后端 API
### 4. 后端处理请求
- `src/app/(backend)/webapi/chat/[provider]/route.ts` 接收请求
- 调用 `initModelRuntimeFromDB` 从数据库读取用户的
provider 配置,初始化 ModelRuntime
- 同时存在 tRPC 路由
`src/server/routers/lambda/aiChat.ts`
用于服务端消息发送和结构化输出等场景
### 5. 模型调用与响应处理
- `ModelRuntime`
  (`packages/model-runtime/src/core/ModelRuntime.ts`)
  调用相应模型提供商的 API,返回流式响应
- 前端通过 `fetchSSE` 和
[fetchEventSource](https://github.com/Azure/fetch-event-source)
处理流式响应
- 对不同类型的事件(文本、工具调用、推理等)进行处理
- 通过回调函数将结果传递回客户端
### 6. 工具调用场景
当 AI 模型在响应中返回 `tool_calls` 字段时,Agent 会发出
`call_tool` 指令。LobeChat 支持三种工具类型:
**Builtin 工具**:内置在应用中的工具,通过本地执行器直接运行。
- 前端通过 `invokeBuiltinTool` 方法在客户端直接执行
- 包括搜索、DALL-E 图像生成等内置功能
**MCP 工具**:通过
[Model Context Protocol](https://modelcontextprotocol.io/)
连接的外部工具。
- 前端通过 `invokeMCPTypePlugin` 方法调用
  `MCPService`(`src/services/mcp.ts`)
- 支持 stdio、HTTP(streamable-http/SSE)
  和云端(Klavis)三种连接方式
- MCP 工具的注册和发现通过 MCP 服务器配置管理
**Plugin 工具**:传统插件体系,通过 API 网关调用。
该体系预期将逐步废弃,由 MCP 工具体系替代。
- 前端通过 `invokeDefaultTypePlugin` 方法调用
- 获取插件设置和清单、创建认证请求头、
发送请求到插件网关
工具执行完成后,结果写入消息并返回给 Agent 循环,
Agent 会再次调用 LLM 基于工具结果生成最终响应。
工具分发逻辑位于
`src/store/chat/slices/plugin/actions/pluginTypes.ts`。
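上述三类工具的分发可以用一个按工具类型的 switch 来示意;下面的映射对应正文提到的 store 方法,但具体接线方式仅为示意性假设:

```typescript
// 按正文描述将工具调用路由到对应处理方法的示意代码。
// 'default' 代表传统插件类型;接线方式仅为示意。
type ToolType = 'builtin' | 'mcp' | 'default';

function resolveToolHandler(type: ToolType): string {
  switch (type) {
    case 'builtin':
      return 'invokeBuiltinTool'; // 在客户端本地执行
    case 'mcp':
      return 'invokeMCPTypePlugin'; // 经由 MCPService
    default:
      return 'invokeDefaultTypePlugin'; // 传统插件网关
  }
}
```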
### 7. 预设任务处理
预设任务是系统预定义的特定功能任务,
通常在用户执行特定操作时触发
(不经过 Agent Runtime 循环,直接调用 LLM)。
这些任务使用 `fetchPresetTaskResult` 方法执行,
该方法与正常聊天流程类似,
但会使用专门设计的提示词链(prompt chain)。
**执行时机**
1. **角色信息自动生成**:当用户创建或编辑角色时触发
- 角色头像生成(通过 `autoPickEmoji` 方法)
- 角色描述生成(通过 `autocompleteAgentDescription` 方法)
- 角色标签生成(通过 `autocompleteAgentTags` 方法)
- 角色标题生成(通过 `autocompleteAgentTitle` 方法)
2. **消息翻译**:用户手动点击翻译按钮时触发
(通过 `translateMessage` 方法)
3. **网页搜索**:当启用搜索但模型不支持工具调用时,
通过 `fetchPresetTaskResult` 实现搜索功能
**实际代码示例**
```ts
// autoPickEmoji(节选;前文部分在源文档中省略)
// ...
    },
    params: merge(
      get().internal_getSystemAgentForMeta(),
      chainPickEmoji(
        [meta.title, meta.description, systemRole]
          .filter(Boolean)
          .join(','),
      ),
    ),
    trace: get().getCurrentTracePayload({
      traceName: TraceNameMap.EmojiPicker,
    }),
  });
};
```
```ts
// translateMessage(节选;前文部分在源文档中省略)
// 1. 检测源语言
chatService.fetchPresetTaskResult({
  onFinish: async (data) => {
    if (data && supportLocales.includes(data)) from = data;
    await updateMessageTranslate(id, {
      content,
      from,
      to: targetLang,
    });
  },
  params: merge(
    translationSetting,
    chainLangDetect(message.content),
  ),
  trace: get().getCurrentTracePayload({
    traceName: TraceNameMap.LanguageDetect,
  }),
});

// 2. 执行翻译
// ...
    }
  },
  onFinish: async () => {
    await updateMessageTranslate(id, {
      content,
      from,
      to: targetLang,
    });
    internal_toggleChatLoading(
      false,
      id,
      n('translateMessage(end)', { id }) as string,
    );
  },
  params: merge(
    translationSetting,
    chainTranslate(message.content, targetLang),
  ),
  trace: get().getCurrentTracePayload({
    traceName: TraceNameMap.Translation,
  }),
});
};
```
### 8. 完成
Agent 发出 `finish` 指令时,循环结束,
调用 `onFinish` 回调,提供完整的响应结果。
## 客户端 vs 服务端执行
Agent Runtime 循环的执行位置取决于场景:
- **客户端循环**(浏览器):常规 1:1 对话、继续生成、
  群组编排决策。循环在浏览器中运行,
  入口为 `internal_execAgentRuntime()`
  (`src/store/chat/slices/aiChat/actions/streamingExecutor.ts`)
- **服务端循环**(队列 / 本地):群聊 supervisor agent、
  子 agent 任务、API/Cron 触发。循环在服务端运行,
  通过 SSE 流式推送事件到客户端,
  入口为 `AgentRuntimeService.executeStep()`
  (`src/server/services/agentRuntime/AgentRuntimeService.ts`),
  tRPC 路由为 `src/server/routers/lambda/aiAgent.ts`
## Model Runtime
Model Runtime(`packages/model-runtime/`)是 LobeChat 中
与 LLM 模型提供商交互的核心抽象层,
负责将不同提供商的 API 适配为统一接口。
**核心职责**
- **统一抽象层**:通过 `LobeRuntimeAI` 接口
  (`packages/model-runtime/src/core/BaseAI.ts`)
  隐藏不同 AI 提供商 API 的差异
- **模型初始化**:通过 provider 映射表
  (`packages/model-runtime/src/runtimeMap.ts`)
  初始化对应的运行时实例
- **能力封装**`chat`(聊天流式请求)、
`models`(模型列表)、`embeddings`(文本嵌入)、
`createImage`(图像生成)、`textToSpeech`(语音合成)、
`generateObject`(结构化输出)
**核心接口**
```ts
// packages/model-runtime/src/core/BaseAI.ts
export interface LobeRuntimeAI {
  baseURL?: string;
  chat?(
    payload: ChatStreamPayload,
    options?: ChatMethodOptions,
  ): Promise<Response>;
  generateObject?(
    payload: GenerateObjectPayload,
    options?: GenerateObjectOptions,
  ): Promise<any>;
  embeddings?(
    payload: EmbeddingsPayload,
    options?: EmbeddingsOptions,
  ): Promise<Embeddings[]>;
  models?(): Promise<any>;
  createImage?: (
    payload: CreateImagePayload,
  ) => Promise<CreateImageResponse>;
  textToSpeech?: (
    payload: TextToSpeechPayload,
    options?: TextToSpeechOptions,
  ) => Promise<any>;
}
```
**适配器架构**:通过 `openaiCompatibleFactory` 和
`anthropicCompatibleFactory` 两种工厂函数,
大多数提供商只需少量配置即可接入。
目前支持超过 40 个模型提供商
OpenAI、Anthropic、Google、Azure、Bedrock、Ollama 等),
各实现位于 `packages/model-runtime/src/providers/`。
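作为示意:这类工厂让一个 provider 只需一小段配置即可接入。下面的选项和字段名均为示意性假设,并非真实的 `openaiCompatibleFactory` 签名:

```typescript
// OpenAI 兼容 provider 工厂的示意代码。
// 选项名为示意性假设,并非真实的工厂选项。
interface FactoryOptions {
  baseURL: string;
}

interface RuntimeLike {
  baseURL: string;
  chatEndpoint(): string;
}

function compatibleFactory({ baseURL }: FactoryOptions): RuntimeLike {
  return {
    baseURL,
    // 所有 OpenAI 兼容的 provider 暴露相同的 chat 路由
    chatEndpoint: () => `${baseURL}/chat/completions`,
  };
}

const openrouter = compatibleFactory({ baseURL: 'https://openrouter.ai/api/v1' });
// openrouter.chatEndpoint() === 'https://openrouter.ai/api/v1/chat/completions'
```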
## Agent Runtime

Agent Runtime(`packages/agent-runtime/`)是
LobeChat 的 Agent 编排引擎。
如上文所述,它是驱动整个 chat 流程的核心执行引擎。

**核心组件**:
- **`AgentRuntime`**
  (`packages/agent-runtime/src/core/runtime.ts`):
  "引擎",执行 Agent 指令循环,
  支持 `call_llm`、`call_tool`、`finish`、
  `compress_context`、`request_human_*` 等指令类型
- **`GeneralChatAgent`**
  (`packages/agent-runtime/src/agents/GeneralChatAgent.ts`):
  "大脑",根据当前状态决定下一步执行什么指令
- **`GroupOrchestrationRuntime`**
  (`packages/agent-runtime/src/groupOrchestration/`):
  多 Agent 编排,支持 speak / broadcast /
  delegate / executeTask 等协作模式
- **`UsageCounter`**:token 使用和费用追踪
- **`InterventionChecker`**:
  安全黑名单,管理 Agent 行为边界

### Semantic Release
We use semantic release to automate version control and release processes. When a PR is merged into the main branch, the system automatically determines whether to publish a new version based on the gitmoji prefix in commit messages:
- Commits with `✨ feat` or `🐛 fix` prefixes will **trigger a new release**
- For minor changes that don't need a release, use prefixes like `💄 style` or `🔨 chore`
### Commitlint
Before committing your code, ensure that your commit messages adhere to our standards.
### Testing
LobeHub has comprehensive unit tests (Vitest) and E2E tests (Cucumber + Playwright), which run automatically via GitHub Actions CI on every PR. Before submitting a PR or requesting a merge, make sure all tests pass.
You can run specific test files locally to verify:
```bash
# Run specific test (never run bun run test — full suite is very slow)
bunx vitest run --silent='passed-only' '[file-path]'
```
For more testing details, see the [Testing Guide](/docs/development/basic/test).
### How to Contribute
1. Fork the project to your account.
2. Create a new branch for development.
3. After completing development, ensure your code passes code style checks and tests.
4. Commit your changes and use appropriate gitmoji to label your commit message.
5. Create a Pull Request to the main branch of the original project.
6. Ensure all GitHub Actions CI checks pass.
7. Await code review and make necessary modifications based on feedback.
Thank you for following these guidelines, as they help us maintain the quality and consistency of the project. We look forward to your contributions!



---
title: How to Develop a New Feature
description: >-
Learn how to implement the Chat Messages feature in LobeHub using Next.js and
TypeScript.
tags:
- LobeHub
- Next.js
- TypeScript
- Chat Feature
- Zustand
---
# How to Develop a New Feature
LobeHub is built on the Next.js framework and uses TypeScript as the primary development language. When developing a new feature, we need to follow a certain development process to ensure the quality and stability of the code. The general process can be divided into the following five steps:
1. Routing: Define routes (`src/app`).
2. Data Structure: Define data structures (`src/types`).
3. Business Logic Implementation: Zustand store (`src/store`).
4. Page Display: Write static components/pages. Create features in:
- `src/features/<feature-name>/` for **shared global features** (used across multiple pages)
- `src/app/<new-page>/features/<feature-name>/` for **page-specific features** (only used in this page)
5. Function Binding: Bind the store with page triggers (`const [state, function] = useNewStore(s => [s.state, s.function])`).
Taking the "Chat Messages" feature as an example, here are the brief steps to implement this feature:
## 1. Define Routes
In the `src/app` directory, we need to define a new route to host the "Chat Messages" page. Generally, we would create a new folder under `src/app`, for example, `chat`, and create a `page.tsx` file within this folder to export a React component as the main body of the page.
```tsx
// src/app/chat/page.tsx
import ChatPage from './features/chat';
export default ChatPage;
```
## 2. Define Data Structure
In the `src/types` directory, we need to define the data structure for "Chat Messages". For example, we create a `chat.ts` file and define the `ChatMessage` type within it:
```ts
// src/types/chat.ts
export type ChatMessage = {
id: string;
content: string;
timestamp: number;
sender: 'user' | 'bot';
};
```
## 3. Create Zustand Store
In the `src/store` directory, we need to create a new Zustand Store to manage the state of "Chat Messages". For example, we create a `chatStore.ts` file and define a Zustand Store within it:
```ts
// src/store/chatStore.ts
import { create } from 'zustand';

import type { ChatMessage } from 'src/types/chat';
type ChatState = {
messages: ChatMessage[];
addMessage: (message: ChatMessage) => void;
};
export const useChatStore = create<ChatState>((set) => ({
messages: [],
addMessage: (message) => set((state) => ({ messages: [...state.messages, message] })),
}));
```
## 4. Create Page and Components
In `src/app/<new-page>/features/<new-feature>.tsx`, we need to create a new page or component to display "Chat Messages". In this file, we can use the Zustand Store created earlier and Ant Design components to build the UI:
```jsx
// src/app/chat/features/ChatPage/index.tsx
// Note: Use src/app/<page>/features/ for page-specific components
import { List, Typography } from 'antd';
import { useChatStore } from 'src/store/chatStore';
const ChatPage = () => {
const messages = useChatStore((state) => state.messages);
return (
<List
dataSource={messages}
renderItem={(message) => (
<List.Item>
<Typography.Text>{message.content}</Typography.Text>
</List.Item>
)}
/>
);
};
export default ChatPage;
```
> **Note on Feature Organization**: LobeHub uses two patterns for organizing features:
>
> - **Global features** (`src/features/`): Shared components like `ChatInput`, `Conversation` used across the app
> - **Page-specific features** (`src/app/<page>/features/`): Components used only within a specific page route
>
> Choose based on reusability. If unsure, start with page-specific and refactor to global if needed elsewhere.
## 5. Function Binding
In a page or component, we need to bind the Zustand Store's state and methods to the UI. In the example above, we have already bound the `messages` state to the `dataSource` property of the list. Now, we also need a method to add new messages. We can define this method in the Zustand Store and then use it in the page or component:
```jsx
import { Button, List, Typography } from 'antd';

import { useChatStore } from 'src/store/chatStore';
const ChatPage = () => {
const messages = useChatStore((state) => state.messages);
const addMessage = useChatStore((state) => state.addMessage);
const handleSend = () => {
addMessage({ id: '1', content: 'Hello, world!', timestamp: Date.now(), sender: 'user' });
};
return (
<>
<List
dataSource={messages}
renderItem={(message) => (
<List.Item>
<Typography.Text>{message.content}</Typography.Text>
</List.Item>
)}
/>
<Button onClick={handleSend}>Send</Button>
</>
);
};
export default ChatPage;
```
The above are the steps to implement the "Chat Messages" feature in LobeHub. In real LobeHub development, the business requirements and scenarios are far more complex than this demo, so adapt the process to your actual situation.



We will use [RFC 021 - Custom Assistant Opening Guidance](https://github.com/lobehub/lobehub/discussions/891) as an example to walk through the complete implementation process.
## 1. Update Schema
LobeHub uses a PostgreSQL database with [Drizzle ORM](https://orm.drizzle.team/) as the data-access layer.
All schemas are located in `packages/database/src/schemas/`. We need to adjust the `agents` table to add two fields corresponding to the configuration items:
```diff
// packages/database/src/schemas/agent.ts
export const agents = pgTable(
'agents',
{
    // ... two new fields for the opening settings are added here
  },
);
```
Note that sometimes we may also need to update the index, but for this feature, we don't have any related performance bottleneck issues, so we don't need to update the index.
### Database Migration
After adjusting the schema, you need to generate and optimize migration files. See the [Database Migration Guide](https://github.com/lobehub/lobe-chat/blob/main/.agents/skills/drizzle/references/db-migrations.md) for detailed steps.
## 2. Update Data Model
Data models are defined in `packages/types/src/`. We don't directly use the types exported from the Drizzle schema (e.g., `typeof agents.$inferInsert`), but instead define independent data models based on frontend requirements.
Update the `LobeAgentConfig` type in `packages/types/src/agent/index.ts`:
```diff
export interface LobeAgentConfig {
  // ...
  /**
* Language model parameters
*/
params: LLMParams;
/**
* Enabled plugins
*/
plugins?: string[];
/**
* Model provider
*/
provider?: string;
/**
* System role
*/
systemRole: string;
/**
* Text-to-speech service
*/
tts: LobeAgentTTSConfig;
// ...
}
```
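For illustration, a partial config using the two new fields might look like this (`OpeningConfig` is a hypothetical trimmed-down shape; the real type lives in `packages/types/src/agent/index.ts`):

```typescript
// Hypothetical trimmed-down shape; not the project's actual type.
interface OpeningConfig {
  openingMessage?: string;
  openingQuestions?: string[];
}

const patch: OpeningConfig = {
  openingMessage: 'Welcome! Ask me anything.',
  openingQuestions: ['How do I deploy?', 'How do I add a model provider?'],
};

console.log(patch.openingQuestions?.length); // 2
```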
## 3. Service / Model Layer Implementation
The project is divided into multiple frontend and backend layers by responsibility:
```plaintext
+-------------------+--------------------------------------+------------------------------------------------------+
| Layer | Location | Responsibility |
+-------------------+--------------------------------------+------------------------------------------------------+
| Client Service    | src/services/                        | Reusable frontend logic, often multiple tRPC calls   |
| WebAPI | src/app/(backend)/webapi/ | REST API endpoints |
| tRPC Router | src/server/routers/ | tRPC entry, validates input, routes to service |
| Server Service | src/server/services/ | Server-side business logic, with DB access |
| Server Module | src/server/modules/ | Server-side modules, no direct DB access |
| Repository | packages/database/src/repositories/ | Complex queries, cross-table operations |
| DB Model | packages/database/src/models/ | Single-table CRUD operations |
+-------------------+--------------------------------------+------------------------------------------------------+
```
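To make the layering concrete, here is a minimal, hypothetical sketch of how a config update flows from a Server Service into a DB Model (class names and the in-memory "table" are illustrative, not the project's actual code):

```typescript
// Illustrative stand-ins for two of the layers in the table above; real classes differ.
class AgentModel {
  // DB Model layer: single-table CRUD, here against an in-memory "table"
  private rows = new Map<string, { openingMessage?: string }>();
  update(id: string, patch: { openingMessage?: string }) {
    this.rows.set(id, { ...(this.rows.get(id) ?? {}), ...patch });
    return this.rows.get(id)!;
  }
}

class AgentService {
  // Server Service layer: business logic, delegates persistence to the model
  constructor(private model: AgentModel = new AgentModel()) {}
  updateConfig(id: string, patch: { openingMessage?: string }) {
    return this.model.update(id, patch);
  }
}

const service = new AgentService();
console.log(service.updateConfig('agent-1', { openingMessage: 'Hi!' }).openingMessage); // Hi!
```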
**Client Service** is frontend code that encapsulates reusable business logic, calling the backend via tRPC client. For example, `src/services/session/index.ts`:
```typescript
export class SessionService {
updateSessionConfig = (id: string, config: PartialDeep<LobeAgentConfig>, signal?: AbortSignal) => {
return lambdaClient.session.updateSessionConfig.mutate({ id, value: config }, { signal });
};
}
```
**tRPC Router** is the backend entry point (`src/server/routers/lambda/`), validates input and calls Server Service for business logic:
```typescript
export const sessionRouter = router({
// ..
updateSessionConfig: sessionProcedure
.input(
z.object({
        // ...
});
```
For this feature, `updateSessionConfig` simply merges config without field-level granularity, so none of the layers need modification.
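The merge semantics described above can be sketched as a plain object spread (an illustration of the behavior, not the actual implementation):

```typescript
// Illustration of the merge behavior; not the actual implementation.
type AgentConfigSketch = { openingMessage?: string; systemRole?: string };

const mergeConfig = (
  prev: AgentConfigSketch,
  patch: Partial<AgentConfigSketch>,
): AgentConfigSketch => ({
  ...prev,
  ...patch,
});

const next = mergeConfig({ systemRole: 'helper' }, { openingMessage: 'Hi!' });
console.log(next.systemRole, next.openingMessage); // helper Hi!
```

Because the whole value object is merged, adding new optional fields requires no changes to the router, service, or model layers.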
## 4. Frontend Implementation
### Data Flow Store Implementation
LobeHub uses [zustand](https://zustand.docs.pmnd.rs/getting-started/introduction) as the global state management framework. For detailed practices on state management, refer to [State Management Best Practices](/docs/development/state-management/state-management-intro).
There are two stores related to the agent: the `src/features/AgentSetting` store and `src/store/agent`.
When adding the new fields to `DEFAULT_AGENT_CONFIG`, you don't actually need to update it, since the `openingQuestions` type is already optional. I'm not updating `openingMessage` here either.
Because we've added two new fields, to facilitate component access in `src/features/AgentSetting/AgentOpening` and for performance optimization, we add related selectors in `src/features/AgentSetting/store/selectors.ts`:
```diff
import { DEFAULT_AGENT_CHAT_CONFIG } from '@/const/settings';
import { LobeAgentChatConfig } from '@/types/agent';
import { Store } from './action';
const chatConfig = (s: Store): LobeAgentChatConfig =>
s.config.chatConfig || DEFAULT_AGENT_CHAT_CONFIG;
+export const DEFAULT_OPENING_QUESTIONS: string[] = [];
export const selectors = {
chatConfig,
  // ...
};
```
We won't add additional actions to update the agent config here, as existing code also directly uses the unified `setAgentConfig`:
```typescript
export const store: StateCreator<Store, [['zustand/devtools', never]]> = (set, get) => ({
  // ...
});
```
#### Update store/agent
In the display component we use `src/store/agent` to get the current agent configuration. Simply add two selectors:
Update `src/store/agent/slices/chat/selectors/agent.ts`:
```diff
// ...
+const openingQuestions = (s: AgentStoreState) =>
+ currentAgentConfig(s).openingQuestions || DEFAULT_OPENING_QUESTIONS;
+const openingMessage = (s: AgentStoreState) => currentAgentConfig(s).openingMessage || '';
export const agentSelectors = {
  // ...
};
```
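The selector pattern above — read a field, fall back to a stable default — can be sketched in isolation (hypothetical minimal state shape; the real selectors live in `src/store/agent`):

```typescript
// Hypothetical minimal state shape for illustration.
type AgentStateSketch = { config: { openingQuestions?: string[] } };

// A stable default avoids returning a new array on every call.
const DEFAULT_OPENING_QUESTIONS: string[] = [];

const openingQuestions = (s: AgentStateSketch): string[] =>
  s.config.openingQuestions ?? DEFAULT_OPENING_QUESTIONS;

console.log(openingQuestions({ config: {} }).length); // 0
console.log(openingQuestions({ config: { openingQuestions: ['Q1'] } })); // [ 'Q1' ]
```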
### i18n Handling
LobeHub is an internationalized project using [react-i18next](https://github.com/i18next/react-i18next). Newly added UI text needs to:
1. Add keys to the corresponding namespace file in `src/locales/default/` (default language is English):
```typescript
// src/locales/default/setting.ts
export default {
// ...
'settingOpening.title': 'Opening Settings',
'settingOpening.openingMessage.title': 'Opening Message',
'settingOpening.openingMessage.placeholder': 'Enter a custom opening message...',
'settingOpening.openingQuestions.title': 'Opening Questions',
'settingOpening.openingQuestions.placeholder': 'Enter a guiding question',
'settingOpening.openingQuestions.empty': 'No opening questions yet',
'settingOpening.openingQuestions.repeat': 'Question already exists',
};
```
2. If a new namespace is added, export it in `src/locales/default/index.ts`
3. For dev preview: manually translate the corresponding JSON files in `locales/zh-CN/` and `locales/en-US/`
4. CI will automatically run `pnpm i18n` to generate translations for other languages
Key naming convention uses flat dot notation: `{feature}.{context}.{action|status}`.
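As an illustrative check (regex and helper are hypothetical, not tooling the project ships), keys following the convention are short dot-separated paths:

```typescript
// Illustrative check of the {feature}.{context}.{action|status} convention.
const keys = [
  'settingOpening.title',
  'settingOpening.openingMessage.title',
  'settingOpening.openingQuestions.repeat',
];

const followsConvention = (key: string): boolean => /^[A-Za-z]+(\.[A-Za-z]+){1,2}$/.test(key);

console.log(keys.every(followsConvention)); // true
```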
### UI Implementation and Action Binding
We're adding a new category of settings. In `src/features/AgentSetting`, various UI components for agent settings are defined. We'll add an `AgentOpening` folder for opening settings components.
- [antd-style](https://ant-design.github.io/antd-style): css-in-js solution
- [@lobehub/ui](https://ui.lobehub.com/): UI component library (includes Flexbox and Center for responsive layouts)
- [@ant-design/icons](https://ant.design/components/icon-cn) and [lucide](https://lucide.dev/icons/): icon libraries
- [react-i18next](https://github.com/i18next/react-i18next) and [lobe-i18n](https://github.com/lobehub/lobe-cli-toolbox/tree/master/packages/lobe-i18n): i18n framework and multi-language automatic translation tool
Taking the subcomponent `OpeningQuestions.tsx` as an example, here's the key logic (style code omitted):
```typescript
// src/features/AgentSetting/AgentOpening/OpeningQuestions.tsx
'use client';

import { DeleteOutlined, PlusOutlined } from '@ant-design/icons';
import { Flexbox, SortableList } from '@lobehub/ui';
import { Button, Empty, Input } from 'antd';
import { memo, useCallback, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';

import { useStore } from '../store';
import { selectors } from '../store/selectors';

interface QuestionItem {
  content: string;
  id: number;
}

const OpeningQuestions = memo(() => {
  const { t } = useTranslation('setting');
  const [questionInput, setQuestionInput] = useState('');
  // ... read openingQuestions / setQuestions from the store via selectors

  const addQuestion = useCallback(() => {
    if (!questionInput.trim()) return;
    setQuestions([...openingQuestions, questionInput.trim()]);
    setQuestionInput('');
  }, [openingQuestions, questionInput, setQuestions]);

  const removeQuestion = useCallback(
    (content: string) => {
      setQuestions(openingQuestions.filter((question) => question !== content));
    },
    [openingQuestions, setQuestions],
  );

  // Handle logic after drag-and-drop sorting
  const handleSortEnd = useCallback(
    (items: QuestionItem[]) => {
      setQuestions(items.map((item) => item.content));
    },
    [setQuestions],
  );

  const items: QuestionItem[] = useMemo(() => {
    return openingQuestions.map((item, index) => ({
      content: item,
      id: index,
    }));
  }, [openingQuestions]);

  const isRepeat = openingQuestions.includes(questionInput.trim());

  // Render Input + SortableList, see component library docs for UI details
  // ...
});

export default OpeningQuestions;
```
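The duplicate check that disables the add button can be sketched in isolation:

```typescript
// The input is trimmed before comparing against existing questions.
const openingQuestions = ['What can you do?'];
const questionInput = '  What can you do?  ';

const isRepeat = openingQuestions.includes(questionInput.trim());
console.log(isRepeat); // true
```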
Key points:
- Read store config via `selectors`
- Update config via the `setAgentConfig` action
- Use `useTranslation('setting')` for i18n text
We also need to display the opening configuration set by the user on the chat page. The corresponding component is in `src/app/[variants]/(main)/chat/(workspace)/@conversation/features/ChatList/WelcomeChatItem/WelcomeMessage.tsx`:
```typescript
const WelcomeMessage = () => {
  const { t } = useTranslation('chat');

  // Get current opening configuration from store/agent
  const openingMessage = useAgentStore(agentSelectors.openingMessage);
  const openingQuestions = useAgentStore(agentSelectors.openingQuestions);

  const meta = useSessionStore(sessionMetaSelectors.currentAgentMeta, isEqual);
  const { isAgentEditable } = useServerConfigStore(featureFlagsSelectors);
  const activeId = useChatStore((s) => s.activeId);

  const agentSystemRoleMsg = t('agentDefaultMessageWithSystemRole', {
    name: meta.title || t('defaultAgent'),
    systemRole: meta.description,
  });

  const agentMsg = t(isAgentEditable ? 'agentDefaultMessage' : 'agentDefaultMessageWithoutEdit', {
    name: meta.title || t('defaultAgent'),
    url: `/chat/settings?session=${activeId}`,
  });

  const message = useMemo(() => {
    // Use user-set message if available
    if (openingMessage) return openingMessage;
    return !!meta.description ? agentSystemRoleMsg : agentMsg;
  }, [openingMessage, agentSystemRoleMsg, agentMsg, meta.description]);

  return openingQuestions.length > 0 ? (
    <Flexbox>
      <ChatItem avatar={meta} message={message} placement="left" />
      {/* Render guiding questions */}
      <OpeningQuestions questions={openingQuestions} />
    </Flexbox>
  ) : (
    <ChatItem avatar={meta} message={message} placement="left" />
  );
};

export default WelcomeMessage;
```
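The message fallback order — user-set opening message, then system-role message, then default — can be sketched as (illustrative only; the real component builds these strings with i18n):

```typescript
// Illustrative fallback order; not the actual component code.
const pickMessage = (
  openingMessage: string,
  agentSystemRoleMsg: string,
  agentMsg: string,
  hasSystemRole: boolean,
): string => {
  if (openingMessage) return openingMessage; // user-set message wins
  return hasSystemRole ? agentSystemRoleMsg : agentMsg;
};

console.log(pickMessage('Hello!', 'roleMsg', 'defaultMsg', true)); // Hello!
console.log(pickMessage('', 'roleMsg', 'defaultMsg', false)); // defaultMsg
```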
## 5. Testing
The project uses Vitest for unit testing. See the [Testing Skill Guide](https://github.com/lobehub/lobe-chat/blob/main/.agents/skills/testing/SKILL.md) for details.
**Running tests:**
```bash
# Run specific test file (never run bun run test — full suite is very slow)
bunx vitest run --silent='passed-only' '[file-path]'
# Database package tests
cd packages/database && bunx vitest run --silent='passed-only' '[file]'
```
**Testing suggestions for new features:**
Since our two new configuration fields are both optional, theoretically tests would pass without updates. However, if you modified default config (e.g., added `openingQuestions` to `DEFAULT_AGENT_CONFIG`), some test snapshots may become stale and need updating.
It's recommended to run related tests locally first to see which fail, then update as needed. For example:
```bash
bunx vitest run --silent='passed-only' 'src/store/agent/slices/chat/selectors/agent.test.ts'
```
If you just want to check whether existing tests pass without running locally, you can also check the GitHub Actions test results directly.
**More testing scenario guides:**
- DB Model testing: `.agents/skills/testing/references/db-model-test.md`
- Zustand Store Action testing: `.agents/skills/testing/references/zustand-store-action-test.md`
- Electron IPC testing: `.agents/skills/testing/references/electron-ipc-test.md`
## Summary
The above is the complete implementation process for the LobeHub opening settings feature, covering the full chain from database schema → data model → Service/Model → Store → i18n → UI → testing. Developers can refer to this document for developing related features.


@@ -14,16 +14,14 @@ tags:
我们将以 [RFC 021 - 自定义助手开场引导](https://github.com/lobehub/lobehub/discussions/891) 为例,阐述完整的实现流程。
## 一、更新 schema
## 一、更新 Schema
lobehub 使用 postgres 数据库,浏览器端本地数据库使用 [pglite](https://pglite.dev/)wasm 版本 postgres项目使用 [drizzle](https://orm.drizzle.team/) ORM 用来操作数据库。
LobeHub 使用 PostgreSQL 数据库,项目使用 [Drizzle ORM](https://orm.drizzle.team/) 来操作数据库。
相比旧方案浏览器端使用 indexDB 来说,浏览器端和 server 端都使用 postgres 好处在于 model 层代码可以完全复用。
schemas 都统一放在 `src/database/schemas`,我们需要调整 `agents` 表增加两个配置项对应的字段:
Schemas 统一放在 `packages/database/src/schemas/` 下,我们需要调整 `agents` 表增加两个配置项对应的字段:
```diff
// src/database/schemas/agent.ts
// packages/database/src/schemas/agent.ts
export const agents = pgTable(
'agents',
{
@@ -52,22 +50,15 @@ export const agents = pgTable(
需要注意的是,有些时候我们可能还需要更新索引,但对于这个需求我们没有相关的性能瓶颈问题,所以不需要更新索引。
调整完 schema 后我们需要运行 `npm run db:generate` 使用 drizzle-kit 自带的数据库迁移能力生成对应的用于迁移到最新 schema 的 sql 代码。执行后会产生四个文件:
### 数据库迁移
- src/database/migrations/meta/\_journal.json保存每次迁移的相关信息
- src/database/migrations/0021\_add\_agent\_opening\_settings.sql此次迁移的 sql 命令
- src/database/client/migrations.jsonpglite 使用的此次迁移的 sql 命令
- src/database/migrations/meta/0021\_snapshot.json当前最新的完整数据库快照
注意脚本默认生成的迁移 sql 文件名不会像 `0021_add_agent_opening_settings.sql` 这样语义清晰,你需要自己手动对它重命名并且更新 `src/database/migrations/meta/_journal.json`。
以前客户端存储使用 indexDB 数据迁移相对麻烦,现在本地端使用 pglite 之后数据库迁移就是一条命令的事,非常简单快捷,你也可以检查生成的迁移 sql 是否有什么优化空间,手动调整。
调整完 schema 后需要生成并优化迁移文件,详细步骤请参阅 [数据库迁移指南](https://github.com/lobehub/lobe-chat/blob/main/.agents/skills/drizzle/references/db-migrations.md)。
## 二、更新数据模型
在 `src/types` 下定义了我们项目中使用到的各种数据模型,我们并没有直接使用 drizzle schema 导出的类型例如 `export type NewAgent = typeof agents.$inferInsert;`,而是根据前端需求和 db schema 定义中对应字段数据类型定义了对应的数据模型。
数据模型定义在 `packages/types/src/` 下,我们并没有直接使用 Drizzle schema 导出的类型例如 `typeof agents.$inferInsert`,而是根据前端需求定义了独立的数据模型。
数据模型定义都放在 `src/types` 文件夹下,更新 `src/types/agent/index.ts` 中 `LobeAgentConfig` 类型:
更新 `packages/types/src/agent/index.ts` 中 `LobeAgentConfig` 类型:
```diff
export interface LobeAgentConfig {
@@ -92,51 +83,42 @@ export interface LobeAgentConfig {
* 语言模型参数
*/
params: LLMParams;
/**
* 启用的插件
*/
plugins?: string[];
/**
* 模型供应商
*/
provider?: string;
/**
* 系统角色
*/
systemRole: string;
/**
* 语音服务
*/
tts: LobeAgentTTSConfig;
// ...
}
```
## 三、Service 实现 / Model 实现
## 三、Service / Model 各层实现
- `model` 层封装对 DB 的可复用操作
- `service` 层实现应用业务逻辑
项目按职责分为前端和后端多层,完整的分层如下:
在 `src` 目录下都有对应的顶层文件夹。
```plaintext
+-------------------+--------------------------------------+------------------------------------------------------+
| Layer | Location | Responsibility |
+-------------------+--------------------------------------+------------------------------------------------------+
| Client Service | src/services/ | 封装前端可复用的业务逻辑,一般涉及多个后端请求(tRPC) |
| WebAPI | src/app/(backend)/webapi/ | REST API 端点 |
| tRPC Router | src/server/routers/ | tRPC 入口,校验输入,路由到 service |
| Server Service | src/server/services/ | 服务端业务逻辑,可访问数据库 |
| Server Module | src/server/modules/ | 服务端模块,不直接访问数据库 |
| Repository | packages/database/src/repositories/ | 封装复杂查询、跨表操作 |
| DB Model | packages/database/src/models/ | 封装单表的 CRUD 操作 |
+-------------------+--------------------------------------+------------------------------------------------------+
```
**Client Service** 是前端代码,封装可复用的业务逻辑,通过 tRPC 客户端调用后端。例如 `src/services/session/index.ts`
```typescript
// src/services/session/index.ts
export class SessionService {
  updateSessionConfig = (id: string, config: PartialDeep<LobeAgentConfig>, signal?: AbortSignal) => {
    return lambdaClient.session.updateSessionConfig.mutate({ id, value: config }, { signal });
  };
}
```
**tRPC Router** 是后端入口(`src/server/routers/lambda/`),校验输入后调用 Server Service 处理业务逻辑:
```typescript
export const sessionRouter = router({
  // ...
updateSessionConfig: sessionProcedure
.input(
z.object({
        // ... id、value 等字段的 zod 校验
      }),
    )
    .mutation(async ({ input }) => {
      // ... merge 新旧 config 并调用 Server Service 更新
    }),
});
```
可以预想到,前端会新增两个输入项,用户修改时调用这个 `updateSessionConfig`。对于本次需求,它只是简单 merge config并没有细粒度到具体字段因此各层都不需要修改。
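为了直观理解这里的 merge 语义,下面给出一个极简示意(`mergeConfig` 为假设的演示函数,并非真实实现;由于新增的两个字段都是顶层字段,浅合并即可说明问题):

```typescript
// 仅为示意:演示“整体 merge config”的语义mergeConfig 为假设的演示函数,非真实实现
type AgentConfigLike = Record<string, unknown>;

const mergeConfig = (prev: AgentConfigLike, patch: AgentConfigLike): AgentConfigLike => ({
  ...prev,
  ...patch,
});

const prev = { systemRole: 'helper', openingMessage: '' };
const next = mergeConfig(prev, {
  openingMessage: 'Hi',
  openingQuestions: ['What can you do?'],
});

console.log(next.openingMessage); // 'Hi'
console.log(next.systemRole); // 'helper',未被 patch 覆盖的字段保持不变
```

正因为是整体 merge新增的顶层可选字段不需要 Router / Service / Model 各层做任何改动。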
## 四、前端实现
### 数据流 Store 实现
LobeHub 使用 [zustand](https://zustand.docs.pmnd.rs/getting-started/introduction) 作为全局状态管理框架,对于状态管理的详细实践介绍,可以查阅 [📘 状态管理最佳实践](/zh/docs/development/state-management/state-management-intro)。
和 agent 相关的 store 有两个:`src/features/AgentSetting/store`(设置界面的局部状态)和 `src/store/agent`(全局 agent 状态)。同时需要在默认配置 `DEFAULT_AGENT_CONFIG``@/const/settings`)中为新增字段补充默认值。
因为我们增加了两个新字段,为了方便在 `src/features/AgentSetting/AgentOpening` 文件夹中组件访问和性能优化,我们在 `src/features/AgentSetting/store/selectors.ts` 增加相关的 selectors
```diff
import { DEFAULT_AGENT_CHAT_CONFIG } from '@/const/settings';
import { LobeAgentChatConfig } from '@/types/agent';
import { Store } from './action';
const chatConfig = (s: Store): LobeAgentChatConfig =>
s.config.chatConfig || DEFAULT_AGENT_CHAT_CONFIG;
+export const DEFAULT_OPENING_QUESTIONS: string[] = [];
export const selectors = {
chatConfig,
  // ... 新增 openingMessage、openingQuestions 等 selectors
};
```
这里我们就不增加额外的 action 用于更新 agent config 了,因为已有的代码也是直接使用统一的 `setAgentConfig`
```typescript
export const store: StateCreator<Store, [['zustand/devtools', never]]> = (set, get) => ({
  // ... setAgentConfig 等现有 actions
});
```
#### 更新 store/agent
展示组件中我们使用 `src/store/agent` 获取当前 agent 配置,简单加两个 selectors更新 `src/store/agent/slices/chat/selectors/agent.ts`
```diff
// ...
+const openingQuestions = (s: AgentStoreState) =>
+ currentAgentConfig(s).openingQuestions || DEFAULT_OPENING_QUESTIONS;
+const openingMessage = (s: AgentStoreState) => currentAgentConfig(s).openingMessage || '';
export const agentSelectors = {
  // ...
+ openingMessage,
+ openingQuestions,
};
```
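上述 selector 的回退逻辑可以抽成纯函数来理解(下面是等价的示意代码,类型为简化版,并非项目真实定义):

```typescript
// 示意:带默认值回退的 selector 纯函数(简化类型,非项目真实代码)
interface AgentConfigLike {
  openingMessage?: string;
  openingQuestions?: string[];
}

const DEFAULT_OPENING_QUESTIONS: string[] = [];

// 未配置时回退到共享的默认空数组
const openingQuestions = (config: AgentConfigLike): string[] =>
  config.openingQuestions || DEFAULT_OPENING_QUESTIONS;

// 未配置时回退到空字符串
const openingMessage = (config: AgentConfigLike): string => config.openingMessage || '';

console.log(openingQuestions({})); // []
console.log(openingMessage({ openingMessage: 'Hello' })); // 'Hello'
```

回退到同一个 `DEFAULT_OPENING_QUESTIONS` 常量引用而不是每次新建 `[]`,可以避免 selector 每次返回新对象、导致订阅组件不必要的重渲染。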
### i18n 处理
LobeHub 是国际化项目,使用 [react-i18next](https://github.com/i18next/react-i18next) 作为 i18n 框架。新增的 UI 文案需要:
1. 在 `src/locales/default/` 对应的 namespace 文件中添加 key默认语言为英文
```typescript
// src/locales/default/setting.ts
export default {
// ...
'settingOpening.title': 'Opening Settings',
'settingOpening.openingMessage.title': 'Opening Message',
'settingOpening.openingMessage.placeholder': 'Enter a custom opening message...',
'settingOpening.openingQuestions.title': 'Opening Questions',
'settingOpening.openingQuestions.placeholder': 'Enter a guiding question',
'settingOpening.openingQuestions.empty': 'No opening questions yet',
'settingOpening.openingQuestions.repeat': 'Question already exists',
};
```
2. 如果新增了 namespace需要在 `src/locales/default/index.ts` 中导出
3. 开发预览时手动翻译 `locales/zh-CN/` 和 `locales/en-US/` 对应的 JSON 文件
4. CI 会自动运行 `pnpm i18n` 生成其他语言的翻译
key 的命名规范为扁平的 dot notation`{feature}.{context}.{action|status}`。
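下面用一个极简示意说明扁平 dot notation key 的形态(简化的查表函数,仅为演示;真实项目中由 react-i18next 的 `t` 提供):

```typescript
// 示意:扁平 dot notation key 的查表(简化版,非 react-i18next 真实实现)
const settingLocale: Record<string, string> = {
  'settingOpening.openingQuestions.repeat': 'Question already exists',
  'settingOpening.openingQuestions.title': 'Opening Questions',
  'settingOpening.title': 'Opening Settings',
};

// 查不到时回退为 key 本身,便于发现缺失的文案
const t = (key: string): string => settingLocale[key] ?? key;

console.log(t('settingOpening.openingQuestions.repeat')); // 'Question already exists'
console.log(t('settingOpening.unknown')); // 'settingOpening.unknown'
```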
### UI 实现和 action 绑定
我们这次要新增一个类别的设置:在 `src/features/AgentSetting` 中定义了 agent 的各种设置 UI 组件,增加一个文件夹 `AgentOpening` 存放开场设置相关的组件。项目使用了:

- [ant-design](https://ant.design/) 和 [antd-style](https://ant-design.github.io/antd-style):基础组件库与 css-in-js 方案
- [@lobehub/ui](https://ui.lobehub.com/)UI 组件库(包含 Flexbox 和 Center 用于响应式布局)
- [@ant-design/icons](https://ant.design/components/icon-cn) 和 [lucide](https://lucide.dev/icons/):图标库
- [react-i18next](https://github.com/i18next/react-i18next) 和 [lobe-i18n](https://github.com/lobehub/lobe-cli-toolbox/tree/master/packages/lobe-i18n)i18n 框架和多语言自动翻译工具
以子组件 `OpeningQuestions.tsx` 为例,展示关键逻辑(省略样式代码):
```typescript
// src/features/AgentSetting/AgentOpening/OpeningQuestions.tsx
'use client';

import { memo, useCallback, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';

import { useStore } from '../store';
import { selectors } from '../store/selectors';

interface QuestionItem {
  content: string;
  id: number;
}

const OpeningQuestions = memo(() => {
  const { t } = useTranslation('setting');
  const [questionInput, setQuestionInput] = useState('');

  // ... 通过 useStore / selectors 读取 openingQuestions定义 setQuestions 写回配置

  const addQuestion = useCallback(() => {
    if (!questionInput.trim()) return;

    setQuestions([...openingQuestions, questionInput.trim()]);
    setQuestionInput('');
  }, [openingQuestions, questionInput, setQuestions]);

  // ... removeQuestion 等其它操作

  // 处理拖拽排序后的逻辑
  const handleSortEnd = useCallback(
    (items: QuestionItem[]) => {
      setQuestions(items.map((item) => item.content));
    },
    [setQuestions],
  );

  const items: QuestionItem[] = useMemo(() => {
    return openingQuestions.map((item, index) => ({
      content: item,
      id: index,
    }));
  }, [openingQuestions]);

  // 不允许添加重复的问题
  const isRepeat = openingQuestions.includes(questionInput.trim());

  // 渲染 Input + SortableList具体 UI 参考组件库文档
  // ...
});

export default OpeningQuestions;
```
关键点:
- 通过 `selectors` 读取 store 中的配置
- 通过 `setAgentConfig` action 更新配置
- 使用 `useTranslation('setting')` 获取 i18n 文案
同时我们需要将用户设置的开场配置展示出来,位置在 chat 页面,对应组件为 `src/app/[variants]/(main)/chat/(workspace)/@conversation/features/ChatList/WelcomeChatItem/WelcomeMessage.tsx`
```typescript
const WelcomeMessage = () => {
  const { t } = useTranslation('chat');

  // 从 store/agent 获取当前开场配置
  const openingMessage = useAgentStore(agentSelectors.openingMessage);
  const openingQuestions = useAgentStore(agentSelectors.openingQuestions);

  const meta = useSessionStore(sessionMetaSelectors.currentAgentMeta, isEqual);
  const { isAgentEditable } = useServerConfigStore(featureFlagsSelectors);
  const activeId = useChatStore((s) => s.activeId);

  const agentSystemRoleMsg = t('agentDefaultMessageWithSystemRole', {
    name: meta.title || t('defaultAgent'),
    systemRole: meta.description,
  });

  const agentMsg = t(isAgentEditable ? 'agentDefaultMessage' : 'agentDefaultMessageWithoutEdit', {
    name: meta.title || t('defaultAgent'),
    url: `/chat/settings?session=${activeId}`,
  });

  const message = useMemo(() => {
    // 用户设置了开场消息就优先使用
    if (openingMessage) return openingMessage;
    return !!meta.description ? agentSystemRoleMsg : agentMsg;
  }, [openingMessage, agentSystemRoleMsg, agentMsg, meta.description]);

  return openingQuestions.length > 0 ? (
    <Flexbox>
      <ChatItem avatar={meta} message={message} placement="left" />
      {/* 渲染引导性问题 */}
      <OpeningQuestions questions={openingQuestions} />
    </Flexbox>
  ) : (
    <ChatItem avatar={meta} message={message} placement="left" />
  );
};

export default WelcomeMessage;
```
## 五、测试
项目使用 Vitest 进行单元测试,相关指南详见 [测试技能文档](https://github.com/lobehub/lobe-chat/blob/main/.agents/skills/testing/SKILL.md)。
**运行测试:**
```bash
# 运行指定测试文件(不要运行 bun run test全量测试耗时很长
bunx vitest run --silent='passed-only' '[file-path]'
# database 包的测试
cd packages/database && bunx vitest run --silent='passed-only' '[file]'
```
**添加新功能的测试建议:**
由于我们目前两个新的配置字段都是可选的,理论上不更新测试也能跑通。但如果修改了默认配置(如 `DEFAULT_AGENT_CONFIG` 增加了 `openingQuestions` 字段),可能导致一些测试快照不匹配,需要更新。
建议先本地跑下相关测试,看哪些失败了再针对性更新。例如:
```bash
bunx vitest run --silent='passed-only' 'src/store/agent/slices/chat/selectors/agent.test.ts'
```
如果只是想确认现有测试是否通过而不想本地跑,也可以直接查看 GitHub Actions 的测试结果。
**更多测试场景指南:**
- DB Model 测试:`.agents/skills/testing/references/db-model-test.md`
- Zustand Store Action 测试:`.agents/skills/testing/references/zustand-store-action-test.md`
- Electron IPC 测试:`.agents/skills/testing/references/electron-ipc-test.md`
## 总结
以上就是 LobeHub 开场设置功能的完整实现流程,涵盖了从数据库 schema → 数据模型 → Service/Model → Store → i18n → UI → 测试的全链路。开发者可以参考本文档进行相关功能的开发。

# Directory Structure
LobeHub uses a Monorepo architecture (`@lobechat/` namespace).
The top-level directory structure is as follows:
```bash
lobe-chat/
├── apps/
│ └── desktop/ # Electron desktop app
├── packages/ # Shared packages (@lobechat/*)
│ ├── agent-runtime/ # Agent runtime
│ ├── database/ # Database schemas, models, repositories
│ ├── model-runtime/ # Model runtime (AI provider adapters)
│ ├── builtin-tool-*/ # Built-in tool packages
│ ├── business/ # Cloud business slot packages
│ ├── context-engine/ # Context engine
│ ├── conversation-flow/ # Conversation flow
│ ├── editor-runtime/ # Editor runtime
│ ├── file-loaders/ # File loaders
│ ├── prompts/ # Prompt templates
│ └── ... # More shared packages
├── src/ # Main app source code (see below)
├── locales/ # i18n translation files (zh-CN, en-US, etc.)
├── e2e/ # E2E tests (Cucumber + Playwright)
└── docs/ # Documentation
```
## src Directory
```bash
src/
├── app/ # Next.js App Router (route groups and API routes)
├── business/ # Cloud-only business logic (client/server)
├── components/ # Reusable UI components
├── config/ # App configuration (client and server env vars)
├── const/ # Application constants and enums
├── envs/ # Environment variable definitions and validation
├── features/ # Business feature modules (Agent settings, plugin dev, etc.)
├── helpers/ # Utility helper functions
├── hooks/ # Reusable custom Hooks
├── layout/ # Global layout components (AuthProvider, GlobalProvider)
├── libs/ # Third-party integrations (better-auth, OIDC, tRPC, MCP, etc.)
├── locales/ # i18n default language files (English)
├── server/ # Server-side modules
│ ├── featureFlags/ # Feature flags
│ ├── globalConfig/ # Global configuration
│ ├── modules/ # Server modules (no DB access)
│ ├── routers/ # tRPC routers (async, lambda, mobile, tools)
│ └── services/ # Server services (with DB access)
├── services/ # Client-side service interfaces
├── store/ # Zustand state management
├── styles/ # Global styles and CSS-in-JS configurations
├── tools/ # Built-in tools (artifacts, inspectors, etc.)
├── types/ # TypeScript type definitions
├── utils/ # General utility functions
├── auth.ts # Authentication configuration (Better Auth)
├── instrumentation.node.ts # Node.js-specific instrumentation
├── instrumentation.ts # App monitoring and telemetry setup
└── proxy.ts # Next.js middleware proxy configuration
```
## app Directory
The `app` directory follows Next.js App Router conventions,
using [Route Groups](https://nextjs.org/docs/app/building-your-application/routing/route-groups)
to organize backend services, platform variants,
and application routes:
```bash
app/
├── (backend)/ # Backend API routes and services
│   ├── api/ # REST API endpoints (auth, webhooks)
│   ├── f/ # File service
│   ├── market/ # Market service
│   ├── middleware/ # Request middleware
│   ├── oidc/ # OpenID Connect routes
│   ├── trpc/ # tRPC API endpoints
│   │   ├── async/ # Async tRPC routes
│   │   ├── desktop/ # Desktop tRPC routes
│   │   ├── lambda/ # Lambda tRPC routes
│   │   └── tools/ # Tools tRPC routes
│   └── webapi/ # Web API endpoints (chat, models, tts, etc.)
├── [variants]/ # Platform and device variants
│   ├── (auth)/ # Auth pages (login, signup)
│   ├── (desktop)/ # Desktop-specific routes
│   ├── (main)/ # Main application routes (SPA)
│   │   ├── _layout/ # Layout components
│   │   ├── agent/ # Agent pages
│   │   ├── home/ # Home page
│   │   ├── image/ # Image generation
│   │   ├── memory/ # Memory management
│   │   ├── resource/ # Resource management
│   │   └── settings/ # Application settings
│   ├── (mobile)/ # Mobile-specific routes
│   ├── @modal/ # Parallel modal routes
│   ├── onboarding/ # Onboarding flow
│   ├── router/ # SPA router configuration
│   └── share/ # Share pages
├── manifest.ts # PWA manifest
├── robots.tsx # Robots.txt generation
├── sitemap.tsx # Sitemap generation
└── sw.ts # Service Worker
```
### Architecture Explanation
**Route Groups:**
- `(backend)` — All server-side API routes, middleware, and backend services
- `[variants]` — Dynamic route group for platform variants
- `(main)` — Main SPA using React Router DOM
**Platform Organization:**
- Supports multiple platforms (web, desktop, mobile) through route organization
- Desktop-specific routes under `(desktop)/`
- Mobile-specific routes under `(mobile)/`
- Shared layout components in `_layout/` directories
**API Architecture:**
- REST APIs: `(backend)/api/` and `(backend)/webapi/`
- tRPC endpoints (`src/server/routers/`): grouped by runtime
  - `lambda/` — Main business routers (agent, session, message, topic, file, knowledge, settings, etc.)
  - `async/` — Long-running async operations (file processing, image generation, RAG evaluation)
- `tools/` — Tool invocations (search, MCP, market, klavis)
- `mobile/` — Mobile-specific routers
This architecture provides clear separation of concerns while maintaining flexibility for different deployment targets and runtime environments.
**Data Flow:**
Using a typical user action (e.g., updating Agent config) as an example, here's how data flows through each layer:
```plaintext
React UI (src/features/, src/app/)
  │ User interaction triggers event
  ▼
Store Actions (src/store/)
  │ Zustand action updates local state, calls service
  ▼
Client Service (src/services/)
  │ Wraps tRPC client call, prepares request params
  ▼
tRPC Router (src/server/routers/lambda/)
  │ Validates input (zod), routes to service
  ▼
Server Service (src/server/services/)
  │ Executes business logic, calls DB model
  ▼
DB Model (packages/database/src/models/)
  │ Wraps Drizzle ORM queries
  ▼
PostgreSQL
```
For data reads, the flow is reversed: UI consumes data via store selectors, and stores fetch from the backend via SWR + tRPC queries.
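As a rough illustration, the write path can be modeled as plain functions calling each other layer by layer (every name below is a hypothetical stand-in, not the real implementation):

```typescript
// Illustrative sketch of the layered write path; all names are hypothetical stand-ins
type Config = { openingMessage?: string };

// DB Model layer: wraps the database query
const dbModelUpdate = (id: string, value: Config) => ({ id, ...value });

// Server Service layer: business logic, calls the DB model
const serverService = (id: string, value: Config) => dbModelUpdate(id, value);

// tRPC Router layer: validates input, routes to the server service
const routerHandler = (input: { id: string; value: Config }) => {
  if (!input.id) throw new Error('id is required');
  return serverService(input.id, input.value);
};

// Client Service layer: prepares params and calls the backend
const clientService = (id: string, value: Config) => routerHandler({ id, value });

console.log(clientService('agent-1', { openingMessage: 'Hi' }));
// { id: 'agent-1', openingMessage: 'Hi' }
```

Each layer only talks to the one directly below it, which is what keeps validation, business logic, and persistence independently testable.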
### Routing Architecture
The project uses hybrid routing:
Next.js App Router handles static pages (e.g., auth pages),
while React Router DOM powers the main SPA.
**Entry**: `src/app/[variants]/page.tsx`
dispatches to desktop or mobile router based on device type.
**Key configuration files:**
- Desktop router: `src/app/[variants]/router/desktopRouter.config.tsx`
- Mobile router: `src/app/[variants]/(mobile)/router/mobileRouter.config.tsx`
- Router utilities: `src/utils/router.tsx`
**Desktop SPA Routes (React Router DOM):**
```bash
/ # Home
/agent/:aid # Agent conversation
/agent/:aid/profile # Agent profile
/agent/:aid/cron/:cronId # Cron task detail
/group/:gid # Group conversation
/group/:gid/profile # Group profile
/community # Community discovery (agent, model, provider, mcp)
/community/agent/:slug # Agent detail page
/community/model/:slug # Model detail page
/community/provider/:slug # Provider detail page
/community/mcp/:slug # MCP detail page
/resource # Resource management
/resource/library/:id # Knowledge base detail
/settings/:tab # Settings (profile, provider, etc.)
/settings/provider/:id # Model provider configuration
/memory # Memory management
/image # Image generation
/page/:id # Page detail
/share/t/:id # Share topic
/onboarding # Onboarding flow
```
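To make the `:param` segments above concrete, here is a tiny sketch of how a pattern such as `/agent/:aid` yields route params (an illustrative matcher, not React Router's actual implementation):

```typescript
// Illustrative matcher: maps a pattern like /agent/:aid onto a concrete path
// (not React Router's real code; for intuition only)
const matchPath = (pattern: string, path: string): Record<string, string> | null => {
  const patternParts = pattern.split('/');
  const pathParts = path.split('/');
  if (patternParts.length !== pathParts.length) return null;

  const params: Record<string, string> = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(':')) {
      // dynamic segment: capture as a param
      params[patternParts[i].slice(1)] = pathParts[i];
    } else if (patternParts[i] !== pathParts[i]) {
      // static segment mismatch: no match
      return null;
    }
  }
  return params;
};

console.log(matchPath('/agent/:aid/profile', '/agent/a1/profile')); // { aid: 'a1' }
console.log(matchPath('/agent/:aid', '/community')); // null
```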
**Mobile SPA Routes (React Router DOM):**
```bash
/ # Home
/agent/:aid # Agent conversation
/community # Community discovery
/settings # Settings home
/settings/:tab # Settings detail
/settings/provider/:id # Model provider configuration
/me # Personal center
/me/profile # Profile
/me/settings # Personal settings
/share/t/:id # Share topic
/onboarding # Onboarding flow
```

# 目录架构
LobeHub 采用 Monorepo 架构(`@lobechat/` 命名空间),
顶层目录结构如下:
```bash
lobe-chat/
├── apps/
│ └── desktop/ # Electron 桌面应用
├── packages/ # 共享包(@lobechat/*
│ ├── agent-runtime/ # Agent 运行时
│ ├── database/ # 数据库 schemas、models、repositories
│ ├── model-runtime/ # 模型运行时(各 AI 提供商适配)
│ ├── builtin-tool-*/ # 内置工具包
│ ├── business/ # Cloud 业务插槽 packages
│ ├── context-engine/ # 上下文引擎
│ ├── conversation-flow/ # 会话流程
│ ├── editor-runtime/ # 编辑器运行时
│ ├── file-loaders/ # 文件加载器
│ ├── prompts/ # Prompt 模板
│ └── ... # 更多共享包
├── src/ # 主应用源码(见下方详细说明)
├── locales/ # i18n 翻译文件zh-CN、en-US 等)
├── e2e/ # E2E 测试Cucumber + Playwright
└── docs/ # 文档
```
## src 目录
```bash
src/
├── app/ # Next.js App Router路由组和 API 路由)
├── business/ # Cloud 版专用业务逻辑(客户端/服务端)
├── components/ # 可复用的 UI 组件
├── config/ # 应用配置(客户端与服务端环境变量)
├── const/ # 应用常量和枚举
├── envs/ # 环境变量定义和校验
├── features/ # 业务功能模块Agent 设置、插件开发弹窗等)
├── helpers/ # 工具辅助函数
├── hooks/ # 全应用复用的自定义 Hooks
├── layout/ # 全局布局组件AuthProvider、GlobalProvider
├── libs/ # 第三方集成better-auth、OIDC、tRPC、MCP 等)
├── locales/ # 国际化默认语言文件(英文)
├── server/ # 服务端模块
│ ├── featureFlags/ # Feature Flags
│ ├── globalConfig/ # 全局配置
│ ├── modules/ # 服务端模块(不访问数据库)
│ ├── routers/ # tRPC 路由async、lambda、mobile、tools
│ └── services/ # 服务端服务(可访问数据库)
├── services/ # 客户端服务接口
├── store/ # zustand 状态管理
├── styles/ # 全局样式和 CSS-in-JS 配置
├── tools/ # 内置工具artifacts、inspectors 等)
├── types/ # TypeScript 类型定义
├── utils/ # 通用工具函数
├── auth.ts # 认证配置Better Auth
├── instrumentation.node.ts # Node.js 专用的 instrumentation
├── instrumentation.ts # 应用监控和遥测设置
└── proxy.ts # Next.js 中间件代理配置
```
## app 目录
`app` 目录遵循 Next.js App Router 约定,
使用[路由组](https://nextjs.org/docs/app/building-your-application/routing/route-groups)
来组织后端服务、平台变体和应用路由:
```bash
app/
├── (backend)/ # 后端 API 路由和服务
│   ├── api/ # REST API 端点auth、webhooks
│   ├── f/ # 文件服务
│   ├── market/ # 市场服务
│   ├── middleware/ # 请求中间件
│   ├── oidc/ # OpenID Connect 路由
│   ├── trpc/ # tRPC API 端点
│   │   ├── async/ # 异步 tRPC 路由
│   │   ├── desktop/ # 桌面端 tRPC 路由
│   │   ├── lambda/ # Lambda tRPC 路由
│   │   └── tools/ # 工具 tRPC 路由
│   └── webapi/ # Web API 端点chat、models、tts 等)
├── [variants]/ # 平台和设备变体
│   ├── (auth)/ # 身份验证页面login、signup
│   ├── (desktop)/ # 桌面端专用路由
│   ├── (main)/ # 主应用路由SPA
│   │   ├── _layout/ # 布局组件
│   │   ├── agent/ # Agent 页面
│   │   ├── home/ # 首页
│   │   ├── image/ # 图像生成
│   │   ├── memory/ # 记忆管理
│   │   ├── resource/ # 资源管理
│   │   └── settings/ # 应用设置
│   ├── (mobile)/ # 移动端专用路由
│   ├── @modal/ # 并行模态框路由
│   ├── onboarding/ # 新用户引导
│   ├── router/ # SPA 路由配置
│   └── share/ # 分享页面
├── manifest.ts # PWA 清单
├── robots.tsx # Robots.txt 生成
├── sitemap.tsx # 站点地图生成
└── sw.ts # Service Worker
```

### 架构说明
**路由组:**
- `(backend)` — 所有服务端 API 路由、中间件和后端服务
- `[variants]` — 处理不同平台变体的动态路由组
- `(main)` — 主应用 SPA使用 React Router DOM
**平台组织:**
- 通过路由组织支持多平台Web、桌面端、移动端
- 桌面端专用路由在 `(desktop)/` 下
- 移动端专用路由在 `(mobile)/` 下
- 共享布局组件在 `_layout/` 目录中
**API 架构:**
- REST API`(backend)/api/` 和 `(backend)/webapi/`
- tRPC 端点(`src/server/routers/`):按运行时分组
  - `lambda/` — 主要业务路由agent、session、message、topic、file、knowledge、settings 等)
  - `async/` — 耗时异步操作文件处理、图像生成、RAG 评估)
  - `tools/` — 工具调用search、MCP、market、klavis
- `mobile/` — 移动端专用路由
这种架构在保持不同部署目标和运行时环境灵活性的同时,提供了清晰的关注点分离。
**数据流:**
以一个典型的用户操作(如更新 Agent 配置)为例,数据在各层之间的流转:
```plaintext
React UI (src/features/, src/app/)
  │ 用户交互触发事件
  ▼
Store Actions (src/store/)
  │ zustand action 更新本地状态,调用 service
  ▼
Client Service (src/services/)
  │ 封装 tRPC 客户端调用,处理请求参数
  ▼
tRPC Router (src/server/routers/lambda/)
  │ 校验输入zod路由到对应 service
  ▼
Server Service (src/server/services/)
  │ 执行业务逻辑,调用 DB model
  ▼
DB Model (packages/database/src/models/)
  │ 封装 Drizzle ORM 查询
  ▼
PostgreSQL
```
读取数据的流程方向相反UI 通过 store selector 消费数据store 通过 SWR + tRPC query 从后端拉取。
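作为粗略示意,写入链路可以用逐层调用的纯函数来建模(以下所有名称均为假设的演示代码,并非真实实现):

```typescript
// 分层写入链路示意;所有名称均为假设的演示代码,非真实实现
type Config = { openingMessage?: string };

// DB Model 层:封装数据库查询
const dbModelUpdate = (id: string, value: Config) => ({ id, ...value });

// Server Service 层:业务逻辑,调用 DB Model
const serverService = (id: string, value: Config) => dbModelUpdate(id, value);

// tRPC Router 层:校验输入,路由到 Server Service
const routerHandler = (input: { id: string; value: Config }) => {
  if (!input.id) throw new Error('id is required');
  return serverService(input.id, input.value);
};

// Client Service 层:组装参数,调用后端
const clientService = (id: string, value: Config) => routerHandler({ id, value });

console.log(clientService('agent-1', { openingMessage: 'Hi' }));
// { id: 'agent-1', openingMessage: 'Hi' }
```

每一层只与它的下一层交互,这也是校验、业务逻辑与持久化可以各自独立测试的原因。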
### 路由架构
项目采用混合路由:
Next.js App Router 处理静态页面(如认证页),
React Router DOM 承载主应用 SPA。
**入口**`src/app/[variants]/page.tsx`
根据设备类型分发到桌面端或移动端路由。
**关键配置文件:**
- 桌面端路由:`src/app/[variants]/router/desktopRouter.config.tsx`
- 移动端路由:`src/app/[variants]/(mobile)/router/mobileRouter.config.tsx`
- 路由工具:`src/utils/router.tsx`
**桌面端 SPA 路由React Router DOM**
```bash
/ # 首页
/agent/:aid # Agent 会话
/agent/:aid/profile # Agent 详情
/agent/:aid/cron/:cronId # 定时任务详情
/group/:gid # 群组会话
/group/:gid/profile # 群组详情
/community # 社区发现agent、model、provider、mcp
/community/agent/:slug # Agent 详情页
/community/model/:slug # 模型详情页
/community/provider/:slug # 提供商详情页
/community/mcp/:slug # MCP 详情页
/resource # 资源管理
/resource/library/:id # 知识库详情
/settings/:tab # 设置profile、provider 等)
/settings/provider/:id # 模型提供商配置
/memory # 记忆管理
/image # 图像生成
/page/:id # 页面详情
/share/t/:id # 分享话题
/onboarding # 新用户引导
```
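为了让上面的 `:param` 动态段更直观,这里给出一个极简匹配器示意(仅为演示,并非 React Router 的真实实现):

```typescript
// 示意匹配器:演示 /agent/:aid 这类 pattern 如何得到路由参数
// (非 React Router 真实代码,仅帮助理解)
const matchPath = (pattern: string, path: string): Record<string, string> | null => {
  const patternParts = pattern.split('/');
  const pathParts = path.split('/');
  if (patternParts.length !== pathParts.length) return null;

  const params: Record<string, string> = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(':')) {
      // 动态段:捕获为参数
      params[patternParts[i].slice(1)] = pathParts[i];
    } else if (patternParts[i] !== pathParts[i]) {
      // 静态段不一致:不匹配
      return null;
    }
  }
  return params;
};

console.log(matchPath('/agent/:aid/profile', '/agent/a1/profile')); // { aid: 'a1' }
console.log(matchPath('/agent/:aid', '/community')); // null
```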
**移动端 SPA 路由React Router DOM**
```bash
/ # 首页
/agent/:aid # Agent 会话
/community # 社区发现
/settings # 设置首页
/settings/:tab # 设置详情
/settings/provider/:id # 模型提供商配置
/me # 个人中心
/me/profile # 个人资料
/me/settings # 个人设置
/share/t/:id # 分享话题
/onboarding # 新用户引导
```

The design and development of LobeHub would not have been possible without the excellent projects in the community and ecosystem.
6. **Next.js Documentation**: Our project is built on Next.js, and you can refer to the [Next.js Documentation](https://nextjs.org/docs) for more information about Next.js.
7. **FlowGPT**: FlowGPT is currently the world's largest Prompt community, and some of the agents in LobeHub come from active authors in FlowGPT. You can visit [FlowGPT](https://flowgpt.com/) to learn more about it.
## LobeHub Official Ecosystem
- [🍭 LobeUI](https://github.com/lobehub/lobe-ui) (`@lobehub/ui`): LobeHub UI component library
- [🥨 LobeIcons](https://github.com/lobehub/lobe-icons) (`@lobehub/icons`): AI / LLM brand SVG icon library
- [📊 LobeCharts](https://github.com/lobehub/lobe-charts) (`@lobehub/charts`): Chart component library
- [✒️ LobeEditor](https://github.com/lobehub/lobe-editor) (`@lobehub/editor`): Editor components
- [🎤 LobeTTS](https://github.com/lobehub/lobe-tts) (`@lobehub/tts`): TTS / STT voice processing library
- [📐 LobeLint](https://github.com/lobehub/lobe-lint) (`@lobehub/lint`): ESLint / Prettier / Commitlint config presets
- [🌐 Lobe i18n](https://github.com/lobehub/lobe-cli-toolbox/tree/master/packages/lobe-i18n): AI-powered i18n auto-translation CLI tool
- [🔌 MCP Mark](https://mcpmark.ai/): MCP tool discovery and evaluation platform
## LobeHub Community & Platforms
- [🤖 Agent Market](https://lobehub.com/agent): Discover and share AI Agents
- [🔌 MCP Market](https://lobehub.com/mcp): Discover and share MCP tools
- [🎬 YouTube](https://www.youtube.com/@lobehub): Official video tutorials and demos
- [🐦 X (Twitter)](https://x.com/lobehub): Project updates and announcements
- [💬 Discord](https://discord.com/invite/AYFPHvv2jT): Community discussion and support
We will continue to update and supplement this list to provide developers with more reference resources.

LobeHub 的设计和开发离不开社区和生态中的优秀项目。
6. **Next.js 文档**:我们的项目是基于 Next.js 构建的,你可以查看 [Next.js 文档](https://nextjs.org/docs) 来了解更多关于 Next.js 的信息。
7. **FlowGPT**FlowGPT 是目前全球最大的 Prompt 社区LobeHub 中的一些 Agent 来自 FlowGPT 的活跃作者。你可以访问 [FlowGPT](https://flowgpt.com/) 来了解更多关于它的信息。
## LobeHub 官方生态
- [🍭 LobeUI](https://github.com/lobehub/lobe-ui) (`@lobehub/ui`)LobeHub UI 组件库
- [🥨 LobeIcons](https://github.com/lobehub/lobe-icons) (`@lobehub/icons`)AI / LLM 品牌 SVG 图标库
- [📊 LobeCharts](https://github.com/lobehub/lobe-charts) (`@lobehub/charts`):图表组件库
- [✒️ LobeEditor](https://github.com/lobehub/lobe-editor) (`@lobehub/editor`):编辑器组件
- [🎤 LobeTTS](https://github.com/lobehub/lobe-tts) (`@lobehub/tts`)TTS / STT 语音处理库
- [📐 LobeLint](https://github.com/lobehub/lobe-lint) (`@lobehub/lint`)ESLint / Prettier / Commitlint 等配置预设
- [🌐 Lobe i18n](https://github.com/lobehub/lobe-cli-toolbox/tree/master/packages/lobe-i18n)AI 驱动的 i18n 自动翻译 CLI 工具
- [🔌 MCP Mark](https://mcpmark.ai/)MCP 工具发现与评测平台
## LobeHub 社区与平台
- [🤖 Agent 市场](https://lobehub.com/zh/agent):发现和分享 AI Agent
- [🔌 MCP 市场](https://lobehub.com/zh/mcp):发现和分享 MCP 工具
- [🎬 YouTube](https://www.youtube.com/@lobehub):官方视频教程与演示
- [🐦 X (Twitter)](https://x.com/lobehub):项目动态与公告
- [💬 Discord](https://discord.com/invite/AYFPHvv2jT):社区讨论与技术支持
我们会持续更新和补充这个列表,为开发者提供更多的参考资源。

View File

@@ -19,78 +19,63 @@ Welcome to the LobeHub Technical Development Getting Started Guide. LobeHub is a
The core technology stack of LobeHub is as follows:
- **Framework**: We chose [Next.js](https://nextjs.org/), a powerful React framework that provides key features such as server-side rendering, routing framework, and Router Handler.
- **Component Library**: We use [Ant Design (antd)](https://ant.design/) as the basic component library, along with [lobe-ui](https://github.com/lobehub/lobe-ui) as our business component library.
- **State Management**: We selected [zustand](https://github.com/pmndrs/zustand), a lightweight and easy-to-use state management library.
- **Network Requests**: We use [swr](https://swr.vercel.app/), a React Hooks library for data fetching.
- **Routing**: For routing management, we directly use the solution provided by [Next.js](https://nextjs.org/).
- **Internationalization**: We use [i18next](https://www.i18next.com/) to support multiple languages in the application.
- **Styling**: We use [antd-style](https://github.com/ant-design/antd-style), a CSS-in-JS library that complements Ant Design.
- **Unit Testing**: We use [vitest](https://github.com/vitest-dev/vitest) for unit testing.
- **Framework**: [Next.js](https://nextjs.org/) 16 + [React](https://react.dev/) 19, providing server-side rendering, Router Handler, and other key features.
- **Component Library**: [Ant Design (antd)](https://ant.design/) as the base component library, [@lobehub/ui](https://github.com/lobehub/lobe-ui) as the business component library.
- **State Management**: [zustand](https://github.com/pmndrs/zustand), a lightweight and easy-to-use state management library.
- **Data Fetching**: [SWR](https://swr.vercel.app/) for client-side data fetching.
- **Routing**: Hybrid routing architecture — [Next.js App Router](https://nextjs.org/) for static pages (e.g., auth pages), [React Router DOM](https://reactrouter.com/) for the main SPA.
- **API**: [tRPC](https://trpc.io/) for end-to-end type-safe API communication.
- **Database**: [Drizzle ORM](https://orm.drizzle.team/) + PostgreSQL.
- **Internationalization**: [react-i18next](https://react.i18next.com/) for multilingual support.
- **Styling**: [antd-style](https://github.com/ant-design/antd-style), a CSS-in-JS library that complements Ant Design.
- **Unit Testing**: [Vitest](https://github.com/vitest-dev/vitest) for unit testing.
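The tRPC bullet carries the key architectural idea: "end-to-end type-safe" means the client derives its request and response types from the server's router definition, so a server-side change surfaces as a compile-time error on the client. A minimal dependency-free sketch of that idea (the `Routers` map and `invoke` helper here are hypothetical illustrations, not the actual tRPC API):

```typescript
// A route map shared between server and client. In real tRPC this is
// inferred from the router definition rather than written by hand.
interface Routers {
  'message.list': { input: { sessionId: string }; output: string[] };
}

// The client helper is generic over the route key, so both the input
// and the resolved output are fully typed.
async function invoke<K extends keyof Routers>(
  route: K,
  input: Routers[K]['input'],
): Promise<Routers[K]['output']> {
  // A real client would POST to /trpc/<route>; here we stub the server.
  if (route === 'message.list') return ['hello'] as Routers[K]['output'];
  throw new Error(`unknown route: ${String(route)}`);
}

invoke('message.list', { sessionId: 'abc' }).then((msgs) => {
  // msgs is typed as string[] without any manual annotation
  console.log(msgs); // [ 'hello' ]
});
```

Renaming a route or changing an `input` shape in `Routers` immediately breaks every call site, which is the property the tRPC layer provides across the client/server boundary.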
## Folder Directory Structure
The folder directory structure of LobeHub is as follows:
LobeHub uses a Monorepo architecture (`@lobechat/` namespace). The top-level directory structure is as follows:
```bash
src
├── app # Next.js App Router implementation with route groups and API routes
├── business # Business logic modules (client and server)
├── components # Reusable UI components
├── config # Application configuration files, including client and server environment variables
├── const # Application constants and enums
├── envs # Environment variable definitions and validation (analytics, auth, llm, etc.)
├── features # Business-related feature modules, such as Agent settings, plugin development pop-ups, etc.
├── helpers # Utility helpers for tool engineering, placeholder parsing, etc.
├── hooks # Custom utility Hooks reusable across the application
├── layout # Application layout components, such as navigation bars, sidebars, etc.
├── libs # Third-party integrations (analytics, OIDC, etc.)
├── locales # Language files for internationalization
├── server # Server-side modules and services
├── services # Encapsulated backend service interfaces, such as HTTP requests
├── store # Zustand store for state management
├── styles # Global styles and CSS-in-JS configurations
├── tools # Built-in tools (artifacts, inspectors, interventions, etc.)
├── types # TypeScript type definition files
└── utils # General utility functions
lobe-chat/
├── apps/desktop/ # Electron desktop app
├── packages/ # Shared packages (@lobechat/*)
│ ├── database/ # Database schemas, models, repositories
│ ├── agent-runtime/ # Agent runtime
│ ├── model-runtime/ # Model runtime
│ └── ... # More shared packages
├── src/ # Main application source code
│ ├── app/ # Next.js App Router with route groups and API routes
│ ├── components/ # Reusable UI components
│ ├── config/ # App configuration, client and server env vars
│ ├── const/ # Application constants and enums
│ ├── envs/ # Env var definitions and validation (analytics, auth, LLM, etc.)
│ ├── features/ # Business feature modules (Agent settings, plugin dev, etc.)
│ ├── helpers/ # Utility helper functions
│ ├── hooks/ # Reusable custom Hooks
│ ├── layout/ # Layout components (AuthProvider, GlobalProvider, etc.)
│ ├── libs/ # Third-party integrations (better-auth, OIDC, tRPC, etc.)
│ ├── locales/ # Internationalization language files
│ ├── server/ # Server-side modules, routers, and services
│ ├── services/ # Client-side service interfaces
│ ├── store/ # Zustand state management
│ ├── styles/ # Global styles and CSS-in-JS configurations
│ ├── tools/ # Built-in tools (artifacts, inspectors, interventions, etc.)
│ ├── types/ # TypeScript type definitions
│ └── utils/ # General utility functions
├── locales/ # i18n translation files (zh-CN, en-US, etc.)
└── e2e/ # E2E tests (Cucumber + Playwright)
```
For a detailed introduction to the directory structure, see: [Folder Directory Structure](/docs/development/basic/folder-structure)
## Local Development Environment Setup
This section outlines setting up the development environment and local development. Before starting, please ensure that Node.js, Git, and your chosen package manager (Bun or PNPM) are installed in your local environment.
We recommend using WebStorm as your integrated development environment (IDE).
1. **Get the code**: Clone the LobeHub code repository locally:
```bash
git clone https://github.com/lobehub/lobehub.git
```
2. **Install dependencies**: Enter the project directory and install the required dependencies:
```bash
cd lobehub
# If you use Bun
bun install
# If you use PNPM
pnpm install
```
3. **Run and debug**: Start the local development server and begin your development journey:
```bash
# Start the development server with Bun
bun run dev
# Visit http://localhost:3010 to view the application
```
> \[!IMPORTANT]\
> If you encounter the error "Could not find 'stylelint-config-recommended'" when installing dependencies with `npm`, please reinstall the dependencies using `pnpm` or `bun`.
Now, you should be able to see the welcome page of LobeHub in your browser. For a detailed environment setup guide, please refer to [Development Environment Setup Guide](/docs/development/basic/setup-development).
Please refer to the [Environment Setup Guide](/docs/development/basic/setup-development) for the complete setup process, including software installation, project configuration, Docker service startup, and database migrations.
## Code Style and Contribution Guide
@@ -105,9 +90,14 @@ For detailed code style and contribution guidelines, please refer to [Code Style
## Internationalization Implementation Guide
LobeHub uses `i18next` and `lobe-i18n` to implement multilingual support, ensuring a global user experience.
LobeHub uses `react-i18next` for multilingual support, ensuring a global user experience.
Internationalization files are located in `src/locales`, containing the default language (Chinese). We generate other language JSON files automatically through `lobe-i18n`.
Default language files are located in `src/locales/default/` (English). Translation files are in the `locales/` directory. During development, you only need to edit keys in `src/locales/default/`; CI automatically generates translation files for other languages.
If you want to add a new language, follow specific steps detailed in [New Language Addition Guide](/docs/development/internationalization/add-new-locale). We encourage you to participate in our internationalization efforts to provide better services to global users.
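The workflow described above (author keys in the English defaults, let CI generate the rest, fall back when a translation is missing) can be illustrated with a toy lookup. This is a hypothetical sketch of the behavior only, not the actual react-i18next API; the keys and strings are invented:

```typescript
type Locale = Record<string, string>;

// Keys are authored in src/locales/default/ (English)...
const defaultLocale: Locale = { 'chat.send': 'Send' };

// ...and CI generates the other locale files under locales/.
const zhCN: Locale = { 'chat.send': '发送' };

// Lookup order: requested locale → default locale → the key itself.
function t(key: string, locale: Locale): string {
  return locale[key] ?? defaultLocale[key] ?? key;
}

console.log(t('chat.send', zhCN)); // 发送
console.log(t('chat.clear', zhCN)); // chat.clear (untranslated key falls through)
```

The fallback chain is why editing only the default files is safe: an untranslated key degrades to English (or the key string) instead of breaking the UI.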

View File

@@ -17,78 +17,60 @@ tags:
LobeHub 的核心技术栈如下:
- **框架**:我们选择了 [Next.js](https://nextjs.org/),这是一款强大的 React 框架,为我们的项目提供服务端渲染、路由框架、Router Handler 等关键功能。
- **组件库**:我们使用了 [Ant Design (antd)](https://ant.design/) 作为基础组件库,同时引入了 [lobe-ui](https://github.com/lobehub/lobe-ui) 作为我们的业务组件库。
- **状态管理**:我们选用了 [zustand](https://github.com/pmndrs/zustand),一款轻量级且易于使用的状态管理库。
- **网络请求**:我们采用 [swr](https://swr.vercel.app/),这是一款用于数据获取的 React Hooks 库。
- **路由**:路由管理我们直接使用 [Next.js](https://nextjs.org/) 自身提供的解决方案。
- **国际化**:我们使用 [i18next](https://www.i18next.com/) 实现应用的多语言支持。
- **样式**:我们使用 [antd-style](https://github.com/ant-design/antd-style),这是一款与 Ant Design 配套的 CSS-in-JS 库。
- **单元测试**:我们使用 [vitest](https://github.com/vitest-dev/vitest) 进行单元测试。
- **框架**:[Next.js](https://nextjs.org/) 16 + [React](https://react.dev/) 19,为项目提供服务端渲染、Router Handler 等关键功能。
- **组件库**:[Ant Design (antd)](https://ant.design/) 作为基础组件库,[@lobehub/ui](https://github.com/lobehub/lobe-ui) 作为业务组件库。
- **状态管理**:[zustand](https://github.com/pmndrs/zustand),一款轻量级且易于使用的状态管理库。
- **数据获取**:[SWR](https://swr.vercel.app/) 用于客户端数据获取。
- **路由**:采用混合路由架构 —— [Next.js App Router](https://nextjs.org/) 处理静态页面(如认证页),[React Router DOM](https://reactrouter.com/) 承载主应用 SPA。
- **API**:[tRPC](https://trpc.io/) 实现端到端类型安全的 API 通信。
- **数据库**:[Drizzle ORM](https://orm.drizzle.team/) + PostgreSQL。
- **国际化**:[react-i18next](https://react.i18next.com/) 实现多语言支持。
- **样式**:[antd-style](https://github.com/ant-design/antd-style),与 Ant Design 配套的 CSS-in-JS 库。
- **单元测试**:[Vitest](https://github.com/vitest-dev/vitest) 进行单元测试。
## 文件夹目录架构
LobeHub 的文件夹目录结构如下:
LobeHub 采用 Monorepo 架构(`@lobechat/` 命名空间),顶层目录结构如下:
```bash
src
├── app # Next.js App Router 实现,包含路由组和 API 路由
├── business # 业务逻辑模块(客户端和服务端)
├── components # 可复用的 UI 组件
├── config # 应用的配置文件,包含客户端环境变量与服务端环境变量
├── const # 应用常量和枚举
├── envs # 环境变量定义和校验(分析、认证、LLM 等)
├── features # 与业务功能相关的功能模块,如 Agent 设置、插件开发弹窗等
├── helpers # 工具辅助函数,用于工具工程、占位符解析等
├── hooks # 全应用复用自定义的工具 Hooks
├── layout # 应用的布局组件,如导航栏、侧边栏等
├── libs # 第三方集成(分析、OIDC 等)
├── locales # 国际化的语言文件
├── server # 服务端模块和服务
├── services # 封装的后端服务接口,如 HTTP 请求
├── store # 用于状态管理的 zustand store
├── styles # 全局样式和 CSS-in-JS 配置
├── tools # 内置工具(artifacts、inspectors、interventions 等)
├── types # TypeScript 的类型定义文件
└── utils # 通用的工具函数
lobe-chat/
├── apps/desktop/ # Electron 桌面应用
├── packages/ # 共享包(@lobechat/*)
│ ├── database/ # 数据库 schemas、models、repositories
│ ├── agent-runtime/ # Agent 运行时
│ ├── model-runtime/ # 模型运行时
│ └── ... # 更多共享包
├── src/ # 主应用源码
│ ├── app/ # Next.js App Router,包含路由组和 API 路由
│ ├── components/ # 可复用的 UI 组件
│ ├── config/ # 应用配置文件,包含客户端与服务端环境变量
│ ├── const/ # 应用常量和枚举
│ ├── envs/ # 环境变量定义和校验(分析、认证、LLM 等)
│ ├── features/ # 业务功能模块,如 Agent 设置、插件开发弹窗等
│ ├── helpers/ # 工具辅助函数
│ ├── hooks/ # 全应用复用的自定义 Hooks
│ ├── layout/ # 布局组件(AuthProvider、GlobalProvider 等)
│ ├── libs/ # 第三方集成(better-auth、OIDC、tRPC 等)
│ ├── locales/ # 国际化的语言文件
│ ├── server/ # 服务端模块、路由和服务
│ ├── services/ # 客户端服务接口
│ ├── store/ # zustand 状态管理
│ ├── styles/ # 全局样式和 CSS-in-JS 配置
│ ├── tools/ # 内置工具(artifacts、inspectors、interventions 等)
│ ├── types/ # TypeScript 类型定义
│ └── utils/ # 通用工具函数
├── locales/ # 国际化翻译文件(zh-CN、en-US 等)
└── e2e/ # E2E 测试(Cucumber + Playwright)
```
有关目录架构的详细介绍,详见: [文件夹目录架构](/zh/docs/development/basic/folder-structure)
## 本地开发环境设置
本节将概述搭建开发环境并进行本地开发。 在开始之前,请确保你的本地环境中已安装 Node.js、Git 以及你选择的包管理器(Bun 或 PNPM)。
我们推荐使用 WebStorm 作为你的集成开发环境(IDE)。
1. **获取代码**:克隆 LobeHub 的代码库到本地:
```bash
git clone https://github.com/lobehub/lobehub.git
```
2. **安装依赖**:进入项目目录,并安装所需依赖:
```bash
cd lobehub
# 如果你使用 Bun
bun install
# 如果你使用 PNPM
pnpm install
```
3. **运行与调试**:启动本地开发服务器,开始你的开发之旅:
```bash
# 使用 Bun 启动开发服务器
bun run dev
# 访问 http://localhost:3010 查看应用
```
> \[!IMPORTANT]\
> 如果使用`npm`安装依赖出现`Could not find "stylelint-config-recommended"`错误,请使用 `pnpm` 或者 `bun` 重新安装依赖。
现在,你应该可以在浏览器中看到 LobeHub 的欢迎页面。详细的环境配置指南,请参考 [开发环境设置指南](/zh/docs/development/basic/setup-development)。
请参考 [开发环境设置指南](/zh/docs/development/basic/setup-development) 了解完整的环境搭建流程,包括软件安装、项目配置、Docker 服务启动和数据库迁移等步骤。
## 代码风格与贡献指南
@@ -103,9 +85,12 @@ bun run dev
## 国际化实现指南
LobeHub 采用 `i18next` 和 `lobe-i18n` 实现多语言支持,确保用户全球化体验。
LobeHub 采用 `react-i18next` 实现多语言支持,确保用户全球化体验。
国际化文件位于 `src/locales`,包含默认语言(中文)。 我们会通过 `lobe-i18n` 自动生成其他的语言 JSON 文件。
默认语言文件位于 `src/locales/default/`(英文),翻译文件位于 `locales/` 目录。开发时只需编辑 `src/locales/default/` 中的 key,CI 会自动生成其他语言的翻译文件。
如果要添加新语种,需遵循特定步骤,详见 [新语种添加指南](/zh/docs/development/internationalization/add-new-locale)。 我们鼓励你参与我们的国际化努力,共同为全球用户提供更好的服务。

View File

@@ -35,12 +35,12 @@
"prebuild": "tsx scripts/prebuild.mts && npm run lint",
"build": "cross-env NODE_OPTIONS=--max-old-space-size=8192 next build --webpack",
"postbuild": "npm run build-sitemap && npm run build-migrate-db",
"build-migrate-db": "bun run db:migrate",
"build-sitemap": "tsx ./scripts/buildSitemapIndex/index.ts",
"build:analyze": "NODE_OPTIONS=--max-old-space-size=81920 ANALYZE=true next build --webpack",
"build:docker": "npm run prebuild && NODE_OPTIONS=--max-old-space-size=8192 DOCKER=true next build --webpack && npm run build-sitemap",
"build:electron": "cross-env NODE_OPTIONS=--max-old-space-size=8192 NEXT_PUBLIC_IS_DESKTOP_APP=1 tsx scripts/electronWorkflow/buildNextApp.mts",
"build:vercel": "npm run prebuild && cross-env NODE_OPTIONS=--max-old-space-size=6144 next build --webpack && npm run postbuild",
"build-migrate-db": "bun run db:migrate",
"build-sitemap": "tsx ./scripts/buildSitemapIndex/index.ts",
"clean:node_modules": "bash -lc 'set -e; echo \"Removing all node_modules...\"; rm -rf node_modules; pnpm -r exec rm -rf node_modules; rm -rf apps/desktop/node_modules; echo \"All node_modules removed.\"'",
"db:generate": "drizzle-kit generate && npm run workflow:dbml",
"db:migrate": "MIGRATION_DB=1 tsx ./scripts/migrateServerDB/index.ts",
@@ -91,11 +91,11 @@
"start": "next start -p 3210",
"stylelint": "stylelint \"src/**/*.{js,jsx,ts,tsx}\" --fix",
"test": "npm run test-app && npm run test-server",
"test-app": "vitest run",
"test-app:coverage": "vitest --coverage --silent='passed-only'",
"test:e2e": "pnpm --filter @lobechat/e2e-tests test",
"test:e2e:smoke": "pnpm --filter @lobechat/e2e-tests test:smoke",
"test:update": "vitest -u",
"test-app": "vitest run",
"test-app:coverage": "vitest --coverage --silent='passed-only'",
"tunnel:cloudflare": "cloudflared tunnel --url http://localhost:3010",
"tunnel:ngrok": "ngrok http http://localhost:3011",
"type-check": "tsgo --noEmit",
@@ -210,7 +210,7 @@
"@lobehub/icons": "^4.0.3",
"@lobehub/market-sdk": "0.29.1",
"@lobehub/tts": "^4.0.2",
"@lobehub/ui": "^4.32.1",
"@lobehub/ui": "4.33.4",
"@modelcontextprotocol/sdk": "^1.25.3",
"@napi-rs/canvas": "^0.1.88",
"@neondatabase/serverless": "^1.0.2",
@@ -404,7 +404,7 @@
"@types/unist": "^3.0.3",
"@types/ws": "^8.18.1",
"@types/xast": "^2.0.4",
"@typescript/native-preview": "7.0.0-dev.20260122.4",
"@typescript/native-preview": "7.0.0-dev.20260207.1",
"@vitest/coverage-v8": "^3.2.4",
"ajv-keywords": "^5.1.0",
"code-inspector-plugin": "1.3.3",
@@ -460,6 +460,7 @@
},
"pnpm": {
"overrides": {
"@lobehub/ui": "4.33.4",
"better-auth": "1.4.6",
"better-call": "1.1.8"
}

View File

@@ -18,7 +18,7 @@ export const AWSBedrockClaudeStream = (
inputStartAt?: number;
payload?: Parameters<typeof transformAnthropicStream>[2];
},
): ReadableStream<string> => {
): ReadableStream<Uint8Array> => {
const streamStack: StreamContext = { id: 'chat_' + nanoid() };
const stream = res instanceof ReadableStream ? res : createBedrockStream(res);

View File

@@ -27,9 +27,8 @@ describe('AWSBedrockLlamaStream', () => {
});
const decoder = new TextDecoder();
const chunks = [];
const chunks: string[] = [];
// @ts-ignore
for await (const chunk of protocolStream) {
chunks.push(decoder.decode(chunk, { stream: true }));
}
@@ -77,9 +76,8 @@ describe('AWSBedrockLlamaStream', () => {
});
const decoder = new TextDecoder();
const chunks = [];
const chunks: string[] = [];
// @ts-ignore
for await (const chunk of protocolStream) {
chunks.push(decoder.decode(chunk, { stream: true }));
}
@@ -139,9 +137,8 @@ describe('AWSBedrockLlamaStream', () => {
});
const decoder = new TextDecoder();
const chunks = [];
const chunks: string[] = [];
// @ts-ignore
for await (const chunk of protocolStream) {
chunks.push(decoder.decode(chunk, { stream: true }));
}
@@ -174,9 +171,8 @@ describe('AWSBedrockLlamaStream', () => {
const protocolStream = AWSBedrockLlamaStream(mockBedrockStream);
const decoder = new TextDecoder();
const chunks = [];
const chunks: string[] = [];
// @ts-ignore
for await (const chunk of protocolStream) {
chunks.push(decoder.decode(chunk, { stream: true }));
}

View File

@@ -39,7 +39,7 @@ export const transformLlamaStream = (
export const AWSBedrockLlamaStream = (
res: InvokeModelWithResponseStreamResponse | ReadableStream,
cb?: ChatStreamCallbacks,
): ReadableStream<string> => {
): ReadableStream<Uint8Array> => {
const streamStack: StreamContext = { id: 'chat_' + nanoid() };
const stream = res instanceof ReadableStream ? res : createBedrockStream(res);

View File

@@ -32,9 +32,8 @@ describe('OllamaStream', () => {
const protocolStream = OllamaStream(mockOllamaStream);
const decoder = new TextDecoder();
const chunks = [];
const chunks: string[] = [];
// @ts-ignore
for await (const chunk of protocolStream) {
chunks.push(decoder.decode(chunk, { stream: true }));
}
@@ -90,9 +89,8 @@ describe('OllamaStream', () => {
});
const decoder = new TextDecoder();
const chunks = [];
const chunks: string[] = [];
// @ts-ignore
for await (const chunk of protocolStream) {
chunks.push(decoder.decode(chunk, { stream: true }));
}
@@ -135,9 +133,8 @@ describe('OllamaStream', () => {
});
const decoder = new TextDecoder();
const chunks = [];
const chunks: string[] = [];
// @ts-ignore
for await (const chunk of protocolStream) {
chunks.push(decoder.decode(chunk, { stream: true }));
}
@@ -212,9 +209,8 @@ describe('OllamaStream', () => {
});
const decoder = new TextDecoder();
const chunks = [];
const chunks: string[] = [];
// @ts-ignore
for await (const chunk of protocolStream) {
chunks.push(decoder.decode(chunk, { stream: true }));
}
@@ -282,9 +278,8 @@ describe('OllamaStream', () => {
});
const decoder = new TextDecoder();
const chunks = [];
const chunks: string[] = [];
// @ts-ignore
for await (const chunk of protocolStream) {
chunks.push(decoder.decode(chunk, { stream: true }));
}
@@ -314,9 +309,8 @@ describe('OllamaStream', () => {
const protocolStream = OllamaStream(mockOllamaStream);
const decoder = new TextDecoder();
const chunks = [];
const chunks: string[] = [];
// @ts-ignore
for await (const chunk of protocolStream) {
chunks.push(decoder.decode(chunk, { stream: true }));
}

View File

@@ -54,7 +54,7 @@ const transformOllamaStream = (chunk: ChatResponse, stack: StreamContext): Strea
export const OllamaStream = (
res: ReadableStream<ChatResponse>,
cb?: ChatStreamCallbacks,
): ReadableStream<string> => {
): ReadableStream<Uint8Array> => {
const streamStack: StreamContext = { id: 'chat_' + nanoid() };
return res

View File

@@ -272,7 +272,7 @@ export function createCallbacksTransformer(cb: ChatStreamCallbacks | undefined)
let currentType = '' as unknown as StreamProtocolChunk['type'];
const callbacks = cb || {};
return new TransformStream({
return new TransformStream<string, Uint8Array>({
async flush(): Promise<void> {
const data = {
grounding,
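The `TransformStream<string, Uint8Array>` annotation here is what drives the `ReadableStream<Uint8Array>` return-type changes in the Bedrock and Ollama stream files above: the callbacks transformer encodes protocol strings into bytes, so everything downstream carries `Uint8Array` chunks. A minimal stdlib-only sketch of that encode-then-consume shape (the stream contents are illustrative, not the real protocol):

```typescript
async function runExample(): Promise<string> {
  const encoder = new TextEncoder();
  const decoder = new TextDecoder();

  // Upstream produces protocol strings...
  const source = new ReadableStream<string>({
    start(controller) {
      controller.enqueue('data: hello\n\n');
      controller.close();
    },
  });

  // ...and the transformer turns them into bytes, matching the
  // TransformStream<string, Uint8Array> signature.
  const toBytes = new TransformStream<string, Uint8Array>({
    transform(chunk, controller) {
      controller.enqueue(encoder.encode(chunk));
    },
  });

  // Consumers then decode Uint8Array chunks, exactly as the updated
  // tests do with decoder.decode(chunk, { stream: true }).
  const chunks: string[] = [];
  for await (const chunk of source.pipeThrough(toBytes)) {
    chunks.push(decoder.decode(chunk, { stream: true }));
  }
  return chunks.join('');
}

runExample().then((out) => console.log(JSON.stringify(out)));
```

Assuming a runtime whose `ReadableStream` is async-iterable (Node 18+), the decoded output round-trips to the original protocol string.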

View File

@@ -5,6 +5,7 @@
"lib": [
"dom",
"dom.iterable",
"dom.asynciterable",
"esnext",
"webworker"
],
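Adding `dom.asynciterable` to `lib` gives TypeScript the async-iterator typings for `ReadableStream`, which is what the `for await` loops in the stream tests above rely on (and why they currently carry `// @ts-ignore`). A small illustration, assuming a runtime whose `ReadableStream` is async-iterable (Node 18+):

```typescript
// With "dom.asynciterable" in lib, this for-await loop type-checks
// without a @ts-ignore, because ReadableStream<T> is known to be
// AsyncIterable<T>.
async function collect(stream: ReadableStream<Uint8Array>): Promise<number> {
  let total = 0;
  for await (const chunk of stream) {
    total += chunk.byteLength;
  }
  return total;
}

const stream = new ReadableStream<Uint8Array>({
  start(controller) {
    controller.enqueue(new Uint8Array([1, 2, 3]));
    controller.close();
  },
});

collect(stream).then((n) => console.log(n)); // 3
```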