mirror of
https://github.com/n8n-io/n8n-docs.git
synced 2026-03-27 09:28:43 +07:00
Add documentation for using vector stores as tools
@@ -1,6 +1,6 @@
 ### Operation Mode

-This Vector Store node has four modes: **Get Many**, **Insert Documents**, **Retrieve Documents**, and **Update Documents**. The mode you select determines the operations you can perform with the node and what inputs and outputs are available.
+This Vector Store node has five modes: **Get Many**, **Insert Documents**, **Retrieve Documents (As Vector Store for Chain/Tool)**, **Retrieve Documents (As Tool for AI Agent)**, and **Update Documents**. The mode you select determines the operations you can perform with the node and what inputs and outputs are available.

 <!-- vale off -->
 #### Get Many
@@ -11,9 +11,13 @@ In this mode, you can retrieve multiple documents from your vector database by p

 Use Insert Documents mode to insert new documents into your vector database.

-#### Retrieve Documents (For Agent/Chain)
+#### Retrieve Documents (As Vector Store for Chain/Tool)

-Use Retrieve Documents mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node.
+Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node.
+
+#### Retrieve Documents (As Tool for AI Agent)
+
+Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details.

 #### Update Documents
@@ -1,16 +1,21 @@
 ### Operation Mode

-This Vector Store node has three modes: **Get Many**, **Insert Documents**, and **Retrieve Documents**. The mode you select determines the operations you can perform with the node and what inputs and outputs are available.
+This Vector Store node has four modes: **Get Many**, **Insert Documents**, **Retrieve Documents (As Vector Store for Chain/Tool)**, and **Retrieve Documents (As Tool for AI Agent)**. The mode you select determines the operations you can perform with the node and what inputs and outputs are available.

 <!-- vale off -->
 #### Get Many

 In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt will be embedded and used for similarity search. The node will return the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context.
 <!-- vale on -->
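The Get Many behavior described above — embed the prompt, score stored documents by similarity, return the top matches — can be sketched in plain Python. This is an illustrative stand-in, not n8n's implementation: the embeddings are tiny hand-made vectors and `get_many` is a hypothetical helper name.

```python
import math

def cosine_similarity(a, b):
    # Similarity score between two embedding vectors, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def get_many(prompt_embedding, store, limit=4):
    # `store` maps document text to its (precomputed) embedding.
    # Return the `limit` most similar documents with their scores.
    scored = [(doc, cosine_similarity(prompt_embedding, emb))
              for doc, emb in store.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:limit]

store = {
    "n8n is a workflow automation tool": [0.9, 0.1, 0.0],
    "Vector stores hold embeddings": [0.2, 0.9, 0.1],
    "Bananas are yellow": [0.0, 0.1, 0.9],
}
results = get_many([0.8, 0.2, 0.1], store, limit=2)
```

In a real workflow, the prompt embedding comes from the embeddings sub-node connected to the Vector Store node, and **Limit** plays the role of `limit` here.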

 #### Insert Documents

 Use Insert Documents mode to insert new documents into your vector database.

-#### Retrieve Documents (For Agent/Chain)
+#### Retrieve Documents (As Vector Store for Chain/Tool)

-Use Retrieve Documents mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node.
+Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node.
+
+#### Retrieve Documents (As Tool for AI Agent)
+
+Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details.
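The paragraph above says the agent picks the vector store tool when its name and description match the question. As a rough illustration of why a specific name and description matter, here is a naive keyword-overlap selector; a real agent delegates this choice to the LLM, and `pick_tool` and the sample tools are hypothetical.

```python
def pick_tool(question, tools):
    # Naive stand-in for LLM tool selection: pick the tool whose name and
    # description share the most words with the question. A real agent lets
    # the LLM make this decision from the same name/description text.
    question_words = set(question.lower().split())

    def overlap(tool):
        tool_words = set((tool["name"] + " " + tool["description"]).lower().split())
        return len(question_words & tool_words)

    return max(tools, key=overlap)

tools = [
    {"name": "company_faq", "description": "Answers questions about company policy documents"},
    {"name": "product_specs", "description": "Retrieves hardware product specification sheets"},
]
chosen = pick_tool("What is the company policy on remote work?", tools)
```

This is why a vague description like "my data" tends to make the agent skip (or misuse) the tool.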
@@ -20,6 +20,34 @@ The in-memory storage described here is different to the AI memory nodes such as

 This node creates a vector database in the app memory.
 ///

+## Node usage patterns
+
+You can use the In-Memory Vector Store node in the following patterns.
+
+### Use as a regular node to insert and retrieve documents
+
+You can use the In-Memory Vector Store as a regular node to insert or get documents. This pattern places the In-Memory Vector Store in the regular connection flow without using an agent.
+
+You can see an example of this in step 2 of [this template](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/).
+
+### Connect directly to an AI agent as a tool
+
+You can connect the In-Memory Vector Store node directly to the tool connector of an [AI agent](/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/) to use the vector store as a resource when answering queries.
+
+Here, the connection would be: AI agent (tools connector) -> In-Memory Vector Store node.
+
+### Use a retriever to fetch documents
+
+You can use the [Vector Store Retriever](/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.retrievervectorstore/) node with the In-Memory Vector Store node to fetch documents from the In-Memory Vector Store node. This is often used with the [Question and Answer Chain](/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainretrievalqa/) node to fetch documents from the vector store that match the given chat input.
+
+An [example of the connection flow](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) (the linked example uses Pinecone, but the pattern is the same) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> In-Memory Vector Store.
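The retriever pattern above — the chain asks the retriever, and the retriever queries the vector store — can be sketched with plain Python classes. All class and method names here are illustrative; the real nodes score documents with embeddings rather than the keyword overlap used to keep this sketch self-contained.

```python
class VectorStore:
    # Minimal stand-in for a vector store: keyword-overlap scoring
    # instead of embedding similarity.
    def __init__(self, docs):
        self.docs = docs

    def search(self, query, k=2):
        words = set(query.lower().split())
        return sorted(
            self.docs,
            key=lambda d: len(words & set(d.lower().split())),
            reverse=True,
        )[:k]

class Retriever:
    # The Vector Store Retriever role: a thin adapter the chain calls.
    def __init__(self, store):
        self.store = store

    def get_relevant_documents(self, query):
        return self.store.search(query)

def qa_chain(question, retriever):
    # The Question and Answer Chain role: fetch context, then hand it to an
    # LLM. The LLM step is omitted; this just returns the fetched context.
    context = retriever.get_relevant_documents(question)
    return {"question": question, "context": context}

store = VectorStore([
    "n8n supports vector stores",
    "Bananas are yellow",
    "Retrievers fetch documents",
])
answer = qa_chain("Which documents mention vector stores", retriever=Retriever(store))
```

The chain never touches the store directly — it only sees the retriever interface, which is why the retriever node sits between the chain and the vector store in the connection flow.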
+
+### Use the Vector Store Question Answer Tool to answer questions
+
+Another pattern uses the [Vector Store Question Answer Tool](/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize results and answer questions from the In-Memory Vector Store node. Rather than connecting the In-Memory Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data to formulate an answer to questions.
+
+The [connection flow](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> In-Memory Vector Store.
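The difference from connecting the store directly as a tool is that the Question Answer tool retrieves matching chunks and then condenses them into one answer. A sketch, assuming stand-in search and summarization steps (none of these names come from n8n):

```python
def vector_store_qa_tool(question, search, summarize):
    # Vector Store Question Answer Tool pattern (illustrative):
    # retrieve matching chunks, then condense them into one answer.
    chunks = search(question)
    return summarize(question, chunks)

def search(query):
    # Stand-in for the vector store search step.
    corpus = ["n8n runs workflows", "Vector stores hold embeddings"]
    return [c for c in corpus if any(w in c.lower() for w in query.lower().split())]

def summarize(question, chunks):
    # Stand-in for the LLM summarization step.
    return f"Based on {len(chunks)} chunk(s): " + "; ".join(chunks)

answer = vector_store_qa_tool("what are vector stores?", search, summarize)
```

The agent only ever sees the final answer string, not the raw chunks — which is the point of this pattern.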

 ## Node parameters

 --8<-- "_snippets/integrations/builtin/cluster-nodes/vector-store-mode.md"
@@ -36,10 +64,17 @@ This node creates a vector database in the app memory.

 * **Memory Key**: Enter the key to use to store the vector memory in the workflow data. n8n prefixes the key with the workflow ID to avoid collisions.
 * **Clear Store**: Use this parameter to control whether to wipe the vector store for the given memory key for this workflow before inserting data (turned on).
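A minimal sketch of the collision-avoidance idea behind the Memory Key prefixing and the Clear Store switch. The `workflow_id:key` format below is an assumption for illustration, not n8n's actual internal key format.

```python
def store_vectors(memory, workflow_id, memory_key, vectors, clear_store=False):
    # Namespace the key by workflow ID so two workflows using the same
    # memory key don't overwrite each other's data. The exact key format
    # is hypothetical; n8n's internal format may differ.
    key = f"{workflow_id}:{memory_key}"
    if clear_store:
        memory[key] = []  # wipe existing vectors first (Clear Store on)
    memory.setdefault(key, []).extend(vectors)
    return memory

memory = {}
store_vectors(memory, "wf_1", "docs", [[0.1, 0.2]])
store_vectors(memory, "wf_2", "docs", [[0.9, 0.8]])  # same key, different workflow: no collision
store_vectors(memory, "wf_1", "docs", [[0.3, 0.4]], clear_store=True)  # wipes wf_1's store first
```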

-### Retrieve Documents (For Agent/Chain) parameters
+### Retrieve Documents (As Vector Store for Chain/Tool) parameters

 * **Memory Key**: Enter the key to use to store the vector memory in the workflow data. n8n prefixes the key with the workflow ID to avoid collisions.

+### Retrieve Documents (As Tool for AI Agent) parameters
+
+* **Name**: The name of the vector store.
+* **Description**: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often.
+* **Memory Key**: Enter the key to use to store the vector memory in the workflow data. n8n prefixes the key with the workflow ID to avoid collisions.
+* **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results.

 ## Templates and examples

 <!-- see https://www.notion.so/n8n/Pull-in-templates-for-the-integrations-pages-37c716837b804d30a33b47475f6e3780 -->
@@ -7,7 +7,7 @@ priority: medium

 # PGVector Vector Store node

-PGVector is an extension of Postgresql. Use this node to interact with the PGVector tables in your Postgresql database. You can insert documents into a vector table, get documents from a vector table, and retrieve documents to provide them to a retriever connected to a chain.
+PGVector is an extension of PostgreSQL. Use this node to interact with the PGVector tables in your PostgreSQL database. You can insert documents into a vector table, get documents from a vector table, retrieve documents to provide them to a retriever connected to a chain, or connect directly to an agent as a tool.

 On this page, you'll find the node parameters for the PGVector node, and links to more resources.
@@ -16,7 +16,35 @@ You can find authentication information for this node [here](/integrations/built
 ///

 --8<-- "_snippets/integrations/builtin/cluster-nodes/sub-node-expression-resolution.md"

+## Node usage patterns
+
+You can use the PGVector Vector Store node in the following patterns.
+
+### Use as a regular node to insert and retrieve documents
+
+You can use the PGVector Vector Store as a regular node to insert or get documents. This pattern places the PGVector Vector Store in the regular connection flow without using an agent.
+
+You can see an example of this in scenario 1 of [this template](https://n8n.io/workflows/2621-ai-agent-to-chat-with-files-in-supabase-storage/) (the template uses the Supabase Vector Store, but the pattern is the same).
+
+### Connect directly to an AI agent as a tool
+
+You can connect the PGVector Vector Store node directly to the tool connector of an [AI agent](/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/) to use the vector store as a resource when answering queries.
+
+Here, the connection would be: AI agent (tools connector) -> PGVector Vector Store node.
+
+### Use a retriever to fetch documents
+
+You can use the [Vector Store Retriever](/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.retrievervectorstore/) node with the PGVector Vector Store node to fetch documents from the PGVector Vector Store node. This is often used with the [Question and Answer Chain](/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainretrievalqa/) node to fetch documents from the vector store that match the given chat input.
+
+An [example of the connection flow](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) (the linked example uses Pinecone, but the pattern is the same) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> PGVector Vector Store.
+
+### Use the Vector Store Question Answer Tool to answer questions
+
+Another pattern uses the [Vector Store Question Answer Tool](/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize results and answer questions from the PGVector Vector Store node. Rather than connecting the PGVector Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data to formulate an answer to questions.
+
+The [connection flow](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) (the linked example uses the In-Memory Vector Store, but the pattern is the same) in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> PGVector Vector Store.

 ## Node parameters

 --8<-- "_snippets/integrations/builtin/cluster-nodes/vector-store-mode.md"
@@ -33,10 +61,17 @@ You can find authentication information for this node [here](/integrations/built

 * **Table name**: Enter the name of the table you want to query.

-### Retrieve Documents parameters (for Agent/Chain)
+### Retrieve Documents (As Vector Store for Chain/Tool) parameters

 * **Table name**: Enter the name of the table you want to query.

+### Retrieve Documents (As Tool for AI Agent) parameters
+
+* **Name**: The name of the vector store.
+* **Description**: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often.
+* **Table Name**: Enter the PGVector table to use.
+* **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results.

 ## Node options

 ### Collection
@@ -63,7 +98,7 @@ The following options specify the names of the columns to store the vectors and

 ## Templates and examples

 <!-- see https://www.notion.so/n8n/Pull-in-templates-for-the-integrations-pages-37c716837b804d30a33b47475f6e3780 -->
-[[ templatesWidget(page.title, 'pgvector-vector-store') ]]
+[[ templatesWidget(page.title, 'postgres-pgvector-store') ]]

 ## Related resources
@@ -8,7 +8,7 @@ priority: medium

 # Pinecone Vector Store node

-Use the Pinecone node to interact with your Pinecone database as vector store. You can insert documents into a vector database, get documents from a vector database, and retrieve documents to provide them to a retriever connected to a chain.
+Use the Pinecone node to interact with your Pinecone database as a vector store. You can insert documents into a vector database, get documents from a vector database, retrieve documents to provide them to a retriever connected to a chain, or connect directly to an agent as a tool.

 On this page, you'll find the node parameters for the Pinecone node, and links to more resources.
@@ -17,6 +17,34 @@ You can find authentication information for this node [here](/integrations/built
 ///

 --8<-- "_snippets/integrations/builtin/cluster-nodes/sub-node-expression-resolution.md"

+## Node usage patterns
+
+You can use the Pinecone Vector Store node in the following patterns.
+
+### Use as a regular node to insert, update, and retrieve documents
+
+You can use the Pinecone Vector Store as a regular node to insert, update, or get documents. This pattern places the Pinecone Vector Store in the regular connection flow without using an agent.
+
+You can see an example of this in scenario 1 of [this template](https://n8n.io/workflows/2165-chat-with-pdf-docs-using-ai-quoting-sources/).
+
+### Connect directly to an AI agent as a tool
+
+You can connect the Pinecone Vector Store node directly to the tool connector of an [AI agent](/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/) to use the vector store as a resource when answering queries.
+
+Here, the connection would be: AI agent (tools connector) -> Pinecone Vector Store node.
+
+### Use a retriever to fetch documents
+
+You can use the [Vector Store Retriever](/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.retrievervectorstore/) node with the Pinecone Vector Store node to fetch documents from the Pinecone Vector Store node. This is often used with the [Question and Answer Chain](/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainretrievalqa/) node to fetch documents from the vector store that match the given chat input.
+
+An [example of the connection flow](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Pinecone Vector Store.
+
+### Use the Vector Store Question Answer Tool to answer questions
+
+Another pattern uses the [Vector Store Question Answer Tool](/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize results and answer questions from the Pinecone Vector Store node. Rather than connecting the Pinecone Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data to formulate an answer to questions.
+
+The [connection flow](https://n8n.io/workflows/2705-chat-with-github-api-documentation-rag-powered-chatbot-with-pinecone-and-openai/) in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Pinecone Vector Store.

 ## Node parameters

@@ -32,10 +60,17 @@ You can find authentication information for this node [here](/integrations/built

 * **Pinecone Index**: Select or enter the Pinecone Index to use.

-### Retrieve Documents (For Agent/Chain) parameters
+### Retrieve Documents (As Vector Store for Chain/Tool) parameters

 * **Pinecone Index**: Select or enter the Pinecone Index to use.

+### Retrieve Documents (As Tool for AI Agent) parameters
+
+* **Name**: The name of the vector store.
+* **Description**: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often.
+* **Pinecone Index**: Select or enter the Pinecone Index to use.
+* **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results.

 ## Node options

 ### Pinecone Namespace
@@ -8,7 +8,7 @@ priority: medium

 # Qdrant Vector Store node

-Use the Qdrant node to interact with your Qdrant collection as a vector store. You can insert documents into a vector database, get documents from a vector database, and retrieve documents to provide them to a retriever connected to a chain.
+Use the Qdrant node to interact with your Qdrant collection as a vector store. You can insert documents into a vector database, get documents from a vector database, retrieve documents to provide them to a retriever connected to a chain, or connect it directly to an agent to use as a tool.

 On this page, you'll find the node parameters for the Qdrant node, and links to more resources.
@@ -17,6 +17,34 @@ You can find authentication information for this node [here](/integrations/built
 ///

 --8<-- "_snippets/integrations/builtin/cluster-nodes/sub-node-expression-resolution.md"

+## Node usage patterns
+
+You can use the Qdrant Vector Store node in the following patterns.
+
+### Use as a regular node to insert and retrieve documents
+
+You can use the Qdrant Vector Store as a regular node to insert or get documents. This pattern places the Qdrant Vector Store in the regular connection flow without using an agent.
+
+You can see an example of this in the first part of [this template](https://n8n.io/workflows/2440-building-rag-chatbot-for-movie-recommendations-with-qdrant-and-open-ai/).
+
+### Connect directly to an AI agent as a tool
+
+You can connect the Qdrant Vector Store node directly to the tool connector of an [AI agent](/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/) to use the vector store as a resource when answering queries.
+
+Here, the connection would be: AI agent (tools connector) -> Qdrant Vector Store node.
+
+### Use a retriever to fetch documents
+
+You can use the [Vector Store Retriever](/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.retrievervectorstore/) node with the Qdrant Vector Store node to fetch documents from the Qdrant Vector Store node. This is often used with the [Question and Answer Chain](/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainretrievalqa/) node to fetch documents from the vector store that match the given chat input.
+
+An [example of the connection flow](https://n8n.io/workflows/2183-ai-crew-to-automate-fundamental-stock-analysis-qanda-workflow/) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Qdrant Vector Store.
+
+### Use the Vector Store Question Answer Tool to answer questions
+
+Another pattern uses the [Vector Store Question Answer Tool](/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize results and answer questions from the Qdrant Vector Store node. Rather than connecting the Qdrant Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data to formulate an answer to questions.
+
+The [connection flow](https://n8n.io/workflows/2464-scale-deal-flow-with-a-pitch-deck-ai-vision-chatbot-and-qdrant-vector-store/) in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Qdrant Vector Store.

 ## Node parameters

@@ -38,12 +66,19 @@ This Operation Mode includes one **Node option**:

 * **Collection Config**: Enter the JSON configuration to use when creating the Qdrant collection. Refer to the Qdrant [Collections](https://qdrant.tech/documentation/concepts/collections/){:target=_blank .external-link} documentation for more information.
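For illustration, a minimal collection configuration of the kind this option expects might look like the following. The exact schema is defined by Qdrant's Collections API (see the linked documentation), and the vector size and distance metric here are assumptions you should match to your embedding model.

```python
import json

# A minimal, plausible Qdrant collection configuration. The vector size
# (1536) and distance metric ("Cosine") are illustrative assumptions;
# check them against your embedding model and the Qdrant Collections docs.
collection_config = {
    "vectors": {
        "size": 1536,         # embedding dimensionality
        "distance": "Cosine"  # similarity metric
    }
}

# The node option takes this as a JSON string.
config_json = json.dumps(collection_config)
```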

-### Retrieve Documents (For Agent/Chain) parameters
+### Retrieve Documents (As Vector Store for Chain/Tool) parameters

-* **Qdrant collection name**: Enter the name of the Qdrant collection to use.
+* **Qdrant Collection**: Enter the name of the Qdrant collection to use.

 This Operation Mode includes one **Node option**, the [Metadata Filter](#metadata-filter).

+### Retrieve Documents (As Tool for AI Agent) parameters
+
+* **Name**: The name of the vector store.
+* **Description**: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often.
+* **Qdrant Collection**: Enter the name of the Qdrant collection to use.
+* **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results.

 ## Node options

 ### Metadata Filter
@@ -8,7 +8,7 @@ priority: medium

 # Supabase Vector Store node

-Use the Supabase Vector Store to interact with your Supabase database as vector store. You can insert documents into a vector database, get many documents from a vector database, and retrieve documents to provide them to a retriever connected to a chain.
+Use the Supabase Vector Store to interact with your Supabase database as a vector store. You can insert documents into a vector database, get many documents from a vector database, retrieve documents to provide them to a retriever connected to a chain, or connect it directly to an agent to use as a tool.

 On this page, you'll find the node parameters for the Supabase node, and links to more resources.
@@ -20,6 +20,34 @@ You can find authentication information for this node [here](/integrations/built

 Supabase provides a [quickstart for setting up your vector store](https://supabase.com/docs/guides/ai/langchain?database-method=sql){:target=_blank .external-link}. If you use settings other than the defaults in the quickstart, this may affect parameter settings in n8n. Make sure you understand what you're doing.

+## Node usage patterns
+
+You can use the Supabase Vector Store node in the following patterns.
+
+### Use as a regular node to insert, update, and retrieve documents
+
+You can use the Supabase Vector Store as a regular node to insert, update, or get documents. This pattern places the Supabase Vector Store in the regular connection flow without using an agent.
+
+You can see an example of this in scenario 1 of [this template](https://n8n.io/workflows/2621-ai-agent-to-chat-with-files-in-supabase-storage/).
+
+### Connect directly to an AI agent as a tool
+
+You can connect the Supabase Vector Store node directly to the tool connector of an [AI agent](/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/) to use the vector store as a resource when answering queries.
+
+Here, the connection would be: AI agent (tools connector) -> Supabase Vector Store node.
+
+### Use a retriever to fetch documents
+
+You can use the [Vector Store Retriever](/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.retrievervectorstore/) node with the Supabase Vector Store node to fetch documents from the Supabase Vector Store node. This is often used with the [Question and Answer Chain](/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainretrievalqa/) node to fetch documents from the vector store that match the given chat input.
+
+An [example of the connection flow](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) (the example uses Pinecone, but the pattern is the same) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Supabase Vector Store.
+
+### Use the Vector Store Question Answer Tool to answer questions
+
+Another pattern uses the [Vector Store Question Answer Tool](/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize results and answer questions from the Supabase Vector Store node. Rather than connecting the Supabase Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data to formulate an answer to questions.
+
+The [connection flow](https://n8n.io/workflows/2621-ai-agent-to-chat-with-files-in-supabase-storage/) in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Supabase Vector Store.

 ## Node parameters

 --8<-- "_snippets/integrations/builtin/cluster-nodes/vector-store-mode-with-update.md"
@@ -34,10 +62,22 @@ Supabase provides a [quickstart for setting up your vector store](https://supaba

 * **Table Name**: Enter the Supabase table to use.

-### Retrieve Documents (For Agent/Chain) parameters
+### Retrieve Documents (As Vector Store for Chain/Tool) parameters

 * **Table Name**: Enter the Supabase table to use.

+### Retrieve Documents (As Tool for AI Agent) parameters
+
+* **Name**: The name of the vector store.
+* **Description**: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often.
+* **Table Name**: Enter the Supabase table to use.
+* **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results.

 ### Update Documents

 * **Table Name**: Enter the Supabase table to use.
 * **ID**: The ID of an embedding entry.

 ## Node options

 ### Query Name
@@ -7,7 +7,7 @@ contentType: integration

 # Zep Vector Store node

-Use the Zep Vector Store to interact with Zep vector databases. You can insert documents into a vector database, get many documents from a vector database, and retrieve documents to provide them to a retriever connected to a chain.
+Use the Zep Vector Store to interact with Zep vector databases. You can insert documents into a vector database, get many documents from a vector database, retrieve documents to provide them to a retriever connected to a chain, or connect it directly to an agent to use as a tool.

 On this page, you'll find the node parameters for the Zep Vector Store node, and links to more resources.
@@ -20,6 +20,34 @@ For usage examples and templates to help you get started, refer to n8n's [Zep Ve
 ///

 --8<-- "_snippets/integrations/builtin/cluster-nodes/sub-node-expression-resolution.md"

+## Node usage patterns
+
+You can use the Zep Vector Store node in the following patterns.
+
+### Use as a regular node to insert and retrieve documents
+
+You can use the Zep Vector Store as a regular node to insert or get documents. This pattern places the Zep Vector Store in the regular connection flow without using an agent.
+
+You can see an example of this in scenario 1 of [this template](https://n8n.io/workflows/2621-ai-agent-to-chat-with-files-in-supabase-storage/) (the example uses Supabase, but the pattern is the same).
+
+### Connect directly to an AI agent as a tool
+
+You can connect the Zep Vector Store node directly to the tool connector of an [AI agent](/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/) to use the vector store as a resource when answering queries.
+
+Here, the connection would be: AI agent (tools connector) -> Zep Vector Store node.
+
+### Use a retriever to fetch documents
+
+You can use the [Vector Store Retriever](/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.retrievervectorstore/) node with the Zep Vector Store node to fetch documents from the Zep Vector Store node. This is often used with the [Question and Answer Chain](/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainretrievalqa/) node to fetch documents from the vector store that match the given chat input.
+
+An [example of the connection flow](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) (the example uses Pinecone, but the pattern is the same) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Zep Vector Store.
+
+### Use the Vector Store Question Answer Tool to answer questions
+
+Another pattern uses the [Vector Store Question Answer Tool](/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize results and answer questions from the Zep Vector Store node. Rather than connecting the Zep Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data to formulate an answer to questions.
+
+The [connection flow](https://n8n.io/workflows/2621-ai-agent-to-chat-with-files-in-supabase-storage/) (this example uses Supabase, but the pattern is the same) in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Zep Vector Store.

 ## Node parameters

@@ -35,10 +63,17 @@ For usage examples and templates to help you get started, refer to n8n's [Zep Ve

 * **Prompt**: Enter the search query.
 * **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results.

-### Retrieve Documents (For Agent/Chain) parameters
+### Retrieve Documents (As Vector Store for Chain/Tool) parameters

 * **Collection Name**: Enter the collection name where the data is stored.

+### Retrieve Documents (As Tool for AI Agent) parameters
+
+* **Name**: The name of the vector store.
+* **Description**: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often.
+* **Collection Name**: Enter the collection name where the data is stored.
+* **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results.

 ## Node options

 ### Embedding Dimensions
@@ -1,31 +1,39 @@

---
#https://www.notion.so/n8n/Frontmatter-432c2b8dff1f43d4b1c8d20075510fe4
title: Vector Store Tool node documentation
description: Learn how to use the Vector Store Tool node in n8n. Follow technical documentation to integrate Vector Store Tool node into your workflows.
title: Vector Store Question Answer Tool node documentation
description: Learn how to use the Vector Store Question Answer Tool node in n8n. Follow technical documentation to integrate Vector Store Question Answer Tool node into your workflows.
contentType: integration
---

# Vector Store Tool node
# Vector Store Question Answer Tool node

The Vector Store node is a tool that allows an agent to access content from a vector store.
The Vector Store Question Answer node is a tool that allows an agent to summarize results and answer questions based on chunks from a vector store.

On this page, you'll find the node parameters for the Vector Store node, and links to more resources.
On this page, you'll find the node parameters for the Vector Store Question Answer node, and links to more resources.

/// note | Examples and templates
For usage examples and templates to help you get started, refer to n8n's [Vector Store Tool integrations](https://n8n.io/integrations/vector-store-tool/){:target=_blank .external-link} page.
For usage examples and templates to help you get started, refer to n8n's [Vector Store Question Answer Tool integrations](https://n8n.io/integrations/vector-store-tool/){:target=_blank .external-link} page.
///

--8<-- "_snippets/integrations/builtin/cluster-nodes/sub-node-expression-resolution.md"

## Node parameters

### Name
### Data Name

Enter the name of the tool for the agent to use.
Enter the name of the data in the vector store.

### Description
### Description of Data

Enter a description of what the vector store contains.
Enter a description of the data in the vector store.

n8n uses the **Data Name** and **Description of Data** parameters to populate the tool description for AI agents using the following format:

> Useful for when you need to answer questions about [Data Name]. Whenever you need information about [Description of Data], you should ALWAYS use this. Input should be a fully formed question.
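As a minimal illustration of the template above, the substitution is plain string formatting. The helper function is hypothetical (it mirrors the documented template, not n8n's internal code):

```python
def build_tool_description(data_name: str, data_description: str) -> str:
    """Populate the agent-facing tool description from the two parameters.

    Illustrative only: reproduces the template shown above, not n8n's
    internal implementation.
    """
    return (
        f"Useful for when you need to answer questions about {data_name}. "
        f"Whenever you need information about {data_description}, "
        "you should ALWAYS use this. Input should be a fully formed question."
    )

print(build_tool_description("company_docs", "internal HR policies"))
```

Because both parameters are spliced directly into the description the agent reads, concrete, specific values make the agent more likely to route the right questions to this tool.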

### Limit

The maximum number of results to return.

## Related resources

@@ -937,7 +937,7 @@ nav:

- Custom Code Tool: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolcode.md
- HTTP Request Tool: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolhttprequest.md
- SerpApi (Google Search): integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolserpapi.md
- Vector Tool Store: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolvectorstore.md
- Vector Store Question Answer Tool: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolvectorstore.md
- Wikipedia: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolwikipedia.md
- Wolfram|Alpha: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolwolframalpha.md
- Custom n8n Workflow Tool: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolworkflow.md