From 6b5b1468d640a675dae1f46b885fecf91e548a7a Mon Sep 17 00:00:00 2001
From: Marcel Klehr
Date: Fri, 26 Jul 2024 11:11:48 +0200
Subject: [PATCH] docs(admin/ai): How to improve task throughput

Signed-off-by: Marcel Klehr
---
 admin_manual/ai/app_assistant.rst | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/admin_manual/ai/app_assistant.rst b/admin_manual/ai/app_assistant.rst
index c7a71bd9a..42714047a 100644
--- a/admin_manual/ai/app_assistant.rst
+++ b/admin_manual/ai/app_assistant.rst
@@ -160,3 +160,17 @@ This field is appended to the block of chat messages, i.e. attached after the me
 The number of latest messages to consider for generating the next message. This does not include the user instructions, which is always considered in addition to this.
 This value should be adjusted in case you are hitting the token limit in your conversations too often.
 The AI text generation provider should ideally handle the max token limit case.
+
+Improve AI processing throughput
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Most AI tasks will be run as part of the background job system in Nextcloud, which only runs jobs every 5 minutes by default.
+To pick up scheduled jobs faster, you can set up background job workers that process AI tasks as soon as they are scheduled.
+
+Run the following occ command as a daemon (you can also spawn multiple workers for parallel processing):
+
+.. code-block::
+
+  occ background-job:worker 'OC\TaskProcessing\SynchronousBackgroundJob'
+
+Make sure to restart these daemons regularly, for example once a day.
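
One way to run this command as a daemon and restart it regularly is a systemd service unit; the sketch below is only an illustration of that approach, not part of the patch above. The unit name, the ``www-data`` user, and the php and Nextcloud paths are assumptions and need to be adjusted for the target system.

.. code-block:: ini

  [Unit]
  Description=Nextcloud AI task processing worker
  After=network.target

  [Service]
  # Assumed user and paths; adjust to your installation.
  User=www-data
  # Backslashes are doubled because systemd applies C-style escaping inside quotes.
  ExecStart=/usr/bin/php /var/www/nextcloud/occ background-job:worker 'OC\\TaskProcessing\\SynchronousBackgroundJob'
  # Stop the worker after 24 hours; Restart=always then brings it back up,
  # which covers the advice to restart the daemons regularly.
  Restart=always
  RuntimeMaxSec=86400

  [Install]
  WantedBy=multi-user.target

To run several workers in parallel, the unit can be copied under additional names or turned into a template unit (for example ``nextcloud-ai-worker@.service``) and enabled once per instance with ``systemctl enable --now``.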