I have configured an external API for use with Nextcloud Assistant, in this case Mistral.
When I tried a prompt today, I got the error below.
My prompt has a total of 1009 characters and 182 words.
Text generation error: An exception occurred while executing a query: SQLSTATE[22001]: String data, right truncated: 7 ERROR: value too long for type character varying(1000)
In the logs I see the error below:
{"reqId":"V85FoTrYUb0uA2GqxpJj","level":3,"time":"2024-10-01T16:46:29+00:00","remoteAddr":"127.0.0.1","user":"bisu","app":"PHP","method":"POST","url":"/apps/assistant/f/process_prompt","message":"Undefined array key 7 at /var/www/html/lib/private/AppFramework/Http.php#128","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36","version":"29.0.7.1","data":{"app":"PHP"},"id":"66fc27c87b130"}
General googling suggests that this is likely a limit on the number of characters that can be stored in the DB for that field (I could be wrong).
If so, note that newer models have a context size of 128k tokens, roughly 80k English words, or even 2 million tokens (Gemini Pro).
I have set 18000 as the prompt token limit.
When I edit my prompt down to 991 characters and 178 words, I do not get an error.
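If the diagnosis above is right, the observed threshold can be modeled as a simple length check. A minimal sketch in Python, assuming the limit comes from the `character varying(1000)` column named in the error message (the actual table and column are not identified here, so the limit is an assumption, not a confirmed schema detail):

```python
# Sketch of the suspected failure mode: the prompt is stored in a database
# column declared as character varying(1000), so anything longer than 1000
# characters is rejected with SQLSTATE[22001]. The limit below is an
# assumption read off the error message, not a confirmed schema detail.
DB_COLUMN_LIMIT = 1000

def prompt_fits(prompt: str, limit: int = DB_COLUMN_LIMIT) -> bool:
    """Return True if the prompt would fit in the assumed varchar(1000) column."""
    return len(prompt) <= limit

# Matches the observed behavior: 991 characters succeed, 1009 fail.
print(prompt_fits("x" * 991))   # True
print(prompt_fits("x" * 1009))  # False
```

This would also explain why the 18000 prompt-token setting has no effect: the failure happens when storing the prompt, before it is ever sent to the model.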
Expected Behavior
There should not be any errors.
The Assistant should be able to handle prompt sizes in line with the context windows of the models being used.
To Reproduce
Try a prompt with more than 1000 characters.
Hey, this looks like a limitation of the Mistral API or the model you're using. I tried with OpenAI/GPT-3.5-turbo and it accepted a prompt with 3K characters. The Assistant and the Nextcloud task processing API are fine with long prompts.
Which model are you using?
The "Text generation error: An exception occurred..." error you get is actually the response of the network request to MistralAI.
Hi, I am using the Mistral-Large model. I tried this again today and a prompt larger than 1k characters failed.
Text generation error: An exception occurred while executing a query: SQLSTATE[22001]: String data, right truncated: 7 ERROR: value too long for type character varying(1000)
Also, I am using nextcloud-aio and have configured it to start the Local-AI container, although in the AI admin page I use the Mistral API URL and API key, and have the prompt token limit set to 18000. Not sure if that matters.
Anyhow, I will try this in a couple of days with a different frontend that also interacts with the Mistral API (e.g. lollms) and report back.
Which version of assistant are you using?
1.1.0 as configured by Nextcloud AIO
Which version of Nextcloud are you using?
29.0.7
Which browser are you using? In case you are using the phone App, specify the Android or iOS version and device please.
Chrome or Firefox latest
Describe the Bug
May be connected to #59 .