What happened?
[code]
task = "Who was the Miami Heat player with the highest points in the 2006-2007 season, and what was the percentage change in his total rebounds between the 2007-2008 and 2008-2009 seasons?"
# Use asyncio.run(...) if you are running this in a script.
await Console(team.run_stream(task=task))
[error]
File "/root/miniconda3/envs/llama-factory/lib/python3.11/site-packages/openai/_base_client.py", line 1666, in _retry_request
return await self._request(
^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/llama-factory/lib/python3.11/site-packages/openai/_base_client.py", line 1634, in _request
raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'Your account crobsbebi7sb70r8ack0 request reached max request: 3, please try again after 1 seconds', 'type': 'rate_limit_reached_error'}}
---------- Summary ----------
Number of messages: 4
Finish reason: None
Total prompt tokens: 379
Total completion tokens: 114
Duration: 6.39 seconds
What did you expect to happen?
The run should complete without raising; ideally the client would back off and retry when the API returns a 429 rate-limit error.
How can we reproduce it (as minimally and precisely as possible)?
Run the code above against the free Kimi API, which enforces a low request-rate limit (at most 3 requests before a 429, per the error message above).
AutoGen version
0.4.11
Which package was this bug in
AgentChat
Model used
kimi
Python version
No response
Operating system
No response
Any additional info you think would be helpful for fixing this bug
No response
There is currently no rate-limiting functionality in the AutoGen OpenAI model client.
However, it would be straightforward to add a model client that wraps the OpenAI one and either enforces a rate limit or backs off when rate-limit errors occur.
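A minimal sketch of such a wrapper, assuming the wrapped client exposes an async `create(...)` method (as the AutoGen 0.4 model clients do) and that the provider signals rate limiting with an exception like `openai.RateLimitError`. The class and parameter names here are illustrative, not part of the AutoGen API:

```python
import asyncio
import random


class RateLimitError(Exception):
    """Stand-in for openai.RateLimitError; catch the real one in practice."""


class BackoffModelClient:
    """Wraps a model client and retries `create` with exponential backoff
    whenever the provider answers with a 429 rate-limit error."""

    def __init__(self, inner, max_retries=5, base_delay=1.0):
        self._inner = inner
        self._max_retries = max_retries
        self._base_delay = base_delay

    async def create(self, *args, **kwargs):
        for attempt in range(self._max_retries):
            try:
                return await self._inner.create(*args, **kwargs)
            except RateLimitError:
                # Out of retries: re-raise the last rate-limit error.
                if attempt == self._max_retries - 1:
                    raise
                # Exponential backoff with jitter: base, 2*base, 4*base, ...
                delay = self._base_delay * (2 ** attempt) + random.uniform(0, 0.1)
                await asyncio.sleep(delay)
```

The same wrapper shape could instead enforce a proactive limit (e.g. an `asyncio.Semaphore` released on a timer to cap requests per second) rather than reacting to 429s.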
jackgerrits changed the title from "autogen rate_limit_reached_error,how to set reach llm api per second" to "Autogen rate_limit_reached_error - how to set a rate limit?" on Dec 27, 2024