Description
When running crewAI in a Docker container with Ollama on the host, the generated configuration is wrong.
Steps to Reproduce
Set up crewAI in a Docker container running Ubuntu, with Ollama installed on the host (macOS). Run
crewai create crew demo
which asks for an LLM; I selected 5. ollama as the provider and 1. ollama/llama3.1 as the model. This step produced a .env file with the content:
MODEL=ollama/llama3.1
API_BASE=http://localhost:11434
The correct URL is
host.docker.internal
so I changed it and also adapted my model. The following error appears when starting
crewai run
Expected behavior
The example should run and print the output of the LLM.
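The localhost vs. host.docker.internal distinction can be sketched as follows (a minimal illustration, not crewAI code; the function name is hypothetical, and /.dockerenv is a common but not guaranteed marker for running inside a container):

```python
import os

def ollama_base_url() -> str:
    # Inside a Docker container, "localhost" resolves to the container
    # itself, so the host's Ollama must be reached via host.docker.internal.
    # /.dockerenv is a common (but not guaranteed) container marker.
    in_docker = os.path.exists("/.dockerenv")
    host = "host.docker.internal" if in_docker else "localhost"
    return f"http://{host}:11434"

print(ollama_base_url())
```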
Screenshots/Code snippets
Operating System
Ubuntu 24.04
Python Version
3.12
crewAI Version
crewai version: 0.86.0
crewAI Tools Version
N/A
Virtual Environment
Venv
Evidence
see screenshot above
Possible Solution
Instead of setting
API_BASE
set OPENAI_API_BASE=http://host.docker.internal:11434 in the .env file. Alternatively, add a check for API_BASE in crewai/agent.py: 167:

api_base = (
    os.environ.get("OPENAI_API_BASE")
    or os.environ.get("OPENAI_BASE_URL")
    or os.environ.get("API_BASE")
)

Additional context
I would prefer the code addition from the possible solution, because semantically it is not the OPENAI_BASE_URL ;-)
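The proposed fallback chain can be sketched as a standalone function (a sketch of the proposal, not the actual crewai/agent.py code; resolve_api_base is a hypothetical name):

```python
import os

def resolve_api_base(env=None):
    # Proposed precedence: OPENAI_API_BASE first, then OPENAI_BASE_URL,
    # then the plain API_BASE that `crewai create` writes into .env.
    env = os.environ if env is None else env
    return (
        env.get("OPENAI_API_BASE")
        or env.get("OPENAI_BASE_URL")
        or env.get("API_BASE")
    )

# With only API_BASE set, it is still picked up as the base URL.
print(resolve_api_base({"API_BASE": "http://host.docker.internal:11434"}))
```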