
[BUG] #1834

Open
rschwegler opened this issue Dec 31, 2024 · 1 comment
Labels
bug Something isn't working

Comments

@rschwegler

Description

When running crewai in a Docker container with Ollama on the host, the generated configuration is wrong.

Steps to Reproduce

Set up crewai in a Docker container running Ubuntu, with Ollama installed on the host (macOS).
crewai create crew demo

The setup asks for an LLM; I selected "5. ollama" as provider and "1. ollama/llama3.1" as model. This step produced the .env file with the content:

MODEL=ollama/llama3.1
API_BASE=http://localhost:11434

The correct URL uses host.docker.internal, so I changed it and also adapted my model accordingly.

The following error occurs when starting crewai run:

ERROR:root:LiteLLM call failed: litellm.APIConnectionError: OllamaException - [Errno 99] Cannot assign requested address
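The connection error is expected while the base URL still points at localhost: inside the container, localhost resolves to the container itself, not to the macOS host where Ollama is listening. A minimal sketch of the URL rewrite that fixes this (the helper name is hypothetical, not crewai code):

```python
def docker_aware_base(url: str, in_docker: bool) -> str:
    """Point loopback addresses at Docker Desktop's host alias when containerized."""
    if in_docker:
        for loopback in ("localhost", "127.0.0.1"):
            url = url.replace(loopback, "host.docker.internal")
    return url

print(docker_aware_base("http://localhost:11434", in_docker=True))
# -> http://host.docker.internal:11434
```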

Expected behavior

The example runs and prints the output of the LLM.

Screenshots/Code snippets

(screenshot of the error output)

Operating System

Ubuntu 24.04

Python Version

3.12

crewAI Version

crewai version: 0.86.0

crewAI Tools Version

n.A.

Virtual Environment

Venv

Evidence

see screenshot above

Possible Solution

Instead of setting API_BASE, set OPENAI_API_BASE=http://host.docker.internal:11434 in the .env file.
Alternatively, add a check for API_BASE in crewai/agent.py at line 167:

api_base = (
    os.environ.get("OPENAI_API_BASE")
    or os.environ.get("OPENAI_BASE_URL")
    or os.environ.get("API_BASE")
)

Additional context

I would prefer the code addition from the possible solution, because semantically this value is not an OPENAI_BASE_URL ;-)

@rschwegler rschwegler added the bug Something isn't working label Dec 31, 2024
@imsharukh1994

imsharukh1994 commented Jan 1, 2025

Solution 1: Correct the .env File

Update the .env file as follows:


MODEL=ollama/llama3.1
OPENAI_API_BASE=http://host.docker.internal:11434

This ensures compatibility, since crewai expects the OPENAI_API_BASE variable.
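To double-check that the corrected file parses the way you expect, a tiny stdlib-only .env reader can be used (a simplified sketch; real loaders such as python-dotenv handle more syntax):

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

dotenv = parse_env("MODEL=ollama/llama3.1\nOPENAI_API_BASE=http://host.docker.internal:11434")
print(dotenv["OPENAI_API_BASE"])
# -> http://host.docker.internal:11434
```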

Solution 2: Code Update in crewai/agent.py

You can modify the agent.py file to include a fallback for API_BASE. Update the code around line 167 to:


api_base = (
    os.environ.get("OPENAI_API_BASE")
    or os.environ.get("OPENAI_BASE_URL")
    or os.environ.get("API_BASE")  # Added support for API_BASE
)

This change ensures that API_BASE is recognized as a valid variable without needing to rename it in the .env file.
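The proposed fallback order can be sketched as a small function over a plain mapping, which makes the precedence easy to verify in isolation (the function name is illustrative, not the actual crewai code):

```python
import os

def resolve_api_base(env=os.environ):
    """First match wins: OPENAI_API_BASE, then OPENAI_BASE_URL, then API_BASE."""
    return (
        env.get("OPENAI_API_BASE")
        or env.get("OPENAI_BASE_URL")
        or env.get("API_BASE")
    )

# With only API_BASE set (the reporter's situation), the value is still found:
print(resolve_api_base({"API_BASE": "http://host.docker.internal:11434"}))
# -> http://host.docker.internal:11434
```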
