
Agent() Parameter Name Inconsistency (llm vs LLM) #1817

Open
srishrachamalla7 opened this issue Dec 30, 2024 · 0 comments · May be fixed by #1830
Labels
bug Something isn't working

@srishrachamalla7

Description

There is a bug in the `Agent()` class: the parameter name for specifying the language model (`llm`) is case-sensitive, which leads to unexpected behavior:

  1. Passing `llm=llm` works correctly with a custom LLM (e.g., Ollama).
  2. Passing `LLM=llm` causes the system to silently fall back to OpenAI's LLM and prompt for an API key (a LiteLLM error).

This inconsistency causes confusion and may lead to unintended fallback behavior.

Steps to Reproduce

1. Define a custom LLM instance:

   ```python
   llm = LLM(
       model="ollama/phi3:latest",
       base_url="http://localhost:11434"
   )
   ```

2. Create an `Agent()` using:
   - Correct usage: `llm=llm` (works as expected).
   - Incorrect usage: `LLM=llm` (causes fallback to OpenAI).

3. Observe the behavior.
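The silent fallback is consistent with `Agent()` accepting arbitrary keyword arguments and never looking at ones it does not recognize. A minimal pure-Python sketch of that failure mode (the `ToyAgent` class and `DEFAULT_MODEL` name are illustrative, not crewAI internals):

```python
# Hypothetical sketch of how a miscased keyword can be silently swallowed.
# ToyAgent and DEFAULT_MODEL are illustrative names, not crewAI's code.
DEFAULT_MODEL = "gpt-4o"  # stand-in for the OpenAI fallback

class ToyAgent:
    def __init__(self, llm=None, **kwargs):
        # Only the lowercase `llm` keyword is recognized; anything else
        # (e.g. LLM=...) lands in **kwargs and is never used.
        self.llm = llm if llm is not None else DEFAULT_MODEL
        self.ignored = kwargs  # LLM=... ends up here, silently

custom = "ollama/phi3:latest"
good = ToyAgent(llm=custom)   # uses the custom model
bad = ToyAgent(LLM=custom)    # falls back to the default

print(good.llm)  # ollama/phi3:latest
print(bad.llm)   # gpt-4o -- the custom model was ignored
```

Because `**kwargs` absorbs the typo without complaint, no error surfaces until the fallback model demands an OpenAI API key.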

Expected behavior

The Agent() class should:

  1. Enforce a consistent parameter naming convention.
  2. Throw an error or warning if an unsupported parameter (LLM) is passed.
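One way to get the requested error is to reject unknown fields at construction time, e.g. with Pydantic's `extra="forbid"` setting. A sketch under that assumption (the `StrictAgent` model is illustrative, not crewAI's actual `Agent`; assumes Pydantic v2):

```python
from typing import Any

from pydantic import BaseModel, ConfigDict, ValidationError

class StrictAgent(BaseModel):
    """Illustrative Agent-like model that rejects unknown fields."""
    model_config = ConfigDict(extra="forbid")
    llm: Any = None

StrictAgent(llm="ollama/phi3:latest")  # accepted

try:
    StrictAgent(LLM="ollama/phi3:latest")  # raises immediately
except ValidationError as exc:
    print("rejected unknown parameter 'LLM'")
```

This turns the silent fallback into an immediate, self-explanatory failure at the call site.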

Screenshots/Code snippets

  • When LLM=llm is used, the system ignores the custom LLM and defaults to OpenAI's LLM, prompting for an API key.

Operating System

Windows 11

Python Version

3.12

crewAI Version

0.86.0

crewAI Tools Version

0.17.0

Virtual Environment

Venv

Evidence

With LLM used

The code:
[screenshot]
Terminal output:
[screenshot]

With llm used

The code:
[screenshot]
Terminal output (the code works):
[screenshot]

Possible Solution

The issue can be resolved by:

  1. Standardizing the parameter name in the Agent() class to llm across all usage scenarios.
  2. Adding a validation step in the Agent() initialization to raise an error or warning if an unsupported parameter like LLM is used.
  3. Updating the documentation to clearly state that llm is the correct parameter name.
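A minimal sketch of points 1–2: normalize a miscased `LLM` keyword onto `llm` and emit a warning instead of silently dropping the value (the `normalize_llm_kwargs` helper and `ToyAgent` class are hypothetical, not part of crewAI):

```python
import warnings

def normalize_llm_kwargs(kwargs):
    """Hypothetical helper: map a miscased 'LLM' keyword onto 'llm',
    warning instead of silently dropping the value."""
    if "LLM" in kwargs:
        if "llm" in kwargs:
            raise TypeError("received both 'llm' and 'LLM'; pass only 'llm'")
        warnings.warn(
            "'LLM' is not a supported parameter name; use 'llm' instead",
            DeprecationWarning,
            stacklevel=2,
        )
        kwargs["llm"] = kwargs.pop("LLM")
    return kwargs

class ToyAgent:
    def __init__(self, **kwargs):
        kwargs = normalize_llm_kwargs(kwargs)
        self.llm = kwargs.get("llm")

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    agent = ToyAgent(LLM="ollama/phi3:latest")

print(agent.llm)    # the custom model is preserved, no silent fallback
print(len(caught))  # one DeprecationWarning was emitted
```

Normalization keeps existing (miscased) call sites working during a deprecation window, while the warning points users at the correct spelling.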

Additional context

This issue is critical for users integrating custom LLMs like Ollama and can lead to unnecessary fallback to OpenAI if not addressed.

@srishrachamalla7 srishrachamalla7 added the bug Something isn't working label Dec 30, 2024
devin-ai-integration bot added a commit that referenced this issue Dec 31, 2024
- Add case normalization for 'LLM' parameter with deprecation warning
- Add comprehensive type conversion for LLM parameters
- Add proper error handling for parameter conversion
- Add tests to verify parameter handling

Fixes #1817

Co-Authored-By: Joe Moura <[email protected]>