
Getting a traceback when generating helptext #86

Closed
jherrman opened this issue Nov 7, 2024 · 4 comments


jherrman commented Nov 7, 2024

I'm getting the following error message when running logdetective --help:


Traceback (most recent call last):
  File "/usr/bin/logdetective", line 5, in <module>
    from logdetective.logdetective import main
  File "/usr/lib/python3.12/site-packages/logdetective/logdetective.py", line 6, in <module>
    from logdetective.utils import process_log, initialize_model, retrieve_log_content, format_snippets
  File "/usr/lib/python3.12/site-packages/logdetective/utils.py", line 7, in <module>
    from llama_cpp import Llama
  File "/usr/lib64/python3.12/site-packages/llama_cpp/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/usr/lib64/python3.12/site-packages/llama_cpp/llama_cpp.py", line 1434, in <module>
    @ctypes_function(
     ^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.12/site-packages/llama_cpp/llama_cpp.py", line 122, in decorator
    func = getattr(lib, name)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.12/ctypes/__init__.py", line 392, in __getattr__
    func = self.__getitem__(name)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.12/ctypes/__init__.py", line 397, in __getitem__
    func = self._FuncPtr((name_or_ordinal, self))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: /usr/lib64/libllama.so: undefined symbol: llama_model_apply_lora_from_file


System: Fedora 40
Hardware: ThinkPad T14
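For context on the failure mode: ctypes resolves shared-library symbols lazily on attribute access, so a symbol the library does not export surfaces as exactly this kind of AttributeError. A minimal illustration (using the running process's own symbol table as a stand-in for libllama.so; the symbol name is taken from the traceback above):

```python
import ctypes

# Load the symbols of the running process (dlopen(NULL)) as a stand-in
# for the libllama.so from the report.
lib = ctypes.CDLL(None)

# ctypes looks the name up via dlsym() only when the attribute is first
# accessed; a symbol the library does not export raises AttributeError,
# which is what the traceback above shows for libllama.so.
try:
    lib.llama_model_apply_lora_from_file
except AttributeError as exc:
    print(f"undefined symbol: {exc}")
```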

@TomasTomecek (Collaborator)

Thanks for the report!

The issue is very simple :) the python-llama-cpp library cannot find certain symbols in the llama-cpp library on F40 - the two libraries are out of sync. So I thought I'd just update python-llama-cpp and we'd be good.

Unfortunately, it's not that easy.

llama-cpp was recently updated in f40:

commit 00ddbdbfead3eadaf58d849604254b1b6a2668f3 (HEAD -> f40, origin/f40)
Author: Tom Rix <[email protected]>
Date:   Sat Oct 26 09:00:20 2024 -0700

    Update to b3561

However, even when I update python-llama-cpp to 0.3.1, the latest upstream release, logdetective still picks the older version due to our dependency constraint:

[tool.poetry.dependencies]
llama-cpp-python = "^0.2.56,!=0.2.86"
...

Which causes:

Requirement already satisfied: llama-cpp-python!=0.2.86,<0.3.0,>=0.2.56 in /usr/lib64/python3.13/site-packages (from logdetective==0.2.7) (0.2.75)

Downgrading:
 python3-llama-cpp-python     x86_64     0.2.75-5.fc40   updates   280 k
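For readers unfamiliar with Poetry's caret operator: `^0.2.56` expands to `>=0.2.56,<0.3.0`, which is why pip happily keeps 0.2.75 but refuses the fixed 0.3.1 release. A rough sketch of that semantics (not Poetry's actual resolver, and ignoring the `!=0.2.86` exclusion):

```python
# Sketch of Poetry's caret-range semantics for "^0.2.56",
# which expands to >=0.2.56,<0.3.0.
def satisfies_caret(version: str, base: str = "0.2.56") -> bool:
    v = tuple(int(part) for part in version.split("."))
    lo = tuple(int(part) for part in base.split("."))
    hi = (lo[0], lo[1] + 1, 0)  # the next minor release is excluded
    return lo <= v < hi

print(satisfies_caret("0.2.75"))  # True  - the old F40 package matches
print(satisfies_caret("0.3.1"))   # False - the fixed release is ruled out
```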

So in order to fix F40, we need to:

  1. Update the llama-cpp-python dependency requirement here
  2. Check if we need to update llama-cpp-python itself in F40
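The dependency bump in step 1 might look something like this in pyproject.toml (the exact bound is illustrative; the actual fix may use a different constraint):

```toml
[tool.poetry.dependencies]
# Hypothetical bump: "^0.3.1" expands to >=0.3.1,<0.4.0
llama-cpp-python = "^0.3.1"
```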

jpodivin (Collaborator) commented Nov 8, 2024

Should be helped by #87

@nikromen nikromen moved this from Needs triage to Someday in future in CPT Kanban Nov 13, 2024
@TomasTomecek TomasTomecek self-assigned this Nov 15, 2024
TomasTomecek (Collaborator) commented Nov 15, 2024

Finally figured it out, thanks everyone for help!

https://src.fedoraproject.org/rpms/python-llama-cpp-python/pull-request/10

Will merge, build and do a bodhi update soon.

Smoke-tested in an F40 container and everything worked well:

[root@1f5dc2461be6 src]# logdetective --help
usage: logdetective [-h] [-M MODEL] [-F FILENAME_SUFFIX] [-S SUMMARIZER] [-N N_LINES] [-C N_CLUSTERS] [-v] [-q] file

positional arguments:
  file                  The URL or path to the log file to be analyzed.

options:
  -h, --help            show this help message and exit
  -M MODEL, --model MODEL
                        The path or Hugging Face name of the language model for analysis.
  -F FILENAME_SUFFIX, --filename_suffix FILENAME_SUFFIX
                        Suffix of the model file name to be retrieved from Hugging Face. Makes sense only if the model is specified with Hugging Face name.
  -S SUMMARIZER, --summarizer SUMMARIZER
                        Choose between LLM and Drain template miner as the log summarizer. LLM must be specified as path to a model, URL or local file.
  -N N_LINES, --n_lines N_LINES
                        The number of lines per chunk for LLM analysis. This only makes sense when you are summarizing with LLM.
  -C N_CLUSTERS, --n_clusters N_CLUSTERS
                        Number of clusters for Drain to organize log chunks into. This only makes sense when you are summarizing with Drain
  -v, --verbose
  -q, --quiet

  Installing       : llama-cpp-b3561-1.fc40.x86_64
  Installing       : python3-llama-cpp-python-0.2.87-1.fc40.x86_64
  Installing       : python3-logdetective-0.2.5-1.fc40.noarch

@TomasTomecek (Collaborator)

https://bodhi.fedoraproject.org/updates/FEDORA-2024-25db690b63

Please provide karma if the update fixes the problem.

@jherrman thanks again for reporting!
