fix: fix the chat stuck in infinite loop #1755
base: develop
Conversation
Hi @zoe27! Welcome to the ai16z community. Thanks for submitting your first pull request; your efforts are helping us accelerate towards AGI. We'll review it shortly. You are now an ai16z contributor!
@zoe27 The PR is failing the check at https://github.com/elizaOS/eliza/actions/runs/12597379513/job/35110258487?pr=1755#step:2:8
```ts
const tokens = this.model!.tokenize(context);

// tokenize the words to punish
const wordsToPunishTokens = wordsToPunish
```
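The hunk is truncated at this point. A plausible continuation, shown purely as an illustration (the map/flatten step below is an assumption, not the verbatim diff), would tokenize each punished word and flatten the results into one token list:

```ts
// Assumed continuation of the truncated line above, not the verbatim diff:
// tokenize each word to punish and flatten into a single token list.
const wordsToPunishTokens = wordsToPunish
    .map((word) => this.model!.tokenize(word))
    .flat();
```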
Seems like functionality we'd want to keep; is removing it critical to fixing the infinite loop?
No; I updated it and kept the function.
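For context, a minimal sketch of how the punish tokens can stay in play, assuming node-llama-cpp's `repeatPenalty` option on `sequence.evaluate` (the option names come from that library's API, and the `maxTokens` guard is an assumption, not this diff's literal code):

```ts
// Sketch: keep discouraging unwanted words by passing their tokens through
// the repeatPenalty option while streaming tokens from the sequence.
for await (const token of this.sequence.evaluate(tokens, {
    temperature: Number(temperature),
    repeatPenalty: {
        punishTokens: () => wordsToPunishTokens, // tokens to bias against
        penalty: 1.2, // illustrative penalty strength
    },
})) {
    responseTokens.push(token);
    if (responseTokens.length >= maxTokens) break; // assumed cap: stop endless generation
}
```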
```ts
const responseTokens: Token[] = [];

for await (const token of this.sequence.evaluate(tokens, {
    temperature: Number(temperature),
```
We definitely need it to pay attention to temperature.
Updated it and kept the temperature in the generation function.
Relates to:
the chat stuck in infinite loop when using model_local #1213
Risks
Low
Background
What does this PR do?
This PR aims to fix the AI agent's chat looping on its own responses when using model_local.
What kind of change is this?
It modifies the way responses are generated in llama.ts.
Before this PR:
In this PR:
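A minimal sketch of the shape this change could take, assuming node-llama-cpp's `LlamaChatSession` API; the model path, `userMessage`, `temperature`, and `maxTokens` values below are placeholders, not the PR's literal code:

```ts
import { getLlama, LlamaChatSession } from "node-llama-cpp";

// Sketch: route generation through a chat session with a hard token cap,
// so a reply cannot loop forever. All concrete values are placeholders.
const llama = await getLlama();
const model = await llama.loadModel({ modelPath: "path/to/model.gguf" });
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence(),
});

const response = await session.prompt(userMessage, {
    temperature: Number(temperature), // still honors the temperature setting
    maxTokens: 512, // upper bound on generated tokens
});
```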
Investigation steps:
I have not dug deeply into exactly how `sequence.evaluate` and `chatSession` differ; this PR fixes the loop bug first, and so far I have not found any risk.

Documentation changes needed?
Testing
Where should a reviewer start?
Detailed testing steps
Before
It loops the response, and sometimes the response is hard to understand.
After
The chat seems to behave normally.