Issues: huggingface/peft
A question about input_ids and attention_mask after prefix-tuning (#2304, opened Jan 6, 2025 by MaTengSYSU)
Bug in get_peft_model_state_dict when using VBLoRA (#2302, opened Dec 31, 2024 by KaiyangLi1992)
How to pass in an attention_mask that has one more dimension than input_ids (#2301, opened Dec 31, 2024 by Chinesehou97)
"Target module is not supported" error from load_adapter when using Qwen2-VL (#2296, opened Dec 24, 2024 by bigmouthbabyguo-530)
PEFT model doesn't update params after the LoRA config is changed (#2295, opened Dec 23, 2024 by d-kleine)
Cannot import name 'EncoderDecoderCache' from 'transformers' (#2292, opened Dec 21, 2024 by Huang-jia-xuan)
Inconsistent parameter mismatches after merging PEFT and base models (#2289, opened Dec 19, 2024 by enhulu-ms)
TypeError when running inference with different LoRA adapters in the same batch (#2283, opened Dec 15, 2024 by yuxiang-guo)
Incompatibility of X-LoRA and MistralForSequenceClassification (#2281, opened Dec 13, 2024 by cyx96)
Different results when predicting with multiple LoRA adapters in a loop vs. using only one LoRA (#2270, opened Dec 10, 2024 by beyondguo)
Can't use PromptTuning on multiple GPUs with DeepSpeed and Qwen2.5-14B-Instruct (#2266, opened Dec 9, 2024 by dongshou)
Could you provide example code for AdaLoRA fine-tuning of a decoder-only model? (#2262, opened Dec 5, 2024 by SpeeeedLee)
Is it possible to support Transformer Engine when using LoRA in Megatron? (#2260, opened Dec 5, 2024 by liulong11)
Request to add a LoRA implementation for Conv1d rather than transformers.utils.Conv1d (#2241, opened Nov 28, 2024 by HelloWorldLTY) [contributions-welcome]
Deprecation: Transformers will no longer support past_key_values to be tuples (#1962, opened Jul 26, 2024 by BenjaminBossan) [contributions-welcome, help wanted, wip]
Inference with different LoRA adapters in the same batch does not use the correct modules_to_save classifier (#1960, opened Jul 26, 2024 by saeid93) [contributions-welcome, wip]
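
Several of the issues above (#2283, #1960, #2270) touch PEFT's mixed-batch inference path, where each sample in a batch can be routed to a different LoRA adapter through the `adapter_names` argument. A minimal sketch of that API, assuming a small causal LM and two locally saved adapters at the hypothetical paths ./adapter_a and ./adapter_b:

```python
# Minimal sketch: per-sample LoRA routing in one batch via adapter_names.
# Paths ./adapter_a and ./adapter_b are hypothetical; any saved LoRA adapters work.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default

# Load two adapters onto the same base model under distinct names.
model = PeftModel.from_pretrained(base, "./adapter_a", adapter_name="adapter_a")
model.load_adapter("./adapter_b", adapter_name="adapter_b")
model.eval()

prompts = ["Hello there.", "Bonjour.", "Hola."]
inputs = tokenizer(prompts, return_tensors="pt", padding=True)

# One adapter name per sample; "__base__" bypasses all adapters for that row.
with torch.no_grad():
    out = model.generate(
        **inputs,
        adapter_names=["adapter_a", "adapter_b", "__base__"],
        max_new_tokens=20,
    )
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```

Note that #1960 reports this path ignoring `modules_to_save` layers (e.g. a fine-tuned classifier head), so mixed-batch results can silently differ from single-adapter inference when such layers are present.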
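For #2289 (parameter mismatches after merging), the comparison in question typically runs through `merge_and_unload`, which folds the LoRA deltas into the base weights. A minimal sketch, again assuming a hypothetical local adapter path:

```python
# Minimal sketch: merge a LoRA adapter into the base model's weights.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "./adapter_a")  # hypothetical adapter path

# merge_and_unload returns a plain transformers model with W + BA baked in;
# small numerical drift vs. the unmerged model is expected in low precision.
merged = model.merge_and_unload()
merged.save_pretrained("./merged-model")
```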