An officially supported task in the examples folder
My own task or dataset (give details below)
Reproduction
The adapters are Mistral-7B-v0.1 fine-tuned on the XNLI dataset.
I used the following script to load an X-LoRA version of Mistral-7B with 3 pre-trained adapters:
import torch
from transformers import AutoModelForSequenceClassification, AutoConfig
from peft import XLoraConfig, get_peft_model

# Load model configuration
model_config = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")

# X-LoRA configuration
lora_config = XLoraConfig(
    task_type="SEQ_CLS",
    hidden_size=model_config.hidden_size,
    xlora_depth=2,
    adapters={
        "0": "./mistral_xnli_ckpt/de",
        "1": "./mistral_xnli_ckpt/en",
        "2": "./mistral_xnli_ckpt/fr",
    },
)

# Load and configure the base model
model = AutoModelForSequenceClassification.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    num_labels=3,  # XNLI has 3 labels: entailment, neutral, contradiction
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    use_cache=False,
)

# Explicitly move the model to GPU
device = torch.device("cuda:0")
model = model.to(device)

# Apply X-LoRA
model = get_peft_model(model, lora_config).to(device)
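Before handing the checkpoints to XLoraConfig, it can help to inspect which keys each adapter actually contains. Below is a minimal sketch (the helper name `classifier_head_keys` is mine, and the key names are illustrative, not copied from a real checkpoint; a real checkpoint's state dict would be loaded with e.g. safetensors.torch.load_file):

```python
def classifier_head_keys(state_dict):
    """Return keys that belong to a saved classification head
    (modules_to_save entries) rather than to LoRA A/B matrices."""
    return [k for k in state_dict if "modules_to_save" in k]

# Illustrative key names of the kind a SEQ_CLS LoRA checkpoint may contain:
sd = {
    "base_model.model.model.layers.0.self_attn.q_proj.lora_A.weight": 0,
    "base_model.model.score.modules_to_save.default.weight": 0,
}
print(classifier_head_keys(sd))
```

If this returns any keys, the adapter bundles head weights alongside its LoRA matrices, which matches the shape of the unexpected key in the traceback below.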
Executing the above results in the following error:
Some weights of MistralForSequenceClassification were not initialized from the model checkpoint at mistralai/Mistral-7B-v0.1 and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
File "/home/chenyuxu/XLMoE/mistral_xlora_ft.py", line 51, in <module>
model = get_peft_model(model, lora_config).to(device)
File "/opt/conda/envs/handbook/lib/python3.10/site-packages/peft/mapping.py", line 193, in get_peft_model
return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](
File "/opt/conda/envs/handbook/lib/python3.10/site-packages/peft/peft_model.py", line 1378, in __init__
super().__init__(model, peft_config, adapter_name, **kwargs)
File "/opt/conda/envs/handbook/lib/python3.10/site-packages/peft/peft_model.py", line 171, in __init__
self.base_model = cls(model, {adapter_name: peft_config}, adapter_name)
File "/opt/conda/envs/handbook/lib/python3.10/site-packages/peft/tuners/xlora/model.py", line 279, in __init__
_load_adapter_into_lora_model(
File "/opt/conda/envs/handbook/lib/python3.10/site-packages/peft/tuners/xlora/model.py", line 148, in _load_adapter_into_lora_model
raise ValueError(
ValueError: Got unexpected keys! Please raise an issue and tag @EricLBuehler.
unexpected_keys=['model.model.score.modules_to_save.0.weight']
Expected behavior
Reading the above error message, it seems that MistralForSequenceClassification created and initialized extra weights (score.weight) beyond those provided by "mistralai/Mistral-7B-v0.1". Would registering the newly added weights with X-LoRA solve the issue? Any advice or feedback regarding this is greatly appreciated, thanks!
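If that is indeed the cause, one possible workaround (a sketch, assuming the per-language adapters were trained with modules_to_save=["score"], which the rejected key suggests; the helper name is mine) would be to strip the head entries from each adapter's state dict before loading it into X-LoRA, then restore the head onto the base model separately:

```python
def strip_head_weights(state_dict):
    """Return a copy of the adapter state dict without modules_to_save
    (classification-head) entries, leaving only LoRA weights.
    NOTE: the removed head weights must be loaded onto the base model
    separately, otherwise the classifier stays randomly initialized."""
    return {k: v for k, v in state_dict.items() if "modules_to_save" not in k}

# Illustrative use on dummy keys (a real checkpoint would be loaded with
# e.g. safetensors.torch.load_file and re-saved after filtering):
sd = {
    "base_model.model.model.layers.0.self_attn.q_proj.lora_B.weight": 0,
    "base_model.model.score.modules_to_save.default.weight": 0,
}
print(sorted(strip_head_weights(sd)))
```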
System Info
peft version: 0.13.2
accelerate version: 1.1.1
transformers version: 4.46.3
Python version: 3.10.15
Platform: Linux-5.10.0-33-cloud-amd64-x86_64-with-glibc2.31
Who can help?
@BenjaminBossan @EricLBuehler