[BUG] Invalidate trace cache @ step 10: expected module 11, but got module 19 #6870

Open
yafuly opened this issue Dec 14, 2024 · 6 comments
Labels: bug (Something isn't working), training

yafuly commented Dec 14, 2024

Describe the bug
I'm training Llama-3.1-70B-SFT with DPO using LoRA, with ZeRO-3 enabled. The training log consistently prints "Invalidate trace cache @ step 10: expected module 11, but got module 19" and then gets stuck on that line.

Yet the same training configuration works fine with 7B models, completely bug-free.

Hardware
8 × A100 (80 GB)

Deepspeed Config
{
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "zero_allow_untested_optimizer": true,
  "fp16": {
    "enabled": "auto",
    "loss_scale": 0,
    "loss_scale_window": 1000,
    "initial_scale_power": 16,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "bf16": {
    "enabled": "auto"
  },
  "zero_optimization": {
    "stage": 3,
    "overlap_comm": true,
    "contiguous_gradients": true,
    "sub_group_size": 1e9,
    "reduce_bucket_size": "auto",
    "stage3_param_persistence_threshold": "auto",
    "stage3_gather_16bit_weights_on_model_save": true,
    "stage3_prefetch_bucket_size": 0,
    "stage3_max_live_parameters": 0,
    "stage3_max_reuse_distance": 0
  }
}
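
For context, a minimal standalone sketch (not from the report) of feeding an equivalent ZeRO-3 config to deepspeed.initialize. The "auto" values above are only resolved by the HF Trainer integration, so concrete placeholder numbers are used here, and a toy module stands in for the LoRA-wrapped model:

```python
# Standalone sketch of the ZeRO-3 setup above, for context only.
# Assumptions (not from the report): "auto" fields replaced with concrete
# values, toy Linear in place of the 70B model.
# Launch with the DeepSpeed launcher, e.g.: deepspeed --num_gpus=8 repro.py
import deepspeed
import torch

ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 8,
    "zero_allow_untested_optimizer": True,
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "overlap_comm": True,
        "contiguous_gradients": True,
        "sub_group_size": 1e9,
        # Prefetch/caching knobs copied from the report's config (all disabled):
        "stage3_prefetch_bucket_size": 0,
        "stage3_max_live_parameters": 0,
        "stage3_max_reuse_distance": 0,
    },
}

model = torch.nn.Linear(4096, 4096)  # placeholder for the LoRA-wrapped model
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-6)

engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    optimizer=optimizer,
    config=ds_config,
)
```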

JinXins commented Dec 15, 2024

same issue.

tjruwase (Contributor) commented

@yafuly, @JinXins can you provide full repro steps, including scripts and command line? Thanks!

liranringel commented

Same here

DW934 commented Dec 26, 2024

same issue

tjruwase (Contributor) commented

@liranringel and @DW934 can you share full repro steps?

DW934 commented Dec 30, 2024

> @liranringel and @DW934 can you share full repro steps?

### model
model_name_or_path: /home/models/qwen25_32B_lora
trust_remote_code: true

### method
stage: dpo
do_train: true
finetuning_type: lora
lora_target: all
pref_beta: 0.1
pref_loss: sigmoid  # choices: [sigmoid (dpo), orpo, simpo]
pref_ftx: 0.1

### dataset
dataset: wmtbio24
template: qwen
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: /home/models/qwen25_32B-lora-dpo-1epoch-bs1-half-pref
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
report_to: wandb
run_name: dpo-1epoch-bs1-half-pref

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 5.0e-6
num_train_epochs: 1
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
eval_on_start: true

flash_attn: auto
deepspeed: /home/dw/RLHF/LLaMA-Factory/examples/deepspeed/ds_z3_config.json

Result:
[screenshot of the training output attached in the original comment]
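
For completeness, a hedged sketch of how a YAML config like the one above is typically handed to LLaMA-Factory's `llamafactory-cli train` entry point; the filename below is a placeholder, not from the report:

```python
# Hypothetical launcher: pass the YAML above to LLaMA-Factory's CLI.
# "qwen25_32b_lora_dpo.yaml" is a placeholder filename (assumption).
import subprocess

subprocess.run(
    ["llamafactory-cli", "train", "qwen25_32b_lora_dpo.yaml"],
    check=True,
)
```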
