Pull requests: PaddlePaddle/PaddleNLP

Bugfix update predictor.py (#9742, opened Jan 3, 2025 by ZHUI)
[Unified Checkpoint] Fix expert parallel (#9741, opened Jan 3, 2025 by DesmonDay)
[CI]fix requirements (#9740, opened Jan 3, 2025 by Liujie0926)
[CI]fix requirements&codestyle (#9739, opened Jan 3, 2025 by Liujie0926)
[LLM] Add DeepseekV3 (#9738, opened Jan 3, 2025 by DrownFish19)
Adapt to new npu flash_attention api (#9735, opened Jan 3, 2025 by will-jl944)
Update ci_unit.sh (#9733, opened Jan 2, 2025 by ZHUI)
[llm]add adam (#9732, opened Jan 2, 2025 by lugimzzz)
[New Features]Add lorapro (#9729, opened Jan 2, 2025 by greycooker)
Auto sft (#9728, opened Jan 2, 2025 by blacksheep-Aristotle)
fix auto tokenizer (#9726, opened Jan 2, 2025 by lyuwenyu)
[Embedding] update embedding document (#9724, opened Jan 2, 2025 by DesmonDay)
support HF tokenizer and make compatible with vllm (#9723, opened Jan 2, 2025 by ming1753)
[LLM Benchmark]optimize runtime (#9722, opened Dec 31, 2024 by Liujie0926)
add XLM-RoBERTa in paddlenlp (#9720, opened Dec 31, 2024 by jie-z-0607)
add enable_offload_queue to PipelineParallel (#9708, opened Dec 27, 2024 by GuoxiaWang)
mergekit gpu 1226 (#9702, opened Dec 26, 2024 by Mangodadada)
Unified amp strategy in auto_trainner (#9696, opened Dec 25, 2024 by From00)
Add Llama2 and Qwen2.5 Pretrain Configurations (#9694, opened Dec 25, 2024 by sneaxiy)