
Enable sequence packing with FlashAttention-2 #41

Draft · wants to merge 4 commits into base: v0.2
Conversation

taha-yassine

Currently, datasets are prepared for caching using transformer_lens' tokenize_and_concatenate(). This is problematic because sequences are concatenated as-is, with no special handling to avoid cross-sequence attention contamination. In addition, sequences are separated by EOS tokens, which is not ideal when training SAEs.

An alternative would be to have one sequence per sample in each batch, but this requires padding, which wastes GPU resources and is thus sub-optimal.

This PR adds the ability to "pack" sequences together: the sequences in a batch are concatenated into a single long sample, avoiding padding entirely. To avoid attention contamination, FlashAttention-2 is used with a position_ids argument, which removes the need to materialize attention masks in memory (impractical at this scale). Using FA2 also brings a speed boost, which is always welcome.
For additional details, see: https://huggingface.co/blog/packing-with-FA2
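
To make the mechanism concrete, here is a minimal sketch (not this PR's code) of packing a few tokenized sequences into one row and letting FlashAttention-2 separate them via position_ids. The Pythia checkpoint is only a placeholder, and the example assumes the model's FA2 path supports position_ids-based packing (which is exactly what the patch mentioned below enables for GPT-NeoX/GPT-2); it also requires flash-attn and a CUDA device.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model: any FA2-capable causal LM whose FA2 path honors
# position_ids-based packing.
model_name = "EleutherAI/pythia-70m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16,
).to("cuda")

texts = ["First document.", "A second, longer document goes here.", "Third one."]
encoded = [tokenizer(t, add_special_tokens=False)["input_ids"] for t in texts]

# Pack all sequences into a single row: no padding, no EOS separators.
input_ids = torch.tensor([sum(encoded, [])], device="cuda")

# position_ids restart at 0 at each sequence boundary; the FA2 kernel uses
# these boundaries so attention never crosses from one sequence into another.
position_ids = torch.tensor(
    [[i for seq in encoded for i in range(len(seq))]], device="cuda"
)

with torch.no_grad():
    out = model(input_ids=input_ids, position_ids=position_ids)
print(out.logits.shape)  # (1, total_tokens, vocab_size)
```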

Currently, FA2 with position_ids is only implemented for some models in transformers. I'm working on a patch to bring it to more models, specifically GPT-NeoX-based models (e.g., Pythia) and GPT-2. Until it's upstreamed, this PR uses my fork of the library.

Things done

  • Add the dataloader for handling sequence packing (load_dataset())
  • Eventually remove load_tokenized_data()
  • Upstream FA2 patch to transformers
  • Dynamically pack sequences to have a consistent number of tokens per batch (challenging; see the sketch after this list)
  • Update caching to work with the new dataset format
  • Update examples
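
As a rough illustration of the dynamic-packing item, a simple greedy packer (a hypothetical helper, not this PR's dataloader) only guarantees *at most* a given token budget per batch; hitting the budget exactly would require splitting or buffering sequences across batches, which is what makes that item challenging.

```python
from typing import Iterable, Iterator


def pack_to_token_budget(
    tokenized_seqs: Iterable[list[int]],
    max_tokens: int,
) -> Iterator[tuple[list[int], list[int]]]:
    """Greedily pack tokenized sequences into batches of at most `max_tokens`
    tokens, yielding (input_ids, position_ids) pairs.

    Hypothetical helper for illustration only; sequences longer than the
    budget are truncated rather than split across batches.
    """
    input_ids: list[int] = []
    position_ids: list[int] = []
    for seq in tokenized_seqs:
        seq = seq[:max_tokens]
        if input_ids and len(input_ids) + len(seq) > max_tokens:
            yield input_ids, position_ids
            input_ids, position_ids = [], []
        input_ids.extend(seq)
        position_ids.extend(range(len(seq)))  # positions restart per sequence
    if input_ids:
        yield input_ids, position_ids


# Example usage: batches = list(pack_to_token_budget(tokenized_dataset, max_tokens=2048))
```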

taha-yassine marked this pull request as draft on December 17, 2024 at 21:57
CLAassistant commented Dec 17, 2024

CLA assistant check: All committers have signed the CLA.