TE integration via full TransformerLayer #1297

Open · wants to merge 1 commit into base: main
Conversation

@tf-nv (Contributor) commented Sep 30, 2024

This is a sketch that extends the attention-picking mechanism ("global", "flash", and now "TE") to use the high-level TransformerLayer from TransformerEngine. It is more of a prototype to show that integration with DeepSpeed is possible and what performance to expect.
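For concreteness, the "TE" branch builds roughly the following (the positional arguments follow TE's public API; the neox_args field names are assumptions for illustration, not this PR's exact code):

```python
# Minimal sketch: when the attention config selects "TE", construct
# TransformerEngine's high-level layer instead of the native transformer layer.
import transformer_engine.pytorch as te

def build_te_layer(neox_args, layer_number):
    # ffn_hidden_size = 4 * hidden_size is the classic GPT-2 ratio.
    return te.TransformerLayer(
        neox_args.hidden_size,
        4 * neox_args.hidden_size,
        neox_args.num_attention_heads,
        layer_number=layer_number,
        self_attn_mask_type="causal",
    )
```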

Things that work:

  1. Training a 22B GPT-2-style model on multiple DGX H100 nodes with ZeRO stage 1 and TP=2 (BF16)
  2. TE attention achieves 5% higher TFLOPS than flash attention in BF16, and 70% higher in FP8 (for the 22B model)
  3. Activation checkpointing from TE (see the sketch below)
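On point 3, the TE-side entry point is roughly te.checkpoint; the call shape below is an assumption based on transformer_engine.pytorch's documented checkpoint API, and how this PR hooks it into DeepSpeed's schedule is not shown:

```python
# Hedged sketch: te.checkpoint recomputes the layer's forward during the
# backward pass instead of storing its activations.
import transformer_engine.pytorch as te

hidden_states = te.checkpoint(layer, hidden_states, attention_mask)
```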

Many aspects are hardcoded, e.g. RoPE and activation checkpointing cannot be reconfigured from the config files. #1282 is much more elaborate in that it exposes TE layers at a much lower level. In the meantime, this PR could serve as a benchmark showing what is possible with TE on a classic GPT-2-style network.

I kept the implementation as minimal as possible, so there is headroom for further performance depending on the workload, e.g. sequence parallelism and different memory layouts (see the sketch below).
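Both are existing TransformerLayer options that this PR leaves at their defaults; the kwarg names are per TE's docs, and whether they compose cleanly with this integration is untested:

```python
# Hedged sketch of headroom left on the table (kwargs per TE docs, untested here;
# sequence parallelism additionally needs the TP group plumbed through).
layer = te.TransformerLayer(
    neox_args.hidden_size,
    4 * neox_args.hidden_size,
    neox_args.num_attention_heads,
    set_parallel_mode=True,    # required for TE's tensor/sequence parallelism
    sequence_parallel=True,    # shard layernorm/dropout activations over the sequence dim
    attn_input_format="bshd",  # alternative activation memory layout (default "sbhd")
)
```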

The Dockerfile now uses a later NGC PyTorch container and installs a later DeepSpeed tag from source for compatibility.

@Quentin-Anthony (Member) commented

Will merge this after the fine-grained TE PR.

"The mask will be discarded")
hidden_states, attention_mask = args

fp8_format = Format.HYBRID # E4M3 during forward pass, E5M2 during backward pass
@Quentin-Anthony (Member): This should instead be read from neox_args via the new te_fp8_format argument.
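Something like the following, where the "hybrid"/"e4m3" strings and the recipe values are illustrative rather than this PR's actual defaults:

```python
# Sketch of the suggested change: map a neox_args.te_fp8_format string to TE's
# Format enum instead of hardcoding Format.HYBRID.
from transformer_engine.common.recipe import DelayedScaling, Format

fp8_format = {
    "hybrid": Format.HYBRID,  # E4M3 during forward, E5M2 during backward
    "e4m3": Format.E4M3,      # E4M3 in both passes
}[neox_args.te_fp8_format]

fp8_recipe = DelayedScaling(
    fp8_format=fp8_format, amax_history_len=16, amax_compute_algo="max"
)
# The forward pass then runs under te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe).
```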

@@ -271,6 +272,24 @@ def init_specs(self):
                    layer_number=i,
                )
            )
        elif layer_type in ["TE"]:
@Quentin-Anthony (Member): This needs to be tested with PP and TP, since we'd be relying on two external codebases (DeepSpeed for PP, TE for TP) whose topologies probably don't play nicely together.
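The coupling point is roughly the TP group handoff below; the mpu calls exist in gpt-neox, but wiring them into the TE layer spec is an assumption here, and the interaction with DeepSpeed's pipeline topology is exactly what remains untested:

```python
# Hedged sketch: TE only knows about the tensor-parallel group it is handed,
# while DeepSpeed's pipeline engine builds its own topology around the layers.
from megatron import mpu
import transformer_engine.pytorch as te

layer = te.TransformerLayer(
    neox_args.hidden_size,
    4 * neox_args.hidden_size,
    neox_args.num_attention_heads,
    set_parallel_mode=True,
    tp_group=mpu.get_model_parallel_group(),
    tp_size=mpu.get_model_parallel_world_size(),
)
```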


RUN DS_BUILD_FUSED_LAMB=1 DS_BUILD_FUSED_ADAM=1 DS_BUILD_TRANSFORMER=1 DS_BUILD_STOCHASTIC_TRANSFORMER=1 DS_BUILD_UTILS=1 \
TORCH_CUDA_ARCH_LIST="8.0 9.0+PTX" \
python -m pip install git+https://github.com/microsoft/[email protected]
@Quentin-Anthony (Member): This should probably install the latest DeeperSpeed instead, and we can't hardcode the arch list.
