Does transformers utilize PyTorch SDPA's flash_attention for openai/gpt-oss-20b?
I'm investigating whether the flash attention backend of PyTorch's scaled_dot_product_attention (SDPA) is used when running the openai/gpt-oss-20b model via the transformers library. How can I verify this behavior? I'm looking for methods, code snippets, or official documentation that confirm whether this optimization is active by default, or whether specific configuration is required to enable it.
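For reference, this is the kind of check I had in mind, a minimal sketch that inspects the (internal, version-dependent) config attribute _attn_implementation after loading:

```python
from transformers import AutoModelForCausalLM

# Load the model and inspect which attention implementation transformers
# resolved for it. _attn_implementation is an internal attribute, so this
# is only a rough sanity check, not an official API.
model = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",
    torch_dtype="auto",
    device_map="auto",
)
print(model.config._attn_implementation)  # e.g. "eager", "sdpa", or "flash_attention_2"
```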
You can switch between the various attn_implementation options as documented here: https://huggingface.co/docs/transformers/en/main_classes/model#transformers.PreTrainedModel.from_pretrained.attn_implementation
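For example, something like this sketch; whether a particular value is accepted depends on the architecture (and as the reply below shows, gpt-oss currently rejects "sdpa"):

```python
from transformers import AutoModelForCausalLM

# Request a specific attention backend at load time. Common values are
# "eager", "sdpa", and "flash_attention_2" (the last requires the
# flash-attn package to be installed).
model = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",
    attn_implementation="sdpa",
    torch_dtype="auto",
    device_map="auto",
)
```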
It looks like the gpt-oss model doesn't support SDPA. I got:
raise ValueError(
ValueError: GptOssForCausalLM does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please request the support for this architecture: https://github.com/huggingface/transformers/issues/28005. If you believe this error is a bug, please open an issue in Transformers GitHub repository and load your model with the argument attn_implementation="eager"
meanwhile. Example: model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="eager")
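So, following the error's own suggestion, the workaround for now is to load gpt-oss with the eager attention implementation, roughly like this sketch:

```python
from transformers import AutoModelForCausalLM

# Fall back to the eager attention implementation until SDPA support is
# added for the GptOss architecture.
model = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",
    attn_implementation="eager",
    torch_dtype="auto",
    device_map="auto",
)
print(model.config._attn_implementation)  # should print "eager"
```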