Dataset columns (length statistics):

| column | statistic | min | max |
|---|---|---|---|
| url | stringlengths | 66 | 66 |
| text | stringlengths | 141 | 41.9k |
| num_labels | sequencelengths | 1 | 8 |
| arr_labels | sequencelengths | 82 | 82 |
| labels | sequencelengths | 1 | 8 |
https://api.github.com/repos/huggingface/transformers/issues/36111
TITLE Add Deepseek-VL2 COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description Deepseek-VL2: - [paper](https://arxiv.org/abs/2412.10302) - [code](https://github.com/deepseek-ai/DeepSeek-VL2) - [weights](https://huggingface.co/collections/deepseek-ai/deepseek-vl2-675c22accc456d3beb4613ab) ### Open source status - [ ] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
[ 77 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/35778
TITLE Add ColQwen2 to 🤗 transformers COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 3 eyes: 0 BODY # What does this PR do? Add ColQwen2 to 🤗 `transformers`. ColQwen2 is a model that uses the [ColPali](https://doi.org/10.48550/arXiv.2407.01449) architecture with a Qwen2-VL backbone. **⚠️ This PR is still a WORK IN PROGRESS.** ## Who can review? @yonigozlan again 😉 (I'll add Arthur when the PR is functional) ## Additional details - This PR uses the new [Modular 🤗 transformers](https://huggingface.co/docs/transformers/main/en/modular_transformers#modular-transformers) - The ColPali implementation is mainly inspired by the [colpali-engine](https://github.com/illuin-tech/colpali) repository I'm maintaining with my co-authors. The initial code was taken from `colpali-engine==v0.3.6`. - [WIP] The newly converted model weights are stored in [`vidore/colqwen-v1.0-hf`](https://huggingface.co/vidore/colqwen-v1.0-hf). ## Progress checklist ## TODO - [x] (Optional) Understood the model’s theoretical aspects - [x] Prepared 🤗 Transformers dev environment - [x] Set up debugging environment of the original repository - [ ] Created script that successfully runs the forward() pass using the original repository and checkpoint - [ ] Successfully added the model skeleton to 🤗 Transformers - [ ] Successfully converted original checkpoint to 🤗 Transformers checkpoint - [ ] Successfully ran forward() pass in 🤗 Transformers that gives identical output to original checkpoint - [ ] Finished model tests in 🤗 Transformers - [x] Successfully added tokenizer in 🤗 Transformers - [ ] Run end-to-end integration tests - [ ] Finished docs - [ ] Uploaded model weights to the Hub - [ ] Submitted the pull request - [ ] (Optional) Added a demo notebook → can be found in https://github.com/tonywu71/colpali-cookbooks
[ 77, 12 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ "New model", "Multimodal" ]
https://api.github.com/repos/huggingface/transformers/issues/36102
TITLE Training loss not showing with Trainer COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info Python 3.11.11 transformers 4.48.2 ### Who can help? @muellerzr @SunMarc ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction # Define the TrainingArguments for fine-tuning training_args = TrainingArguments( output_dir='/content/drive/MyDrive/Legal_Dataset/BartlargeFineTuned/', num_train_epochs=10, per_device_train_batch_size=10, gradient_accumulation_steps=8, evaluation_strategy="epoch", save_total_limit=1, save_steps=1000, learning_rate=1e-3, do_train=True, do_eval=True, remove_unused_columns=False, push_to_hub=False, report_to='tensorboard', load_best_model_at_end=False, lr_scheduler_type="cosine_with_restarts", warmup_steps=100, weight_decay=0.01, logging_dir='/content/drive/MyDrive/Legal_Dataset/BartlargeFineTuned/', logging_steps=200, ) # Create a data collator for sequence-to-sequence tasks data_collator = MyDataCollatorForSeq2Seq( tokenizer=tokenizer, model=model, padding=False, max_length=80, label_pad_token_id=tokenizer.pad_token_id, ) # Create Trainer trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=validation_dataset, optimizers=(custom_optimizer, None), ) trainer.train() ### Expected behavior I trained the model for 10 epochs, but in every epoch I saw only the validation loss, not the training loss. Please help. ![Image](https://github.com/user-attachments/assets/e535a5f3-4d6f-4c96-b92a-2e1e2723fb46)
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/35524
TITLE Warning 'The attention mask is not set' COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Having the same warning appearing in a closed pull request #33509 ### System Info - `transformers` version: 4.47.1 - Platform: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.39 - Python version: 3.12.3 - Huggingface_hub version: 0.27.0 - Safetensors version: 0.5.0 - Accelerate version: 1.2.1 - Accelerate config: not found - PyTorch version (GPU?): 2.5.1+cu124 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: no - Using GPU in script?: yes - GPU type: NVIDIA RTX 4000 Ada Generation Laptop GPU ### Who can help? @ylacombe ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction code: ```python pipe = pipeline( "automatic-speech-recognition", model=self.model, torch_dtype=torch.float16, chunk_length_s=30, batch_size=24, return_timestamps=True, device=self.device, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, model_kwargs={"use_flash_attention_2": True}, generate_kwargs={ "max_new_tokens": 128, }, ) ``` warning: ``` The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results. Whisper did not predict an ending timestamp, which can happen if audio is cut off in the middle of a word. Also make sure WhisperTimeStampLogitsProcessor was used during generation. ``` ### Expected behavior No warning
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/34107
TITLE How to specify customized force_token_ids in Whisper COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ``` ValueError: A custom logits processor of type <class 'transformers.generation.logits_process.ForceTokensLogitsProcessor'> with values <transformers.generation.logits_process.ForceTokensLogitsProcessor object at 0x7f4230cfac50> has been passed to `.generate()`, but it has already been created with the values <transformers.generation.logits_process.ForceTokensLogitsProcessor object at 0x7f422829c510>. <transformers.generation.logits_process.ForceTokensLogitsProcessor object at 0x7f422829c510> has been created by passing the corresponding arguments to generate or by the model's config default values. If you just want to change the default values of logits processor consider passing them as arguments to `.generate()` instead of using a custom logits processor ``` This approach doesn't work: ``` inputs = inputs.to(self.model.dtype) with torch.no_grad(): if forced_decoder_ids is not None: generated_ids = self.model.generate( inputs, forced_decoder_ids=forced_decoder_ids ) else: generated_ids = self.model.generate(inputs) ```
[ 18, 43 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Generation", "Audio" ]
https://api.github.com/repos/huggingface/transformers/issues/34817
TITLE Mamba2 `torch_forward` reduction dimension possibly incorrect? COMMENTS 7 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 1 BODY ### System Info NA ### Who can help? @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction NA ### Expected behavior In the `torch_forward` part of Mamba2, it seems like the reduction dimension should be `dim=3` instead of `dim=2`? https://github.com/huggingface/transformers/blob/30335093276212ce74938bdfd85bfd5df31a668a/src/transformers/models/mamba2/modeling_mamba2.py#L560 with `dim=3`, the output seems to more or less match that of Mamba-2's [`ssd_minimal`](https://github.com/state-spaces/mamba/blob/main/mamba_ssm/modules/ssd_minimal.py) implementation, but not with `dim=2`
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/36194
TITLE AutoProcessor loading error COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info Related Issues and PR: #34307 https://github.com/huggingface/transformers/pull/36184 - `transformers` version: 4.49.0.dev0 - Platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35 - Python version: 3.10.16 - Huggingface_hub version: 0.27.1 - Safetensors version: 0.5.2 - Accelerate version: 1.0.1 - Accelerate config: not found - PyTorch version (GPU?): 2.6.0+cu126 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: NVIDIA H100 80GB HBM3 ### Who can help? @Rocketknight1 ### Information - [x] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Here are the reproduction steps: 1. choose an MLLM like Qwen2.5-VL and download its config file 2. derive its image processor, processor, and model 3. modify the config file and try to load it with AutoProcessor's `from_pretrained` 4. the error occurs, as in #34307 ```python from transformers import Qwen2_5_VLProcessor, Qwen2_5_VLImageProcessor, Qwen2_5_VLForConditionalGeneration, Qwen2_5_VLConfig class NewProcessor(Qwen2_5_VLProcessor): image_processor_class = "NewImageProcessor" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) class NewImageProcessor(Qwen2_5_VLImageProcessor): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) class NewConfig(Qwen2_5_VLConfig): model_type = "new_model" class NewModel(Qwen2_5_VLForConditionalGeneration): config_class = NewConfig def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) from transformers import AutoModel, AutoImageProcessor, AutoProcessor AutoImageProcessor.register(NewModel.config_class, NewImageProcessor) AutoProcessor.register(NewModel.config_class, NewProcessor) AutoModel.register(NewModel.config_class, NewModel) if __name__ == "__main__": processor = NewProcessor.from_pretrained("path/to/NewModel_config/") ``` modified config ``` config.json: "architectures": [ "NewModel" ], "model_type": "new_model", preprocessor_config.json: "image_processor_type": "NewImageProcessor", "processor_class": "NewProcessor" ``` I also checked the PR https://github.com/huggingface/transformers/pull/36184; it didn't work, because the function _get_class_from_class_name uses a mapping whose keys are strings rather than Config classes ### Expected behavior None
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/35108
TITLE Training config that worked with transformers v4.4.6.3 results in OOM error with v4.47.0 (using SFTTrainer) COMMENTS 6 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info ``` - `transformers` version: 4.47.0 - Platform: Linux-6.8.0-1015-aws-x86_64-with-glibc2.35 - Python version: 3.12.6 - Huggingface_hub version: 0.26.2 - Safetensors version: 0.4.5 - Accelerate version: 1.1.1 - Accelerate config: not found - PyTorch version (GPU?): 2.5.1+cu124 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: Yes - Using GPU in script?: Yes - GPU type: NVIDIA A100-SXM4-40GB ``` ### Who can help? @ArthurZucker @SunMarc @muellerz ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Training with transformers==4.46.3 runs as expected. Upgrading to transformers==4.47.0 (without changing anything else) leads to an OOM error in the very first training step (see stack trace below). Run command: `accelerate launch --config_file ./accelerate_config.yaml train.py training=path/to/training_config` ### Accelerate Config ``` compute_environment: LOCAL_MACHINE debug: false distributed_type: FSDP downcast_bf16: 'no' fsdp_config: fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP fsdp_backward_prefetch: BACKWARD_PRE fsdp_cpu_ram_efficient_loading: true fsdp_forward_prefetch: false fsdp_offload_params: false fsdp_sharding_strategy: FULL_SHARD fsdp_state_dict_type: FULL_STATE_DICT fsdp_sync_module_states: true fsdp_use_orig_params: false activation_checkpointing: true machine_rank: 0 main_training_function: main mixed_precision: 'bf16' num_machines: 1 num_processes: 8 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false ``` ### Training Config ``` {'accelerator_config': {'dispatch_batches': None, 'even_batches': True, 'gradient_accumulation_kwargs': None, 'non_blocking': False, 'split_batches': False, 'use_seedable_sampler': True}, 'adafactor': False, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'attn_implementation': 'flash_attention_2', 'auto_find_batch_size': False, 'average_tokens_across_devices': False, 'batch_eval_metrics': False, 'bf16': 'auto', 'bf16_full_eval': False, 'chars_per_token': '<CHARS_PER_TOKEN>', 'data_seed': None, 'dataloader_drop_last': False, 'dataloader_num_workers': 0, 'dataloader_persistent_workers': False, 'dataloader_pin_memory': True, 'dataloader_prefetch_factor': None, 'dataset_batch_size': 1000, 'dataset_kwargs': {'skip_prepare_dataset': False}, 'ddp_backend': None, 'ddp_broadcast_buffers': None, 'ddp_bucket_cap_mb': None, 'ddp_find_unused_parameters': None, 'ddp_timeout': 1800, 'debug': [], 'deepspeed': None, 'delete_ckpts': False, 'disable_tqdm': False, 'dispatch_batches': None, 'do_eval': True, 'do_predict': False, 'do_train': False, 'early_stopping_patience': 10, 'eval_accumulation_steps': None, 'eval_delay': 0, 'eval_do_concat_batches': True, 'eval_exampleset_info_path': '', 'eval_exampleset_path': '', 'eval_on_start': True, 'eval_packing': False, 'eval_steps': 10, 'eval_strategy': 'steps', 'eval_use_gather_object': False, 'evaluation_strategy': None, 'exampleset_info_path': '', 'exampleset_path': '', 'force_tokenize_data': 
False, 'fp16': False, 'fp16_backend': 'auto', 'fp16_full_eval': False, 'fp16_opt_level': 'O1', 'fsdp': [], 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False, 'xla_fsdp_v2': False}, 'fsdp_min_num_params': 0, 'fsdp_transformer_layer_cls_to_wrap': None, 'full_determinism': False, 'gradient_accumulation_steps': 4, 'gradient_checkpointing': False, 'gradient_checkpointing_kwargs': {'use_reentrant': False}, 'greater_is_better': False, 'group_by_length': False, 'half_precision_backend': 'auto', 'hub_always_push': False, 'hub_model_id': None, 'hub_private_repo': None, 'hub_strategy': 'every_save', 'hub_token': '<HUB_TOKEN>', 'ignore_data_skip': False, 'include_for_metrics': [], 'include_inputs_for_metrics': False, 'include_num_input_tokens_seen': False, 'include_tokens_per_second': False, 'jit_mode_eval': False, 'label_names': ['labels'], 'label_smoothing_factor': 0.0, 'learning_rate': 0.0002, 'length_column_name': 'length', 'load_best_model_at_end': True, 'local_rank': 0, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_first_step': False, 'logging_nan_inf_filter': True, 'logging_steps': 1, 'logging_strategy': 'steps', 'lora_alpha': 32, 'lora_dropout': 0.05, 'lora_r': 16, 'lora_target_modules': ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'up_proj', 'down_proj', 'gate_proj'], 'lr_scheduler_kwargs': {}, 'lr_scheduler_type': 'cosine', 'mask_instructions': True, 'max_grad_norm': 1.0, 'max_seq_length': 1024, 'max_steps': 100, 'meta_data': {}, 'metric_for_best_model': 'loss', 'model_name_or_path': 'Qwen/Qwen2.5-7B-Instruct', 'mp_parameters': '', 'neftune_noise_alpha': None, 'no_cuda': False, 'num_of_sequences': 1024, 'num_train_epochs': 3, 'optim': 'adamw_torch', 'optim_args': None, 'optim_target_modules': None, 'overwrite_output_dir': False, 'packing': False, 'past_index': -1, 'per_device_eval_batch_size': 1, 'per_device_train_batch_size': 1, 'per_gpu_eval_batch_size': None, 'per_gpu_train_batch_size': None, 'prediction_loss_only': False, 'push_to_hub': False, 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>', 'ray_scope': 'last', 'remove_unused_columns': True, 'restore_callback_states_from_checkpoint': False, 'resume_from_checkpoint': None, 'save_on_each_node': False, 'save_only_model': False, 'save_safetensors': True, 'save_steps': 20, 'save_strategy': 'steps', 'save_total_limit': None, 'seed': 42, 'skip_memory_metrics': True, 'smoke_test': False, 'split_batches': None, 'tf32': None, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'torch_dtype': 'bfloat16', 'torch_empty_cache_steps': None, 'torchdynamo': None, 'tpu_metrics_debug': False, 'tpu_num_cores': None, 'use_cpu': False, 'use_ipex': False, 'use_legacy_prediction_loop': False, 'use_liger_kernel': False, 'use_mps_device': False, 'use_peft': False, 'val_set_size': 0.0, 'warmup_ratio': 0.1, 'warmup_steps': 0, 'weight_decay': 0.0} ``` ### Training script ``` def main(cfg): accelerator = Accelerator() model_kwargs = dict( attn_implementation=sft_config.attn_implementation, torch_dtype=sft_config.torch_dtype, use_cache=False, ) model = AutoModelForCausalLM.from_pretrained(sft_config.model_name_or_path, **model_kwargs) tokenizer = AutoTokenizer.from_pretrained(sft_config.model_name_or_path, use_fast=True) tokenizer.pad_token = tokenizer.eos_token trainer = SFTTrainer( model=model, tokenizer=tokenizer, args=sft_config, train_dataset=train_dataset, eval_dataset=eval_dataset, peft_config=None, 
dataset_kwargs=sft_config.dataset_kwargs, ) trainer.train() trainer.save_model() if __name__ == "__main__": main() ``` ### Stack trace ``` Traceback (most recent call last): File "/home/ubuntu/***/train.py", line 233, in main trainer.train() File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 2164, in train return inner_training_loop( ^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 2522, in _inner_training_loop tr_loss_step = self.training_step(model, inputs, num_items_in_batch) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 3653, in training_step loss = self.compute_loss(model, inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 3709, in compute_loss outputs = model(**inputs) ^^^^^^^^^^^^^^^ File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 864, in forward output = self._fsdp_wrapped_module(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/accelerate/utils/operations.py", line 823, in forward return model_forward(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/accelerate/utils/operations.py", line 811, in __call__ return convert_to_fp32(self.model_forward(*args, **kwargs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 1184, in forward loss = self.loss_function(logits, labels, self.vocab_size, **loss_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/transformers/loss/loss_utils.py", line 36, in ForCausalLMLoss logits = logits.float() ^^^^^^^^^^^^^^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.97 GiB. GPU 5 has a total capacity of 39.38 GiB of which 1.53 GiB is free. Including non-PyTorch memory, this process has 37.84 GiB memory in use. Of the allocated memory 35.69 GiB is allocated by PyTorch, and 521.06 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) ``` ### Expected behavior Training should complete without errors.
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/33445
TITLE Whisper Beam Search doesn't work COMMENTS 4 REACTIONS +1: 1 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 1 BODY ### System Info ``` - `transformers` version: 4.45.0.dev0 - Platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - Huggingface_hub version: 0.24.7 - Safetensors version: 0.4.5 - Accelerate version: 0.34.2 - Accelerate config: not found - PyTorch version (GPU?): 2.4.1+cu121 (True) - Tensorflow version (GPU?): 2.15.1 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu) - Jax version: 0.4.13 - JaxLib version: 0.4.13 - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: NVIDIA GeForce GTX 1070 Ti ``` ### Who can help? @ylacombe @eustlb ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Download an audio sample https://drive.google.com/file/d/1eVeFUyfHWMpmFSRYxmBWaNe_JLEQqT8G/view?usp=sharing 2. Use transformers v4.41 + my fix from #32970 (it allows to output sequence_score) 3. Run the code below to get 5 hypotheses of Beam Search on audio transcription ```python from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq import torch import librosa # Load the processor and model processor = AutoProcessor.from_pretrained("openai/whisper-tiny") model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny") # Load and preprocess the audio file audio_path = "audio.mp3" audio, sr = librosa.load(audio_path, sr=16000) # Ensure the sample rate is 16kHz # Preprocess the audio to get the input features inputs = processor(audio, sampling_rate=16000, return_tensors="pt") # Generate the transcription using Beam Search with the model beam_outputs = model.generate( inputs["input_features"], num_beams=5, # Number of beams num_return_sequences=5, # Number of hypotheses to return early_stopping=True, output_scores=True, return_dict_in_generate=True, ) # Decode the generated transcriptions hypotheses = [processor.decode(output_ids, skip_special_tokens=True) for output_ids in beam_outputs.sequences] # Print out the hypotheses for i, hypothesis in enumerate(hypotheses): print(f"Hypothesis {i + 1}: {hypothesis}. Score: {beam_outputs.sequences_scores[i]}") ``` ### Expected behavior Together with @ylacombe we identified that after Pull Request #30984 Whisper Beam Search generation doesn't work as intended. See more detailed discussion on Pull Request #32970 The code above must return 5 unique hypotheses due to the core principle of the Beam Search - to select `num_beams` best tokens in a top_k sampling fashion. Instead, we are getting the same results with the highest probability. See below for how Beam Search used to work in version v4.25.1 and how it works now. transformers v4.25.1 ``` Hypothesis 1: How is Mozilla going to handle and be with this? Thank you.. Score: -0.4627407491207123 Hypothesis 2: How is Mozilla going to handle and be with this? Thank you and Q.. Score: -0.4789799749851227 Hypothesis 3: How is Mozilla going to handle and be with this? Thank you, and cute.. Score: -0.48414239287376404 Hypothesis 4: How is Mozilla going to handle and be with this? Thank you and cute.. Score: -0.4972183108329773 Hypothesis 5: How is Mozilla going to handle and be with this? Thank you, and Q.. 
Score: -0.5054414868354797 ``` transformers v4.44.1 + My Fix from #32970 ``` Hypothesis 1: How is Mozilla going to handle and be with this? Thank you.. Score: -0.5495038032531738 Hypothesis 2: How is Mozilla going to handle and be with this? Thank you.. Score: -0.5495040416717529 Hypothesis 3: How is Mozilla going to handle and be with this? Thank you.. Score: -0.5495036840438843 Hypothesis 4: How is Mozilla going to handle and be with this? Thank you.. Score: -0.5495036244392395 Hypothesis 5: How is Mozilla going to handle and be with this? Thank you.. Score: -0.5495033264160156 ``` @ylacombe has found the bug in [_expand_variables_for_generation](https://github.com/huggingface/transformers/blob/516ee6adc2a6ac2f4800790cabaad66a1cb4dcf4/src/transformers/models/whisper/generation_whisper.py#L1076-L1084) function. The function artificially expands the batch size to `num_return_sequences`, which causes an issue when this expanded batch size is passed to `GenerationMixin.generate`. Specifically, if `batch_size=5` and `num_return_sequences > 1`, the model generates `batch_size * num_beams` beams but retains only the most probable beam for each element of the original batch. ## Impact This bug results in the `num_return_sequences` parameter not being compatible with both short-form and long-form generation. Users expecting multiple return sequences will only receive the most probable sequence, which may not meet the intended use case. cc @eustlb
[ 64, 18, 43 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug", "Generation", "Audio" ]
https://api.github.com/repos/huggingface/transformers/issues/33689
TITLE llama `tie_word_embeddings` ignored on cpu and with auto dtype only COMMENTS 2 REACTIONS +1: 1 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info platform: linux: `ubuntu 22.04` python version: `3.10.12` transformers version: `4.44.2` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python3 import torch import pytest from transformers import AutoModelForCausalLM @pytest.mark.parametrize( "torch_dtype,tie_word_embeddings,device_map", [ (torch.float16, False, "cpu" ), # passes (torch.float32, False, "cpu" ), # fails (torch.float32, False, "cuda:0"), # passes (torch.float16, True, "cpu" ), # passes (torch.float32, True, "cpu" ), # passes (torch.float32, True, "cuda:0"), # passes ], ) def test_model_shared(torch_dtype, tie_word_embeddings, device_map, tmp_path): # load model model = AutoModelForCausalLM.from_pretrained( "Xenova/llama2.c-stories15M", torch_dtype=torch_dtype, tie_word_embeddings=tie_word_embeddings, device_map=device_map ) # modify lm head with torch.no_grad(): model.lm_head.weight += 1 # check that embed_tokens is not modified if tie_word_embeddings: assert torch.equal(model.lm_head.weight, model.model.embed_tokens.weight) else: assert not torch.equal(model.lm_head.weight, model.model.embed_tokens.weight) ``` ### Expected behavior I expect tied tensors should not be tied if `tie_word_embeddings=False`. Instead, the tensors are tied. Seems to be the root cause of #33688
[ 23, 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Core: Modeling", "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/34674
TITLE Vision Encoder-Decoder fails with LLaMA decoder due to missing cross-attention implementation COMMENTS 2 REACTIONS +1: 1 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.46.2 - Platform: Linux-6.1.85+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.24.7 - Safetensors version: 0.4.5 - Accelerate version: 0.34.2 - Accelerate config: not found - PyTorch version (GPU?): 2.5.0+cu121 (True) - Tensorflow version (GPU?): 2.17.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.8.5 (gpu) - Jax version: 0.4.33 - JaxLib version: 0.4.33 - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: NVIDIA L4 ### Who can help? Not sure for multi modal models: text models: @ArthurZucker vision models: @amyeroberts, @qubvel generate: @zucchini-nlp ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction #### Description When using a vision encoder-decoder model, there's an incompatibility when using LLaMA as the decoder. While both GPT-2 and LLaMA are decoder models, GPT-2 implements an output class that includes cross-attention outputs, whereas LLaMA's output class (`CausalLMOutputWithPast`) does not include this attribute. This causes the vision encoder-decoder forward pass to fail when attempting to access cross-attention outputs. #### Current Behavior The model raises an AttributeError because LLaMA's implementation of `CausalLMOutputWithPast` doesn't include cross-attention outputs, while the vision encoder-decoder expects this attribute to be present (as it exists in GPT-2's implementation). Error message: ```python AttributeError: 'CausalLMOutputWithPast' object has no attribute 'cross_attentions' ``` #### Technical Analysis 1. GPT-2's decoder implementation returns an output class that includes cross-attention information 2. LLaMA's decoder implementation returns `CausalLMOutputWithPast` which doesn't include cross-attention 3. The vision encoder-decoder architecture assumes the presence of cross-attention in the decoder outputs #### Steps to Reproduce 1. Initialize a vision encoder-decoder model with LLaMA as the decoder 2. Attempt to run a forward pass or generate 3. The error occurs in `modeling_vision_encoder_decoder.py` when trying to access `decoder_outputs.cross_attentions` The error occurs in modeling_vision_encoder_decoder.py around line 651: ```python decoder_hidden_states=decoder_outputs.hidden_states, decoder_attentions=decoder_outputs.attentions, cross_attentions=decoder_outputs.cross_attentions, # This line causes the error encoder_last_hidden_state=encoder_outputs.last_hidden_state, encoder_hidden_states=encoder_outputs.hidden_states, ``` #### Workaround Setting cross_attentions to None allows the model to work, suggesting that the architecture doesn't strictly require this information for functioning. #### Proposed Solutions 1. Short term: Modify the vision encoder-decoder implementation to handle decoders that don't provide cross-attention outputs: ```python cross_attentions = getattr(decoder_outputs, 'cross_attentions', None) ``` Happy to submit a PR if this is an appropriate solution ### Expected behavior modeling_vision_encoder_decoder.py should support different decoder models without custom causal lm cross attention output classes.
[ 30, 64, 62 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Core: Encoder-Decoder", "bug", "Vision" ]
https://api.github.com/repos/huggingface/transformers/issues/35749
TITLE Qwen2VL exhibits significant performance differences under different attention implementations. COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 1 BODY ### System Info `transformers=4.47.1 ` `pytorch=2.3.0` `flash-attn=2.7.2` `python=3.10` ### Who can help? @amyeroberts @qubvel @zucchini-nlp ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm using the lmms-eval framework to evaluate Qwen2VL models on various benchmarks. Here is the script: ``` python3 -m accelerate.commands.launch \ --main_process_port=28175 \ --mixed_precision=bf16 \ --num_processes=2 \ -m lmms_eval \ --model qwen2_vl_with_kvcache \ --model_args pretrained=/share/home/models/Qwen2-VL-7B-Instruct,use_flash_attention_2=true\ --tasks chartqa \ --batch_size 1 \ --log_samples \ --log_samples_suffix chartqa \ --output_path ./logs/qwen2vl/chatqa/ ``` ### Expected behavior Recently, I've been using Qwen2VL-7B for evaluation under the lmms-eval framework and discovered some confusing phenomena. Taking the ChartQA task as an example, when both the vision and LLM utilize flash-attention2, I can achieve a score of 81.56. However, when both vision and LLM use eager attention, the score drops significantly to 72.64. To explore further, I conducted additional experiments and found that regardless of which attention implementation the vision module uses, the score remains around 82. However, when the vision module uses flash-attention2 while the LLM employs eager attention, the score drops to just 0.0008, and the model loses its generative ability, endlessly repeating one or two words. | LLM Attention | Vision: Flash | Vision: Eager | |---------------|---------------|---------------| | **Flash** | 81.56 | 82.00 | | **Eager** | **0.0008** | 72.64 | The model's responses under the 0.0008 setting: "The value of the the the the the the the the the the the the the" "````````````````````````````````````````````````" "A is a person assistant. A is a person assistant. A is a person" "The following are the the the the the the the the the the the the the" The above results are all based on BF16 precision. I also conducted a check regarding precision. With all modules using eager attention, I converted QKV to float to ensure that attention calculations during the forward pass were in FP32. Unfortunately, the final result remained the same as BF16 (72.64).
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/36155
TITLE `TFViTModel` and `interpolate_pos_encoding=True` COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.48.3 - Platform: Linux-5.15.0-1078-azure-x86_64-with-glibc2.35 - Python version: 3.11.0rc1 - Huggingface_hub version: 0.27.1 - Safetensors version: 0.4.2 - Accelerate version: 0.31.0 - Accelerate config: not found - PyTorch version (GPU?): 2.3.1+cu121 (True) - Tensorflow version (GPU?): 2.16.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: NO - Using GPU in script?: YES - GPU type: Tesla V100-PCIE-16GB ### Who can help? @amyeroberts, @qubvel, @gante, @Rocketknight1 ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction This simple script is used to create a Keras Model based on Vision Transformer `TFViTModel`. I want to use higher resolution images than the default value of 224, as described in the documentation. **Enabling `interpolate_pos_encoding=True` returns an error during fit.** Using the default resolution and `interpolate_pos_encoding=False` makes the script work. ``` from transformers import ViTConfig, TFViTModel config = ViTConfig(image_size=512) base_model = TFViTModel(config).from_pretrained('google/vit-base-patch16-224') inputs = tf.keras.Input((3, 512, 512), dtype='float32') x = base_model.vit(inputs, interpolate_pos_encoding=True, training=True).pooler_output output= tf.keras.layers.Dense(1, activation='sigmoid')(x) model = tf.keras.Model(inputs=[inputs], outputs=[output]) ``` **Error code:** ``` OperatorNotAllowedInGraphError: in user code: File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/training.py", line 1398, in train_function * return step_function(self, iterator) File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/training.py", line 1370, in run_step * outputs = model.train_step(data) File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/training.py", line 1147, in train_step * y_pred = self(x, training=True) File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/training.py", line 565, in error_handler * del filtered_tb File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/training.py", line 588, in __call__ * return super().__call__(*args, **kwargs) File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/training.py", line 565, in error_handler * del filtered_tb File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/base_layer.py", line 1136, in __call__ * outputs = call_fn(inputs, *args, **kwargs) File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/functional.py", line 514, in call * return self._run_internal_graph(inputs, training=training, mask=mask) File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/functional.py", line 671, in _run_internal_graph * outputs = node.layer(*args, **kwargs) File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/training.py", line 560, in error_handler * filtered_tb = _process_traceback_frames(e.__traceback__) File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/base_layer.py", line 1136, in __call__ * outputs = 
call_fn(inputs, *args, **kwargs) File "/tmp/__autograph_generated_filepnn_cad_.py", line 162, in error_handler ** raise ag__.converted_call(ag__.ld(new_e).with_traceback, (ag__.ld(e).__traceback__,), None, fscope_1) from None File "/tmp/__autograph_generated_filepnn_cad_.py", line 34, in error_handler retval__1 = ag__.converted_call(ag__.ld(fn), tuple(ag__.ld(args)), dict(**ag__.ld(kwargs)), fscope_1) OperatorNotAllowedInGraphError: Exception encountered when calling layer 'vit' (type TFViTMainLayer). in user code: File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/transformers/modeling_tf_utils.py", line 598, in run_call_with_unpacked_inputs * return func(self, **unpacked_inputs) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/transformers/models/vit/modeling_tf_vit.py", line 595, in call * embedding_output = self.embeddings( File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/training.py", line 560, in error_handler * filtered_tb = _process_traceback_frames(e.__traceback__) File "/databricks/python/lib/python3.11/site-packages/tf_keras/src/engine/base_layer.py", line 1136, in __call__ * outputs = call_fn(inputs, *args, **kwargs) File "/tmp/__autograph_generated_filepnn_cad_.py", line 162, in error_handler ** raise ag__.converted_call(ag__.ld(new_e).with_traceback, (ag__.ld(e).__traceback__,), None, fscope_1) from None File "/tmp/__autograph_generated_filepnn_cad_.py", line 34, in error_handler retval__1 = ag__.converted_call(ag__.ld(fn), tuple(ag__.ld(args)), dict(**ag__.ld(kwargs)), fscope_1) OperatorNotAllowedInGraphError: Exception encountered when calling layer 'embeddings' (type TFViTEmbeddings). in user code: File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/transformers/models/vit/modeling_tf_vit.py", line 128, in call * batch_size, num_channels, height, width = shape_list(pixel_values) OperatorNotAllowedInGraphError: Iterating over a symbolic `tf.Tensor` is not allowed. You can attempt the following resolutions to the problem: If you are running in Graph mode, use Eager execution mode or decorate this function with @tf.function. If you are using AutoGraph, you can try decorating this function with @tf.function. If that does not work, then you may be using an unsupported feature or your source code may not be visible to AutoGraph. See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/limitations.md#access-to-source-code for more information. Call arguments received by layer 'embeddings' (type TFViTEmbeddings): • pixel_values=tf.Tensor(shape=<unknown>, dtype=float32) • interpolate_pos_encoding=True • training=True Call arguments received by layer 'vit' (type TFViTMainLayer): • pixel_values=tf.Tensor(shape=<unknown>, dtype=float32) • head_mask=None • output_attentions=None • output_hidden_states=None • interpolate_pos_encoding=True • return_dict=None • training=True File <command-6957984842183233>, line 18 8 #base_model.trainable = False 10 model.compile(optimizer=tf.keras.optimizers.AdamW(learning_rate=1e-3, weight_decay=1e-6), 11 loss={'output_qualidade': tf.keras.losses.BinaryCrossentropy(label_smoothing=0.1), 12 'output_armario': tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1), (...) 
15 'output_armario': tf.keras.metrics.AUC(curve='PR', multi_label=True, name='auc'), 16 'output_dano': tf.keras.metrics.AUC(curve='PR', multi_label=True, name='auc')}) ---> 18 train_history = model.fit(x=train_generator, 19 epochs=110, 20 validation_data=val_generator, 21 validation_freq=1, 22 callbacks=[merge_metrics, early_stoping], 23 verbose=2) File /databricks/python/lib/python3.11/site-packages/mlflow/utils/autologging_utils/safety.py:578, in safe_patch.<locals>.safe_patch_function(*args, **kwargs) 568 try_log_autologging_event( 569 AutologgingEventLogger.get_logger().log_patch_function_start, 570 session, (...) 574 kwargs, 575 ) 577 if patch_is_class: --> 578 patch_function.call(call_original, *args, **kwargs) 579 else: 580 patch_function(call_original, *args, **kwargs) File /databricks/python/lib/python3.11/site-packages/mlflow/utils/autologging_utils/safety.py:165, in PatchFunction.call(cls, original, *args, **kwargs) 163 @classmethod 164 def call(cls, original, *args, **kwargs): --> 165 return cls().__call__(original, *args, **kwargs) File /databricks/python/lib/python3.11/site-packages/mlflow/utils/autologging_utils/safety.py:176, in PatchFunction.__call__(self, original, *args, **kwargs) 172 self._on_exception(e) 173 finally: 174 # Regardless of what happens during the `_on_exception` callback, reraise 175 # the original implementation exception once the callback completes --> 176 raise e File /databricks/python/lib/python3.11/site-packages/mlflow/utils/autologging_utils/safety.py:169, in PatchFunction.__call__(self, original, *args, **kwargs) 167 def __call__(self, original, *args, **kwargs): 168 try: --> 169 return self._patch_implementation(original, *args, **kwargs) 170 except (Exception, KeyboardInterrupt) as e: 171 try: File /databricks/python/lib/python3.11/site-packages/mlflow/utils/autologging_utils/safety.py:227, in with_managed_run.<locals>.PatchWithManagedRun._patch_implementation(self, original, *args, **kwargs) 224 if not mlflow.active_run(): 225 self.managed_run = create_managed_run() --> 227 result = super()._patch_implementation(original, *args, **kwargs) 229 if self.managed_run: 230 mlflow.end_run(RunStatus.to_string(RunStatus.FINISHED)) File /databricks/python/lib/python3.11/site-packages/mlflow/tensorflow/__init__.py:1334, in autolog.<locals>.FitPatch._patch_implementation(self, original, inst, *args, **kwargs) 1327 except Exception as e: 1328 _logger.warning( 1329 "Failed to log training dataset information to " 1330 "MLflow Tracking. Reason: %s", 1331 e, 1332 ) -> 1334 history = original(inst, *args, **kwargs) 1336 if log_models: 1337 _log_keras_model(history, args) File /databricks/python/lib/python3.11/site-packages/mlflow/utils/autologging_utils/safety.py:561, in safe_patch.<locals>.safe_patch_function.<locals>.call_original(*og_args, **og_kwargs) 558 original_result = original(*_og_args, **_og_kwargs) 559 return original_result --> 561 return call_original_fn_with_event_logging(_original_fn, og_args, og_kwargs) File /databricks/python/lib/python3.11/site-packages/mlflow/utils/autologging_utils/safety.py:496, in safe_patch.<locals>.safe_patch_function.<locals>.call_original_fn_with_event_logging(original_fn, og_args, og_kwargs) 487 try: 488 try_log_autologging_event( 489 AutologgingEventLogger.get_logger().log_original_function_start, 490 session, (...) 
494 og_kwargs, 495 ) --> 496 original_fn_result = original_fn(*og_args, **og_kwargs) 498 try_log_autologging_event( 499 AutologgingEventLogger.get_logger().log_original_function_success, 500 session, (...) 504 og_kwargs, 505 ) 506 return original_fn_result File /databricks/python/lib/python3.11/site-packages/mlflow/utils/autologging_utils/safety.py:558, in safe_patch.<locals>.safe_patch_function.<locals>.call_original.<locals>._original_fn(*_og_args, **_og_kwargs) 550 # Show all non-MLflow warnings as normal (i.e. not as event logs) 551 # during original function execution, even if silent mode is enabled 552 # (`silent=True`), since these warnings originate from the ML framework 553 # or one of its dependencies and are likely relevant to the caller 554 with set_non_mlflow_warnings_behavior_for_current_thread( 555 disable_warnings=False, 556 reroute_warnings=False, 557 ): --> 558 original_result = original(*_og_args, **_og_kwargs) 559 return original_result File /databricks/python/lib/python3.11/site-packages/tf_keras/src/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs) 67 filtered_tb = _process_traceback_frames(e.__traceback__) 68 # To get the full stack trace, call: 69 # `tf.debugging.disable_traceback_filtering()` ---> 70 raise e.with_traceback(filtered_tb) from None 71 finally: 72 del filtered_tb File /databricks/python/lib/python3.11/site-packages/tensorflow/python/eager/polymorphic_function/autograph_util.py:52, in py_func_from_autograph.<locals>.autograph_handler(*args, **kwargs) 50 except Exception as e: # pylint:disable=broad-except 51 if hasattr(e, "ag_error_metadata"): ---> 52 raise e.ag_error_metadata.to_exception(e) 53 else: 54 raise ``` ### Expected behavior Expected behavior consists in running the model fit/training.
[ 13, 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "TensorFlow", "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/35412
TITLE Qwen2VLProcessor cannot handle odd number of video frames COMMENTS 3 REACTIONS +1: 1 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 1 BODY ### System Info ``` - `transformers` version: 4.47.1 - Platform: Linux-5.4.0-174-generic-x86_64-with-glibc2.31 - Python version: 3.9.20 - Huggingface_hub version: 0.26.2 - Safetensors version: 0.4.5 - Accelerate version: 1.0.1 - Accelerate config: not found - PyTorch version (GPU?): 2.5.1+cu124 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: No - Using GPU in script?: Yes - GPU type: NVIDIA A10 ``` ### Who can help? @ArthurZucker @zucchini-nlp ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction I found that the processor for Qwen2-VL cannot handle input videos with an odd number of frames (except for videos with a single frame). This occurs regardless of the channel format and image dimensions of each frame. ``` import numpy as np from transformers import AutoProcessor # The processor fails when num_frames = 3, 5, 7, ... num_frames = 3 video = np.random.randint(0, 255, size=(num_frames, 256, 256, 3), dtype=np.uint8) processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct") processor(text="<|vision_start|><|video_pad|><|vision_end|>", videos=[video]) ``` Error when `num_frames = 3` ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/cyrus/miniconda3/envs/vllm/lib/python3.9/site-packages/transformers/models/qwen2_vl/processing_qwen2_vl.py", line 124, in __call__ videos_inputs = self.image_processor(images=None, videos=videos, **output_kwargs["videos_kwargs"]) File "/home/cyrus/miniconda3/envs/vllm/lib/python3.9/site-packages/transformers/image_processing_utils.py", line 41, in __call__ return self.preprocess(images, **kwargs) File "/home/cyrus/miniconda3/envs/vllm/lib/python3.9/site-packages/transformers/models/qwen2_vl/image_processing_qwen2_vl.py", line 439, in preprocess patches, video_grid_thw = self._preprocess( File "/home/cyrus/miniconda3/envs/vllm/lib/python3.9/site-packages/transformers/models/qwen2_vl/image_processing_qwen2_vl.py", line 299, in _preprocess patches = patches.reshape( ValueError: cannot reshape array of size 571536 into shape (1,2,3,9,2,14,9,2,14) ``` ### Expected behavior The processor should be able to handle videos with an odd number of frames.
[ 64, 19 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug", "VLM" ]
https://api.github.com/repos/huggingface/transformers/issues/33312
TITLE Fix qwen2vl float16 inference bug COMMENTS 7 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes [https://github.com/huggingface/transformers/issues/33294](https://github.com/huggingface/transformers/issues/33294) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ✅] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [ ✅] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ### Infer res This image shows a table comparing the performance of three precision modes—fp32, bf16, and fp16—in text generation: fp32: Memory usage is 36629MiB, and it generated a description of a beach scene. bf16: Memory usage is 44105MiB, and it generated a similar description. fp16: An error occurred, and only exclamation marks were generated. fp16 (improved version): Memory usage is 31609MiB, and it generated a more detailed description of the beach scene. Each column displays the memory usage and the generated text or error information. <img width="1563" alt="image" src="https://github.com/user-attachments/assets/264e0bea-0a17-492a-8e44-b2cb04ff7423"> ### DeBug Process Using the hook mechanism, I exported the input and output of each layer. When both the mean and sum are nan, I consider it to be abnormal. 
```python # fp16 hook Layer model.layers.0.self_attn.o_proj input - mean: nan, sum: nan, shape: torch.Size([1, 3602, 1536]), has inf: False, has nan: True, first 5 values: [-0.33251953125, 0.1829833984375, 0.064453125, -0.634765625, 0.5859375], last 5 values: [0.038909912109375, -0.08502197265625, 0.4794921875, 0.0762939453125, -0.1522216796875], Layer model.layers.0.self_attn.o_proj output - mean: nan, sum: nan, shape: torch.Size([1, 3602, 1536]), has inf: False, has nan: True, first 5 values: [0.0076446533203125, 0.019805908203125, 0.03533935546875, -0.045318603515625, -0.03057861328125], last 5 values: [-0.296875, 0.09539794921875, -0.0924072265625, 0.0084075927734375, -0.09539794921875], # fp32 hook Layer model.layers.0.self_attn.o_proj input - mean: 0.012129077687859535, sum: 67106.2109375, shape: torch.Size([1, 3602, 1536]), first 5 values: [-0.33251953125, 0.182861328125, 0.06439208984375, -0.63427734375, 0.5859375], last 5 values: [0.03431564196944237, -0.09479156881570816, 0.47655850648880005, 0.06808013468980789, -0.1560981571674347] Layer model.layers.0.self_attn.o_proj output - mean: 0.008136402815580368, sum: 45016.046875, shape: torch.Size([3602, 1536]), first 5 values: [0.007521241903305054, 0.019816547632217407, 0.03532881289720535, -0.04532395675778389, -0.03064415045082569], last 5 values: [-0.2958013117313385, 0.09288226813077927, -0.08813470602035522, 0.007658728398382664, -0.09701263904571533] ``` - Further debugging in the forward process revealed that the attn_weights contained inf values. After applying softmax, nan values appeared, which subsequently caused all the following results to become nan. ```python class Qwen2VLAttention(nn.Module): ... attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim) ... ``` ### Thoughts on bug: - First, the problematic code is identical to that in LLaMA, so it’s unlikely that the issue is related to the way Torch is written. - Second, LLaMA’s fp16 inference works without issues, making it difficult to suspect that the precision problem lies in the underlying operator called by torch.matmul. 
- Lastly, I cautiously suspect that the open-source weights of Qwen-2VL may not be suitable for float16 inference, especially since the Qwen documentation indicates that[ fp16 is not recommended.](https://qwen.readthedocs.io/en/latest/inference/chat.html) <img width="773" alt="image" src="https://github.com/user-attachments/assets/17265eeb-d3b1-48ae-a9dd-a7534d28067a"> ### Training Comparison **This modification is also effective for training.** - code` llama-factory` https://github.com/hiyouga/LLaMA-Factory #### fp32 ```python {"current_steps": 1, "total_steps": 100, "loss": 2.4317, "learning_rate": 1e-05, "epoch": 0.7272727272727273, "percentage": 1.0, "elapsed_time": "0:00:02", "remaining_time": "0:04:45"} {"current_steps": 2, "total_steps": 100, "loss": 2.3595, "learning_rate": 2e-05, "epoch": 1.4545454545454546, "percentage": 2.0, "elapsed_time": "0:00:05", "remaining_time": "0:04:15"} {"current_steps": 3, "total_steps": 100, "loss": 2.4036, "learning_rate": 3e-05, "epoch": 2.1818181818181817, "percentage": 3.0, "elapsed_time": "0:00:07", "remaining_time": "0:03:55"} {"current_steps": 4, "total_steps": 100, "loss": 2.3881, "learning_rate": 4e-05, "epoch": 2.909090909090909, "percentage": 4.0, "elapsed_time": "0:00:09", "remaining_time": "0:03:42"} {"current_steps": 5, "total_steps": 100, "loss": 2.3942, "learning_rate": 5e-05, "epoch": 3.6363636363636362, "percentage": 5.0, "elapsed_time": "0:00:11", "remaining_time": "0:03:34"} ``` ![image](https://github.com/user-attachments/assets/f2523ce2-bf7c-474e-aaf7-06c2bf0002b2) #### fp16 ```python {"current_steps": 1, "total_steps": 1000, "loss": 0.0, "learning_rate": 0.0, "epoch": 0.09195402298850575, "percentage": 0.1, "elapsed_time": "0:00:02", "remaining_time": "0:39:13"} {"current_steps": 2, "total_steps": 1000, "loss": 0.0, "learning_rate": 0.0, "epoch": 0.1839080459770115, "percentage": 0.2, "elapsed_time": "0:00:04", "remaining_time": "0:35:30"} {"current_steps": 3, "total_steps": 1000, "loss": 0.0, "learning_rate": 0.0, "epoch": 0.27586206896551724, "percentage": 0.3, "elapsed_time": "0:00:06", "remaining_time": "0:34:16"} {"current_steps": 4, "total_steps": 1000, "loss": 0.0, "learning_rate": 0.0, "epoch": 0.367816091954023, "percentage": 0.4, "elapsed_time": "0:00:08", "remaining_time": "0:33:24"} {"current_steps": 5, "total_steps": 1000, "loss": 0.0, "learning_rate": 0.0, "epoch": 0.45977011494252873, "percentage": 0.5, "elapsed_time": "0:00:09", "remaining_time": "0:32:48"} ``` ![image](https://github.com/user-attachments/assets/2d8967e2-849a-45db-87ae-2fe1c7570d63) #### fp16 this PR ```python {"current_steps": 1, "total_steps": 1000, "loss": 4.5531, "learning_rate": 1.0000000000000002e-06, "epoch": 0.09195402298850575, "percentage": 0.1, "elapsed_time": "0:00:02", "remaining_time": "0:40:28"} {"current_steps": 2, "total_steps": 1000, "loss": 4.5833, "learning_rate": 2.0000000000000003e-06, "epoch": 0.1839080459770115, "percentage": 0.2, "elapsed_time": "0:00:04", "remaining_time": "0:35:50"} {"current_steps": 3, "total_steps": 1000, "loss": 4.1749, "learning_rate": 3e-06, "epoch": 0.27586206896551724, "percentage": 0.3, "elapsed_time": "0:00:06", "remaining_time": "0:34:13"} {"current_steps": 4, "total_steps": 1000, "loss": 4.1741, "learning_rate": 4.000000000000001e-06, "epoch": 0.367816091954023, "percentage": 0.4, "elapsed_time": "0:00:08", "remaining_time": "0:33:19"} {"current_steps": 5, "total_steps": 1000, "loss": 4.8369, "learning_rate": 5e-06, "epoch": 0.45977011494252873, "percentage": 0.5, "elapsed_time": "0:00:09", 
"remaining_time": "0:32:58"} {"c ``` ![image](https://github.com/user-attachments/assets/d5bcbaad-48c3-45a0-ae61-767a385647f7) ### Personal Opinion on This Modification: - This is just a trick that replaces all inf values resulting from torch.matmul with zero. - This is not a fundamental solution to the problem, but it shows significant effects in fp16 inference and LoRA training. Feedback and appropriate suggestions from everyone are needed. @ArthurZucker @zucchini-nlp @hiyouga @simonJJJ
[ 34 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "float16" ]
https://api.github.com/repos/huggingface/transformers/issues/34773
TITLE Adding RTDETRv2 COMMENTS 17 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 1 heart: 0 rocket: 3 eyes: 0 BODY # What does this PR do? This PR adds RTDETRv2 to the Transformers library. There is a new mechanism in transformers called **modular**, which adds new models by creating a `modeling_modelname.py` file from a compact modular definition. Since RTDETRv2 only updates the decoder part while keeping the rest of the model unchanged, it serves as an ideal use case for this modular approach. ### What’s Left: - [x] Fix the modular -> modeling cookie cutter setup - [x] Remove the `scratch` folder (auto-generated by the `add-model` cookie cutter) - [x] Add support for resnet. Colab to replicate the original author's logits: https://colab.research.google.com/drive/1Vql-9JuFKz7N7l83NmHPP2E1ZyGZnpzX?usp=sharing
[ 77, 62, 73, 45 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ "New model", "Vision", "run-slow", "Modular" ]
https://api.github.com/repos/huggingface/transformers/issues/33671
TITLE Step shifting using total_batched_samples for gradient_accumulation_steps counting COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.39.0 - Platform: Linux-5.4.239-1.el7.elrepo.x86_64-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.24.6 - Safetensors version: 0.4.1 - Accelerate version: 0.28.0 - Accelerate config: not found - PyTorch version (GPU?): 2.2.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ### Issue Analysis The `on_step_begin` callback is invoked when the `step` is divisible by `args.gradient_accumulation_steps` (i.e.,` step % args.gradient_accumulation_steps == 0`). However, the `on_step_end` callback behaves differently. Its condition is as follows: `total_batched_samples % args.gradient_accumulation_steps == 0 or is_last_step_and_steps_less_than_grad_acc` Here, the `on_step_end` callback is triggered when `total_batched_samples` is divisible by `args.gradient_accumulation_steps`. It’s important to note that `step` is reset at the beginning of each epoch, whereas `total_batched_samples` is initialized to 0 at the start of training and persists across all epochs until training ends. ### Expected Behavior: When `gradient_accumulation_steps = N`, there should be exactly N sub-steps between the `on_step_begin` and `on_step_end` callbacks. This ensures that gradients are accumulated correctly before an optimization step occurs. The only exception to this rule is the last step in an epoch or the training run, where fewer sub-steps might exist. ### Problematic Behavior Example The issue arises when `total_batched_samples` is not divisible by args.gradient_accumulation_steps. For example, if `steps_per_epoch = 3` and `gradient_accumulation_steps = 2`, we observe the following behavior: Epoch 1: * step 0: `on_step_begin` called, `on_step_end` not called (expected behavior) * step 1: `on_step_begin` not called, `on_step_end` called (expected behavior) * step 2: `on_step_begin` called, `on_step_end` not called (expected behavior) Epoch 2: * step 3: `on_step_begin` called (0 % 2 == 0), `on_step_end` called (4 % 2 == 0) (incorrect because on_step_end is called after only one sub step) * step 4: `on_step_begin` not called (1 % 2 != 0), `on_step_end` not called (5 % 2 != 0) (incorrect) * step 5: `on_step_begin` called (2 % 2 == 0), `on_step_end` called (6 % 2 == 0) (incorrect because on_step_end is called after only one sub step) Epoch 3: * step 6: `on_step_begin` called, `on_step_end` not called (expected behavior) * step 7: `on_step_begin` not called, `on_step_end` called (expected behavior) * step 8: `on_step_begin` called, `on_step_end` not called (expected behavior) Note: `total_batched_samples` is incremented by 1 at the start of each step loop. In this case, when the number of steps per epoch is not divisible by gradient_accumulation_steps, the callbacks only function correctly at intervals, leading to incorrect behavior during other epochs.
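The drift is easy to reproduce with a few lines of plain Python that mimic the two modulo conditions described above (this is a standalone simulation, not Trainer code):
```python
steps_per_epoch, grad_accum = 3, 2
total_batched_samples = 0

for epoch in range(3):
    for step in range(steps_per_epoch):
        total_batched_samples += 1  # incremented at the start of the step loop
        begin = step % grad_accum == 0                 # on_step_begin condition
        end = total_batched_samples % grad_accum == 0  # on_step_end condition
        print(f"epoch {epoch}, step {step}: begin={begin}, end={end}")

# In epoch 1 both conditions fire on the same sub-step, i.e. on_step_end follows
# on_step_begin after only one accumulation sub-step instead of grad_accum of them.
```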
[ 66, 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "trainer", "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/34902
TITLE ViTImageproc handle pil hw images COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? Fixes #34820 Modifies the image transformations and ViTImageProcessor such that `__call__` can be applied to HW PIL images given the right arguments. To supply an HW image to the ViTImageProcessor, use `input_data_format='none'` or `input_data_format=ChannelDimension.NONE`. Further, the first option requires converting the image to RGB via `do_convert_rgb=True`: ``` dataset = load_dataset("ylecun/mnist") processor = AutoImageProcessor.from_pretrained("farleyknight-org-username/vit-base-mnist") def process(examples): processed_inputs = processor(examples["image"], input_data_format="none", do_convert_rgb=True) return processed_inputs processed_dataset = dataset.map(process, batched=True) ``` Or, to leave it as a single-channel np.array, set the mean and std of the image processor with `image_mean=[mean_value]` and `image_std=[std_value]`: ``` ... def process(examples): processed_inputs = teacher_processor(examples["image"], input_data_format="none", do_convert_rgb=False, image_mean=[.5], image_std=[.5]) return processed_inputs ... ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts, @qubvel
[ 62, 65 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Vision", "Processing" ]
https://api.github.com/repos/huggingface/transformers/issues/34796
TITLE Copy of entire logits incur large memory usage at first call of generate. (from v4.45.0) COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.46.2 - Platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35 - Python version: 3.10.15 - Huggingface_hub version: 0.26.2 - Safetensors version: 0.4.5 - PyTorch version (GPU?): 2.5.1+cu124 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: No - Using GPU in script?: Yes ### Who can help? @zucchini-nlp ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The problem has been present since [this commit](https://github.com/huggingface/transformers/blob/19d58d31f19049e8280ccb62a5b098d89909bf5a/src/transformers/generation/utils.py#L3015). Before that, the logits were sliced first and then copied, which would only incur negligible memory overhead. But now, the entire logits tensor is copied first, which leads to roughly double the memory usage during the first call of `generate()`. ``` # Previous next_token_logits = outputs.logits[:, -1, :].clone().float() # Current next_token_logits = outputs.logits.clone()[:, -1, :].float() ``` ### Expected behavior Hope this problem can be fixed. 🤗
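For illustration, a small standalone snippet (the shapes are made up, not taken from any particular model) shows why the order of slicing and cloning matters for peak memory:
```python
import torch

batch, seq_len, vocab = 1, 256, 32_000
logits = torch.randn(batch, seq_len, vocab)

# Previous behaviour: only a (batch, vocab) slice is materialised by the copy.
light = logits[:, -1, :].clone().float()

# Current behaviour: the full (batch, seq_len, vocab) tensor is cloned first,
# so peak memory roughly doubles for long prompts before the slice is taken.
heavy = logits.clone()[:, -1, :].float()

print(light.shape, heavy.shape)  # same result, very different peak memory
```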
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/34437
TITLE Incorrect repr string for tokenizer objects COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info transformers 4.46.0 Any OS and python version ### Who can help? @ArthurZucker @itazap ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen2.5-0.5B-Instruct') print(tokenizer) ``` ### Expected behavior The repr of tokenizer objects is incorrectly formatted due to this part of the code: https://github.com/huggingface/transformers/blob/1d063793318b20654ebb850f48f43e0a247ab7bb/src/transformers/tokenization_utils_base.py#L1684C1-L1692C10 The repr of a Tokenizer object looks like this: `Tokenizer(...), added_tokens_decoder={...}` Whereas is should look like this: `Tokenizer(..., added_tokens_decoder={...})` The dict that is the value of the `added_tokens_decoder` attribute should be listed within the parentheses along with the other attributes, not after the closing parenthesis. The current representation is problematic because having the `added_tokens_decoder` outside the main parenthesized structure breaks the expected flow of representing object attributes, and it's confusing. It suggests that the relationship between the tokenizer parameters and the added tokens decoder is different from what it actually is. Someone reading the string representation could assume it's a separate entity instead of an attribute belonging to the tokenizer. Lines 1690-1691 should be corrected like this: ``` f" special_tokens={self.special_tokens_map}, clean_up_tokenization_spaces={self.clean_up_tokenization_spaces}, " " added_tokens_decoder={\n\t" + added_tokens_decoder_rep + "\n})" ```
[ 47, 61, 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Core: Tokenization", "Good First Issue", "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/34736
TITLE Incompatibility between transformers 4.45.0 and torch 1.9.1 COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info Hello, I was using depth estimation model, `pipe = pipeline(task="depth-estimation", model="depth-anything/Depth-Anything-V2-Metric-Indoor-Small-hf")` But I got this error: ``` Traceback (most recent call last): File "test.py", line 3, in <module> pipe = pipeline(task="depth-estimation", model="depth-anything/Depth-Anything-V2-Metric-Indoor-Small-hf") File "/home/q84sun/miniconda3/envs/vlnce_py3.8/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 896, in pipeline framework, model = infer_framework_load_model( File "/home/q84sun/miniconda3/envs/vlnce_py3.8/lib/python3.8/site-packages/transformers/pipelines/base.py", line 288, in infer_framework_load_model model = model_class.from_pretrained(model, **kwargs) File "/home/q84sun/miniconda3/envs/vlnce_py3.8/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained return model_class.from_pretrained( File "/home/q84sun/miniconda3/envs/vlnce_py3.8/lib/python3.8/site-packages/transformers/modeling_utils.py", line 3808, in from_pretrained state_dict = load_state_dict(resolved_archive_file) File "/home/q84sun/miniconda3/envs/vlnce_py3.8/lib/python3.8/site-packages/transformers/modeling_utils.py", line 556, in load_state_dict return safe_load_file(checkpoint_file) File "/home/q84sun/miniconda3/envs/vlnce_py3.8/lib/python3.8/site-packages/safetensors/torch.py", line 315, in load_file result[k] = f.get_tensor(k) AttributeError: module 'torch' has no attribute 'frombuffer' ``` It seemed like a compatible issue between transformers and torch. What is the right torch version to match transformers 4.45.0? My environment: ubuntu: 22.04 Python: 3.8.20 torch: 1.9.1+cu111 transformers: 4.45.0 nvcc: cuda_11.7 ### Who can help? @amyeroberts @qubvel @Rocketknight1 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I was using the official example scripts from https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-Indoor-Small-hf ``` from transformers import pipeline pipe = pipeline(task="depth-estimation", model="depth-anything/Depth-Anything-V2-Metric-Indoor-Small-hf") ``` When running this test script with the my own environment, it raised up an error. ### Expected behavior Should be working.
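For context, `torch.frombuffer` was only added in PyTorch 1.10, which is why safetensors' loader fails on torch 1.9.1. A hedged local guard (not an official compatibility check; recent transformers releases also require a considerably newer torch) could look like:
```python
import torch
from packaging import version

torch_version = version.parse(torch.__version__.split("+")[0])
if torch_version < version.parse("1.10.0"):
    raise RuntimeError(
        f"torch {torch.__version__} has no torch.frombuffer; loading safetensors needs torch >= 1.10. "
        "Please upgrade torch before using this transformers version."
    )
```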
[ 50, 27, 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "PyTorch", "dependencies", "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/34857
TITLE smol improvements to support more flexible usage COMMENTS 5 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? This PR improves a few smol issues we had with Idefics 3: 1) We couldn't use the model with images larger than 5*364. This was the default max_image_size. The method where this was computed took a parameter as input, but it was never used. It would also raise an error if we wanted to resize to a larger size. I changed this for a default value of 4k resolution, as this is already considerably larger than what we trained on, ie, anything larger is pretty outrageous. 2) We couldn't train with datasets that contained grayscale images since the input_data_format wasn't properly parsed. I fixed this by switching around the processing order. Now, if the images are grayscale, I add a channel to the end or start of the images. Then, the input_data_format can be correctly inferred if it is none. 3) Finally, when converting to pil_image, we were not passing the input_data_format. For images that have 4 channels, this was breaking the processing. Since we already have the input_data_format in these functions, I added it. - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? - vision models: @qubvel
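A minimal sketch of the grayscale handling described in point 2 (assumed shapes and helper name; this is not the code added in the PR): give HW arrays an explicit channel axis so the channel dimension format can be inferred downstream.
```python
import numpy as np

def ensure_channel_axis(image: np.ndarray, channels_first: bool = False) -> np.ndarray:
    # Grayscale arrays come in as (H, W); add a channel axis so the
    # input_data_format can be correctly inferred afterwards.
    if image.ndim == 2:
        return image[np.newaxis, ...] if channels_first else image[..., np.newaxis]
    return image

print(ensure_channel_axis(np.zeros((224, 224))).shape)        # (224, 224, 1)
print(ensure_channel_axis(np.zeros((224, 224)), True).shape)  # (1, 224, 224)
```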
[ 62, 65 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Vision", "Processing" ]
https://api.github.com/repos/huggingface/transformers/issues/34495
TITLE feat: add `benchmarks_entrypoint.py` COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Adding a `benchmarks_entrypoint.py` file, which will be run from the benchmarks CI. This Python script will list all Python files in the `benchmark/` folder and run the `run_benchmark` function they define, allowing people to add new benchmark scripts.
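A hedged sketch of what such an entrypoint could look like (the file discovery and the `run_benchmark` convention come from the description above; everything else is an assumption rather than the merged script):
```python
import importlib.util
from pathlib import Path

def main(benchmark_dir: str = "benchmark") -> None:
    for path in sorted(Path(benchmark_dir).glob("*.py")):
        if path.name == "benchmarks_entrypoint.py":
            continue  # do not re-run the entrypoint itself
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        run = getattr(module, "run_benchmark", None)
        if callable(run):
            print(f"Running {path.name}")
            run()

if __name__ == "__main__":
    main()
```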
[ 8 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "run-benchmark" ]
https://api.github.com/repos/huggingface/transformers/issues/33414
TITLE Do Transformers onnx export support the input of the Llama is the input_embeds? COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY Does the Transformers ONNX export support passing `inputs_embeds` as the input to Llama?
[ 64, 46 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug", "ONNX" ]
https://api.github.com/repos/huggingface/transformers/issues/34502
TITLE VLMs: major clean up 🧼 COMMENTS 9 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? We have updated all the configs for VLMs on the hub, so this PR removes the legacy path for those models, as it has been there for 3 releases already, since v4.44. It also fixes some things that broke along the way, like generating from text-only input in LLaVA models. For Video-LLaVA the hub configs cannot be updated, as the hub owner has been silent for several months already. And since there is only one model with that architecture, we can hardcode the default values for `patch_num` and also remove the legacy path. Fixes https://github.com/huggingface/transformers/issues/34824, fixes https://github.com/huggingface/transformers/issues/35169, fixes https://github.com/huggingface/transformers/issues/35450, and fixes https://github.com/huggingface/transformers/issues/35424
[ 73 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "run-slow" ]
https://api.github.com/repos/huggingface/transformers/issues/34264
TITLE T5 models fail when loaded with `torch_dtype=torch.half` COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.45.0.dev0 - Platform: Linux-5.15.0-117-generic-x86_64-with-glibc2.35 - Python version: 3.10.15 - Huggingface_hub version: 0.26.0 - Safetensors version: 0.4.5 - Accelerate version: 1.0.1 - Accelerate config: not found - PyTorch version (GPU?): 2.3.0a0+gitd2f9472 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: No - Using GPU in script?: Yes - GPU type: AMD Instinct MI250X/MI250 ### Who can help? @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` import torch from transformers import T5Tokenizer, T5EncoderModel tokenizer = T5Tokenizer.from_pretrained("t5-small") model = T5EncoderModel.from_pretrained("t5-small", device_map="auto", torch_dtype=torch.half) input_text = "translate English to German: How old are you?" input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model(input_ids) print(outputs[0].dtype) ``` Error: ``` Traceback (most recent call last): File "/workspace/repro.py", line 10, in <module> outputs = model(input_ids) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl return forward_call(*args, **kwargs) File "/workspace/transformers/src/transformers/models/t5/modeling_t5.py", line 1996, in forward encoder_outputs = self.encoder( File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl return forward_call(*args, **kwargs) File "/workspace/transformers/src/transformers/models/t5/modeling_t5.py", line 1131, in forward layer_outputs = layer_module( File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl return forward_call(*args, **kwargs) File "/workspace/transformers/src/transformers/models/t5/modeling_t5.py", line 711, in forward self_attention_outputs = self.layer[0]( File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl return forward_call(*args, **kwargs) File "/workspace/transformers/src/transformers/models/t5/modeling_t5.py", line 616, in forward normed_hidden_states = self.layer_norm(hidden_states) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/apex/normalization/fused_layer_norm.py", line 386, in forward return fused_rms_norm_affine(input, self.weight, self.normalized_shape, self.eps) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/apex/normalization/fused_layer_norm.py", line 189, in fused_rms_norm_affine return FusedRMSNormAffineFunction.apply(*args) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 598, in apply return super().apply(*args, **kwargs) # type: ignore[misc] File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/apex/normalization/fused_layer_norm.py", line 69, in forward output, invvar = fused_layer_norm_cuda.rms_forward_affine( RuntimeError: expected scalar type Float but found Half ``` ### Expected behavior With the default fp32 inference: ``` import torch from transformers import T5Tokenizer, T5EncoderModel tokenizer = T5Tokenizer.from_pretrained("t5-small") model = T5EncoderModel.from_pretrained("t5-small", device_map="auto") input_text = "translate English to German: How old are you?" input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model(input_ids) print(outputs[0].dtype) # Outputs `torch.float32` ``` I assume this issue occurs with all other T5 models (This issue was found while trying to run `stabilityai/stable-diffusion-3-medium-diffusers` in half precision, which uses the `T5Encoder`)
[ 23, 67, 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Core: Modeling", "Usage", "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/36160
TITLE Moshi Generation Does Not Work as Expected COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info **🐛 Bug Report** ### Description The provided Moshi example code does not function correctly with the Transformers library. The `generate` function fails when attempting to generate new tokens, and an issue arises with the expected input formats. And here is `moshi_output.wav` https://github.com/user-attachments/assets/191d8176-a846-4b72-8b7c-de5c15b8140b I tried different temperature settings, generation configurations, and other samples, but it only produces a static 'chijijik...' sound. cc. @ylacombe ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from datasets import load_dataset, Audio import torch, math from transformers import MoshiForConditionalGeneration, AutoFeatureExtractor, AutoTokenizer import soundfile as sf import torch import transformers import os import torch # Disable all automatic compilation features os.environ['TORCH_COMPILE'] = '0' os.environ['TORCHDYNAMO_DISABLE'] = '1' # Fully disables TorchDynamo os.environ['TORCHDYNAMO_VERBOSE'] = '0' # Suppresses unnecessary logs os.environ['TORCHDYNAMO_RECOMPILE_LIMIT'] = '0' # Avoid recompile limits # Apply global config settings for eager mode torch._dynamo.config.suppress_errors = True # Avoids crashes and falls back to eager mode torch._dynamo.config.cache_size_limit = 0 # Prevents recompilation limits torch._dynamo.reset() # Clears any cached compile traces librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") device = "cuda" # prepare user input audio librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=24000)) audio_sample = librispeech_dummy[-1]["audio"]["array"] # (107520,) # WAV_PATH = f"./audio/moshi_opening.wav" # audio_sample, sample_rate = sf.read(WAV_PATH) waveform_to_token_ratio = 1 / 1920 model = MoshiForConditionalGeneration.from_pretrained("kmhf/hf-moshiko", attn_implementation="eager", torch_dtype=torch.float16) feature_extractor = AutoFeatureExtractor.from_pretrained("kmhf/hf-moshiko") tokenizer = AutoTokenizer.from_pretrained("kmhf/hf-moshiko") model = model.to(device) user_input_values = feature_extractor(raw_audio=audio_sample, sampling_rate=24000, return_tensors="pt").to(device=device, dtype=torch.float16) # prepare moshi input values - we suppose moshi didn't say anything while the user spoke moshi_input_values = torch.zeros_like(user_input_values.input_values) # (1, 1, 107520) # prepare moshi input ids - we suppose moshi didn't say anything while the user spoke num_tokens = math.ceil(moshi_input_values.shape[-1] * waveform_to_token_ratio) input_ids = torch.ones((1, num_tokens), device=device, dtype=torch.int64) * tokenizer.encode("<pad>")[0] # Force disable torch.compile inside Transformers transformers.models.moshi.modeling_moshi.MoshiForConditionalGeneration.forward = torch._dynamo.disable( transformers.models.moshi.modeling_moshi.MoshiForConditionalGeneration.forward ) transformers.models.moshi.modeling_moshi.MoshiForConditionalGeneration.generate = torch._dynamo.disable( transformers.models.moshi.modeling_moshi.MoshiForConditionalGeneration.generate ) 
transformers.models.moshi.modeling_moshi.MoshiForConditionalGeneration.prepare_inputs_for_generation = torch._dynamo.disable( transformers.models.moshi.modeling_moshi.MoshiForConditionalGeneration.prepare_inputs_for_generation ) # generate 25 new tokens (around 2s of audio) output = model.generate( input_ids=input_ids, user_input_values=user_input_values.input_values, moshi_input_values=moshi_input_values, max_new_tokens=50, temperature=0.8, do_sample=True, ) text_tokens = output.sequences # decode text tokens text = tokenizer.decode(text_tokens[0], skip_special_tokens=True) print(text) # decode audio tokens audio_waveforms = output.audio_sequences.squeeze(0).squeeze(0) # (L,) audio_waveforms = audio_waveforms.double() # cut audio for input length audio_waveforms = audio_waveforms[:user_input_values.input_values.shape[-1]] # save audio sf.write("moshi_output.wav", audio_waveforms.cpu().numpy(), 24000) ``` ### Expected behavior should produce sounds
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/34287
TITLE feat: run benchmarks on A100 COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### What does this PR do? Add A100 runner group for benchmarks CI.
[ 57, 8 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Benchmarks", "run-benchmark" ]
https://api.github.com/repos/huggingface/transformers/issues/34238
TITLE GGUF support for BERT architecture COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request I want to add the ability to use GGUF BERT models in transformers. Currently the library does not support this architecture. When I try to load it, I get an error TypeError: Architecture 'bert' is not supported. I have done most of the mapping, but I am having difficulty with some fields. Can anybody help me and provide comments on this feature? ### Motivation I ran into a problem where I can't use GGUF models in RASA (RASA uses the standard from_pretrained). So I decided to add BERT support. ### Your contribution Here is my extended ggml.py file: ```python GGUF_TENSOR_MAPPING = { "bert": { "context_length": "max_position_embeddings", "block_count": "num_hidden_layers", "feed_forward_length": "intermediate_size", "embedding_length": "hidden_size", "attention.head_count": "num_attention_heads", "attention.layer_norm_rms_epsilon": "rms_norm_eps", # "attention.causal": "", # "pooling_type": "", "vocab_size": "vocab_size", } } GGUF_CONFIG_MAPPING = { "bert": { "context_length": "max_position_embeddings", "block_count": "num_hidden_layers", "feed_forward_length": "intermediate_size", "embedding_length": "hidden_size", "attention.head_count": "num_attention_heads", "attention.layer_norm_rms_epsilon": "rms_norm_eps", # "attention.causal": "", # "pooling_type": "", "vocab_size": "vocab_size", } } GGUF_TOKENIZER_MAPPING = { "tokenizer": { # "ggml.token_type_count": "", # "ggml.pre": "", "ggml.model": "tokenizer_type", "ggml.tokens": "all_special_tokens", "ggml.token_type": "all_special_ids", "ggml.unknown_token_id": "unk_token_id", "ggml.seperator_token_id": "sep_token_id", "ggml.padding_token_id": "pad_token_id", "ggml.cls_token_id": "cls_token_id", "ggml.mask_token_id": "mask_token_id", }, "tokenizer_config": { "ggml.unknown_token_id": "unk_token_id", "ggml.seperator_token_id": "sep_token_id", "ggml.padding_token_id": "pad_token_id", "ggml.cls_token_id": "cls_token_id", "ggml.mask_token_id": "mask_token_id", }, } ```
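For reference, the end goal would presumably be the generic GGUF loading path that transformers already exposes for supported architectures; the repo and file names below are placeholders, not real checkpoints:
```python
from transformers import AutoModel, AutoTokenizer

gguf_repo = "some-user/bert-base-uncased-gguf"   # placeholder repo id
gguf_file = "bert-base-uncased.Q8_0.gguf"        # placeholder file name

# Today this raises `TypeError: Architecture 'bert' is not supported`;
# with the proposed mappings it should load a regular BertModel.
tokenizer = AutoTokenizer.from_pretrained(gguf_repo, gguf_file=gguf_file)
model = AutoModel.from_pretrained(gguf_repo, gguf_file=gguf_file)
```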
[ 76 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/34313
TITLE speed up whisper compile time COMMENTS 5 REACTIONS +1: 1 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request After compiling the whisper.text_decoder model with torch.compile, the inference time is impressively low! Thank you for the work! However, the warm-up time is very long, since it needs to go through all logits (up to a maximum of 448). How can I reduce this time? (I have looked into storing the compiled model with PyTorch, but it does not seem supported.) (I have tried compiling with torch_tensorrt, but I get the error "EncoderDecoderCache encountered in the dynamo_compile input parsing".) ### Motivation The start-up time can take around 10 minutes for a large model. ### Your contribution Happy to do a PR, but I need guidance.
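Not a definitive answer, but one option worth trying is enabling torch inductor's on-disk caches before compiling, so later runs can reuse compiled artifacts (environment variable names are from the PyTorch docs; availability and effectiveness depend on the torch version, and the very first run still pays the full warm-up cost):
```python
import os

# Must be set before anything is compiled.
os.environ["TORCHINDUCTOR_CACHE_DIR"] = "/tmp/whisper_inductor_cache"
os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "1"

import torch
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3").to("cuda")
# HF's equivalent of the text decoder lives at model.model.decoder.
model.model.decoder = torch.compile(model.model.decoder, mode="reduce-overhead")
```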
[ 76 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/33357
TITLE bus error on version 4.43.0 with pretrained community CLIP model - MacOS COMMENTS 19 REACTIONS +1: 2 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.43.0 - Platform: macOS-13.0-arm64-arm-64bit - Python version: 3.10.9 - Huggingface_hub version: 0.24.6 - Safetensors version: 0.4.5 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.4.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import CLIPModel, CLIPTokenizerFast tokenizer = CLIPTokenizerFast.from_pretrained("patrickjohncyh/fashion-clip") model = CLIPModel.from_pretrained("patrickjohncyh/fashion-clip") tokenized = tokenizer(["hello"], return_tensors="pt", padding=True) print("tokenized", tokenized) # bus error occurs here embed = model.get_text_features(**tokenized).detach().cpu().numpy() print("embedded", tokenized) ``` gives : ``` tokenized {'input_ids': tensor([[49406, 3497, 49407]]), 'attention_mask': tensor([[1, 1, 1]])} zsh: bus error python test_hf.py ``` I don't think the issue has been posted already. After bisecting versions, it looks like `4.42.4` does not have the issue and `4.43.0` has the issue I have little insight to provide except the `bus error`, and that this does not occur with the `clip-vit-base-patch32` model. I saw some breaking changes in this version release, but only about the tokenizer. I did not have time to test on a linux distribution yet Thanks ! ### Expected behavior By using the exact same script with the hugging face CLIP pretrained model, the embedding get computed as they should ``` processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") tokenizer = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32") ```
[ 50, 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "PyTorch", "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/36210
TITLE Token healing throws error with "Qwen/Qwen2.5-Coder-7B-Instruct" COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info OS type: Sequoia 15.2 Apple M2 Pro. I tried to reproduce using 🤗 [Inference Endpoints](https://endpoints.huggingface.co/AI-MO/endpoints/dedicated) when deploying https://huggingface.co/desaxce/Qwen2.5-Coder-7B-Instruct. It's a fork of `Qwen/Qwen2.5-Coder-7B-Instruct` with `token_healing=True` and a `handler.py` to deploy on 🤗 Inference Endpoints (use Default container, not TGI). python 3.12.8 transformers 4.48.2 torch 2.6.0 Generate text with using [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) while specifying `token_healing` to `True`: ``` from transformers import AutoTokenizer, Qwen2ForCausalLM, Qwen2Tokenizer pipe = Qwen2ForCausalLM.from_pretrained("./") tokenizer = Qwen2Tokenizer.from_pretrained("./") prompt = f'Complete the following Lean 4 code:\n\n```lean4\nimport ' inputs = tokenizer(prompt, return_tensors="pt") # Here we activate token healing, which triggers error. generate_ids = pipe.generate(inputs.input_ids, tokenizer=tokenizer, max_new_tokens=1, token_healing=True) tokenizer.batch_decode(generate_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False)[0] ``` The error: ``` {'error': "where() received an invalid combination of arguments - got (bool, int, Tensor), but expected one of: * (Tensor condition) * (Tensor condition, Tensor input, Tensor other, *, Tensor out) * (Tensor condition, Number self, Tensor other) didn't match because some of the arguments have invalid types: (!bool!, !int!, Tensor) * (Tensor condition, Tensor input, Number other) didn't match because some of the arguments have invalid types: (!bool!, !int!, !Tensor!) * (Tensor condition, Number self, Number other) didn't match because some of the arguments have invalid types: (!bool!, !int!, !Tensor!) "} ``` I traced it to https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L2436. Because `tokenizer.bos_token_id` is `None`, the `torch.where()` call fails. I commented this line and a subsequent error popped up a few lines below on https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L2447. 
The error: ``` TypeError Traceback (most recent call last) Cell In[1], line 9 6 prompt = f'Complete the following Lean 4 code:\n\n```lean4\nimport ' 7 inputs = tokenizer(prompt, return_tensors="pt") ----> 9 generate_ids = pipe.generate(inputs.input_ids, tokenizer=tokenizer, max_new_tokens=1, token_healing=True) 10 tokenizer.batch_decode(generate_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False)[0] File [~/miniconda3/envs/autocomplete/lib/python3.12/site-packages/torch/utils/_contextlib.py:116](http://localhost:8888/~/miniconda3/envs/autocomplete/lib/python3.12/site-packages/torch/utils/_contextlib.py#line=115), in context_decorator.<locals>.decorate_context(*args, **kwargs) 113 @functools.wraps(func) 114 def decorate_context(*args, **kwargs): 115 with ctx_factory(): --> 116 return func(*args, **kwargs) File [~/miniconda3/envs/autocomplete/lib/python3.12/site-packages/transformers/generation/utils.py:2084](http://localhost:8888/~/miniconda3/envs/autocomplete/lib/python3.12/site-packages/transformers/generation/utils.py#line=2083), in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, **kwargs) 2081 input_ids = inputs_tensor if model_input_name == "input_ids" else model_kwargs.pop("input_ids") 2083 if generation_config.token_healing: -> 2084 input_ids = self.heal_tokens(input_ids, tokenizer) 2086 if streamer is not None: 2087 streamer.put(input_ids.cpu()) File [~/miniconda3/envs/autocomplete/lib/python3.12/site-packages/transformers/generation/utils.py:2499](http://localhost:8888/~/miniconda3/envs/autocomplete/lib/python3.12/site-packages/transformers/generation/utils.py#line=2498), in GenerationMixin.heal_tokens(self, input_ids, tokenizer) 2495 return input_ids 2497 tail_ids = input_ids[:, -1].tolist() -> 2499 space_tok = tokenizer.convert_ids_to_tokens(tokenizer.convert_tokens_to_ids(" "))[0] 2500 # tail tokens are used for a prefix search, thus, whitespaces are replaced with 2501 # their tokenization (e.g. 'Ġ') to enable search for tokens prefixed with a whitespace 2502 tail_toks = (tokenizer.decode(t).replace(" ", space_tok) for t in tail_ids) File [~/miniconda3/envs/autocomplete/lib/python3.12/site-packages/transformers/tokenization_utils.py:1065](http://localhost:8888/~/miniconda3/envs/autocomplete/lib/python3.12/site-packages/transformers/tokenization_utils.py#line=1064), in PreTrainedTokenizer.convert_ids_to_tokens(self, ids, skip_special_tokens) 1063 return self._convert_id_to_token(ids) 1064 tokens = [] -> 1065 for index in ids: 1066 index = int(index) 1067 if skip_special_tokens and index in self.all_special_ids: TypeError: 'NoneType' object is not iterable ``` This time, it's due to `tokenizer.convert_tokens_to_ids(" ")` returning `None` because the space character is not a token (the tokenizer already uses `Ġ` to represent space characters). ### Who can help? @ArthurZucker @itazap I suspect an issue in `heal_tokens` for tokenizers which: - have `tokenizer.bos_token_id` equal to `None` - do not have space character as a token, i.e. `tokenizer.convert_tokens_to_ids(" ")` is `None` ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [x] My own task or dataset (give details below) ### Reproduction To reproduce on 🤗 Inference Endpoints, deploy https://huggingface.co/desaxce/Qwen2.5-Coder-7B-Instruct on a "Default" container. I forked this repository from https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct to reproduce the issue: I added the `token_healing: true` parameter in `generation_config.json` and a `handler.py` to be able to deploy on 🤗 Inference Endpoints. It's important to select "Default" container to reproduce - with TGI I didn't have any error (but I didn't check that token healing was indeed being used). In all cases, the error can be reproduced locally ⬇ To reproduce locally, clone https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct and run this snippet which generates using token healing: ``` from transformers import AutoTokenizer, Qwen2ForCausalLM, Qwen2Tokenizer pipe = Qwen2ForCausalLM.from_pretrained("./") tokenizer = Qwen2Tokenizer.from_pretrained("./") prompt = f'Complete the following Lean 4 code:\n\n```lean4\nimport ' inputs = tokenizer(prompt, return_tensors="pt") # Here we activate token healing, which triggers error. generate_ids = pipe.generate(inputs.input_ids, tokenizer=tokenizer, max_new_tokens=1, token_healing=True) tokenizer.batch_decode(generate_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False)[0] ``` ### Expected behavior I expect completion to take place with tokens healed: ![Image](https://github.com/user-attachments/assets/2f97b3ba-4475-4d39-84a3-978bfdf8aca1)
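A hedged sketch of the kind of guards `heal_tokens` appears to need for tokenizers like this one (no BOS token and no standalone " " entry in the vocab); this is illustrative, not a proposed patch:
```python
import torch

def safe_space_marker(tokenizer) -> str:
    # Guard 2: Qwen2's BPE vocab has no plain " " token, so convert_tokens_to_ids(" ")
    # returns None; recover the whitespace marker (e.g. 'Ġ') from a tokenized example instead.
    space_id = tokenizer.convert_tokens_to_ids(" ")
    if space_id is not None:
        return tokenizer.convert_ids_to_tokens(space_id)[0]
    return tokenizer.tokenize(" a")[0][0]

def safe_bos_mask(input_ids: torch.Tensor, bos_token_id) -> torch.Tensor:
    # Guard 1: only build a BOS mask when the tokenizer actually defines a BOS token;
    # passing None-derived Python scalars into torch.where() is what fails today.
    if bos_token_id is None:
        return torch.zeros_like(input_ids, dtype=torch.bool)
    return input_ids == bos_token_id
```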
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/33826
TITLE Inconsistency in Llama RoPE implementation? COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info transformers==4.45.1 ### Who can help? @ArthurZucker ### Reproduction I came across something unexpected with llama RoPE implementation, specifically in `apply_rotary_pos_emb` function in `modeling_llama.py` ```python def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1): cos = cos.unsqueeze(unsqueeze_dim) sin = sin.unsqueeze(unsqueeze_dim) q_embed = (q * cos) + (rotate_half(q) * sin) k_embed = (k * cos) + (rotate_half(k) * sin) return q_embed, k_embed ``` The meta-llama/llama3 [implementation](https://github.com/meta-llama/llama3/blob/main/llama/model.py) from meta uses the function `apply_rotary_emb` and complex numbers to perform to dot product with the rotation matrix, but I am not sure I fully understand huggingface's implementation. ```python def apply_rotary_emb( xq: torch.Tensor, xk: torch.Tensor, freqs_cis: torch.Tensor, ) -> Tuple[torch.Tensor, torch.Tensor]: xq_ = torch.view_as_complex(xq.float().reshape(*xq.shape[:-1], -1, 2)) xk_ = torch.view_as_complex(xk.float().reshape(*xk.shape[:-1], -1, 2)) freqs_cis = reshape_for_broadcast(freqs_cis, xq_) xq_out = torch.view_as_real(xq_ * freqs_cis).flatten(3) xk_out = torch.view_as_real(xk_ * freqs_cis).flatten(3) return xq_out.type_as(xq), xk_out.type_as(xk) ``` Currently, I am running llama 3 with PyTorch's [torchtitan](https://github.com/pytorch/torchtitan) codebase and with huggingface and comparing the logits. I have the following script where the `model` is the torchtitan's llama 3 implementation ```python3 from transformers import AutoModelForCausalLM, AutoTokenizer weights_path = 'meta-llama/Llama-3.2-1B' hf_model = AutoModelForCausalLM.from_pretrained(weights_path) tok = AutoTokenizer.from_pretrained(weights_path) device = "cuda" hf_model.to(device) model.eval() text = "Hello world" data = tok(text, return_tensors="pt").to(device) hf_logits = hf_model(**data).logits logits = model(data.input_ids) print(torch.allclose(hf_logits, logits, atol=1e-4)) # False ``` I am going into the forward passes for both huggingface's implementation and torchtitan's implementation and the query's and key's embeddings don't match between the two implementations after the rotary positional embedding is applied. However if I change the implementation of `apply_rotary_pos_emb` (in `modeling_llama.py`) to the one given below, which tries to implement the rotation matrix multiplication in the RoPE paper, I get almost an exact match between the logits of the torchtitan's implementation and huggingface's implementation. ```python def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1): cos = cos.unsqueeze(unsqueeze_dim) sin = sin.unsqueeze(unsqueeze_dim) # q_embed = (q * cos) + (rotate_half(q) * sin) # k_embed = (k * cos) + (rotate_half(k) * sin) # return q_embed, k_embed cos = cos[..., :cos.shape[-1] // 2] sin = sin[..., :sin.shape[-1] // 2] q_embed = torch.empty_like(q, device=q.device) q_embed[..., ::2] = (q[..., ::2] * cos) - (q[..., 1::2] * sin) q_embed[..., 1::2] = (q[..., ::2] * sin) + (q[..., 1::2] * cos) k_embed = torch.empty_like(k, device=k.device) k_embed[..., ::2] = (k[..., ::2] * cos) - (k[..., 1::2] * sin) k_embed[..., 1::2] = (k[..., ::2] * sin) + (k[..., 1::2] * cos) return q_embed, k_embed ``` ### Expected behavior Ideally I would expect to get the logits matching between the two implementations. 
Please let me know if I am missing something, or whether there is actually an issue.
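For readers hitting the same confusion, a short self-contained check (illustrative only, with made-up cos/sin values) shows that the two conventions are the same rotation applied to differently ordered feature dimensions; a fixed even/odd-to-halves permutation maps one layout onto the other, which is consistent with the Llama conversion script permuting the q/k projection weights when moving checkpoints between the two conventions.
```python
import torch

def rotate_interleaved(x, cos, sin):
    # Meta-style layout: dim 2i is paired with dim 2i+1.
    out = torch.empty_like(x)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def rotate_half_style(x, cos, sin):
    # transformers-style layout: dim i is paired with dim i + d/2.
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

d = 8
x, cos, sin = torch.randn(d), torch.randn(d // 2), torch.randn(d // 2)
perm = torch.arange(d).reshape(d // 2, 2).T.reshape(-1)  # [0, 2, 4, 6, 1, 3, 5, 7]
print(torch.allclose(rotate_interleaved(x, cos, sin)[perm], rotate_half_style(x[perm], cos, sin)))  # True
```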
[ 75, 23, 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ "Discussion", "Core: Modeling", "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/35588
TITLE flash_attention_2 2.7.2.post1 seems to crash when using `torch.compile` and `DataCollatorWithFlattening` COMMENTS 7 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.47.1 - Platform: Linux-6.6.20-aufs-1-x86_64-with-glibc2.36 - Python version: 3.11.2 - Huggingface_hub version: 0.26.2 - Safetensors version: 0.4.5 - Accelerate version: 1.2.1 - Accelerate config: not found - PyTorch version (GPU?): 2.5.1+cu124 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: No - Using GPU in script?: yes - GPU type: NVIDIA RTX A5000 ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction update to latest flash attention version (as the time of writing 2.7.2). this should be torch.compile compatible as described in https://github.com/Dao-AILab/flash-attention load a model with fa2 (tested with opt and qwen) use trainer with `DataCollatorWithFlattening` and train. this causes a crash with the following stacktrace: ``` Traceback (most recent call last): File "/cs/labs/oabend/avishai.elma/slm_eval/slm_eval/train.py", line 89, in main trainer.train(resume_from_checkpoint=cfg.cont_training) File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/trainer.py", line 2164, in train return inner_training_loop( ^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/trainer.py", line 2524, in _inner_training_loop tr_loss_step = self.training_step(model, inputs, num_items_in_batch) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/slm_eval/trainer/slam_trainer.py", line 71, in training_step return super().training_step(model, inputs, num_items_in_batch) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/trainer.py", line 3654, in training_step loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/trainer.py", line 3708, in compute_loss outputs = model(**inputs) ^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/accelerate/utils/operations.py", line 823, in forward return model_forward(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/accelerate/utils/operations.py", line 811, in __call__ return convert_to_fp32(self.model_forward(*args, **kwargs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/slm_eval/model/unit_lm.py", line 118, in forward def forward(self, File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 1109, in forward @add_start_docstrings_to_model_forward(QWEN2_INPUTS_DOCSTRING) File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 895, in forward layer_outputs = decoder_layer( ^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 584, in forward def forward( File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 364, in forward def forward( File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 419, in torch_dynamo_resume_in_forward_at_419 
logger.warning_once( File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/modeling_flash_attention_utils.py", line 231, in _flash_attention_forward def _flash_attention_forward( File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/modeling_flash_attention_utils.py", line 329, in torch_dynamo_resume_in__flash_attention_forward_at_329 max_length_q is not None or (query_length != 1 and not (torch.diff(position_ids, dim=-1) >= 0).all()) File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1269, in __call__ return self._torchdynamo_orig_callable( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1064, in __call__ result = self._inner_convert( ^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 526, in __call__ return _compile( ^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 924, in _compile guarded_code = compile_inner(code, one_graph, hooks, transform) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 666, in compile_inner return _compile_inner(code, one_graph, hooks, transform) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_utils_internal.py", line 87, in wrapper_function return function(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 699, in _compile_inner out_code = transform_code_object(code, transform) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object transformations(instructions, code_options) File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 219, in _fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 634, in transform tracer.run() File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2796, in run super().run() File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run while self.step(): ^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step self.dispatch_table[inst.opcode](self, inst) File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper return inner_fn(self, inst) ^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1680, in CALL_FUNCTION_EX self.call_function(fn, argsvars.items, kwargsvars) File 
"/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function return super().call_function(tx, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call return cls.inline_call_(parent, func, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_ tracer.run() File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run while self.step(): ^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step self.dispatch_table[inst.opcode](self, inst) File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper return inner_fn(self, inst) ^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2279, in CALL self._call(inst) File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2273, in _call self.call_function(fn, args, kwargs) File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/misc.py", line 1024, in call_function return self.obj.call_method(tx, self.name, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/misc.py", line 774, in call_method return self.call_apply(tx, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/misc.py", line 699, in call_apply ).call_function(tx, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/higher_order_ops.py", line 2015, in call_function (fwd_out, _), fwd_graph, fwd_freevars = speculate_subgraph( ^^^^^^^^^^^^^^^^^^^ File 
"/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/higher_order_ops.py", line 462, in speculate_subgraph output = f.call_function(tx, args, sub_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function return super().call_function(tx, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call return cls.inline_call_(parent, func, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_ tracer.run() File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run while self.step(): ^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step self.dispatch_table[inst.opcode](self, inst) File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper return inner_fn(self, inst) ^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2279, in CALL self._call(inst) File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2273, in _call self.call_function(fn, args, kwargs) File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/torch.py", line 897, in call_function tensor_variable = wrap_fx_proxy( ^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 2037, in wrap_fx_proxy return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 2124, in wrap_fx_proxy_cls example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 2082, in get_fake_value raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from 
None File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 2017, in get_fake_value ret_val = wrap_fake_exception( ^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1574, in wrap_fake_exception return fn() ^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 2018, in <lambda> lambda: run_node(tx.output, node, args, kwargs, nnmodule) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 2150, in run_node raise RuntimeError(make_error_message(e)).with_traceback( File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 2132, in run_node return node.target(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_ops.py", line 1116, in __call__ return self._op(*args, **(kwargs or {})) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ torch._dynamo.exc.TorchRuntimeError: Failed running call_function flash_attn._flash_attn_varlen_forward(*(FakeTensor(..., device='cuda:0', size=(s3, s4, s5), dtype=torch.float16, grad_fn=<AsStridedBackward0>), FakeTensor(..., device='cuda:0', size=(s6, s7, s8), dtype=torch.float16, grad_fn=<Error>), FakeTensor(..., device='cuda:0', size=(s9, s10, s11), dtype=torch.float16, grad_fn=<Error>), FakeTensor(..., device='cuda:0', size=(s13,), dtype=torch.int32), FakeTensor(..., device='cuda:0', size=(s13,), dtype=torch.int32), FakeTensor(..., device='cuda:0', size=(), dtype=torch.int64), FakeTensor(..., device='cuda:0', size=(), dtype=torch.int64), 0.0, FloatPow(ToFloat(s5), -0.5)), **{'causal': True, 'window_size_left': -1, 'window_size_right': -1, 'softcap': 0.0, 'alibi_slopes': None, 'return_softmax': False, 'block_table': None}): flash_attn::_flash_attn_varlen_forward() Expected a value of type 'int' for argument 'max_seqlen_q' but instead found type 'FakeTensor'. Position: 5 Value: FakeTensor(..., device='cuda:0', size=(), dtype=torch.int64) Declaration: flash_attn::_flash_attn_varlen_forward(Tensor q, Tensor k, Tensor v, Tensor cu_seqlens_q, Tensor cu_seqlens_k, SymInt max_seqlen_q, SymInt max_seqlen_k, float dropout_p, float softmax_scale, bool causal, SymInt window_size_left=-1, SymInt window_size_right=-1, float softcap=0., Tensor? alibi_slopes=None, bool return_softmax=False, Tensor? block_table=None, Tensor? leftpad_k=None, Tensor? seqused_k=None) -> (Tensor, Tensor, Tensor, Tensor) Cast error details: Unable to cast Python instance of type <class 'torch._subclasses.fake_tensor.FakeTensor'> to C++ type '?' 
(#define PYBIND11_DETAILED_ERROR_MESSAGES or compile in debug mode for details) from user code: File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/modeling_flash_attention_utils.py", line 346, in torch_dynamo_resume_in__flash_attention_forward_at_335 attn_output = flash_attn_varlen_func( File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/flash_attn/flash_attn_interface.py", line 1412, in flash_attn_varlen_func return FlashAttnVarlenFunc.apply( File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/flash_attn/flash_attn_interface.py", line 901, in forward out_padded, softmax_lse, S_dmask, rng_state = _wrapped_flash_attn_varlen_forward( Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information You can suppress this exception and fall back to eager by setting: import torch._dynamo torch._dynamo.config.suppress_errors = True ``` the code works fine when not using compile. the code doesn't crash when using compile but **not** using `DataCollatorWithFlattening`. when using compile and **not** using `DataCollatorWithFlattening` I am getting the following graph break with qwen2.5 ``` W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] Graph break from `Tensor.item()`, consider setting: W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] torch._dynamo.config.capture_scalar_outputs = True W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] or: W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] env TORCHDYNAMO_CAPTURE_SCALAR_OUTPUTS=1 W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] to include these operations in the captured graph. 
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] Graph break: from user code at: W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/accelerate/utils/operations.py", line 823, in forward W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] return model_forward(*args, **kwargs) W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/accelerate/utils/operations.py", line 811, in __call__ W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] return convert_to_fp32(self.model_forward(*args, **kwargs)) W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] return func(*args, **kwargs) W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] File "/cs/labs/oabend/avishai.elma/slm_eval/slm_eval/model/unit_lm.py", line 138, in forward W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] outputs = self.lm( W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 1165, in forward W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] outputs = self.model( W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 864, in forward W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] causal_mask = self._update_causal_mask( W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 943, in _update_causal_mask W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] if attention_mask is not None and 0.0 in attention_mask: W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] ``` ### Expected behavior the training shouldn't crash.
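For reference, a minimal sketch of the setup that triggers this; the checkpoint, toy dataset, and training arguments below are illustrative assumptions, not my actual script:
```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorWithFlattening,
    Trainer,
    TrainingArguments,
)

# any FA2-capable causal LM goes through the same code path (tested with opt and qwen)
name = "Qwen/Qwen2.5-0.5B"
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.float16, attn_implementation="flash_attention_2"
)
tokenizer = AutoTokenizer.from_pretrained(name)

# compiling the backbone is what sends _flash_attention_forward through dynamo
model.model = torch.compile(model.model)

# tiny pre-tokenized dataset; DataCollatorWithFlattening packs the samples and emits
# position_ids, which routes attention through flash_attn_varlen_func at train time
texts = ["hello world", "flash attention with packed sequences"]
train_dataset = [
    {"input_ids": ids, "labels": ids}
    for ids in (tokenizer(t)["input_ids"] for t in texts)
]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=2, max_steps=5),
    train_dataset=train_dataset,
    data_collator=DataCollatorWithFlattening(),
)
trainer.train()  # crashes as above; runs fine without torch.compile or without the collator
```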
[ 64, 59 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug", "Compilation" ]
https://api.github.com/repos/huggingface/transformers/issues/33397
TITLE the problem of precision COMMENTS 5 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY I tried to get the hidden state of the same sentence at the CLS position, but found that the values seem to be slightly different. I'm confused as to why this is. I also tried two versions of transformers, but the phenomenon is the same. transformers version: 3.3.0/4.44.2 **Code execution results:** ``` ['我可', '我可'] {'input_ids': tensor([[ 101, 2769, 1377, 102], [ 101, 2769, 1377, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0], [0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1], [1, 1, 1, 1]])} torch.Size([2, 4, 768]) [-0.5772758722305298, 0.12515394389629364, 0.5431485772132874, -0.25107723474502563, -0.024788254871964455] [-0.577276349067688, 0.12515370547771454, 0.5431481599807739, -0.2510775625705719, -0.024788187816739082] ``` **Complete code:** ```py from transformers import BertConfig, BertModel, BertTokenizer import torch import random import numpy as np random.seed(1234) np.random.seed(1234) torch.manual_seed(1234) model_dir = "pretrained_model/bert" config = BertConfig.from_pretrained(model_dir) model = BertModel.from_pretrained(model_dir, config=config) tokenizer: BertTokenizer = BertTokenizer.from_pretrained(model_dir) model.eval() def func(text_list): batch = tokenizer(text_list, add_special_tokens=True, return_tensors="pt", padding=True, truncation=True) outputs = model(**batch, return_dict=True) print(text_list) print(batch) print(outputs.last_hidden_state.size()) for x in outputs.last_hidden_state[:, 0, -5:].tolist(): print(x) print() text_list_a = ["我可", "我可"] func(text_list_a) ```
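The differences shown above are on the order of 5e-7, which is consistent with ordinary float32 accumulation noise from batched kernels rather than a correctness problem. A tolerance check along these lines, added at the end of `func`, makes that explicit:
```python
# the two rows agree up to float32 noise; exact bitwise equality is not guaranteed
out = outputs.last_hidden_state
print(torch.allclose(out[0], out[1], rtol=1e-4, atol=1e-5))  # expected: True
print((out[0] - out[1]).abs().max())                         # ~1e-6 in the run above
```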
[ 67 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Usage" ]
https://api.github.com/repos/huggingface/transformers/issues/33710
TITLE Add support for Molmo COMMENTS 7 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 7 heart: 1 rocket: 0 eyes: 0 BODY ### Feature request Hi, Would it be possible to add support for [Molmo](https://huggingface.co/allenai/Molmo-7B-D-0924) (currently using custom code)? Thanks! ### Motivation Molmo is not supported ### Your contribution N/A
[ 77, 76, 62, 12 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0 ]
[ "New model", "Feature request", "Vision", "Multimodal" ]
https://api.github.com/repos/huggingface/transformers/issues/35244
TITLE StopStringCriteria relies on `len(tokenizer)==model.config.vocab_size`, leading to index errors COMMENTS 6 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info Python: 3.12.0 Transformers: 4.46.3 ### Who can help? @gante @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction After fine-tuning EleutherAI/pythia-14m using transformer's Trainer, I run inference like this: ```python checkpoint = "models/checkpoint-166000" device = "cuda" model = AutoModelForCausalLM.from_pretrained(checkpoint) model.to(device) tokenizer = AutoTokenizer.from_pretrained(checkpoint, padding_side="left") tokenizer.pad_token_id = 1 tokenizer.pad_token = "<|padding|>" prompts = [ "prompt1", "prompt2", ] inputs = tokenizer( prompts, return_tensors="pt", padding=True, truncation=True, max_length=512, ) gen_config = copy.deepcopy(model.generation_config) gen_config.update( max_new_tokens=max_length, do_sample=True, top_k=0, pad_token_id=tokenizer.pad_token_id, stop_strings="end", ) gen_config.validate() outputs = model.generate( input_ids=inputs["input_ids"].to(device), attention_mask=inputs["attention_mask"].to(device), num_return_sequences=32, generation_config=gen_config, output_scores=True, return_dict_in_generate=True, tokenizer=tokenizer, ) ``` Note that `tokenizer.pad_token_id` has to be set explicitly because it is not present in Pythia's `special_tokens_map.json`. This code leads to the following error (run with `CUDA_LAUNCH_BLOCKING=1`): ``` ../aten/src/ATen/native/cuda/Indexing.cu:1308: indexSelectLargeIndex: block: [1,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ../aten/src/ATen/native/cuda/Indexing.cu:1308: indexSelectLargeIndex: block: [1,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ../aten/src/ATen/native/cuda/Indexing.cu:1308: indexSelectLargeIndex: block: [1,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ../aten/src/ATen/native/cuda/Indexing.cu:1308: indexSelectLargeIndex: block: [1,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
Traceback (most recent call last): File "home/m/src/playground.py", line 43, in <module> outputs = model.generate( ^^^^^^^^^^^^^^^ File "/home/m/venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/home/m/venv/lib/python3.12/site-packages/transformers/generation/utils.py", line 2215, in generate result = self._sample( ^^^^^^^^^^^^^ File "/home/m/venv/lib/python3.12/site-packages/transformers/generation/utils.py", line 3262, in _sample unfinished_sequences = unfinished_sequences & ~stopping_criteria(input_ids, scores) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/m/venv/lib/python3.12/site-packages/transformers/generation/stopping_criteria.py", line 496, in __call__ is_done = is_done | criteria(input_ids, scores, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/m/venv/lib/python3.12/site-packages/transformers/generation/stopping_criteria.py", line 402, in __call__ embedded = F.embedding(flipped_ids, self.embedding_vec) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/m/venv/lib/python3.12/site-packages/torch/nn/functional.py", line 2551, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: CUDA error: device-side assert triggered Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ``` This is due to mismatch between `len(tokenizer)` (50277) and `model.config.vocab_size` (50304 or 50432). This decision to round up the size of the embedding matrix to the next multiple of 128 or 256 was presumably made due to efficiency reasons. However, during sampling, tokens above `len(tokenizer)` can sometimes be generated. This is silently ignored by the tokenizer, converting such tokens to empty string. However, `StopStringCriteria` is implemented by indexing into an embedding with size determined by `len(tokenizer)` and therefore fails when it encounters a higher token. A temporary fix is to explicitly suppress the unknown tokens from being generated: ```python if len(tokenizer) < model.config.vocab_size: model.generation_config.suppress_tokens = list(range(len(tokenizer), model.config.vocab_size)) ``` I propose that a more principled solution would to be modify `StopStringCriteria` to ignore tokens above `len(tokenizer)`. ### Expected behavior Expected behavior of the `generate` method is to not fail.
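As an illustration of the proposed fix, here is a sketch (names follow the traceback above; this is not a tested patch, and whether an all-zero row really encodes "no match" depends on how `embedding_vec` is laid out internally):
```python
import torch
import torch.nn.functional as F

# inside StopStringCriteria.__call__: route ids outside the tokenizer's vocabulary to an
# extra all-zero row, so sampled ids >= len(tokenizer) can neither overflow the lookup
# nor spuriously match a stop string
vocab_size = self.embedding_vec.shape[0]
padded_vec = torch.cat(
    [
        self.embedding_vec,
        torch.zeros(1, self.embedding_vec.shape[1],
                    dtype=self.embedding_vec.dtype, device=self.embedding_vec.device),
    ],
    dim=0,
)
safe_ids = torch.where(flipped_ids < vocab_size, flipped_ids, torch.full_like(flipped_ids, vocab_size))
embedded = F.embedding(safe_ids, padded_vec)
```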
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/35332
TITLE DeBERTa's `DisentangledSelfAttention` hardcodes `float` dtype, which causes `bfloat16` overflow error COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info transformers: 4.47.0 Python: 3.10.5 PyTorch: 2.5.1+cu124 GPU: NVIDIA GTX 980 Ti ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm training a `DebertaForMaskedLM` model with a broader experimental framework, but you can reproduce the bug with simple inference as follows: instantiate such a model with datatype `bfloat16`, and send a batch through it. ```python import torch from transformers import DebertaConfig, DebertaForMaskedLM model = DebertaForMaskedLM._from_config(DebertaConfig(), torch_dtype=torch.bfloat16) model(**{"input_ids": torch.tensor([[101,102,103,104]]), "attention_mask": torch.tensor([[1,1,1,1]])}) ``` One of two errors is now thrown in `modeling_deberta.py`, both in `DisentangledSelfAttention.forward()` (and they can both be traced back to the same issue): 1. `RuntimeError: expected m1 and m2 to have the same dtype, but got: float != struct c10::BFloat16` 2. `RuntimeError: value cannot be converted to type at::BFloat16 without overflow` Here's where they come from: two fields in DeBERTa's `DisentangledSelfAttention` are constructed by explicitly declaring their `dtype` as `torch.float`: https://github.com/huggingface/transformers/blob/9613933b022ddbf085e2c593ed4ceea4c734179a/src/transformers/models/deberta/modeling_deberta.py#L187-L188 Then, in `forward()`, we create the two tensors `query_layer` and `key_layer` that start out with the `dtype` of the hidden states, which have the `dtype` of the model, namely `bfloat16`: https://github.com/huggingface/transformers/blob/9613933b022ddbf085e2c593ed4ceea4c734179a/src/transformers/models/deberta/modeling_deberta.py#L258-L259 But then, one of these tensors, `query_layer`, is modified by adding `self.q_bias` into it. The resulting tensor inherits the `torch.float` data type: https://github.com/huggingface/transformers/blob/9613933b022ddbf085e2c593ed4ceea4c734179a/src/transformers/models/deberta/modeling_deberta.py#L268 The first RuntimeError can occur on the following line, when `query_layer` (now `torch.float`) and `key_layer` (still `torch.bfloat16`) are multiplied. I've had this line crash on one machine and work on another, so perhaps this kind of mixed precision sometimes works. https://github.com/huggingface/transformers/blob/9613933b022ddbf085e2c593ed4ceea4c734179a/src/transformers/models/deberta/modeling_deberta.py#L276 The second RuntimeError occurs even when mixed precision is supported. It happens on the following line: https://github.com/huggingface/transformers/blob/9613933b022ddbf085e2c593ed4ceea4c734179a/src/transformers/models/deberta/modeling_deberta.py#L290 `attention_scores` is of type `bfloat16`. You then ask to fill it with the minimal value *for the data type of `query_layer`, not the data type of `attention_scores`*. Because `query_layer.dtype` is `torch.float`, that minimal value (-3.40282e+38) is *more negative than the most negative `torch.bfloat16`* (-3.38953e+38). Hence, the overflow. ### Expected behavior The `dtype` of `self.q_bias` and `self.v_bias` should be set like the rest of the modules/tensors in the model, rather than being hardcoded. 
That would keep everything `bfloat16`.
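For completeness, a tiny standalone reproduction of the second error, showing why taking the minimum of the wrong dtype overflows and why keying it off the tensor being filled does not:
```python
import torch

print(torch.finfo(torch.float32).min)   # -3.4028e+38
print(torch.finfo(torch.bfloat16).min)  # -3.3895e+38 (less negative)

scores = torch.zeros(2, 2, dtype=torch.bfloat16)
mask = torch.tensor([[True, False], [False, True]])

try:
    # mirrors the current code, which uses query_layer.dtype (float32) for the fill value
    scores.masked_fill_(mask, torch.finfo(torch.float32).min)
except RuntimeError as e:
    print(e)  # value cannot be converted to type at::BFloat16 without overflow

# using the dtype of the tensor being filled works for any precision
scores.masked_fill_(mask, torch.finfo(scores.dtype).min)
```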
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/35990
TITLE Transformers PaliGemma evaluate and compute_loss fail with tensors/device errors COMMENTS 13 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info My versions are: ``` Python Version: 3.12.7 | packaged by conda-forge | (main, Oct 4 2024, 16:05:46) [GCC 13.3.0] Torch Version: 2.5.1+cu124 CUDA Available: True CUDA Device Count: 2 GPU Name: NVIDIA GeForce RTX 3090 Transformers Version: 4.48.1 Tokenizers Version: 0.21.0 Accelerate Version: 1.3.0 ``` ### Who can help? @ArthurZucker , @amyeroberts, @qubvel ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm loading a PaliGemma2 model `google/paligemma2-3b-pt-224` and trying to fine-tune using Trainer/Seq2SeqTrainer. If I add evaluation, this fails. After doing some digging, I found that this only happens if the model is in evaluate mode. ``` batch = [valid_dataset[i] for i in range(8)] inputs = collate_fn(batch) #generate_ids = model.generate(**inputs, max_length=286+30) trainer.model.train() trainer.compute_loss(model, inputs, return_outputs=False, num_items_in_batch=416) print("works") trainer.model.train(False) trainer.compute_loss(model, inputs, return_outputs=False, num_items_in_batch=416) print("fails.") ``` I've worked around it by mokey-patching compute_loss_context_manager as follows: ``` orig_context_manager = trainer.compute_loss_context_manager class TempTrainContext(object): def __init__(self, trainer): self.trainer = trainer self.orig_context_manager = trainer.compute_loss_context_manager def __enter__(self): self.orig_context_inst = self.orig_context_manager() self.orig_context_inst.__enter__() self.training_enter = self.trainer.model.training self.trainer.model.train() def __exit__(self, type, value, traceback): self.trainer.model.train(self.training_enter) self.orig_context_inst.__exit__(type, value, traceback) def __call__(self): return self trainer.compute_loss_context_manager = TempTrainContext(trainer) ``` (Bonus question: Is this safe to do, or will I train on the test set?) Error: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[13], line 8 6 print("works") 7 trainer.model.train(False) ----> 8 trainer.compute_loss(model, inputs, return_outputs=False, num_items_in_batch=416) 9 print("fails.") 12 orig_context_manager = trainer.compute_loss_context_manager File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/transformers/trainer.py:3731, in Trainer.compute_loss(self, model, inputs, return_outputs, num_items_in_batch) 3729 loss_kwargs["num_items_in_batch"] = num_items_in_batch 3730 inputs = {**inputs, **loss_kwargs} -> 3731 outputs = model(**inputs) 3732 # Save past state if it exists 3733 # TODO: this needs to be fixed and made cleaner later. 
3734 if self.args.past_index >= 0: File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs) 1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc] 1735 else: -> 1736 return self._call_impl(*args, **kwargs) File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs) 1742 # If we don't have any hooks, we want to skip the rest of the logic in 1743 # this function, and just call forward. 1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1745 or _global_backward_pre_hooks or _global_backward_hooks 1746 or _global_forward_hooks or _global_forward_pre_hooks): -> 1747 return forward_call(*args, **kwargs) 1749 result = None 1750 called_always_called_hooks = set() File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/accelerate/hooks.py:170, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs) 168 output = module._old_forward(*args, **kwargs) 169 else: --> 170 output = module._old_forward(*args, **kwargs) 171 return module._hf_hook.post_forward(module, output) File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/transformers/models/paligemma/modeling_paligemma.py:530, in PaliGemmaForConditionalGeneration.forward(self, input_ids, pixel_values, attention_mask, position_ids, past_key_values, token_type_ids, cache_position, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict, num_logits_to_keep) 525 labels = torch.where(input_ids == self.pad_token_id, self.config.ignore_index, labels) 527 causal_mask = self._update_causal_mask( 528 attention_mask, token_type_ids, past_key_values, cache_position, input_ids, inputs_embeds, is_training 529 ) --> 530 outputs = self.language_model( 531 attention_mask=causal_mask, 532 position_ids=position_ids, 533 past_key_values=past_key_values, 534 inputs_embeds=inputs_embeds, 535 use_cache=use_cache, 536 output_attentions=output_attentions, 537 output_hidden_states=output_hidden_states, 538 return_dict=return_dict, 539 cache_position=cache_position, 540 num_logits_to_keep=num_logits_to_keep, 541 ) 543 logits = outputs.logits 544 loss = None File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs) 1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc] 1735 else: -> 1736 return self._call_impl(*args, **kwargs) File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs) 1742 # If we don't have any hooks, we want to skip the rest of the logic in 1743 # this function, and just call forward. 
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1745 or _global_backward_pre_hooks or _global_backward_hooks 1746 or _global_forward_hooks or _global_forward_pre_hooks): -> 1747 return forward_call(*args, **kwargs) 1749 result = None 1750 called_always_called_hooks = set() File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/transformers/models/gemma2/modeling_gemma2.py:842, in Gemma2ForCausalLM.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict, cache_position, num_logits_to_keep, **loss_kwargs) 840 return_dict = return_dict if return_dict is not None else self.config.use_return_dict 841 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn) --> 842 outputs = self.model( 843 input_ids=input_ids, 844 attention_mask=attention_mask, 845 position_ids=position_ids, 846 past_key_values=past_key_values, 847 inputs_embeds=inputs_embeds, 848 use_cache=use_cache, 849 output_attentions=output_attentions, 850 output_hidden_states=output_hidden_states, 851 return_dict=return_dict, 852 cache_position=cache_position, 853 ) 855 hidden_states = outputs[0] 856 # Only compute necessary logits, and do not upcast them to float if we are not computing the loss File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs) 1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc] 1735 else: -> 1736 return self._call_impl(*args, **kwargs) File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs) 1742 # If we don't have any hooks, we want to skip the rest of the logic in 1743 # this function, and just call forward. 1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1745 or _global_backward_pre_hooks or _global_backward_hooks 1746 or _global_forward_hooks or _global_forward_pre_hooks): -> 1747 return forward_call(*args, **kwargs) 1749 result = None 1750 called_always_called_hooks = set() File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/transformers/models/gemma2/modeling_gemma2.py:629, in Gemma2Model.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict, cache_position, **flash_attn_kwargs) 617 layer_outputs = self._gradient_checkpointing_func( 618 decoder_layer.__call__, 619 hidden_states, (...) 
626 cache_position, 627 ) 628 else: --> 629 layer_outputs = decoder_layer( 630 hidden_states, 631 position_embeddings=position_embeddings, 632 attention_mask=causal_mask, 633 position_ids=position_ids, 634 past_key_value=past_key_values, 635 output_attentions=output_attentions, 636 use_cache=use_cache, 637 cache_position=cache_position, 638 **flash_attn_kwargs, 639 ) 641 hidden_states = layer_outputs[0] 643 if output_attentions: File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs) 1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc] 1735 else: -> 1736 return self._call_impl(*args, **kwargs) File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs) 1742 # If we don't have any hooks, we want to skip the rest of the logic in 1743 # this function, and just call forward. 1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1745 or _global_backward_pre_hooks or _global_backward_hooks 1746 or _global_forward_hooks or _global_forward_pre_hooks): -> 1747 return forward_call(*args, **kwargs) 1749 result = None 1750 called_always_called_hooks = set() File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/accelerate/hooks.py:170, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs) 168 output = module._old_forward(*args, **kwargs) 169 else: --> 170 output = module._old_forward(*args, **kwargs) 171 return module._hf_hook.post_forward(module, output) File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/transformers/models/gemma2/modeling_gemma2.py:299, in Gemma2DecoderLayer.forward(self, hidden_states, position_embeddings, attention_mask, position_ids, past_key_value, output_attentions, use_cache, cache_position) 296 hidden_states = self.input_layernorm(hidden_states) 298 # Self Attention --> 299 hidden_states, self_attn_weights = self.self_attn( 300 hidden_states=hidden_states, 301 position_embeddings=position_embeddings, 302 attention_mask=attention_mask, 303 position_ids=position_ids, 304 past_key_value=past_key_value, 305 output_attentions=output_attentions, 306 use_cache=use_cache, 307 cache_position=cache_position, 308 ) 309 hidden_states = self.post_attention_layernorm(hidden_states) 310 hidden_states = residual + hidden_states File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs) 1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc] 1735 else: -> 1736 return self._call_impl(*args, **kwargs) File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs) 1742 # If we don't have any hooks, we want to skip the rest of the logic in 1743 # this function, and just call forward. 
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1745 or _global_backward_pre_hooks or _global_backward_hooks 1746 or _global_forward_hooks or _global_forward_pre_hooks): -> 1747 return forward_call(*args, **kwargs) 1749 result = None 1750 called_always_called_hooks = set() File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/accelerate/hooks.py:170, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs) 168 output = module._old_forward(*args, **kwargs) 169 else: --> 170 output = module._old_forward(*args, **kwargs) 171 return module._hf_hook.post_forward(module, output) File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/transformers/models/gemma2/modeling_gemma2.py:224, in Gemma2Attention.forward(self, hidden_states, position_embeddings, attention_mask, past_key_value, cache_position, **kwargs) 221 if past_key_value is not None: 222 # sin and cos are specific to RoPE models; cache_position needed for the static cache 223 cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position} --> 224 key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs) 226 attention_interface: Callable = eager_attention_forward 227 if self.config._attn_implementation != "eager": File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/transformers/cache_utils.py:1717, in HybridCache.update(self, key_states, value_states, layer_idx, cache_kwargs) 1714 else: 1715 update_fn = self._static_update -> 1717 return update_fn( 1718 cache_position, 1719 layer_idx, 1720 key_states, 1721 value_states, 1722 k_out, 1723 v_out, 1724 k_out.shape[2], 1725 ) File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/transformers/cache_utils.py:1694, in HybridCache._static_update(self, cache_position, layer_idx, key_states, value_states, k_out, v_out, max_cache_len) 1693 def _static_update(self, cache_position, layer_idx, key_states, value_states, k_out, v_out, max_cache_len): -> 1694 k_out[:, :, cache_position] = key_states 1695 v_out[:, :, cache_position] = value_states 1697 self.key_cache[layer_idx] = k_out RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!" ``` Error of Evaluator (bottom half of file): https://gist.github.com/BlGene/607c7bee450e03835aa2bf0d2fd2959a ### Expected behavior Training runs with evaluation enabled.
[ 64, 33 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug", "Cache" ]
https://api.github.com/repos/huggingface/transformers/issues/35716
TITLE Regression - Phi3 has graph breaks in 4.48 but not in 4.47.1 COMMENTS 9 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.48.0 - Platform: Linux-6.8.0-48 - Python version: 3.12.3 - Huggingface_hub version: 0.27.1 - Safetensors version: 0.5.2 - Accelerate version: 1.2.1 - Accelerate config: not found - PyTorch version (GPU?): 2.6.0 - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: No - Using GPU in script?: No - GPU type: NVIDIA RTX 6000 Ada Generation ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python import torch from transformers import AutoConfig, AutoModelForCausalLM cfg = AutoConfig.from_pretrained("microsoft/Phi-3-mini-128k-instruct") cfg.num_hidden_layers = 2 with torch.device("cuda"): m = AutoModelForCausalLM.from_config(cfg) def backend(gm, sample_args): # gm.print_readable() print("SUBGRAPH") return gm m.model = torch.compile(m.model, backend=backend) input_ids = torch.randint(0, 100, (1, 4096), device="cuda") m(input_ids) ``` For 4.48, we see 4 subgraphs while with previous 4.47.1 we see only 1 subgraph. Running with `TORCH_LOGS="graph_breaks"` prints ```python V0115 16:09:58.933000 510381 torch/_dynamo/symbolic_convert.py:444] [1/0] [__graph_breaks] Graph break (details suppressed) in user code at /usr/local/lib/python3.12/dist-packages/transformers/models/phi3/modeling_phi3.py:386 V0115 16:09:58.933000 510381 torch/_dynamo/symbolic_convert.py:444] [1/0] [__graph_breaks] Reason: Unsupported: Dynamic control flow is not supported at the moment. Please use functorch.experimental.control_flow.cond to explicitly capture the control flow. For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#cond-operands V0115 16:09:58.945000 510381 torch/_dynamo/symbolic_convert.py:444] [2/0] [__graph_breaks] Graph break (details suppressed) in user code at /usr/local/lib/python3.12/dist-packages/transformers/models/phi3/modeling_phi3.py:386 V0115 16:09:58.945000 510381 torch/_dynamo/symbolic_convert.py:444] [2/0] [__graph_breaks] Reason: Data-dependent jump ``` ### Expected behavior Should have a single subgraph ideally like before.
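To pinpoint where the extra breaks come from, the same repro can be fed through `torch._dynamo.explain` instead of the custom backend (a diagnostic suggestion only, not a fix):
```python
import torch._dynamo as dynamo

# reuse `m` and `input_ids` from the snippet above, before wrapping m.model in torch.compile
explanation = dynamo.explain(m.model)(input_ids)
print(explanation.graph_count, explanation.graph_break_count)
for reason in explanation.break_reasons:
    print(reason)
```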
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/34138
TITLE Incorrect average calculation in `Perplexity of fixed-length models` COMMENTS 6 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.44.0 - Platform: macOS-14.6-arm64-arm-64bit - Python version: 3.10.14 - Huggingface_hub version: 0.24.5 - Safetensors version: 0.4.3 - Accelerate version: 0.33.0 - Accelerate config: not found - PyTorch version (GPU?): 2.4.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @stevhliu ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction (This takes longer than 30s, I apologize. But the example has a lot of data and I wanted to minimally change the provided code to demonstrate the point. Note this is the pytorch version.). 1. [The example script for finding the perplexity of fixed length](https://huggingface.co/docs/transformers/perplexity) model using strided windows does not properly calculate the average negative log-likelihood for each token aggregated over all the strided context windows. The first context window has the maximum allowable size, which is 1024. That amounts to 1023 targets. The remaining windows have 511 targets, except the final window which has 411. The way the average is calculated assumes that the same number of targets are considered in each strided context. While this results in a minor difference (roughly 0.01 higher than it should be) in the case shown, it leads to much larger issues with shorter texts. 2. Run the following [colab script](https://drive.google.com/file/d/13jCCj85-MOw4bhRTWJ8iyFvs4ApBg9R2/view?usp=sharing) (I don't have access to GPUs on colab, so I ran this on mac with `mps` with the exact same notebook). 3. Compare the printed `ppl` value which is `16.44` to the reported one which is `16.45` ### Expected behavior The value of `ppl` should reflect the average negative log-likelihood for each token across the entire corpus.
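For reference, a sketch of the aggregation the corrected average requires. It assumes the `model`, `encodings`, `max_length`, `stride`, and `device` setup from the guide, and only changes how the per-window losses are combined (each window is weighted by the number of targets it actually scores):
```python
import torch

seq_len = encodings.input_ids.size(1)
nll_sum = 0.0
n_tokens = 0
prev_end_loc = 0
for begin_loc in range(0, seq_len, stride):
    end_loc = min(begin_loc + max_length, seq_len)
    trg_len = end_loc - prev_end_loc  # may be shorter on the last window
    input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100

    with torch.no_grad():
        outputs = model(input_ids, labels=target_ids)
        # outputs.loss is the *mean* NLL over this window's valid targets
        # (labels are shifted internally, so the first position is never scored)
        num_valid = (target_ids[:, 1:] != -100).sum().item()
        nll_sum += outputs.loss.float() * num_valid
        n_tokens += num_valid

    prev_end_loc = end_loc
    if end_loc == seq_len:
        break

ppl = torch.exp(nll_sum / n_tokens)  # 1023 + 511*k + 411 targets, each counted exactly once
```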
[ 6, 64 ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Good Second Issue", "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/33459
TITLE chat_template.json is not saved when using LlavaProcessor.save_pretrained() COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.45.0.dev0 - Platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.24.6 - Safetensors version: 0.4.5 - Accelerate version: 0.33.0 - Accelerate config: not found - PyTorch version (GPU?): 2.4.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: NVIDIA A100 80GB PCIe ### Who can help? @zucchini-nlp ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoImageProcessor, AutoTokenizer vision_model_name_or_path = "Bingsu/clip-vit-large-patch14-ko" language_model_name_or_path = "beomi/llama-2-ko-7b" output_dir = "test" tokenizer = AutoTokenizer.from_pretrained(language_model_name_or_path, chat_template="test") processor = LlavaProcessor( tokenizer=tokenizer, image_processor=AutoImageProcessor.from_pretrained(vision_model_name_or_path), chat_template=tokenizer.chat_template, ) processor.save_pretrained(output_dir) print(processor.from_pretrained(output_dir).chat_template) ``` ### Expected behavior Hi huggingface! As the title says, chat_template.json is not generated properly when I do LlavaProcessor.save_pretrained(). So when I do from_pretrained, the chat_template is not loaded properly. Although there is a [section](https://github.com/huggingface/transformers/blob/main/src/transformers/processing_utils.py#L504-L510) in ProcessorMixin.save_pretrained() to save chat_template, [ProcessorMixin.to_dict()](https://github.com/huggingface/transformers/blob/main/src/transformers/processing_utils.py#L504-L510) method is removing the chat_template. Looking at the history, I see that this syntax was added when I added LLava-OneVision. My guess is that chat_template.json should be saved.
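Until the saving path is fixed, a minimal workaround on the loading side (continuing the snippet above) is to re-attach the template by hand:
```python
from transformers import LlavaProcessor

reloaded = LlavaProcessor.from_pretrained(output_dir)
if reloaded.chat_template is None:
    # re-attach the template that save_pretrained dropped
    reloaded.chat_template = tokenizer.chat_template
print(reloaded.chat_template)
```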
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/34639
TITLE Padding error when using Universal Assisted Generation with ASR pipeline COMMENTS 9 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 1 BODY ### System Info Cannot be solved by supplying a padding arg to the pipeline (it is not accepted). @gante transformers version: https://github.com/huggingface/transformers.git@refs/pull/34504/merge Ubuntu Python 3.10.15 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ![image](https://github.com/user-attachments/assets/5818757b-25a6-471c-81a9-20679b43f73d) ### Expected behavior The pipeline should complete execution as normal.
[ 51, 64, 43 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Core: Pipeline", "bug", "Audio" ]
https://api.github.com/repos/huggingface/transformers/issues/33835
TITLE DistillBERT is ExecuTorch compatible COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request Enable DistillBERT to ["Export to ExecuTorch"](https://github.com/huggingface/transformers/issues/32253) workflow ### Motivation See details in #32253 ### Your contribution Enable DistillBERT model
[ 76, 31 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ "Feature request", "ExecuTorch" ]
https://api.github.com/repos/huggingface/transformers/issues/34927
TITLE Add Pytorch Tensor Parallel support for Mistral COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? This PR uses the torch.distributed.tensor.parallel subpackage to implement Tensor Parallel for Mistral. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. Link: https://github.com/huggingface/transformers/issues/34789 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
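For reviewers unfamiliar with the subpackage, here is a self-contained toy of the sharding pattern involved (column-wise up-projection, row-wise down-projection). It only illustrates `torch.distributed.tensor.parallel`; it is not the code added by this PR:
```python
import torch
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import ColwiseParallel, RowwiseParallel, parallelize_module


class ToyMLP(nn.Module):
    """Same shard pattern as a transformer MLP block."""

    def __init__(self, dim: int = 1024, hidden: int = 4096):
        super().__init__()
        self.up_proj = nn.Linear(dim, hidden, bias=False)
        self.down_proj = nn.Linear(hidden, dim, bias=False)

    def forward(self, x):
        return self.down_proj(torch.relu(self.up_proj(x)))


# launch with: torchrun --nproc-per-node=2 toy_tp.py
mesh = init_device_mesh("cuda", (2,))
block = parallelize_module(
    ToyMLP().cuda(),
    mesh,
    # up_proj is sharded over output features, down_proj over input features,
    # so the whole block needs a single all-reduce at the end
    {"up_proj": ColwiseParallel(), "down_proj": RowwiseParallel()},
)
# in a real run every rank must feed the same replicated input
out = block(torch.randn(8, 1024, device="cuda"))
```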
[ 81 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[ "Tensor Parallel" ]
https://api.github.com/repos/huggingface/transformers/issues/34983
TITLE BatchEncoding.to throws away columns silently, thus no way to pass non-tensor columns such as String in Trainer metric computation COMMENTS 6 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info unrelated ### Who can help? @muellerzr @SunMarc (original tags, no longer valid) @ArthurZucker (re-tagged because I want to discuss a patch release) ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi, thanks for the library! Consider this simple snippet:

```python
x = transformers.tokenization_utils_base.BatchEncoding({'a': ['x', 'y']})
x.to('cpu')  # or cuda or whatever
```

The column `a` is then silently removed :( This is annoying in the following scenario: for each of my training/eval samples, I have a string column that serves as a tag, and I want to use it when computing metrics and losses. That does not work. After some debugging, the root cause is that the column gets silently removed in the `to` call mentioned above. Since torch does not support tensors of dtype `str`, there currently seems to be no way to pass such data through. ### Expected behavior (see above)
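A minimal sketch of a workaround until this is addressed: move only the tensor entries yourself instead of relying on `BatchEncoding.to`, so that string columns survive (the `tag` column and the helper below are invented for illustration):

```python
import torch
from transformers.tokenization_utils_base import BatchEncoding

# Hypothetical batch with one tensor column and one string "tag" column.
batch = BatchEncoding({"input_ids": torch.tensor([[0, 1]]), "tag": ["sample-42"]})

def to_device(enc, device):
    # Move tensors, keep every non-tensor column untouched.
    return BatchEncoding(
        {k: (v.to(device) if isinstance(v, torch.Tensor) else v) for k, v in enc.items()}
    )

batch = to_device(batch, "cpu")
print(batch["tag"])  # ['sample-42'] -- the string column is still there
```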
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/33672
TITLE Xmod model has no module 'roberta' COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info latest version ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction path: src/transformers/models/xmod/modeling_xmod.py, function `def freeze_embeddings_and_language_adapters(self)`. When I call this function, the following error is raised: **model has no module 'roberta'**. A similar error is raised when I load my fine-tuned Xmod checkpoint. I modified the function as below (dropping `roberta`); it seems to run fine, but I am not certain it is really correct:

```python
def freeze_embeddings_and_language_adapters(self):
    """
    Freeze the embeddings and language adapters of the model. Usually, this is applied before the model
    is fine-tuned on a downstream task.
    """
    logger.info("Freezing embeddings")
    for parameter in self.embeddings.parameters():
        parameter.requires_grad = False
    logger.info("Freezing adapters")
    for layer in self.encoder.layer:
        if layer.output.adapter_layer_norm is not None:
            for parameter in layer.output.adapter_layer_norm.parameters():
                parameter.requires_grad = False
        for parameter in layer.output.adapter_modules.parameters():
            parameter.requires_grad = False
```

### Expected behavior Calling `freeze_embeddings_and_language_adapters` should not raise an error.
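For reference, a hedged sketch of a variant that works whether the helper is called on a bare `XmodModel` or on a task model that wraps the backbone as `self.roberta` (the `getattr` fallback is my assumption, not the official fix):

```python
def freeze_embeddings_and_language_adapters(self):
    # Resolve the backbone: task models expose it as `.roberta`,
    # the bare XmodModel is the backbone itself (assumption for illustration).
    backbone = getattr(self, "roberta", self)
    logger.info("Freezing embeddings")
    for parameter in backbone.embeddings.parameters():
        parameter.requires_grad = False
    logger.info("Freezing adapters")
    for layer in backbone.encoder.layer:
        if layer.output.adapter_layer_norm is not None:
            for parameter in layer.output.adapter_layer_norm.parameters():
                parameter.requires_grad = False
        for parameter in layer.output.adapter_modules.parameters():
            parameter.requires_grad = False
```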
[ 67, 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Usage", "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/35685
TITLE Mask2former & Maskformer Fast Image Processor COMMENTS 7 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker - vision models: @amyeroberts, @qubvel - speech models: @ylacombe, @eustlb - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @zucchini-nlp (visual-language models) or @gante (all others) - pipelines: @Rocketknight1 - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @SunMarc - chat templates: @Rocketknight1 Integrations: - deepspeed: HF Trainer/Accelerate: @muellerzr - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber Documentation: @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
[ 62, 65 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Vision", "Processing" ]
https://api.github.com/repos/huggingface/transformers/issues/33617
TITLE [please help!] I can't load the tokenizer COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.44.2 - Platform: Linux-4.18.0-305.3.1.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.14 - Huggingface_hub version: 0.25.0 - Safetensors version: 0.4.4 - Accelerate version: 0.27.2 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When I used the following code to load the tokenizer, a bug occurred. Does anyone know how to fix it? # Code: import transformers model = "meta-llama/Meta-Llama-3-70B" hf_token = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' model = transformers.LlamaForCausalLM.from_pretrained(model, torch_dtype='auto', token=hf_token, low_cpu_mem_usage=True) tokenizer = transformers.AutoTokenizer.from_pretrained(model, use_fast=False, token=hf_token) # Error: Loading checkpoint shards: 100%|███████| 30/30 [00:03<00:00, 9.20it/s] Traceback (most recent call last): File "/home/shaoyuantian/anaconda3/envs/rllm/lib/python3.10/site-packages/transformers/utils/hub.py", line 402, in cached_file resolved_file = hf_hub_download( File "/home/shaoyuantian/anaconda3/envs/rllm/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py", line 101, in inner_f return f(*args, **kwargs) File "/home/shaoyuantian/anaconda3/envs/rllm/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 106, in _inner_fn validate_repo_id(arg_value) File "/home/shaoyuantian/anaconda3/envs/rllm/lib/python3.10/site-packages/huggingface_hub/utils/validators.py", line 160, in validate_repo_id raise HFValidationError( huggingface_hub.errors.HFValidationError: Repo id must use alphanumeric chars or '-', '', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(128256, 8192) (layers): ModuleList( (0-79): 80 x LlamaDecoderLayer( (self_attn): LlamaAttention( (q_proj): Linear(in_features=8192, out_features=8192, bias=False) (k_proj): Linear(in_features=8192, out_features=1024, bias=False) (v_proj): Linear(in_features=8192, out_features=1024, bias=False) (o_proj): Linear(in_features=8192, out_features=8192, bias=False) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear(in_features=8192, out_features=28672, bias=False) (up_proj): Linear(in_features=8192, out_features=28672, bias=False) (down_proj): Linear(in_features=28672, out_features=8192, bias=False) (act_fn): SiLU() ) (input_layernorm): LlamaRMSNorm((8192,), eps=1e-05) (post_attention_layernorm): LlamaRMSNorm((8192,), eps=1e-05) ) ) (norm): LlamaRMSNorm((8192,), eps=1e-05) (rotary_emb): LlamaRotaryEmbedding() ) (lm_head): Linear(in_features=8192, out_features=128256, bias=False) )'. 
The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/shaoyuantian/program/RLLM/idea_test/test.py", line 20, in tokenizer = transformers.AutoTokenizer.from_pretrained(model, use_fast=False, File "/home/shaoyuantian/anaconda3/envs/rllm/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 834, in from_pretrained tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs) File "/home/shaoyuantian/anaconda3/envs/rllm/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 666, in get_tokenizer_config resolved_config_file = cached_file( File "/home/shaoyuantian/anaconda3/envs/rllm/lib/python3.10/site-packages/transformers/utils/hub.py", line 466, in cached_file raise EnvironmentError( OSError: Incorrect path_or_model_id: 'LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(128256, 8192) (layers): ModuleList( (0-79): 80 x LlamaDecoderLayer( (self_attn): LlamaAttention( (q_proj): Linear(in_features=8192, out_features=8192, bias=False) (k_proj): Linear(in_features=8192, out_features=1024, bias=False) (v_proj): Linear(in_features=8192, out_features=1024, bias=False) (o_proj): Linear(in_features=8192, out_features=8192, bias=False) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear(in_features=8192, out_features=28672, bias=False) (up_proj): Linear(in_features=8192, out_features=28672, bias=False) (down_proj): Linear(in_features=28672, out_features=8192, bias=False) (act_fn): SiLU() ) (input_layernorm): LlamaRMSNorm((8192,), eps=1e-05) (post_attention_layernorm): LlamaRMSNorm((8192,), eps=1e-05) ) ) (norm): LlamaRMSNorm((8192,), eps=1e-05) (rotary_emb): LlamaRotaryEmbedding() ) (lm_head): Linear(in_features=8192, out_features=128256, bias=False) )'. Please provide either the path to a local folder or the repo_id of a model on the Hub. ### Expected behavior I hope to locate the cause of the problem and find a solution
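The root cause is visible in the snippet: the string `model = "meta-llama/Meta-Llama-3-70B"` is overwritten with the loaded model object, and that object is then passed to `AutoTokenizer.from_pretrained`, which expects a repo id or local path. A sketch of the corrected loading code (the token value is a placeholder):

```python
import transformers

model_id = "meta-llama/Meta-Llama-3-70B"  # keep the repo id in its own variable
hf_token = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

model = transformers.LlamaForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", token=hf_token, low_cpu_mem_usage=True
)
# Pass the repo id string, not the loaded model object:
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id, use_fast=False, token=hf_token)
```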
[ 47, 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Core: Tokenization", "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/35072
TITLE When using `save_pretrained` to save a model loaded with `from_pretrained`, its size becomes twice as large. COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.46.1 - Platform: Linux-5.15.0-25-generic-x86_64-with-glibc2.35 - Python version: 3.10.15 - Huggingface_hub version: 0.26.2 - Safetensors version: 0.4.5 - Accelerate version: 1.0.1 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: DEEPSPEED - mixed_precision: fp16 - use_cpu: False - debug: False - num_processes: 2 - machine_rank: 0 - num_machines: 1 - rdzv_backend: static - same_network: True - main_training_function: main - enable_cpu_affinity: False - deepspeed_config: {'gradient_accumulation_steps': 1, 'gradient_clipping': 1.0, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': False, 'zero_stage': 2} - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] - PyTorch version (GPU?): 2.5.1+cu124 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: NVIDIA GeForce RTX 3090 ### Who can help? @ArthurZucker @Rocketknight1 @muellerzr ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` import transformers from transformers import AutoTokenizer model = transformers.OPTForCausalLM.from_pretrained("facebook/opt-6.7b") tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b") model.save_pretrained("/data/llm/opt-6.7b-testSave") tokenizer.save_pretrained("/data/llm/opt-6.7b-testSave") ``` ### Expected behavior The weights of `facebook/opt-6.7b` downloaded from Hugging Face are 12.4GB. After using `save_pretrained`, the saved weights became 24.8GB, which is twice as large as the original size. This leads to an "OUT OF MEMORY" error when I load the model for inference. I want the model saved with `save_pretrained` to be the same size as the downloaded one.
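The doubling is most likely a dtype effect rather than a saving bug: the hub checkpoint is stored in half precision, but `from_pretrained` without `torch_dtype` loads it in float32 (the framework default), so `save_pretrained` writes float32 weights at twice the size. A sketch of the same snippet keeping the checkpoint dtype (assuming the hub weights are fp16):

```python
import torch
import transformers
from transformers import AutoTokenizer

model = transformers.OPTForCausalLM.from_pretrained(
    "facebook/opt-6.7b", torch_dtype="auto"  # or torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")

model.save_pretrained("/data/llm/opt-6.7b-testSave")  # should stay at ~12.4GB
tokenizer.save_pretrained("/data/llm/opt-6.7b-testSave")
```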
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/34475
TITLE DistilBERT is ExecuTorch compatible COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? DistilBert is ExecuTorch compatible. Unit Test: `RUN_SLOW=1 pytest tests/models/distilbert/test_modeling_distilbert.py -k test_export -v` ``` tests/models/distilbert/test_modeling_distilbert.py::DistilBertModelIntergrationTest::test_export PASSED [100%] ``` E2E test in ExecuTorch: Patch https://github.com/pytorch/executorch/pull/6509 `python -m extension.export_util.export_hf_model -hfm="distilbert-base-uncased" -lm masked_lm` ``` Saved exported program to ./distilbert.pte ``` `./cmake-out/backends/xnnpack/xnn_executor_runner --model_path distilbert.pte` ``` I 00:00:00.080326 executorch:executor_runner.cpp:82] Model file distilbert.pte is loaded. I 00:00:00.080359 executorch:executor_runner.cpp:91] Using method forward I 00:00:00.080361 executorch:executor_runner.cpp:138] Setting up planned buffer 0, size 12286720. I 00:00:00.115094 executorch:executor_runner.cpp:161] Method loaded. I 00:00:00.115124 executorch:executor_runner.cpp:171] Inputs prepared. I 00:00:00.179285 executorch:executor_runner.cpp:180] Model executed successfully. I 00:00:00.179301 executorch:executor_runner.cpp:184] 1 outputs: Output 0: tensor(sizes=[1, 64, 30522], [ -4.47825, -4.55548, -4.59359, -4.61276, -4.71701, -4.22803, -4.54525, -4.30736, -4.532, -4.9645, -4.19537, -4.51069, -4.34262, -4.96867, -4.38696, -5.06627, -5.01279, -4.89841, -4.42651, -4.47658, -4.70912, -4.49927, -4.48796, -4.67513, -4.3218, -4.54809, -4.59159, -4.65592, -4.54133, -4.50207, -4.24141, -4.65805, -4.49932, -4.36075, -4.38477, -4.69771, -4.76032, -5.06464, -4.57687, -4.54149, -4.54834, -4.80815, -4.47513, -4.61154, -4.69458, -4.09497, -4.42706, -4.48752, -4.84431, -4.40653, -4.6515, -4.60421, -4.39167, -4.9955, -4.65156, -4.57042, -4.58516, -4.46815, -4.43985, -4.83551, -4.20381, -4.59275, -4.94262, -4.32183, -4.44933, -4.59167, -4.66095, -4.85241, -4.83965, -4.37491, -4.82371, -4.34802, -4.26705, -4.79766, -4.47379, -4.7745, -4.59805, -4.6717, -4.2979, -4.65086, -4.88208, -4.84994, -4.24183, -4.73356, -4.97729, -5.18642, -4.64655, -4.64227, -4.46517, -4.6624, -4.50896, -4.75761, -4.26062, -4.75898, -4.7547, -4.54612, -4.43117, -4.4847, -4.28017, -4.33875, ..., -2.56383, -0.124811, -1.62058, -0.539149, -2.0116, -2.13068, 0.614868, -1.62362, -2.73875, -0.295115, -2.33206, 0.223186, -3.19978, -2.81419, -0.764227, 0.385865, -3.02447, -4.4802, -3.33432, -1.58703, -1.79603, -2.96534, -1.06687, -3.17183, -1.81405, 0.0236263, -0.992222, -3.71788, 0.761198, 0.089091, -2.99735, -2.04351, -2.40324, -2.86246, -1.24337, -2.34749, -2.01503, -2.45599, -4.6185, 1.14074, -3.04769, -1.78048, -1.09878, -3.30111, -2.08858, -1.64816, -2.03306, -1.94704, -0.205174, -1.90752, -2.6837, -1.25019, -0.415001, -3.73985, -1.53322, -0.605044, -3.7232, -0.258519, -1.85742, -1.55172, -4.25782, -3.31136, -1.23, -1.60789, -2.16738, -2.58743, 0.324617, 0.266767, -2.14392, -2.59203, -1.90562, -3.10258, -1.81314, 1.15056, -3.81185, -2.48559, -2.03798, -2.57377, -2.39025, -1.43463, -0.672718, -1.97253, -3.45209, -1.31699, -0.362099, -2.69917, -3.11479, -3.16947, -0.0704084, 0.330248, -3.50465, -3.19989, -4.00352, -3.97841, -2.49317, -4.99941, -4.31784, -3.77685, -4.15103, 3.47488, ]) ``` ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? 
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. #33835 - [x] Did you write any new necessary tests? ## Who can review? @ArthurZucker @qubvel
[ 73, 31 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "run-slow", "ExecuTorch" ]
https://api.github.com/repos/huggingface/transformers/issues/33834
TITLE T5 is ExecuTorch compatible COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request Enable T5 for the ["Export to ExecuTorch"](https://github.com/huggingface/transformers/issues/32253) workflow ### Motivation See details in #32253 ### Your contribution Model enablement
[ 76, 31 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ "Feature request", "ExecuTorch" ]
https://api.github.com/repos/huggingface/transformers/issues/33960
TITLE AutoModelForConditionalGeneration COMMENTS 8 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request An auto class that does the same as `AutoModelForCausalLM`, but for models that also work with images. ### Motivation I've noticed more and more vision-language models that use the "ForConditionalGeneration" suffix in their model classes. I have code that works with any text model: because of `AutoModelForCausalLM`, people can use any chat model. I'd like to do the same for models that work with images. ### Your contribution -
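Until such an auto class exists, a partial stopgap is `AutoModelForVision2Seq` together with `AutoProcessor`; a hedged sketch (coverage depends on which architectures are registered under that mapping, and the checkpoint below is just an example):

```python
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor

model_id = "llava-hf/llava-1.5-7b-hf"  # example image+text chat model
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.float16)
```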
[ 76 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/34657
TITLE Different LlamaRotaryEmbedding in old and new versions of transformers COMMENTS 6 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info Two versions of transformers: ========= NEW VERSION ============== - `transformers` version: 4.46.1 - Platform: Linux-5.15.0-1044-nvidia-x86_64-with-glibc2.35 - Python version: 3.11.10 - Huggingface_hub version: 0.23.3 - Safetensors version: 0.4.3 - Accelerate version: 0.32.1 - Accelerate config: not found - PyTorch version (GPU?): 2.3.1+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: NVIDIA H100 80GB HBM3 =========== OLD VERSION ===================== - `transformers` version: 4.34.1 - Platform: Linux-5.15.0-1044-nvidia-x86_64-with-glibc2.35 - Python version: 3.11.10 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.3 - Accelerate version: 0.20.3 - Accelerate config: not found - PyTorch version (GPU?): 2.1.1+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The pull request https://github.com/huggingface/transformers/pull/29285 was aimed to make calculations of sin and cos of RoPE to be in float 32. But it seems that changing device from cpu to cuda also produces different results. Though the difference is not so big. To check this you may run the following code. ```import torch vals = torch.linspace(0, 1, 30000, dtype=torch.float32) computes = { "cpu_32" : vals.float().cpu().cos(), "cuda_32" : vals.float().cuda().cos(), "cpu_16" : vals.half().cpu().cos(), "cuda_16": vals.half().cuda().cos() } def compare(x, y): return max(torch.max(torch.abs(x.to(y.device) - y)), torch.max(torch.abs(x - y.to(x.device)))).item() keys = computes.keys() print(end='\t') for k in keys: print(k, end='\t\t') print() for k1 in keys: print(k1, end='\t') for k2 in keys: print(f"{compare(computes[k1], computes[k2]):1.3e}", end='\t') print() ``` The output: ``` cpu_32 cuda_32 cpu_16 cuda_16 cpu_32 0.000e+00 5.960e-08 4.389e-04 4.389e-04 cuda_32 5.960e-08 0.000e+00 4.389e-04 4.389e-04 cpu_16 4.389e-04 4.389e-04 0.000e+00 0.000e+00 cuda_16 4.389e-04 4.389e-04 0.000e+00 0.000e+00 ``` This table shows the maximum difference between calculations on different devices and using different data types. You may see that all float16 computations are identical. But float32 are different for cuda and cpu. Previously all sin and cos computations were performed on cpu. To maintain backward compatibility, I propose to run float32 computations on cpu. 
Here https://github.com/unslothai/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L142 ``` 142 emb = torch.cat((freqs, freqs), dim=-1) 143 cos = emb.cos() 144 sin = emb.sin() ``` change to ``` 142 emb = torch.cat((freqs, freqs), dim=-1).cpu() 143 cos = emb.cos().to(device_type) 144 sin = emb.sin().to(device_type) ``` ### Impact According to my study, this difference in calculation of sin & cos embeddings impacts output logits and generated tokens. The difference between values of output logits may exceed 10. More than 0.1% of output tokens may be changed in comparison to the original calculations. ### Expected behavior RoPE sin and cos values are expected to be the same as in previous versions of transformers.
[ 23, 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Core: Modeling", "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/34063
TITLE Add DetrImageProcessorFast COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 3 eyes: 0 BODY # What does this PR do? Adds a fast image processors for DETR. Follows issue #33810. This image processor is a result of [this work](https://www.notion.so/huggingface2/OptimVision-Optimize-preprocessing-time-10f1384ebcac8091a12debb87fe5f591) on comparing different image processing method. The processing methods use only [torchvision transforms](https://pytorch.org/vision/stable/transforms.html) (either v1 or v2, depending on the torchvision version) and torch tensors. Just like the current DETR image processor, this processor can also process object detection or segmentation annotations. This processing also uses only torch tensors and torchvision transforms. The post-processing methods have not been modified from the original image processor. ## Implementation A previous fast image processor implementation for VIT ([link](https://github.com/huggingface/transformers/blob/c9afee539204f5e658d03e63a1df3aacb4cab305/src/transformers/models/vit/image_processing_vit_fast.py#L50)) uses torchvision transform classes and `Compose` to create a one step processing class. However this poses two problems: - The torchvision v2 Transforms are only torch compile/scripting compatible in their functional forms and not in their Class form ([source](https://pytorch.org/vision/stable/transforms.html#torchscript-support)). - A one step processing class is not possible when the processing depends on the input, like it's the case for DETR for resizing and padding. So this implementation uses the functional forms of torchvision transforms, and it's structure is very similar to the current DETR image processor. All the numpy/PIL operations have been converted to torch or torchvision operations, and like the VIT fast image processor, this processor only accept `return_tensors = "pt"` The processor call function accept a `device` kwarg, as processing can be performed on both CPU and GPU, but is much faster on GPU. I wanted to add device as an `init` argument, but that would make the signatures of fast and slow processors different, which make some tests fails. ## Usage Except for the fact that it only returns torch tensors, this fast processor is fully compatible with the current one. It can be instantiated through AutoImageProcessor with use_fast=True, or through the Class directly: ```python from transformers import AutoImageProcessor processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50", use_fast=True) ``` ```python from transformers import DetrImageProcessorFast processor = DetrImageProcessorFast.from_pretrained("facebook/detr-resnet-50") ``` Usage is the same as the current processor, except for the `device` kwarg: ```python from torchvision.io import read_image images = torchvision.io.read_image(image_path) processor = DetrImageProcessorFast.from_pretrained("facebook/detr-resnet-50") images_processed = processor(images , return_tensors="pt", device="cuda") ``` If `device` is not specified: - If the input images are tensors, the processing will be done on the device of the images. - If the inputs are PIL or Numpy images, the processing is done on CPU. 
## Performance gains ### Main Takeaways #### Processing speedup - **~60x faster processing on GPU (single image)** - **~80x faster processing on GPU (batch_size=8)** - **~5x faster processing on CPU (single image)** - **~2.6x faster processing on CPU (batch_size=8)** #### Inference pass speedup (GPU) - **~2.2x speedup on whole model inference pass (single image, eager)** - **~3.2x speedup on whole model inference pass (single image, compiled)** - **~2.4x speedup on whole model inference pass (batch_size=8, eager)** --- - Average over 100 runs on the same 480x640 image. No padding needed, as "all" the images have the same size. ![benchmark_results_full_pipeline_detr_fast](https://github.com/user-attachments/assets/46129c0c-02ac-485e-9211-afeb68e5fe22) --- - Average over 10% of the COCO 2017 validation dataset, with `batch_size=8`. Padding needed, as the images have different sizes, and the DETR processor resize them using "shortest_edge"/"longest_edge", resulting in different sized resized images. ![benchmark_results_full_pipeline_detr_fast_batched](https://github.com/user-attachments/assets/d3c7136d-3f43-45d6-b25f-52e28d36c3b8) --- - Average over 10% of the COCO 2017 validation dataset, with `batch_size=8`. Forcing padding to 1333x1333 (="longest_edge"), as otherwise torch.compile needs to recompile if the different batches have different max sizes. (I'm not sure what is going wrong when using the compiled model with the current processor) ![benchmark_results_full_pipeline_detr_fast_batched_compiled](https://github.com/user-attachments/assets/1a2b82a9-44e9-4084-b29f-c072137a1e59) --- - Average over 10% of the COCO 2017 validation dataset, with `batch_size=1`. Forcing padding to 1333x1333 for comparison with batched inputs ![benchmark_results_full_pipeline_detr_fast_padded](https://github.com/user-attachments/assets/b80b0c77-d81e-40a1-a765-82827f6f24d7) --- ## Tests - The new image processor is tested on all the tests of the current processor. - I have also added two consistency tests (panoptic and detection) for processing on GPU vs CPU. --- Looking forward to your feedback! I was also wondering if we should adopt a more modular approach to the fast image processors, as there is quite a lot of repetition with the "slow" processor for now. It looks like something like this was done for Fast tokenizers? If someone that worked on Fast tokenizers has any advice on that I'll gladly hear them 🤗. There will also be the question of how to advertise this "use_fast" option to users, and if we want to make it default eventually when torchvision is available? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Who can review? Anyone in the community is free to review the PR once the tests have passed. 
Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker - vision models: @amyeroberts, @qubvel - speech models: @ylacombe, @eustlb - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @zucchini-nlp (visual-language models) or @gante (all others) - pipelines: @Rocketknight1 - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @SunMarc - chat templates: @Rocketknight1 Integrations: - deepspeed: HF Trainer/Accelerate: @muellerzr - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc Documentation: @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
[ 62, 39, 65 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Vision", "optimization", "Processing" ]
https://api.github.com/repos/huggingface/transformers/issues/35011
TITLE fix zoedepth initialization error under deepspeed zero3 COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR addresses an issue with the initialization of tensor shapes in the `ZoeDepth` model when using the DeepSpeed framework with ZeRO-3 optimization. Under DeepSpeed ZeRO-3, model parameters are partitioned, and the torch.Tensor class’s initialization behavior is overridden (details can be found in [here](https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/runtime/zero/partition_parameters.py#L593)). This override alters the behavior of tensor initialization, causing the error of the following line: ```python self.register_buffer("k_minus_1", torch.Tensor([self.k - 1]).view(1, -1, 1, 1), persistent=False) print(self.k_minus_1.shape) # torch.Size([1, 63, 1, 1]) wrong! ``` To resolve this, the initialization has been updated to use torch.tensor instead of torch.Tensor, which preserves the intended shape under DeepSpeed ZeRO-3. ```python self.register_buffer("k_minus_1", torch.tensor([self.k - 1]).view(1, -1, 1, 1), persistent=False) print(self.k_minus_1.shape) # torch.Size([1, 1, 1, 1]) correct! ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker - vision models: @amyeroberts, @qubvel - speech models: @ylacombe, @eustlb - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @zucchini-nlp (visual-language models) or @gante (all others) - pipelines: @Rocketknight1 - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @SunMarc - chat templates: @Rocketknight1 Integrations: - deepspeed: HF Trainer/Accelerate: @muellerzr - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber Documentation: @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
[ 21, 64, 62 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "DeepSpeed", "bug", "Vision" ]
https://api.github.com/repos/huggingface/transformers/issues/34979
TITLE Add TextNet COMMENTS 5 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 1 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? This PR adds TextNet, which will be used to add Fast. It builds on the work of this PR: https://github.com/huggingface/transformers/pull/27425 (which was approved but not merged) and brings it up to date with the library changes. TODO: - [x] Update the model's README file - [x] Check why some tests are failing - [x] Fix processing class errors - [x] Fix modeling file errors HF Model Cards: TextNet-B: https://huggingface.co/jadechoghari/textnet-base TextNet-S: https://huggingface.co/jadechoghari/textnet-small TextNet-T: https://huggingface.co/jadechoghari/textnet-tiny Notebook to replicate the author's logits: https://colab.research.google.com/drive/1YsraOg-GHFh7PlvuIC9iJeBZquVWdz-r?usp=sharing
[ 77, 62, 73 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ "New model", "Vision", "run-slow" ]
https://api.github.com/repos/huggingface/transformers/issues/34023
TITLE fix(Wav2Vec2ForCTC): torch export COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? Fixes #34022 by implementing the masking of the hidden states using an elementwise multiplication rather than indexing with assignment. The torch.export functionality seems to mark the tensor as frozen even though the update is legal. This change is a workaround for now to allow the export of the model as a FxGraph. Further investigation is required to find the real solution in pytorch. Tagging: @ylacombe, @eustlb Please let me know if someone else is more appropriate to review this PR.
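For readers unfamiliar with the change, a standalone sketch of the two masking styles (shapes invented for illustration); the results are identical, but only the multiplication form avoids the in-place indexed assignment that `torch.export` rejects:

```python
import torch

hidden_states = torch.randn(2, 5, 4)
mask = torch.tensor([[1, 1, 1, 0, 0], [1, 1, 0, 0, 0]], dtype=torch.bool)

# Indexing with assignment: fine in eager mode, but torch.export flags the mutation.
masked_a = hidden_states.clone()
masked_a[~mask] = 0.0

# Export-friendly alternative: elementwise multiplication with a broadcast mask.
masked_b = hidden_states * mask.unsqueeze(-1).to(hidden_states.dtype)

print(torch.allclose(masked_a, masked_b))  # True
```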
[ 73 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "run-slow" ]
https://api.github.com/repos/huggingface/transformers/issues/35747
TITLE `pipeline` AttributeError with `torch.nn.DataParallel` COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.48.0 - Platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.35 - Python version: 3.11.5 - Huggingface_hub version: 0.27.1 - Safetensors version: 0.4.3 - Accelerate version: 0.33.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.1+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: yes - Using GPU in script?: yes - GPU type: NVIDIA RTX A6000 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Hello, I am finetuning a `BertForSequenceClassification` after which point I would like to test it using `pipelines`. However, since I have multiple GPUs, I use `torch.nn.DataParallel` to wrap it in the following way: ```python self.model = torch.nn.DataParallel( module=BertForSequenceClassification.from_pretrained( pretrained_model_name_or_path=self.config.embedding_model_file.model_name, cache_dir=Path(self.config.embedding_model_file.cache_dir), num_labels=len(self.datasets.train.unique_classes), id2label={ idx: label for idx, label in enumerate(self.datasets.train.unique_classes) }, label2id={ label: idx for idx, label in enumerate(self.datasets.train.unique_classes) }, torch_dtype=self.config.training_params.torch_dtype, ).to(self.device) ) ``` and then try to use it for inference via: ```python pipeline( task="text-classification", model=self.model, tokenizer=self.datasets.test.tokenizer, device=self.device, top_k=self.config.training_params.top_k, torch_dtype=self.config.training_params.torch_dtype, ) ``` This worked when I simply had the `BertForSequenceClassification` instance but now with the `DataParallel` wrapping over it I get: ```python File "/home/xx/miniconda3/envs/xxx/lib/python3.11/site-packages/transformers/pipelines/__init__.py", line 950, in pipeline model_config = model.config ^^^^^^^^^^^^ File "/home/xx/miniconda3/envs/xxx/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1695, in __getattr__ raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'") AttributeError: 'DataParallel' object has no attribute 'config' ```' What is the recommended way in this case, do I have to unwrap the model from the `DataParallel` at inference? ### Expected behavior Expected behavior is for the `pipeline` call to not throw an Exception.
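Unwrapping the model before building the pipeline is the usual way around this: `DataParallel` only helps with batched forward passes during training, and `pipeline` wants the underlying `PreTrainedModel`, which is reachable via `.module`. A self-contained sketch with a placeholder checkpoint:

```python
import torch
from transformers import AutoTokenizer, BertForSequenceClassification, pipeline

model_name = "bert-base-uncased"  # placeholder checkpoint for illustration
model = torch.nn.DataParallel(BertForSequenceClassification.from_pretrained(model_name))
tokenizer = AutoTokenizer.from_pretrained(model_name)

clf = pipeline(
    "text-classification",
    model=model.module,  # unwrap the DataParallel container here
    tokenizer=tokenizer,
    device=0 if torch.cuda.is_available() else -1,
)
print(clf("this runs again"))
```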
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/34467
TITLE Assert error in convert_llava_onevision_weights_to_hf.py COMMENTS 20 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 1 BODY ### System Info - `transformers` version: 4.46.0 - Platform: Linux-5.15.0-97-generic-x86_64-with-glibc2.35 - Python version: 3.12.3 - Huggingface_hub version: 0.26.1 - Safetensors version: 0.4.5 - Accelerate version: 1.0.1 - Accelerate config: not found - PyTorch version (GPU?): 2.3.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: NVIDIA GeForce RTX 4090 ### Who can help? @zucchini-nlp ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I copied [convert_llava_onevision_weights_to_hf.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llava_onevision/convert_llava_onevision_weights_to_hf.py) as `convert.py`, and run: ```bash python convert.py --pytorch_dump_folder_path ./0.5b --model_id lmms-lab/llava-onevision-qwen2-0.5b-ov python convert.py --pytorch_dump_folder_path ./7b --model_id lmms-lab/llava-onevision-qwen2-7b-ov ``` Then I encountered an assertion error; it appears that the logits produced by the converted model do not match those specified in the script. `lmms-lab/llava-onevision-qwen2-0.5b-ov` output: ```bash $ python convert.py --pytorch_dump_folder_path ./0.5b --model_id lmms-lab/llava-onevision-qwen2-0.5b-ov {'_name_or_path': '/mnt/bn/vl-research/checkpoints/onevision/llavanext-google_siglip-so400m-patch14-384-Qwen_Qwen2-0.5B-Instruct-mid_to_final_next_3p2m_am9_july21', 'architectures': ['LlavaQwenForCausalLM'], 'attention_dropout': 0.0, 'mm_newline_position': 'one_token', 'bos_token_id': 151643, 'eos_token_id': 151645, 'hidden_act': 'silu', 'hidden_size': 896, 'image_aspect_ratio': 'anyres_max_9', 'image_crop_resolution': None, 'image_grid_pinpoints': [[384, 384], [384, 768], [384, 1152], [384, 1536], [384, 1920], [384, 2304], [768, 384], [768, 768], [768, 1152], [768, 1536], [768, 1920], [768, 2304], [1152, 384], [1152, 768], [1152, 1152], [1152, 1536], [1152, 1920], [1152, 2304], [1536, 384], [1536, 768], [1536, 1152], [1536, 1536], [1536, 1920], [1536, 2304], [1920, 384], [1920, 768], [1920, 1152], [1920, 1536], [1920, 1920], [1920, 2304], [2304, 384], [2304, 768], [2304, 1152], [2304, 1536], [2304, 1920], [2304, 2304]], 'image_split_resolution': None, 'image_token_index': 151646, 'initializer_range': 0.02, 'intermediate_size': 4864, 'max_position_embeddings': 32768, 'max_window_layers': 24, 'mm_hidden_size': 1152, 'mm_patch_merge_type': 'spatial_unpad', 'mm_projector_lr': None, 'mm_projector_type': 'mlp2x_gelu', 'mm_resampler_type': None, 'mm_spatial_pool_mode': 'bilinear', 'mm_tunable_parts': 'mm_vision_tower,mm_mlp_adapter,mm_language_model', 'mm_use_im_patch_token': False, 'mm_use_im_start_end': False, 'mm_vision_select_feature': 'patch', 'mm_vision_select_layer': -2, 'mm_vision_tower': 'google/siglip-so400m-patch14-384', 'mm_vision_tower_lr': 2e-06, 'model_type': 'llava', 'num_attention_heads': 14, 'num_hidden_layers': 24, 'num_key_value_heads': 2, 'pos_skipping_range': 4096, 'rms_norm_eps': 1e-06, 'rope_scaling': None, 'rope_theta': 1000000.0, 'sliding_window': 32768, 
'tie_word_embeddings': True, 'tokenizer_model_max_length': 32768, 'tokenizer_padding_side': 'right', 'torch_dtype': 'bfloat16', 'transformers_version': '4.40.0.dev0', 'use_cache': True, 'use_mm_proj': True, 'use_pos_skipping': False, 'use_sliding_window': False, 'vision_tower_pretrained': None, 'vocab_size': 151936} Fetching 1 files: 100%|█████████████████████████████████████████████| 1/1 [00:00<00:00, 2460.00it/s] The new embeddings will be initialized from a multivariate normal distribution that has old embeddings' mean and covariance. As described in this article: https://nlp.stanford.edu/~johnhew/vocab-expansion.html. To disable this, use `mean_resizing=False` The new lm_head weights will be initialized from a multivariate normal distribution that has old embeddings' mean and covariance. As described in this article: https://nlp.stanford.edu/~johnhew/vocab-expansion.html. To disable this, use `mean_resizing=False` Saving model and processor for lmms-lab/llava-onevision-qwen2-0.5b-ov to ./0.5b Single forward pass Shape of logits: torch.Size([1, 6578, 152000]) First values of logits: tensor([[-12.0234, -14.3828, -12.7500], [ 2.3828, 1.0283, 3.9512], [ 3.6641, 4.7031, 9.1172]], device='cuda:0') Traceback (most recent call last): File "/root/autodl-tmp/convert.py", line 388, in <module> convert_llava_to_hf(args.model_id, args.pytorch_dump_folder_path, args.push_to_hub) File "/root/autodl-tmp/convert.py", line 288, in convert_llava_to_hf assert torch.allclose(outputs.logits[0, :3, :3], expected_slice, atol=1e-4) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: Half did not match Float ``` `lmms-lab/llava-onevision-qwen2-7b-ov` output: ```bash $ python convert.py --pytorch_dump_folder_path ./7b --model_id lmms-lab/llava-onevision-qwen2-7b-ov {'_name_or_path': '/mnt/bn/vl-research/checkpoints/onevision/llavanext-google_siglip-so400m-patch14-384-Qwen_Qwen2-7B-Instruct-mid_to_final_next_2p4m_am4', 'architectures': ['LlavaQwenForCausalLM'], 'mm_newline_position': 'one_token', 'attention_dropout': 0.0, 'bos_token_id': 151643, 'eos_token_id': 151645, 'hidden_act': 'silu', 'hidden_size': 3584, 'image_token_index': 151646, 'image_aspect_ratio': 'anyres_max_9', 'image_crop_resolution': None, 'image_grid_pinpoints': [[384, 384], [384, 768], [384, 1152], [384, 1536], [384, 1920], [384, 2304], [768, 384], [768, 768], [768, 1152], [768, 1536], [768, 1920], [768, 2304], [1152, 384], [1152, 768], [1152, 1152], [1152, 1536], [1152, 1920], [1152, 2304], [1536, 384], [1536, 768], [1536, 1152], [1536, 1536], [1536, 1920], [1536, 2304], [1920, 384], [1920, 768], [1920, 1152], [1920, 1536], [1920, 1920], [1920, 2304], [2304, 384], [2304, 768], [2304, 1152], [2304, 1536], [2304, 1920], [2304, 2304]], 'image_split_resolution': None, 'initializer_range': 0.02, 'intermediate_size': 18944, 'max_position_embeddings': 32768, 'max_window_layers': 28, 'mm_hidden_size': 1152, 'mm_patch_merge_type': 'spatial_unpad', 'mm_projector_lr': None, 'mm_projector_type': 'mlp2x_gelu', 'mm_resampler_type': None, 'mm_spatial_pool_mode': 'bilinear', 'mm_tunable_parts': 'mm_vision_tower,mm_mlp_adapter,mm_language_model', 'mm_use_im_patch_token': False, 'mm_use_im_start_end': False, 'mm_vision_select_feature': 'patch', 'mm_vision_select_layer': -2, 'mm_vision_tower': 'google/siglip-so400m-patch14-384', 'mm_vision_tower_lr': 2e-06, 'model_type': 'llava', 'num_attention_heads': 28, 'num_hidden_layers': 28, 'num_key_value_heads': 4, 'pos_skipping_range': 4096, 'rms_norm_eps': 1e-06, 'rope_scaling': None, 
'rope_theta': 1000000.0, 'sliding_window': 131072, 'tie_word_embeddings': False, 'tokenizer_model_max_length': 32768, 'tokenizer_padding_side': 'right', 'torch_dtype': 'bfloat16', 'transformers_version': '4.40.0.dev0', 'use_cache': True, 'use_mm_proj': True, 'use_pos_skipping': False, 'use_sliding_window': False, 'vision_tower_pretrained': None, 'vocab_size': 152064} Fetching 4 files: 100%|██████████████████████████████████████████████| 4/4 [00:00<00:00, 298.81it/s] The new embeddings will be initialized from a multivariate normal distribution that has old embeddings' mean and covariance. As described in this article: https://nlp.stanford.edu/~johnhew/vocab-expansion.html. To disable this, use `mean_resizing=False` The new lm_head weights will be initialized from a multivariate normal distribution that has old embeddings' mean and covariance. As described in this article: https://nlp.stanford.edu/~johnhew/vocab-expansion.html. To disable this, use `mean_resizing=False` Saving model and processor for lmms-lab/llava-onevision-qwen2-7b-ov to ./7b Loading checkpoint shards: 100%|██████████████████████████████████████| 4/4 [00:03<00:00, 1.28it/s] Single forward pass Shape of logits: torch.Size([1, 6578, 152128]) First values of logits: tensor([[1.8486, 3.4219, 1.3125], [3.1191, 3.0195, 3.1660], [4.2461, 4.7227, 9.9609]], device='cuda:0') Traceback (most recent call last): File "/root/autodl-tmp/convert.py", line 388, in <module> convert_llava_to_hf(args.model_id, args.pytorch_dump_folder_path, args.push_to_hub) File "/root/autodl-tmp/convert.py", line 288, in convert_llava_to_hf assert torch.allclose(outputs.logits[0, :3, :3], expected_slice, atol=1e-4) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: Half did not match Float ``` ### Expected behavior The output logits remain consistent and do not produce an assertion error.
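The immediate failure is a dtype mismatch rather than necessarily wrong logits: the converted model runs in fp16 while the script's hard-coded `expected_slice` is float32, and `torch.allclose` refuses mixed dtypes. A small sketch of a comparison that avoids the RuntimeError (values are made up; whether the logits then actually match is a separate question):

```python
import torch

# Stand-ins for outputs.logits[0, :3, :3] (fp16 on GPU) and the script's expected_slice (fp32).
actual = torch.randn(3, 3, dtype=torch.float16)
expected = actual.float() + 1e-3

# Compare in a common dtype and with a tolerance appropriate for half precision.
print(torch.allclose(actual.float(), expected, atol=1e-2))
```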
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/34731
TITLE Translation model M2M100 uses 2 models in cache (from version 4.46.0) COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info I'm using the `facebook/m2m100_418M` translation model. From version 4.46.0 it downloads another model file which weighs ~2 GB. I'm using Python 3.11 on Ubuntu. ### Who can help? @ArthurZucker ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction

```python
import torch
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained(
    "facebook/m2m100_418M",
    torch_dtype=torch.float16,
).to("cpu").eval()
token = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

encoded_text = token("my name is earl", return_tensors="pt")
encoded_text = encoded_text.to("cpu")
target_lang_id = token.get_lang_id("he")

generated_tokens = model.generate(**encoded_text, forced_bos_token_id=target_lang_id)
print(generated_tokens)
```

### Expected behavior The models are put in `/home/ubuntu/.cache/huggingface/hub/models--facebook--m2m100_418M/`. Until version 4.46.0 there was this hierarchy: `snapshots/55c2e61bbf05dfb8d7abccdc3fae6fc8512fd636`, which contained 7 files (one of them is the model itself, pytorch_model.bin, ~2 GB). From version 4.46.0, there is a new dir: `snapshots/791dc1c6d300846c9a747d4bd11fcc7f369b750e`, with one file in it: `model.safetensors`, which is a soft link to another heavy ~2 GB file in the blobs dir. Can you please resolve this and make it download and use only one model file? This usage is very wasteful. Thanks!
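This is most likely expected cache behavior rather than two models being loaded: newer releases prefer `model.safetensors`, so a new snapshot is fetched while the older snapshot holding `pytorch_model.bin` stays on disk, and only one weight file is actually used per load. If disk space is the concern, the stale revision can be removed, e.g. with `huggingface_hub` (a sketch; the revision hash to delete must be taken from your own cache listing):

```python
from huggingface_hub import scan_cache_dir

cache = scan_cache_dir()
for repo in cache.repos:
    if repo.repo_id == "facebook/m2m100_418M":
        for rev in repo.revisions:
            print(rev.commit_hash, [f.file_name for f in rev.files])

# Then delete the revision that only holds the old pytorch_model.bin:
# cache.delete_revisions("<old-revision-hash>").execute()
```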
[ 64, 4 ]
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug", "Safetensors" ]
https://api.github.com/repos/huggingface/transformers/issues/33662
TITLE Error when merging image features and text features? COMMENTS 5 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY https://github.com/huggingface/transformers/blob/be9cf070ee2cb6a9f0d162e5be32d9d68b9df3af/src/transformers/models/llava/modeling_llava.py#L336C9-L336C107 ```python # 5. Fill the embeddings corresponding to the images. Anything that is not `text_positions` needs filling (#29835) image_to_overwrite = torch.full( (batch_size, max_embed_dim), True, dtype=torch.bool, device=inputs_embeds.device ) image_to_overwrite[batch_indices, text_to_overwrite] = False image_to_overwrite &= image_to_overwrite.cumsum(-1) - 1 >= nb_image_pad[:, None].to(target_device) ``` I think the last line has something wrong in logic, look at the example ```python the input_ids is as follows, 32000 is the image token index, 0 is the padding index, we suppose num_patches=2 [[32000, 32000, 1, 2, 3], [1, 32000, 2, 3, 0]] then new_token_positions is: [[1, 3, 4, 5, 6], [0, 2, 3, 4, 5]] nb_image_pad is: [0, 1] before the last step, image_to_overwrite is: [[True, True, True, True, False, False, False], [False, True, True, False, False, False, True]] after the last step, image_to_overwrite is: [[True, True, True, True, False, False, False], [False, False, True, False, False, False, True]] however, the right result should be: [[True, True, True, True, False, False, False], [False, True, True, False, False, False, False]] ``` I think the code is only for left padding , if we use right padding, there should be some modifications, and here is my code: ``` image_to_overwrite = torch.full( (batch_size, max_embed_dim), True, dtype=torch.bool, device=inputs_embeds.device ) image_to_overwrite[batch_indices, text_to_overwrite] = False if left_padding: image_to_overwrite &= image_to_overwrite.cumsum(-1) - 1 >= nb_image_pad[:, None].to(target_device) else: image_to_overwrite &= torch.ones_like(image_to_overwrite, dtype=torch.bool).cumsum(-1) - 1 <= new_positions[:, -1:].to(target_device) ```
[ 62 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Vision" ]
https://api.github.com/repos/huggingface/transformers/issues/34740
TITLE potential rope implementation issue in llama model COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY https://github.com/huggingface/transformers/blob/a3d69a8994d673899608a7c17fbf4f953f50474e/src/transformers/models/llama/modeling_llama.py#L199 heyyy 🤗, I was learning RoPE recently and was looking at the reference implementation above, and I'm a little confused here. In the original RoPE paper, we should pair adjacent numbers and apply the rotation. Below is a screenshot from the paper. ![image](https://github.com/user-attachments/assets/e259db56-b1d4-48cd-b3a6-4b362e3de5c9) Notice that the index for the second x should go like 2, 1, 4, 3, 6, 5, ... But the code above uses x1 = x[..., : x.shape[-1] // 2], which rotates the whole second half of x against the whole first half, so the index goes like 4, 5, 6, 1, 2, 3, ... So this does not seem to align with what RoPE needs: it is effectively pairing xi with xi+(d/2), but what we need is to pair xi with xi+1. Would love to hear anything from you guys 🤗 tom
[ 75 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ "Discussion" ]
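For reference, a minimal standalone sketch (not taken from the issue above) contrasting the two rotation conventions under discussion: the interleaved pairing from the RoPE paper and the half-split pairing used in `modeling_llama.py`. The two are equivalent up to a fixed permutation of the head dimensions, provided the cos/sin tables and the q/k projection weights are laid out to match.

```python
import torch

def rotate_every_two(x):
    # Interleaved convention from the RoPE paper: pairs (x1, x2), (x3, x4), ...
    x1 = x[..., 0::2]
    x2 = x[..., 1::2]
    return torch.stack((-x2, x1), dim=-1).flatten(-2)

def rotate_half(x):
    # Half-split convention used in modeling_llama.py: pairs (x1, x_{d/2+1}), (x2, x_{d/2+2}), ...
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return torch.cat((-x2, x1), dim=-1)

x = torch.arange(1.0, 7.0)   # [1, 2, 3, 4, 5, 6]
print(rotate_every_two(x))   # tensor([-2., 1., -4., 3., -6., 5.])
print(rotate_half(x))        # tensor([-4., -5., -6., 1., 2., 3.])
```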
https://api.github.com/repos/huggingface/transformers/issues/34521
TITLE num_quantizer in EncodecConfig should accept variable codebook size COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info transformers == 4.46.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I pretrained a custom EnCodec model of my own and am trying to convert it to the Hugging Face model format. I used a different codebook size in my custom EnCodec model, such as 4096 or 8192 (the default is 1024). Here is the problem: the user can change the codebook size in EncodecConfig by passing the argument. When num_quantizers is calculated, however, the codebook size is fixed to the default value of 1024. https://github.com/huggingface/transformers/blob/bc598c00db37d1fbb1551723873d37e238c3ede7/src/transformers/models/encodec/configuration_encodec.py#L187-L189 Here, the 10 multiplied with self.frame_rate, which stands for the number of bits consumed by the codebook size, is fixed. ### Expected behavior num_quantizers should account for a variable codebook size, since we can already change the codebook size via the argument. Here is the modified code based on the official implementation. ``` @property def num_quantizers(self) -> int: return int(max(1, math.floor(self.target_bandwidths[-1] * 1000 / (self.frame_rate * math.log2(self.codebook_size))))) ```
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/36194
TITLE AutoProcessor loading error COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info Related Issues and PR: #34307 https://github.com/huggingface/transformers/pull/36184 - `transformers` version: 4.49.0.dev0 - Platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35 - Python version: 3.10.16 - Huggingface_hub version: 0.27.1 - Safetensors version: 0.5.2 - Accelerate version: 1.0.1 - Accelerate config: not found - PyTorch version (GPU?): 2.6.0+cu126 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: NVIDIA H100 80GB HBM3 ### Who can help? @Rocketknight1 ### Information - [x] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Here are the reproduction steps: 1. choose an MLLM like Qwen2.5-VL and download its config file 2. derive its image processor, processor and model 3. modify the config file and try to load it with AutoProcessor.from_pretrained 4. the error occurs as in #34307 ```python from transformers import Qwen2_5_VLProcessor, Qwen2_5_VLImageProcessor, Qwen2_5_VLForConditionalGeneration, Qwen2_5_VLConfig class NewProcessor(Qwen2_5_VLProcessor): image_processor_class = "NewImageProcessor" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) class NewImageProcessor(Qwen2_5_VLImageProcessor): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) class NewConfig(Qwen2_5_VLConfig): model_type = "new_model" class NewModel(Qwen2_5_VLForConditionalGeneration): config_class = NewConfig def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) from transformers import AutoModel, AutoImageProcessor, AutoProcessor AutoImageProcessor.register(NewModel.config_class, NewImageProcessor) AutoProcessor.register(NewModel.config_class, NewProcessor) AutoModel.register(NewModel.config_class, NewModel) if __name__ == "__main__": processor = NewProcessor.from_pretrained("path/to/NewModel_config/") ``` modified config ``` config.json: "architectures": [ "NewModel" ], "model_type": "new_model", preprocessor_config.json: "image_processor_type": "NewImageProcessor", "processor_class": "NewProcessor" ``` I also checked the PR https://github.com/huggingface/transformers/pull/36184; it didn't work, because the func _get_class_from_class_name uses the mapping, but the key there is a string rather than a Config class ### Expected behavior None
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/35400
TITLE Add D-FINE Model into Transformers COMMENTS 6 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? This PR add D-FINE into the Transformers library. There is a new thing in transformers called modular, which adds new models by creating a modeling_modelname.py file. Since D-FINE updates several RT-DETR arch parts while keeping the rest of the model unchanged, it serves as an ideal use case for this modular approach. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. Link: https://github.com/huggingface/transformers/issues/35283 - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @qubvel @Rocketknight1 @ArthurZucker <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker - vision models: @amyeroberts, @qubvel - speech models: @ylacombe, @eustlb - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @zucchini-nlp (visual-language models) or @gante (all others) - pipelines: @Rocketknight1 - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @SunMarc - chat templates: @Rocketknight1 Integrations: - deepspeed: HF Trainer/Accelerate: @muellerzr - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber Documentation: @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
[ 77, 62 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ "New model", "Vision" ]
https://api.github.com/repos/huggingface/transformers/issues/35298
TITLE [Question] Why doesn't `trainer.state.epoch` fall round after training? COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ```python # run.py from datasets import Dataset from transformers import TrainingArguments, Trainer, AutoModelForCausalLM def main(): model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B") dataset = Dataset.from_dict( { "input_ids": [[1, 2, 3] for _ in range(260)], "labels": [[4, 5, 6] for _ in range(260)], } ) trainer = Trainer( model=model, args=TrainingArguments( output_dir="my_output_dir", per_device_train_batch_size=16, gradient_accumulation_steps=2, num_train_epochs=1, report_to="none", ), train_dataset=dataset, ) trainer.train() print(trainer.state.epoch) # 0.9411 if __name__ == "__main__": main() ``` ``` python run.py ``` In this case, I would expect `trainer.state.epoch` to be 1 after the training, but I end up with 0.9411 (=16/17). How to explain this? @muellerzr ## System info - `transformers` version: 4.47.0.dev0 22834eeba1c2bf8d632e22fca238ab7c15d1b904 - Platform: Linux-5.15.0-1048-aws-x86_64-with-glibc2.31 - Python version: 3.11.10 - Huggingface_hub version: 0.26.2 - Safetensors version: 0.4.5 - Accelerate version: 1.2.0.dev0 - Accelerate config: not found - PyTorch version (GPU?): 2.5.0+cu124 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: NVIDIA H100 80GB HBM3
[ 66 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "trainer" ]
https://api.github.com/repos/huggingface/transformers/issues/35919
TITLE Update deprecated Jax calls COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY jax v0.4.27 introduced the [deprecation](https://jax.readthedocs.io/en/latest/changelog.html#jax-0-4-27-may-7-2024:~:text=jax.numpy.clip()%20has%20a%20new%20argument%20signature%3A%20a%2C%20a_min%2C%20and%20a_max%20are%20deprecated%20in%20favor%20of%20x%20(positional%20only)%2C%20min%2C%20and%20max%20(%2320550).) of `jax.numpy.clip`'s `a_min` and `a_max` arguments (changing to just `min` and `max`, respectively). This PR updates [all uses of jnp.clip](https://github.com/search?q=repo%3Ahuggingface%2Ftransformers+%22jnp.clip%22+%28%22a_max%22+OR+%22a_min%22%29&type=code) to use these new arguments. This PR was originally written by @jakevdp. cc @sanchit-gandhi your review would be appreciated!
[ 55 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Flax" ]
https://api.github.com/repos/huggingface/transformers/issues/35895
TITLE safetensors_rust.SafetensorError: Error while serializing: IoError(Os { code: 5, kind: Uncategorized, message: "Input/output error" }) COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.47.1 - Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.39 - Python version: 3.12.3 - Huggingface_hub version: 0.27.0 - Safetensors version: 0.4.5 - Accelerate version: 1.2.1 - Accelerate config: not found - PyTorch version (GPU?): 2.5.1+cu124 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: No - Using GPU in script?: Yes - GPU type: NVIDIA GeForce RTX 3060 Laptop GPU ### Who can help? @SunMarc @MekkCyber ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Error: ``` Traceback (most recent call last): File "/mnt/d/Python_Projects/Jupyter/other/call-center-prompter/debug/quant/check-quantizations.py", line 31, in <module> quantize_gptq(model_id=model_id, quant_config=gptq_config, prefix_dir=prefix_dir) File "/mnt/d/Python_Projects/Jupyter/other/call-center-prompter/debug/quant/gptq_quantize.py", line 32, in quantize_gptq model.save_pretrained(prefix_dir + quant_path) File "/mnt/d/Python_Projects/Jupyter/other/call-center-prompter/debug/quant/venv-wsl2/lib/python3.12/site-packages/transformers/modeling_utils.py", line 3034, in save_pretrained safe_save_file(shard, os.path.join(save_directory, shard_file), metadata={"format": "pt"}) File "/mnt/d/Python_Projects/Jupyter/other/call-center-prompter/debug/quant/venv-wsl2/lib/python3.12/site-packages/safetensors/torch.py", line 286, in save_file serialize_file(_flatten(tensors), filename, metadata=metadata) safetensors_rust.SafetensorError: Error while serializing: IoError(Os { code: 5, kind: Uncategorized, message: "Input/output error" }) ``` Code: ``` import os import logging from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig from huggingface_hub import login, snapshot_download logger = logging.getLogger(__name__) logger.info("Logging in HF") login(token=<mytoken>) def quantize_gptq(model_id: str, quant_config: dict, prefix_dir: str = './') -> str: prefix_dir += '/' if prefix_dir[-1] != '/' else '' model_path = prefix_dir + model_id.split('/')[1] if os.path.exists(prefix_dir + model_id.split('/')[1]) else model_id quant_path = model_id.split('/')[1] + f"-GPTQ-{quant_config['bits']}bit" if os.path.exists(prefix_dir + quant_path): logger.info("Skipping GPTQ quantization because it already exists") else: tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True) config = GPTQConfig(**quant_config, dataset="c4", tokenizer=tokenizer) # exllama_config={"version":2} model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", trust_remote_code=False, quantization_config=config, revision="main" ) logger.info("Save GPTQ quantized model") os.makedirs(prefix_dir + quant_path, exist_ok=True) model.save_pretrained(prefix_dir + quant_path) tokenizer.save_pretrained(prefix_dir + quant_path) logger.info("Push to hub GPTQ quantized model") model.push_to_hub(quant_path) tokenizer.push_to_hub(quant_path) return prefix_dir + quant_path ``` ### Expected behavior Model saving 
without errors
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/36166
TITLE Add phi3 Vision COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 1 heart: 1 rocket: 0 eyes: 0 BODY ### Model description Model is here: https://huggingface.co/microsoft/Phi-3-vision-128k-instruct with code, weights and paper! 🚀 ### Open source status - [ ] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
[ 77, 0 ]
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ "New model", "Good Difficult Issue" ]
https://api.github.com/repos/huggingface/transformers/issues/33309
TITLE Add SDPA support for M2M100 COMMENTS 5 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds SDPA support for M2M100 models. Part of #28005. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker @fxmarty @amyeroberts
[ 73 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "run-slow" ]
https://api.github.com/repos/huggingface/transformers/issues/33642
TITLE Enable changing the loss function by making the hard-coded `loss_fct` an attribute of `BertForTokenClassification`. COMMENTS 4 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request In the method `transformers.models.bert.modeling_bert.BertForTokenClassification.forward`, the `loss_fct = CrossEntropyLoss()` is currently hard-coded. To change the loss function (e.g., to set class weights in `CrossEntropyLoss`), one must currently monkey-patch the model. By making `loss_fct` an attribute (e.g., `self.loss_fct`), users can simply replace it and use custom loss functions during training. ### Motivation The motivation behind this proposal stems from the need to change the loss function for fine-tuning a pre-trained BERT model for token classification, particularly when dealing with imbalanced classes. In my use case, I need to prioritize recall, as most tokens belong to the "other" class. To achieve this, I need to set custom weights in the `CrossEntropyLoss`, like this: ```python loss_fct = CrossEntropyLoss(weight=torch.tensor([0.1, 1.0, 1.0, 2.0, 2.0], device=self.device) ``` However, since the loss function is hard-coded inside the `forward` method, modifying it currently requires overriding the entire method just to change one line, as shown here: ```python @patch def forward( self: BertForTokenClassification, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, token_type_ids: Optional[torch.Tensor] = None, position_ids: Optional[torch.Tensor] = None, head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple[torch.Tensor], 'TokenClassifierOutput']: r""" labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`. """ return_dict = return_dict if return_dict is not None else self.config.use_return_dict outputs = self.bert( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) sequence_output = outputs[0] sequence_output = self.dropout(sequence_output) logits = self.classifier(sequence_output) loss = None if labels is not None: class_weights = torch.tensor([0.1, 1.0, 1.0, 2.0, 2.0], device=self.device) loss_fct = CrossEntropyLoss(weight=class_weights) # <------------------- only change loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) if not return_dict: output = (logits,) + outputs[2:] return ((loss,) + output) if loss is not None else output return TokenClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) ``` By turning `loss_fct` into an attribute, we could avoid the need monkey-patching. The change could be as simple as: ```python class_weights = torch.tensor([0.1, 1.0, 1.0, 2.0, 2.0], device=model.device) model.loss_fct = CrossEntropyLoss(weight=class_weights) ``` This would leave existing code unchanged but make it easier to swap in a custom loss function when needed. ### Your contribution I am new to this repository and this would be my first pull request. 
I would like to ask if these types of changes are welcomed, and if it makes sense to proceed with submitting a pull request for this improvement.
[ 23, 67, 76 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ "Core: Modeling", "Usage", "Feature request" ]
https://api.github.com/repos/huggingface/transformers/issues/34754
TITLE warmup LR schedulers start from LR=0 COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info transformers commit: 52ea4aa589324bae43dfb1b6db70335da7b68654 (main at time of writing); the rest isn't relevant. ### Who can help? trainer: @muellerzr @SunMarc ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run any example with a warmup scheduler and observe that the effective LR is 0 for the first step, unnecessarily wasting compute. See the similar discussion of this issue on torchtune: https://github.com/pytorch/torchtune/issues/2010. See the code at https://github.com/huggingface/transformers/blob/52ea4aa589324bae43dfb1b6db70335da7b68654/src/transformers/optimization.py#L182 and evaluate it for step 0. Observe that it returns an LR factor of 0, so the weights will not be updated. ### Expected behavior Expect every optimizer step to adjust the weights of my model unless there is a good reason not to.
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
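For reference, a minimal standalone re-implementation (an illustrative sketch, not the library code itself) of the linear-warmup factor referenced above, showing that at `current_step = 0` the multiplier is 0, so the first optimizer step cannot change the weights:

```python
def linear_warmup_factor(current_step: int, num_warmup_steps: int, num_training_steps: int) -> float:
    # Mirrors the shape of the lambda in src/transformers/optimization.py (re-implemented here for illustration)
    if current_step < num_warmup_steps:
        return float(current_step) / float(max(1, num_warmup_steps))
    return max(0.0, float(num_training_steps - current_step) / float(max(1, num_training_steps - num_warmup_steps)))

print(linear_warmup_factor(0, num_warmup_steps=500, num_training_steps=10_000))  # 0.0 -> LR multiplier is zero on step 0
print(linear_warmup_factor(1, num_warmup_steps=500, num_training_steps=10_000))  # 0.002
```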
https://api.github.com/repos/huggingface/transformers/issues/33416
TITLE The examples in the examples directory are mostly for fine-tuning pre-trained models? How to train from scratch COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Model description no ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_
[ 77 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ "New model" ]
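A hedged sketch of the usual approach the question above is asking about: the existing example scripts can train from scratch when the model is built from a config rather than from pretrained weights (for instance, the language-modeling examples accept a config and tokenizer without a checkpoint). A minimal programmatic version:

```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

# Reuse an existing architecture definition, but do NOT load its weights
config = AutoConfig.from_pretrained("gpt2")        # architecture/hyperparameters only
tokenizer = AutoTokenizer.from_pretrained("gpt2")  # a tokenizer is still required for preprocessing
model = AutoModelForCausalLM.from_config(config)   # randomly initialized weights

print(sum(p.numel() for p in model.parameters()))  # fresh, untrained parameters
# `model` can now be passed to Trainer exactly like in the fine-tuning examples.
```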
https://api.github.com/repos/huggingface/transformers/issues/35889
TITLE Request to add Doge COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 1 heart: 0 rocket: 0 eyes: 0 BODY ### Model description Doge is an architecture that combines the advantages of state-space models and self-attention. It solves the problem of self-attention getting lost in long sequences by computing a **dynamic mask** from cached value states using zeroth-order holding. It can also use `wsd_scheduler` on top of dense weight checkpoints to **additionally train** a sparsely activated feedforward network expansion layer. Paper: https://arxiv.org/abs/2412.11834 ### Open source status - [x] The model implementation is available - [x] The model weights are available ### Provide useful links for the implementation Repository: https://github.com/LoserCheems/WonderfulMatrices Weights: https://huggingface.co/collections/JingzeShi/doge-slm-677fd879f8c4fd0f43e05458
[ 77 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 ]
[ "New model" ]
https://api.github.com/repos/huggingface/transformers/issues/33836
TITLE Albert is ExecuTorch compatible COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### Feature request Enable Albert to ["Export to ExecuTorch"](https://github.com/huggingface/transformers/issues/32253) workflow ### Motivation See details in #32253 ### Your contribution Enable Albert model
[ 76, 31 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ "Feature request", "ExecuTorch" ]
https://api.github.com/repos/huggingface/transformers/issues/35428
TITLE cannot customize `warmup_min_lr` of the DeepSpeed LR scheduler COMMENTS 2 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info version 4.45 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction [link](https://github.com/huggingface/transformers/blob/4eb17b26e77611d4fbcdcbbc20c7bf275eb015c9/src/transformers/integrations/deepspeed.py#L171) I don't know why it's hardcoded. ### Expected behavior The custom `warmup_min_lr` value should be applied, but it is not.
[ 21, 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "DeepSpeed", "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/35221
TITLE Numpy is not available COMMENTS 1 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info MacOs Sequoia 15.0.1, Python 3.10 Running `pip list` ``` Package Version ------------------------- -------------- accelerate 1.2.0 aiohappyeyeballs 2.4.4 aiohttp 3.11.9 aiosignal 1.3.1 anyio 4.6.2.post1 appnope 0.1.4 argon2-cffi 23.1.0 argon2-cffi-bindings 21.2.0 arrow 1.3.0 asttokens 3.0.0 async-lru 2.0.4 async-timeout 5.0.1 attrs 24.2.0 babel 2.16.0 beautifulsoup4 4.12.3 bleach 6.2.0 certifi 2024.8.30 cffi 1.17.1 charset-normalizer 3.4.0 comm 0.2.2 datasets 3.1.0 debugpy 1.8.9 decorator 5.1.1 defusedxml 0.7.1 dill 0.3.8 evaluate 0.4.3 exceptiongroup 1.2.2 executing 2.1.0 fastjsonschema 2.21.1 filelock 3.16.1 fqdn 1.5.1 frozenlist 1.5.0 fsspec 2024.9.0 h11 0.14.0 httpcore 1.0.7 httpx 0.28.0 huggingface-hub 0.26.3 idna 3.10 ipykernel 6.29.5 ipython 8.30.0 ipywidgets 8.1.5 isoduration 20.11.0 jedi 0.19.2 Jinja2 3.1.4 json5 0.10.0 jsonpointer 3.0.0 jsonschema 4.23.0 jsonschema-specifications 2024.10.1 jupyter 1.1.1 jupyter_client 8.6.3 jupyter-console 6.6.3 jupyter_core 5.7.2 jupyter-events 0.10.0 jupyter-lsp 2.2.5 jupyter_server 2.14.2 jupyter_server_terminals 0.5.3 jupyterlab 4.3.2 jupyterlab_pygments 0.3.0 jupyterlab_server 2.27.3 jupyterlab_widgets 3.0.13 MarkupSafe 3.0.2 matplotlib-inline 0.1.7 mistune 3.0.2 mpmath 1.3.0 multidict 6.1.0 multiprocess 0.70.16 nbclient 0.10.1 nbconvert 7.16.4 nbformat 5.10.4 nest-asyncio 1.6.0 networkx 3.4.2 notebook 7.3.0 notebook_shim 0.2.4 numpy 2.1.3 overrides 7.7.0 packaging 24.2 pandas 2.2.3 pandocfilters 1.5.1 parso 0.8.4 pexpect 4.9.0 pillow 11.0.0 pip 24.2 platformdirs 4.3.6 prometheus_client 0.21.1 prompt_toolkit 3.0.48 propcache 0.2.1 protobuf 5.29.0 psutil 6.1.0 ptyprocess 0.7.0 pure_eval 0.2.3 pyarrow 18.1.0 pycparser 2.22 Pygments 2.18.0 python-dateutil 2.9.0.post0 python-json-logger 2.0.7 pytz 2024.2 PyYAML 6.0.2 pyzmq 26.2.0 referencing 0.35.1 regex 2024.11.6 requests 2.32.3 rfc3339-validator 0.1.4 rfc3986-validator 0.1.1 rpds-py 0.22.3 safetensors 0.4.5 Send2Trash 1.8.3 sentencepiece 0.2.0 setuptools 75.1.0 six 1.16.0 sniffio 1.3.1 soupsieve 2.6 stack-data 0.6.3 sympy 1.13.3 terminado 0.18.1 tinycss2 1.4.0 tokenizers 0.20.3 tomli 2.2.1 torch 2.2.2 torchvision 0.17.2 tornado 6.4.2 tqdm 4.67.1 traitlets 5.14.3 transformers 4.46.3 types-python-dateutil 2.9.0.20241003 typing_extensions 4.12.2 tzdata 2024.2 uri-template 1.3.0 urllib3 2.2.3 wcwidth 0.2.13 webcolors 24.11.1 webencodings 0.5.1 websocket-client 1.8.0 wheel 0.44.0 widgetsnbextension 4.0.13 xxhash 3.5.0 yarl 1.18.3 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [ ] My own task or dataset (give details below) ### Reproduction From here, https://huggingface.co/learn/nlp-course/chapter3/3#evaluation, I ran this code, ``` predictions = trainer.predict(tokenized_datasets["validation"]) print(predictions.predictions.shape, predictions.label_ids.shape) ``` and I got this error, ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[10], line 1 ----> 1 predictions = trainer.predict(tokenized_datasets["validation"]) 2 print(predictions.predictions.shape, predictions.label_ids.shape) File ~/anaconda3/envs/py10hugface/lib/python3.10/site-packages/transformers/trainer.py:4053, in Trainer.predict(self, test_dataset, ignore_keys, metric_key_prefix) 4050 start_time = time.time() 4052 eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop -> 4053 output = eval_loop( 4054 test_dataloader, description="Prediction", ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix 4055 ) 4056 total_batch_size = self.args.eval_batch_size * self.args.world_size 4057 if f"{metric_key_prefix}_jit_compilation_time" in output.metrics: File ~/anaconda3/envs/py10hugface/lib/python3.10/site-packages/transformers/trainer.py:4235, in Trainer.evaluation_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix) 4232 delattr(self, "_past") 4234 # Gather all remaining tensors and put them back on the CPU -> 4235 all_losses = all_losses.get_arrays() 4236 all_preds = all_preds.get_arrays() 4237 all_labels = all_labels.get_arrays() File ~/anaconda3/envs/py10hugface/lib/python3.10/site-packages/transformers/trainer_pt_utils.py:346, in EvalLoopContainer.get_arrays(self) 344 def get_arrays(self): 345 """Returns the numpified and moved to CPU stored objects.""" --> 346 self.to_cpu_and_numpy() 347 return self.arrays File ~/anaconda3/envs/py10hugface/lib/python3.10/site-packages/transformers/trainer_pt_utils.py:333, in EvalLoopContainer.to_cpu_and_numpy(self) 330 if self.tensors is None: 331 return --> 333 new_arrays = nested_numpify(self.tensors) 334 if self.arrays is None: 335 self.arrays = new_arrays File ~/anaconda3/envs/py10hugface/lib/python3.10/site-packages/transformers/trainer_pt_utils.py:180, in nested_numpify(tensors) 175 if t.dtype == torch.bfloat16: 176 # As of Numpy 1.21.4, NumPy does not support bfloat16 (see 177 # https://github.com/numpy/numpy/blob/a47ecdea856986cd60eabbd53265c2ca5916ad5d/doc/source/user/basics.types.rst ). 178 # Until Numpy adds bfloat16, we must convert float32. 179 t = t.to(torch.float32) --> 180 return t.numpy() RuntimeError: Numpy is not available ``` ### Expected behavior To see the same output as in the class i.e., `(408, 2) (408,)`
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
https://api.github.com/repos/huggingface/transformers/issues/35621
TITLE The argument "dim" is gone from LlamaRotaryEmbedding initializer. Intentional? COMMENTS 3 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info 4.48.0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction We see the following error with the latest version 4.48.0 of transformers when initializing LlamaRotaryEmbedding: ``` 2025-01-10 14:08:19,223::DEBUG: [stdout] cache_ids = cache_ids.view(-1, 1) 2025-01-10 14:08:19,223::DEBUG: [stdout] > embed = LlamaRotaryEmbedding(dim=d_head, max_position_embeddings=2048, base=10000) 2025-01-10 14:08:19,223::DEBUG: [stdout] E TypeError: LlamaRotaryEmbedding.__init__() got an unexpected keyword argument 'dim' 2025-01-10 14:08:19,223::DEBUG: [stdout] 2025-01-10 14:08:19,223::DEBUG: [stdout] transformers_neuronx_test/unit/1_core/test_rotary.py:108: TypeError ``` To test, simply instantiate LlamaRotaryEmbedding with dim set to something: ``` from transformers.models.llama.modeling_llama import LlamaRotaryEmbedding embed = LlamaRotaryEmbedding(dim=96, max_position_embeddings=2048, base=10000) ``` ### Expected behavior The dim argument was there in previous versions. Is the argument no longer needed?
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
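For context, a hedged sketch of the config-driven construction that transformers 4.48 appears to expect (argument names assumed from the 4.48 source, not guaranteed): the rotary dimension is now derived from the model config instead of being passed as `dim`.

```python
import torch
from transformers import LlamaConfig
from transformers.models.llama.modeling_llama import LlamaRotaryEmbedding

# The rotary dim is derived from the config: hidden_size // num_attention_heads = 96 here
config = LlamaConfig(
    hidden_size=768,
    num_attention_heads=8,
    max_position_embeddings=2048,
    rope_theta=10000.0,
)
embed = LlamaRotaryEmbedding(config=config)

x = torch.randn(1, 4, 768)                   # dummy hidden states (used for device/dtype)
position_ids = torch.arange(4).unsqueeze(0)
cos, sin = embed(x, position_ids)
print(cos.shape, sin.shape)                  # expected: torch.Size([1, 4, 96]) for both
```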
https://api.github.com/repos/huggingface/transformers/issues/35909
TITLE ForSequenceClassification models assume right-padding COMMENTS 0 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY We have `ForSequenceClassification` variants for several of our CLMs, and this line is commonly copied between them: `sequence_lengths = torch.eq(input_ids, self.config.pad_token_id).int().argmax(-1) - 1` However, this line assumes **right-padding**, because it treats the length of the sequence as the number of tokens before the first pad token (when multiple values have the max value, then `argmax` returns the first index with that value). If we have left-padding instead, then this will break because it will compute a sequence length of -1. This was reported as a bug in Gemma [here](https://github.com/huggingface/transformers/issues/30004), but it likely affects other models as well. `tokenizer.padding_side` is probably not accessible to the model, because it is stored in `tokenizer_config` and not `config`. As a result, I think the solution here will have to be rewriting this code so that it can handle either left- or right-padding.
[ 64 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "bug" ]
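A minimal standalone illustration of the computation described above, with a hypothetical `pad_token_id` of 0 (toy ids, not from any real tokenizer):

```python
import torch

pad_token_id = 0  # hypothetical pad id for illustration

def last_non_pad_index(input_ids: torch.Tensor) -> torch.Tensor:
    # The pattern copied across the ForSequenceClassification heads
    return torch.eq(input_ids, pad_token_id).int().argmax(-1) - 1

right_padded = torch.tensor([[11, 12, 13, 0, 0]])
left_padded = torch.tensor([[0, 0, 11, 12, 13]])

print(last_non_pad_index(right_padded))  # tensor([2])  -> index of the last real token, as intended
print(last_non_pad_index(left_padded))   # tensor([-1]) -> argmax hits the pad at position 0, giving the -1 described above
```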
https://api.github.com/repos/huggingface/transformers/issues/33666
TITLE Qwen2-VL: Multi-GPU training COMMENTS 8 REACTIONS +1: 0 -1: 0 laugh: 0 hooray: 0 heart: 0 rocket: 0 eyes: 0 BODY ### System Info - `transformers` version: 4.45.0.dev0 - Platform: Linux-4.18.0-477.10.1.el8_8.x86_64-x86_64-with-glibc2.28 - Python version: 3.11.5 - Huggingface_hub version: 0.24.0 - Safetensors version: 0.4.3 - Accelerate version: 0.34.2 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: NO - mixed_precision: bf16 - use_cpu: False - debug: False - num_processes: 1 - machine_rank: 0 - num_machines: 1 - gpu_ids: all - rdzv_backend: static - same_network: True - main_training_function: main - enable_cpu_affinity: False - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] - PyTorch version (GPU?): 2.2.1+rocm5.7 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: <fill in> - Using GPU in script?: <fill in> - GPU type: AMD Instinct MI250X ### Who can help? @muellerzr @ArthurZucker @gante Issue about both the Qwen-VL model and perhaps the trainer so not sure who is best suited to answer :) ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Replicating the setup is a bit tough, so this is more of a preliminary discussion issue to see if there is an obvious problem that surfaces. 1. Multi-GPU setup + Huggingface trainer 2. Train Qwen2-VL model with dynamic image resolution 3. The processor creates BatchEncodings with pixel_values, input_ids, attention_mask and image_grid_thw. 4. Run a model forward pass with the model in data parallel mode of the trainer. We observe that compared to mono-gpu setups, the rope values are disaligned with the hidden_states size. Typically, in line 1109 (Qwen2VisionTransformerPretrainedModel forward pass): ```python def forward(self, hidden_states: torch.Tensor, grid_thw: torch.Tensor) -> torch.Tensor: hidden_states = self.patch_embed(hidden_states) rotary_pos_emb = self.rot_pos_emb(grid_thw) ``` we can see rotary_pos_emb is hidden_states have a sligtly different dimension 0. ex: torch.Size([7820, 40]) torch.Size([7736, 1280]) Upon further inspection, we see rotary_pos_emb has the same dimension as what we would get in mono-gpu runs (normal since it only depends on the grid_thw argument). However, hidden_states (that correspond to pixel values) have a different size. This makes training crash: ```bash File "/lus/home/CT10/cad15443/mfaysse/colpali/venv/lib/python3.11/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 395, in forward q = apply_rotary_pos_emb_vision(q.unsqueeze(0), rotary_pos_emb).squeeze(0) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lus/home/CT10/cad15443/mfaysse/colpali/venv/lib/python3.11/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 254, in apply_rotary_pos_emb_vision output = (tensor * cos) + (rotate_half(tensor) * sin) ~~~~~~~^~~~~ RuntimeError: The size of tensor a (7736) must match the size of tensor b (7808) at non-singleton dimension 1 ``` ### Expected behavior [edited] see below for more details being investigated Thanks !
[ 38, 66, 76, 64, 62, 12 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
[ "Distributed Training / Models", "trainer", "Feature request", "bug", "Vision", "Multimodal" ]