Improve model card for LoRI-D_code_llama3_rank_32: Add details, usage, and license

#1 opened by nielsr (HF Staff)

This PR significantly enhances the model card for tomg-group-umd/LoRI-D_code_llama3_rank_32 by:

  • Adding the apache-2.0 license to the metadata for clarity and proper attribution.
  • Expanding the tags to improve discoverability, including lora, peft, fine-tuning, language-model, and specific application domains like code-generation, natural-language-understanding, mathematical-reasoning, safety-alignment, and continual-learning.
  • Updating the paper link to the official Hugging Face Papers page (LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation).
  • Populating the "Model Details", "Model Sources", and "Uses" sections with comprehensive information from the paper's abstract and the GitHub repository.
  • Including a runnable Python code example to demonstrate quick inference using the transformers and peft libraries (a sketch of such a snippet follows this list).
  • Adding the official image from the GitHub repository to visually represent the LoRI method.
  • Detailing "Training Details" and "Evaluation" based on the provided context, including hyperparameters from adapter_config.json (see the config-inspection sketch at the end of this comment).
  • Adding the BibTeX citation for the paper.

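For reference, a minimal sketch of what such a quick-inference snippet might look like (the base-model ID meta-llama/Meta-Llama-3-8B, the generation settings, and the prompt are assumptions rather than the exact code added to the card; the adapter ID comes from the PR title):

```python
# Minimal sketch: quick inference with the LoRI adapter via transformers + peft.
# Assumption: the adapter was trained on top of meta-llama/Meta-Llama-3-8B.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "meta-llama/Meta-Llama-3-8B"  # assumed base model
adapter_id = "tomg-group-umd/LoRI-D_code_llama3_rank_32"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRI adapter
model.eval()

prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```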
These changes make the model card much more informative, user-friendly, and discoverable on the Hugging Face Hub.
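As an aside, the hyperparameters surfaced in the "Training Details" section can be read directly from the adapter's adapter_config.json; a minimal sketch, assuming the LoRI adapter exposes the standard peft LoraConfig fields:

```python
# Minimal sketch: inspect the hyperparameters stored in adapter_config.json.
# Assumption: the LoRI adapter follows the standard peft LoraConfig schema.
from peft import PeftConfig

cfg = PeftConfig.from_pretrained("tomg-group-umd/LoRI-D_code_llama3_rank_32")
print("rank (r):      ", cfg.r)  # expected to be 32, per the repo name
print("lora_alpha:    ", cfg.lora_alpha)
print("target_modules:", cfg.target_modules)
print("base model:    ", cfg.base_model_name_or_path)
```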

juzhengz changed pull request status to merged
