nielsr (HF Staff) committed
Commit 7fcf7a9 · verified · 1 Parent(s): 0379fa5

Improve model card: Add pipeline tag, library, paper link, and code link


This PR improves the model card for `Tucan-2.6B-v1.0-LoRA` by:
- Adding the `pipeline_tag: text-generation` to ensure the model is discoverable under the correct task filter on the Hub.
- Specifying `library_name: transformers` to enable the "how to use" button and proper library integration.
- Updating the `license` metadata to `cc-by-4.0` to explicitly match the license stated in the model card content.
- Adding the Hugging Face Papers link (`huggingface.co/papers/2506.23394`) for enhanced discoverability.
- Including a direct link to the associated GitHub repository for the evaluation framework (`https://github.com/llm-bg/Tucan-Eval`).
- Integrating the paper's abstract into the "Overview" section to provide a more detailed and immediate understanding of the model and its capabilities.
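Taken together, the metadata edits above yield the following README front matter (reproduced from the diff, not a new proposal):

```yaml
---
base_model:
- INSAIT-Institute/BgGPT-Gemma-2-2.6B-IT-v1.0
language:
- bg
license: cc-by-4.0
tags:
- function_calling
- MCP
- tool_use
pipeline_tag: text-generation
library_name: transformers
---
```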

Please review and merge this PR if everything looks good.

Files changed (1):
  README.md (+21 −11)

README.md CHANGED

@@ -1,25 +1,30 @@
 ---
-license: gemma
-language:
-- bg
 base_model:
 - INSAIT-Institute/BgGPT-Gemma-2-2.6B-IT-v1.0
+language:
+- bg
+license: cc-by-4.0
 tags:
 - function_calling
 - MCP
 - tool_use
+pipeline_tag: text-generation
+library_name: transformers
 ---
 
 # Tucan-2.6B-v1.0-LoRA
 
 ## Bulgarian Language Models for Function Calling 🇧🇬
 
-
+This model is presented in the paper [Teaching a Language Model to Speak the Language of Tools](https://huggingface.co/papers/2506.23394).
 **Paper: https://arxiv.org/abs/2506.23394**
+**Code: https://github.com/llm-bg/Tucan-Eval**
 
 ## Overview 🚀
 
-TUCAN (Tool-Using Capable Assistant Navigator) is a series of open-source Bulgarian language models fine-tuned specifically for function calling and tool use.
+External tool integration through function-calling is essential for practical language model applications, yet most multilingual models lack reliable tool-use capabilities in non-English languages. Even state-of-the-art multilingual models struggle with determining when to use tools and generating the structured outputs required for function calls, often exhibiting language confusion when prompted in lower-resource languages. This work presents a methodology for adapting existing language models to enable robust tool use in any target language, using Bulgarian as a case study. The approach involves continued training of the BgGPT model series (2.6B, 9B, 27B parameters) on a novel bilingual dataset of 10,035 function-calling examples designed to support standardized protocols like MCP (Model Context Protocol). The research introduces TUCAN (Tool-Using Capable Assistant Navigator), which achieves up to 28.75% improvement in function-calling accuracy over base models while preserving core language understanding, as verified on established Bulgarian benchmarks. Beyond accuracy gains, TUCAN models demonstrate production-ready response formatting with clean, parsable function calls, contrasting with the verbose and inconsistent outputs of base models. The models, evaluation framework, and dataset are released to enable replication for other languages. This work demonstrates a practical approach for extending tool-augmented capabilities beyond English-centric systems.
+
+TUCAN (Tool-Using Capable Assistant Navigator) is a series of open-source Bulgarian language models fine-tuned specifically for function calling and tool use.
 
 These models can interact with external tools, APIs, and databases, making them appropriate for building AI agents and [Model Context Protocol (MCP)](https://arxiv.org/abs/2503.23278) applications.
@@ -63,14 +68,14 @@ pip install -U "transformers[torch]" accelerate bitsandbytes
 Когато използваш функция, форматирай извикването ѝ в блок ```tool_call``` на отделен ред, а след това ще получиш резултат от изпълнението в блок ```toll_response```.
 
-## Шаблон за извикване:
+## Шаблон за извикване:
 ```tool_call
 {"name": <function-name>, "arguments": <args-json-object>}```
 
 ## Налични функции:
 [your function definitions here]
 
-## Потребителска заявка :
+## Потребителска заявка :
 [your query in Bulgarian]<end_of_turn>
 <start_of_turn>model
 ```
@@ -108,14 +113,19 @@ def create_prompt(functions, user_query):
 Когато използваш функция, форматирай извикването ѝ в блок ```tool_call``` на отделен ред, а след това ще получиш резултат от изпълнението в блок ```toll_response```.
 
-## Шаблон за извикване:
+## Шаблон за извикване:
 ```tool_call
 {{"name": <function-name>, "arguments": <args-json-object>}}```
 """
-
+
 functions_text = json.dumps(functions, ensure_ascii=False, indent=2)
-full_prompt = f"{system_prompt}\n## Налични функции:\n{functions_text}\n\n## Потребителска заявка:\n{user_query}"
-
+full_prompt = f"{system_prompt}
+## Налични функции:
+{functions_text}
+
+## Потребителска заявка:
+{user_query}"
+
 chat = [{"role": "user", "content": full_prompt}]
 return tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)