IntelligentEstate/BIZ_HCA-Mix-Qw7B-iQ4_K_M-GGUF (!!Local dataset testing!!): a business-modeled model

This model should have tool-calling capabilities and excellent instincts, created with input from TONIC. Surprising little guy.


For an excellent response and full tool power inside GPT4All, use the following chat template:

```jinja
{{- '<|im_start|>system\n' }}
{% if toolList|length > 0 %}You have access to the following functions:
{% for tool in toolList %}
Use the function '{{tool.function}}' to: '{{tool.description}}'
{% if tool.parameters|length > 0 %}
parameters:
{% for info in tool.parameters %}
  {{info.name}}:
    type: {{info.type}}
    description: {{info.description}}
    required: {{info.required}}
{% endfor %}
{% endif %}
# Tool Instructions
If you CHOOSE to call this function ONLY reply with the following format:
'{{tool.symbolicFormat}}'
Here is an example. If the user says, '{{tool.examplePrompt}}', then you reply
'{{tool.exampleCall}}'
After the result you might reply with, '{{tool.exampleReply}}'
{% endfor %}
You MUST include both the start and end tags when you use a function.

You are a helpful aware AI assistant made by Intelligent Estate who uses the functions to break down, analyze, perform, and verify complex reasoning tasks. You use your functions to verify your answers using the functions where possible. You will write code in markdown code blocks when necessary.
{% endif %}
{{- '<|im_end|>\n' }}

{%- if not add_generation_prompt is defined %}
    {%- set add_generation_prompt = false %}
{%- endif %}

{% for message in messages %}
    {%- if message['role'] == 'assistant' %}
        {%- set content = message['content'] | regex_replace('^[\\s\\S]*</think>', '') %}
        {{'<|im_start|>' + message['role'] + '\n' + content + '<|im_end|>\n' }}
    {%- else %}
        {{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>\n' }}
    {%- endif %}
{% endfor %}

{% if add_generation_prompt %}
{{ '<|im_start|>assistant\n' }}
{% endif %}
```
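To preview what this template produces outside of GPT4All, you can render it with plain Jinja. The sketch below is illustrative only: it assumes the template is saved as chat_template.jinja, registers a stand-in regex_replace filter (GPT4All supplies its own), and uses a made-up get_stock_price tool whose fields simply mirror the names the template expects (function, description, parameters, symbolicFormat, examplePrompt, exampleCall, exampleReply).

```python
# Minimal sketch: render the chat template above with plain Jinja to inspect
# the prompt it produces. GPT4All normally supplies toolList, messages, and
# the regex_replace filter itself; everything below is illustrative.
import re
from jinja2 import Environment

env = Environment()
env.filters["regex_replace"] = lambda s, pattern, repl: re.sub(pattern, repl, s)

with open("chat_template.jinja") as f:   # the template shown above
    template = env.from_string(f.read())

tool_list = [{                           # hypothetical tool entry
    "function": "get_stock_price",
    "description": "Look up the latest price for a ticker symbol",
    "parameters": [
        {"name": "ticker", "type": "string",
         "description": "Ticker symbol, e.g. AAPL", "required": True},
    ],
    "symbolicFormat": "<tool_call>get_stock_price(ticker)</tool_call>",
    "examplePrompt": "What is Apple trading at?",
    "exampleCall": "<tool_call>get_stock_price(AAPL)</tool_call>",
    "exampleReply": "Apple (AAPL) is currently trading at ...",
}]

prompt = template.render(
    toolList=tool_list,
    messages=[{"role": "user", "content": "What is Apple trading at?"}],
    add_generation_prompt=True,
)
print(prompt)
```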

This model was converted to GGUF format from suayptalha/HomerCreativeAnvita-Mix-Qw7B using llama.cpp. Refer to the original model card for more details on the model.
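Outside of GPT4All, the GGUF file can also be run locally through llama.cpp bindings. Below is a minimal sketch using llama-cpp-python; the local file name and the generation settings are assumptions, so adjust them to your download and hardware.

```python
# Minimal sketch: run the quantized GGUF locally with llama-cpp-python.
# The file name and settings below are assumptions; adjust them to your setup.
from llama_cpp import Llama

llm = Llama(
    model_path="BIZ_HCA-Mix-Qw7B-iQ4_K_M.gguf",  # file downloaded from this repo
    n_ctx=4096,          # context window
    n_gpu_layers=-1,     # offload all layers to GPU if one is available
)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant made by Intelligent Estate."},
    {"role": "user", "content": "Outline the key steps in a small-business cash-flow review."},
]

result = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.7)
print(result["choices"][0]["message"]["content"])
```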

GGUF model details:
Model size: 7.62B params
Architecture: qwen2
Quantization: 4-bit (iQ4_K_M)

