|
# Intelligent-Internet/II-Search-4B-MLX |
|
|
|
This model [Intelligent-Internet/II-Search-4B-MLX](https://huggingface.co/Intelligent-Internet/II-Search-4B-MLX) was converted to MLX format from [Intelligent-Internet/II-Search-4B](https://huggingface.co/Intelligent-Internet/II-Search-4B) using mlx-lm version **0.26.1**.
|
|
|
## Use with mlx |
|
|
|
```bash
pip install mlx-lm
```
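
For a quick sanity check from the command line, the stock mlx-lm CLI can run the model directly (this assumes the standard `mlx_lm.generate` entry point installed by the package above):

```bash
# One-off generation without writing any Python
mlx_lm.generate --model Intelligent-Internet/II-Search-4B-MLX --prompt "hello"
```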
|
|
|
```python
from mlx_lm import load, generate

model, tokenizer = load("Intelligent-Internet/II-Search-4B-MLX")

prompt = "hello"

# Wrap the prompt with the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
|
|
## Important: Integrate web_search and web_visit tools |
|
|
|
Equip the served model with `web_search` and `web_visit` tools to enable internet-aware functionality. Alternatively, integrate the tools through middleware such as MCP; see this example repository: https://github.com/hoanganhpham1006/mcp-server-template.
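
As a minimal sketch of direct integration, tool schemas can be passed through the chat template so the model can emit tool calls. The schemas, parameter names, and example question below are illustrative assumptions, not definitions shipped with this repository, and the actual search backend (e.g. an MCP server) must be supplied separately.

```python
# A sketch under assumptions: the tool schemas below are illustrative, and the
# chat template of the underlying model is expected to accept them via
# apply_chat_template(..., tools=...). The tool implementations are not shown.
from mlx_lm import load, generate

model, tokenizer = load("Intelligent-Internet/II-Search-4B-MLX")

# Hypothetical JSON schemas for the two tools; field names are assumptions.
tools = [
    {
        "type": "function",
        "function": {
            "name": "web_search",
            "description": "Search the web and return result snippets.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "web_visit",
            "description": "Fetch the readable text content of a URL.",
            "parameters": {
                "type": "object",
                "properties": {"url": {"type": "string"}},
                "required": ["url"],
            },
        },
    },
]

messages = [{"role": "user", "content": "What is MLX?"}]
prompt = tokenizer.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
# In a full agent loop: parse any tool call in `response`, execute web_search
# or web_visit against a real backend, append the result as a "tool" message,
# and call generate again until the model produces a final answer.
```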