---
license: mit
---


<div align="center">

<h3>InstructBioMol: A Multimodal LLM for Biomolecule Understanding and Design</h3>

<p align="center">
  <a href="https://arxiv.org/abs/2410.07919">Paper</a> | <a href="https://github.com/HICAI-ZJU/InstructBioMol">Project</a> | <a href="#quick-start">Quickstart</a> | <a href="#citation">Citation</a>
</p>
</div>

### Model Description

InstructBioMol is a multimodal large language model that bridges natural language with biomolecules (proteins and small molecules). It achieves any-to-any alignment between natural language, molecules, and proteins through comprehensive instruction tuning.

*For detailed information, please refer to our [paper](https://arxiv.org/abs/2410.07919) and [code repository](https://github.com/HICAI-ZJU/InstructBioMol).*

### Released Variants

| Model Name | Stage | Multimodal | Description |
|------------|-------|------------|-------------|
| [InstructBioMol-base](https://huggingface.co/hicai-zju/InstructBioMol-base) (*This Model*) | Pretraining | ❌ | Continually pretrained model on molecular sequences, protein sequences, and scientific literature. |
| [InstructBioMol-instruct-stage1](https://huggingface.co/hicai-zju/InstructBioMol-instruct-stage1) | Instruction tuning (stage 1) | ✅ | Stage 1 instruction-tuned model with biomolecular multimodal processing capabilities (e.g., 3D molecules and proteins). |
| [InstructBioMol-instruct](https://huggingface.co/hicai-zju/InstructBioMol-instruct) | Instruction tuning (stages 1 and 2) | ✅ | Fully instruction-tuned model (stages 1 and 2) with biomolecular multimodal processing capabilities (e.g., 3D molecules and proteins). |

### Training Details

**Base Architecture**: LLaMA-2-7B

**Training Data**:

1. **Molecular Sequences**
   - Format: SELFIES (see the formatting sketch after this list)
   - Source: PubChem
   - Size: 100 million (100M) entries

2. **Protein Sequences**
   - Format: FASTA-like, with each amino acid prefixed by `<p>` (e.g., `<p>M<p>A<p>L<p>W...`)
   - Source: UniRef50
   - Size: 59 million (59M) entries

3. **Natural Language Texts**
   - Source: Abstracts from PubMed, bioRxiv, and ChemRxiv
   - Size: 6 million (6M) abstracts
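
The snippet below is a minimal, illustrative sketch of how sequences in these two biomolecular formats can be produced, assuming the open-source `selfies` package (`pip install selfies`); it is not the official preprocessing pipeline.

```python
# Illustrative formatting helpers; these mirror the data formats listed above,
# not the exact pipeline used to build the training corpus.
import selfies as sf  # assumption: the `selfies` package is installed

def format_molecule(smiles: str) -> str:
    """Convert a SMILES string into the SELFIES representation."""
    return sf.encoder(smiles)  # e.g. "CCO" -> "[C][C][O]"

def format_protein(sequence: str) -> str:
    """Prefix each amino acid with <p>, matching the protein format above."""
    return "".join(f"<p>{aa}" for aa in sequence)

print(format_molecule("CCO"))   # [C][C][O]
print(format_protein("MALW"))   # <p>M<p>A<p>L<p>W
```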

**Training Objective**: Causal language modeling (self-supervised)

### Quick Start
```python
from transformers import LlamaForCausalLM, LlamaTokenizer
import torch

model_name = "hicai-zju/InstructBioMol-base"  
tokenizer = LlamaTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(model_name, device_map="cuda:0")

prompt = "<p>M"  # protein sequence
# prompt = "[C]"  # molecule sequence
# prompt = 'Scientific'  # natural language
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=100, 
        temperature=0.7,     
        top_p=0.9,          
        do_sample=True     
    )

generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
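
If the prompt is a molecule fragment such as `[C]`, the continuation is (ideally) a SELFIES string. The sketch below, assuming the open-source `selfies` package, shows one way to convert it back to SMILES for inspection; the model may also emit non-SELFIES tokens, so only the leading run of bracketed symbols is decoded.

```python
# Optional post-processing sketch (assumption: `pip install selfies`).
import re
import selfies as sf

# Keep only the leading run of [..] symbols in the decoded output.
match = re.match(r"(\[[^\[\]]+\])+", generated_text)
if match:
    try:
        print(sf.decoder(match.group(0)))  # e.g. "[C][C][O]" -> "CCO"
    except Exception:
        print("Generated continuation is not valid SELFIES.")
```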

### Citation

```bibtex
@article{zhuang2025advancing,
  author       = {Xiang Zhuang and
                  Keyan Ding and
                  Tianwen Lyu and
                  Yinuo Jiang and
                  Xiaotong Li and
                  Zhuoyi Xiang and
                  Zeyuan Wang and
                  Ming Qin and
                  Kehua Feng and
                  Jike Wang and
                  Qiang Zhang and
                  Huajun Chen},
  title={Advancing biomolecular understanding and design following human instructions},
  journal={Nature Machine Intelligence},
  pages={1--14},
  year={2025},
  publisher={Nature Publishing Group UK London}
}
```