
Psyche-R1

Psyche-R1 is the first Chinese psychological reasoning LLM to unify empathy, expertise, and reasoning.

This model is a fine-tuned version of Qwen/Qwen2.5-7B-Instruct, trained on our proposed dataset of psychological questions paired with detailed rationales and empathetic single-turn dialogues.

We adopt a hybrid training strategy consisting of SFT followed by GRPO; the detailed hyperparameters for both stages are listed below.

It achieves performance comparable to DeepSeek-R1 on several psychology benchmarks, including the psychology counselor examination benchmark (PCEB) proposed by Hu et al. (2024) and the CPsyExam test set proposed by Zhao et al. (2024). It also demonstrates stronger empathy on the SoulChat2.0 test set (Xie et al., 2025).
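
For reference, here is a minimal inference sketch using Hugging Face transformers. The chat template follows the Qwen2.5 convention inherited from the base model; the prompt is only an illustration.

```python
# Minimal inference sketch with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MACLAB-HFUT/Psyche-R1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the released weights are BF16
    device_map="auto",
)

messages = [
    {"role": "user", "content": "I feel anxious before every exam. What can I do?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```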

Training procedure

SFT Training hyperparameters

The following hyperparameters were used during SFT (an illustrative configuration sketch follows the list):

  • learning_rate: 1e-05
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 256
  • total_eval_batch_size: 8
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 2.0
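
The card does not specify the training framework for this stage; purely as an illustration, the hyperparameters above map roughly onto TRL's SFTTrainer as sketched below. The dataset file and output directory are placeholders.

```python
# Illustrative SFT setup mirroring the hyperparameters above, assuming
# TRL's SFTTrainer; dataset file and output directory are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# 8 GPUs x per-device batch 2 x gradient accumulation 16 = 256 total batch size
args = SFTConfig(
    output_dir="psyche-r1-sft",
    learning_rate=1e-5,
    seed=42,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,
    optim="adamw_torch",         # betas=(0.9, 0.999), epsilon=1e-08 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=2.0,
    bf16=True,                   # assumption, matching the released BF16 weights
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-7B-Instruct",
    args=args,
    train_dataset=load_dataset("json", data_files="sft_data.json")["train"],
)
trainer.train()
```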

GRPO Training hyperparameters

The following hyperparameters were used during GRPO (an illustrative sketch follows the list):

  • learning_rate: 1e-06
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • total_train_batch_size: 128
  • total_eval_batch_size: 8
  • ppo_mini_batch_size: 32
  • ppo_micro_batch_size_per_gpu: 20
  • kl_loss_coef: 0.001
  • lr_scheduler_warmup_steps: 10
  • num_epochs: 2.0
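
The parameter names above (ppo_mini_batch_size, ppo_micro_batch_size_per_gpu, kl_loss_coef) resemble a veRL-style configuration, which TRL does not mirror one-to-one; purely as an illustration, a rough TRL GRPOTrainer analogue is sketched below. The reward function, dataset, and checkpoint path are placeholders, and the actual reward used for Psyche-R1 is not described in this card.

```python
# Rough TRL GRPOTrainer analogue of the veRL-style settings above.
# ppo_mini_batch_size / ppo_micro_batch_size_per_gpu have no exact
# TRL counterpart; reward function, dataset, and paths are placeholders.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_fn(completions, **kwargs):
    # Placeholder: the reward actually used for Psyche-R1 is not
    # described in this card.
    return [0.0 for _ in completions]

# 8 GPUs x per-device batch 16 = 128 total batch size
args = GRPOConfig(
    output_dir="psyche-r1-grpo",
    learning_rate=1e-6,
    seed=42,
    per_device_train_batch_size=16,
    beta=0.001,          # KL loss coefficient
    warmup_steps=10,
    num_train_epochs=2.0,
    bf16=True,           # assumption, matching the released BF16 weights
)

trainer = GRPOTrainer(
    model="psyche-r1-sft",  # placeholder: start from the SFT-stage checkpoint
    reward_funcs=reward_fn,
    args=args,
    train_dataset=load_dataset("json", data_files="grpo_data.json")["train"],
)
trainer.train()
```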

Citation

If you find this work helpful, please cite it as:

@misc{dai2025psycher1reliablepsychologicalllms,
      title={Psyche-R1: Towards Reliable Psychological LLMs through Unified Empathy, Expertise, and Reasoning}, 
      author={Chongyuan Dai and Jinpeng Hu and Hongchang Shi and Zhuo Li and Xun Yang and Meng Wang},
      year={2025},
      eprint={2508.10848},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.10848}, 
}