{
"metadata": {
"kernelspec": {
"name": "python3",
"display_name": "Python 3",
"language": "python"
},
"language_info": {
"name": "python",
"version": "3.11.11",
"mimetype": "text/x-python",
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"pygments_lexer": "ipython3",
"nbconvert_exporter": "python",
"file_extension": ".py"
},
"accelerator": "GPU",
"colab": {
"gpuType": "T4",
"provenance": []
},
"kaggle": {
"accelerator": "nvidiaTeslaT4",
"dataSources": [
{
"sourceId": 11952276,
"sourceType": "datasetVersion",
"datasetId": 7514320
}
],
"dockerImageVersionId": 31040,
"isInternetEnabled": true,
"language": "python",
"sourceType": "notebook",
"isGpuEnabled": true
}
},
"nbformat_minor": 0,
"nbformat": 4,
"cells": [
{
"cell_type": "markdown",
"source": [
"# AI Content Detection with Qwen3-0.6B using Unsloth\n",
"\n",
"This notebook demonstrates fine-tuning a Qwen3-0.6B model for AI content detection using the RAID dataset. We use LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning and implement a custom token mapping approach for binary classification.\n",
"\n",
"## Table of Contents\n",
"1. [Setup and Installation](#setup)\n",
"2. [Model Configuration](#model-config)\n",
"3. [Data Preparation](#data-prep)\n",
"4. [Model Architecture Modification](#model-arch)\n",
"5. [Training](#training)\n",
"6. [Evaluation](#evaluation)\n",
"7. [Model Deployment](#deployment)\n",
"\n",
"---\n",
"\n",
"## 1. Setup and Installation {#setup}\n",
"\n",
"First, we install the required dependencies including Unsloth for efficient training.\n"
],
"metadata": {
"id": "IqM-T1RTzY6C"
}
},
{
"cell_type": "code",
"source": [
"%%capture\n",
"import os\n",
"if \"COLAB_\" not in \"\".join(os.environ.keys()):\n",
" !pip install unsloth\n",
"else:\n",
" # Do this only in Colab notebooks! Otherwise use pip install unsloth\n",
" !pip install --no-deps bitsandbytes accelerate xformers==0.0.29.post3 peft trl==0.15.2 triton cut_cross_entropy unsloth_zoo\n",
" !pip install sentencepiece protobuf \"datasets>=3.4.1\" huggingface_hub hf_transfer\n",
" !pip install --no-deps unsloth"
],
"metadata": {
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T08:24:59.231165Z",
"iopub.execute_input": "2025-05-27T08:24:59.231411Z",
"iopub.status.idle": "2025-05-27T08:25:12.769897Z",
"shell.execute_reply.started": "2025-05-27T08:24:59.231386Z",
"shell.execute_reply": "2025-05-27T08:25:12.769095Z"
},
"id": "vNanTSP_0Pf_"
},
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"### Import Required Libraries\n",
"\n",
"We import all necessary libraries for model training, data processing, and evaluation.\n"
],
"metadata": {
"id": "fBWORuWL2psu"
}
},
{
"cell_type": "code",
"source": [
"# needed as this function doesn't like it when the lm_head has its size changed\n",
"from unsloth import tokenizer_utils\n",
"def do_nothing(*args, **kwargs):\n",
" pass\n",
"tokenizer_utils.fix_untrained_tokens = do_nothing"
],
"metadata": {
"id": "iTSOBWpX7SIT",
"outputId": "57c87715-7ab6-4e10-9c57-04909857eef2",
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T08:25:12.770866Z",
"iopub.execute_input": "2025-05-27T08:25:12.771094Z",
"iopub.status.idle": "2025-05-27T08:25:52.029110Z",
"shell.execute_reply.started": "2025-05-27T08:25:12.771069Z",
"shell.execute_reply": "2025-05-27T08:25:52.028548Z"
}
},
"outputs": [
{
"name": "stdout",
"text": "🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.\n",
"output_type": "stream"
},
{
"name": "stderr",
"text": "2025-05-27 08:25:28.067389: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\nWARNING: All log messages before absl::InitializeLog() is called are written to STDERR\nE0000 00:00:1748334328.287741 35 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\nE0000 00:00:1748334328.352047 35 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"output_type": "stream"
},
{
"name": "stdout",
"text": "🦥 Unsloth Zoo will now patch everything to make training faster!\n",
"output_type": "stream"
}
],
"execution_count": null
},
{
"cell_type": "code",
"source": [
"import torch\n",
"major_version, minor_version = torch.cuda.get_device_capability()\n",
"print(f\"Major: {major_version}, Minor: {minor_version}\")\n",
"from datasets import load_dataset\n",
"import datasets\n",
"from trl import SFTTrainer\n",
"import pandas as pd\n",
"import numpy as np\n",
"import os\n",
"import pandas as pd\n",
"import numpy as np\n",
"from unsloth import FastLanguageModel\n",
"from trl import SFTTrainer\n",
"from transformers import TrainingArguments, Trainer\n",
"from typing import Tuple\n",
"import warnings\n",
"from typing import Any, Dict, List, Union\n",
"from transformers import DataCollatorForLanguageModeling\n",
"from sklearn.model_selection import train_test_split\n",
"import matplotlib.pyplot as plt"
],
"metadata": {
"id": "s08wXFIz7SIV",
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T08:25:52.029912Z",
"iopub.execute_input": "2025-05-27T08:25:52.030628Z",
"iopub.status.idle": "2025-05-27T08:26:03.630625Z",
"shell.execute_reply.started": "2025-05-27T08:25:52.030602Z",
"shell.execute_reply": "2025-05-27T08:26:03.629818Z"
}
},
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"---\n",
"\n",
"## 2. Model Configuration {#model-config}\n",
"\n",
"We configure the model parameters and load the base Qwen3-0.6B model. This section sets up the foundation for our AI content detection task.\n",
"\n",
"### Key Parameters:\n",
"- **NUM_CLASSES**: 2 (Human vs AI)\n",
"- **max_seq_length**: 4096 tokens\n",
"- **dtype**: float16 for Tesla T4 compatibility\n"
],
"metadata": {
"id": "DHUhGcSp2zOB"
}
},
{
"cell_type": "code",
"source": [
"NUM_CLASSES = 2 # number of classes in the csv\n",
"\n",
"max_seq_length = 4096 # Choose any! We auto support RoPE Scaling internally!\n",
"dtype = torch.float16 # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+\n",
"\n",
"model_name = \"Qwen/Qwen3-0.6B-Base\";load_in_4bit = False\n",
"# model_name = \"unsloth/Qwen3-4B-Base\";load_in_4bit = False\n",
"\n",
"model, tokenizer = FastLanguageModel.from_pretrained(\n",
" model_name = model_name,load_in_4bit = load_in_4bit,\n",
" max_seq_length = max_seq_length,\n",
" dtype = dtype,\n",
")\n"
],
"metadata": {
"id": "ouuf4HQ029Qn"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"---\n",
"\n",
"## 3. Model Architecture Modification {#model-arch}\n",
"\n",
"We modify the model architecture to create a custom classification head that only outputs predictions for our 2 classes (Human vs AI). This approach uses token mapping to convert the language modeling task into a classification task.\n",
"\n",
"### Custom Classification Head\n",
"We trim the classification head so the model can only predict numbers 0-1 corresponding to our classes.\n"
],
"metadata": {
"id": "iKNyoIBf3E_n"
}
},
{
"cell_type": "code",
"source": [
"import torch.nn as nn\n",
"\n",
"number_token_ids = []\n",
"for i in range(NUM_CLASSES):\n",
" number_token_ids.append(tokenizer.encode(str(i), add_special_tokens=False)[0])\n",
"\n",
"# Extract the weights for your number tokens\n",
"par = torch.nn.Parameter(model.lm_head.weight[number_token_ids, :])\n",
"\n",
"# Replace lm_head with reduced size\n",
"model.lm_head = nn.Linear(model.config.hidden_size, NUM_CLASSES, bias=False)\n",
"\n",
"# Initialize with the extracted weights\n",
"model.lm_head.weight.data = par.data\n",
"\n",
"reverse_map = {value: idx for idx, value in enumerate(number_token_ids)}\n"
],
"metadata": {
"id": "ETy4ZnJe7SIV",
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T08:26:03.632829Z",
"iopub.execute_input": "2025-05-27T08:26:03.633047Z",
"iopub.status.idle": "2025-05-27T08:26:03.661567Z",
"shell.execute_reply.started": "2025-05-27T08:26:03.633030Z",
"shell.execute_reply": "2025-05-27T08:26:03.661035Z"
}
},
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"### LoRA Configuration\n",
"\n",
"We apply LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning, targeting key attention and MLP layers while excluding the custom classification head.\n"
],
"metadata": {
"id": "HZI5wOqm3KBV"
}
},
{
"cell_type": "code",
"source": [
"from peft import LoftQConfig\n",
"\n",
"model = FastLanguageModel.get_peft_model(\n",
" model,\n",
" r = 16,\n",
" target_modules = [\n",
" # \"lm_head\", # can easily be trained because it now has a small size\n",
" \"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\",\n",
" \"gate_proj\", \"up_proj\", \"down_proj\",],\n",
" lora_alpha = 16,\n",
" lora_dropout = 0, # Supports any, but = 0 is optimized\n",
" bias = \"none\", # Supports any, but = \"none\" is optimized\n",
" use_gradient_checkpointing = \"unsloth\",\n",
" random_state = 3407,\n",
" use_rslora = True, # We support rank stabilized LoRA\n",
" # init_lora_weights = 'loftq',\n",
" # loftq_config = LoftQConfig(loftq_bits = 4, loftq_iter = 1), # And LoftQ\n",
")\n",
"print(\"trainable parameters:\", sum(p.numel() for p in model.parameters() if p.requires_grad))"
],
"metadata": {
"id": "GDu-tB7-7SIW",
"outputId": "f4ad11db-81c8-4e93-ebc8-3336f77fa8c3",
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T08:26:03.662262Z",
"iopub.execute_input": "2025-05-27T08:26:03.662555Z",
"iopub.status.idle": "2025-05-27T08:26:09.913078Z",
"shell.execute_reply.started": "2025-05-27T08:26:03.662530Z",
"shell.execute_reply": "2025-05-27T08:26:09.912387Z"
}
},
"outputs": [
{
"name": "stderr",
"text": "Unsloth 2025.5.7 patched 28 layers with 28 QKV layers, 28 O layers and 28 MLP layers.\n",
"output_type": "stream"
},
{
"name": "stdout",
"text": "trainable parameters: 10092544\n",
"output_type": "stream"
}
],
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"---\n",
"\n",
"## 4. Data Preparation {#data-prep}\n",
"\n",
"We load and prepare the RAID dataset for training. The dataset contains text samples labeled as either human-written or AI-generated.\n",
"\n",
"### Dataset Loading and Balancing\n"
],
"metadata": {
"id": "K8XZwTlH3QJ5"
}
},
{
"cell_type": "code",
"source": [
"kaggle = os.getcwd() == \"/kaggle/working\"\n",
"input_dir = \"/kaggle/input/raid-dataset/\" if kaggle else \"data/\"\n",
"data = pd.read_csv(input_dir + \"train_none.csv\")[['generation', 'model']]\n",
"data.rename(columns={'generation': 'text'}, inplace=True)\n",
"data['label'] = (data['model'] != 'human').astype(int)\n",
"data.drop('model', axis=1, inplace=True)\n",
"\n",
"# Check current distribution\n",
"print(\"Original distribution:\")\n",
"print(data['label'].value_counts())\n",
"print(f\"Total samples available: {len(data)}\")\n",
"\n",
"# Create balanced dataset with exactly 13,000 samples of each class\n",
"class_0_samples = data[data['label'] == 0] # Human samples\n",
"class_1_samples = data[data['label'] == 1] # AI samples\n",
"\n",
"print(f\"\\nAvailable samples:\")\n",
"print(f\"Class 0 (Human): {len(class_0_samples)} samples\")\n",
"print(f\"Class 1 (AI): {len(class_1_samples)} samples\")\n",
"\n",
"# Sample exactly 13,000 from each class\n",
"class_0_count = 5000\n",
"class_1_count = 5000\n",
"\n",
"# Sample from each class (you have enough samples for both)\n",
"sampled_class_0 = class_0_samples.sample(n=class_0_count, random_state=42)\n",
"sampled_class_1 = class_1_samples.sample(n=class_1_count, random_state=42)\n",
"\n",
"# Combine the samples\n",
"balanced_data = pd.concat([sampled_class_0, sampled_class_1], ignore_index=True)\n",
"\n",
"# Shuffle the combined dataset\n",
"balanced_data = balanced_data.sample(frac=1, random_state=42).reset_index(drop=True)\n",
"\n",
"print(f\"\\nNew balanced distribution:\")\n",
"print(balanced_data['label'].value_counts())\n",
"print(f\"Total samples in balanced dataset: {len(balanced_data)}\")\n",
"\n",
"# Split into train and validation (keeping the 26,000 total)\n",
"train_df, val_df = train_test_split(\n",
" balanced_data,\n",
" train_size=8000, # Use 24k for training, 2k for validation\n",
" stratify=balanced_data['label'],\n",
" random_state=42\n",
")\n",
"\n",
"print(f\"\\nTrain distribution:\")\n",
"print(train_df['label'].value_counts())\n",
"print(f\"Validation distribution:\")\n",
"print(val_df['label'].value_counts())\n",
"\n",
"train_df.head()\n"
],
"metadata": {
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T08:26:09.947141Z",
"iopub.execute_input": "2025-05-27T08:26:09.947317Z",
"iopub.status.idle": "2025-05-27T08:26:27.274767Z",
"shell.execute_reply.started": "2025-05-27T08:26:09.947303Z",
"shell.execute_reply": "2025-05-27T08:26:27.274143Z"
},
"id": "Xi6ylI-n0PgG"
},
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"### Prompt Template Design\n",
"\n",
"We design a structured prompt template that clearly defines the classification task for the model.\n"
],
"metadata": {
"id": "ZKjsMuhq3Yjw"
}
},
{
"cell_type": "code",
"source": [
"prompt = \"\"\"Here is a text sample:\n",
"{}\n",
"\n",
"Classify this text into one of the following:\n",
"class 0: Human\n",
"class 1: AI\n",
"\n",
"SOLUTION\n",
"The correct answer is: class {}\"\"\"\n",
"\n",
"\n",
"def formatting_prompts_func(dataset_):\n",
" texts = []\n",
" for i in range(len(dataset_['text'])):\n",
" text_ = dataset_['text'].iloc[i]\n",
" label_ = str(dataset_['label'].iloc[i])\n",
"\n",
" # Format prompt + label, then add EOS\n",
" text = prompt.format(text_, label_)\n",
" texts.append(text)\n",
" return texts\n",
"\n",
"# apply formatting_prompts_func to train_df\n",
"train_df['text'] = formatting_prompts_func(train_df)\n",
"train_dataset = datasets.Dataset.from_pandas(train_df,preserve_index=False)"
],
"metadata": {
"id": "LjY75GoYUCB8",
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T08:26:27.275549Z",
"iopub.execute_input": "2025-05-27T08:26:27.275808Z",
"iopub.status.idle": "2025-05-27T08:26:27.517696Z",
"shell.execute_reply.started": "2025-05-27T08:26:27.275786Z",
"shell.execute_reply": "2025-05-27T08:26:27.516888Z"
}
},
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"### Custom Data Collator\n",
"\n",
"We implement a custom data collator that focuses training on the last token of each sequence, which contains the classification prediction.\n"
],
"metadata": {
"id": "WZNp2CAO3byG"
}
},
{
"cell_type": "code",
"source": [
"from typing import List, Union, Any, Dict\n",
"from transformers import DataCollatorForLanguageModeling\n",
"\n",
"class DataCollatorForLastTokenLM(DataCollatorForLanguageModeling):\n",
" def __init__(\n",
" self,\n",
" *args,\n",
" mlm: bool = False,\n",
" ignore_index: int = -100,\n",
" **kwargs,\n",
" ):\n",
" super().__init__(*args, mlm=mlm, **kwargs)\n",
" self.ignore_index = ignore_index\n",
"\n",
" def torch_call(self, examples: List[Union[List[int], Any, Dict[str, Any]]]) -> Dict[str, Any]:\n",
" batch = super().torch_call(examples)\n",
"\n",
" for i in range(len(examples)):\n",
" # Find the last non-padding token\n",
" last_token_idx = (batch[\"labels\"][i] != self.ignore_index).nonzero()[-1].item()\n",
" # Set all labels to ignore_index except for the last token\n",
" batch[\"labels\"][i, :last_token_idx] = self.ignore_index\n",
"\n",
" # Get the current token ID\n",
" current_token_id = batch[\"labels\"][i, last_token_idx].item()\n",
"\n",
" # Check if token exists in reverse_map before mapping\n",
" if current_token_id in reverse_map:\n",
" batch[\"labels\"][i, last_token_idx] = reverse_map[current_token_id]\n",
" else:\n",
" # Handle missing token IDs gracefully\n",
" print(f\"Warning: Token ID {current_token_id} ({tokenizer.decode([current_token_id]) if hasattr(tokenizer, 'decode') else 'unknown'}) not found in reverse_map\")\n",
" # You can choose one of these strategies:\n",
" # Option 1: Use a default mapping (e.g., map to a special token)\n",
" batch[\"labels\"][i, last_token_idx] = 0 # or tokenizer.unk_token_id\n",
" # Option 2: Skip this example entirely\n",
" # continue\n",
" # Option 3: Keep the original token (no mapping)\n",
" # pass\n",
"\n",
" return batch\n",
"\n",
"# Initialize the collator with your tokenizer\n",
"collator = DataCollatorForLastTokenLM(tokenizer=tokenizer)\n"
],
"metadata": {
"id": "022AAnx97SIX",
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T08:26:27.518521Z",
"iopub.execute_input": "2025-05-27T08:26:27.518780Z",
"iopub.status.idle": "2025-05-27T08:26:27.528347Z",
"shell.execute_reply.started": "2025-05-27T08:26:27.518760Z",
"shell.execute_reply": "2025-05-27T08:26:27.527250Z"
}
},
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"---\n",
"\n",
"## 5. Training {#training}\n",
"\n",
"We configure and execute the training process using Hugging Face's SFTTrainer with optimized settings for our classification task.\n",
"\n",
"### Training Configuration\n",
"- **Batch size**: 2 per device\n",
"- **Learning rate**: 1e-4\n",
"- **Optimizer**: AdamW 8-bit for memory efficiency\n",
"- **Epochs**: 1 (sufficient for this task)\n"
],
"metadata": {
"id": "idAEIeSQ3xdS"
}
},
{
"cell_type": "code",
"source": [
"trainer = SFTTrainer(\n",
" model = model,\n",
" tokenizer = tokenizer,\n",
" train_dataset = train_dataset,\n",
" max_seq_length = max_seq_length,\n",
" dataset_num_proc = 1,\n",
" packing = False, # not needed because group_by_length is True\n",
" args = TrainingArguments(\n",
" per_device_train_batch_size = 2,\n",
" gradient_accumulation_steps = 1,\n",
" warmup_steps = 10,\n",
" learning_rate = 1e-4,\n",
" fp16 = True,\n",
" bf16 = False,\n",
" logging_steps = 1,\n",
" optim = \"adamw_8bit\",\n",
" weight_decay = 0.01,\n",
" lr_scheduler_type = \"cosine\",\n",
" seed = 3407,\n",
" output_dir = \"outputs\",\n",
" num_train_epochs = 1,\n",
" # report_to = \"wandb\",\n",
" report_to = \"none\",\n",
" group_by_length = True,\n",
" ),\n",
" data_collator=collator,\n",
" dataset_text_field=\"text\",\n",
")"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 177,
"referenced_widgets": [
"477d5041a08f4a3a9f7cfa1d98ab48ff",
"25eecdf89b8845a989ce2c8c4d9edbeb",
"e02470ee64ad4f0fb2160b088cdcba7f",
"b7c0858a80684aed9c19ce00dda85815",
"d0b82646d6f549d7ad59e9ff5f426e88",
"d94a8d1f9d294abbb156e39da065910f",
"ebb0f92e750447a39ff23a23ea73b445",
"d4d01dc3290b4174ab13454a28faf972",
"c6941f8c60ef49a8a79ed265c46652c2",
"24aa300026334db1b545bf0fa906112e",
"6d0d021108b54147a62e50fea1ba88ea",
"deb2982b19764ed5aec0b4d80e776279",
"c24904c0a7294f3a93bb64aa38e70316",
"26a77ab74a4a4e21b9afd9798a9f9a29",
"7189bac8d0474bcea50cf8711259516c",
"ce81350896a44331aa6b6960b7370325",
"73d4af57ddc64fb3afd0e2ab068cbcb4",
"672d990adee44df6b58e27fe5804986f",
"3fe95fc9bc034a2db85ed19aedfd4250",
"c1c0053b8e674ed6ac81316d4af82c48",
"74a0e53405cd4d64bd597ad5461ed5dd",
"278d35f6e08d45e2a49c3509dd442f0c",
"b19d83c6f5ad4f04941e6a684678ac07",
"88e100a175e64a3dab6f772847deffd6",
"457b5df3a4294c8a966e233d34c15a82",
"18af2c44a24b4aaabfd192e3fc4bf655",
"61944d9473394be5b19cfd48fd504481",
"68d18369acc540a594aef61a2de07e63",
"69c676782e0b496b820b55c45424177d",
"08dcaabc623c4759a00fbee5430e3ba9",
"45a3bcaffc3f4184b9ae869f7b0ccef4",
"c45f02fbb40e4041ba7f95074f8e1e82",
"0fc1037a17e541eba92a2d6f400ac6eb",
"7bd2cc9aa724408fa9e70795744cad85",
"fb0588bffd7a4238bac3f73052e09335",
"68ff2006b3794e23b4ccbc2c83c52b9b",
"910f2e6fffd24cd7b6e68c932c2a2524",
"6345dc6f40c6444aa05ed2aba7809a3c",
"92eab7fadbed4bc2bc2935b4e3d800cf",
"2dfef2cd79c8463cb7eadad6a04aba91",
"3be5c37493e742aebf0aa29b6723283b",
"273be47263384901b6cf9f249ee3409d",
"2ff515b72bbb43c889e2172c83933803",
"a1cd830a712d490181d176267ce7b6f0",
"1a8da471604841ec9f6c8e0073471c00",
"f899ae16708544379d63960be07b7c32",
"0bdbf9b92f7b4925a5ac28df93fd0fb0",
"c1d8820a789f4899a839e6d28a4f333c",
"e711d7f85eee4fe195fad9fbddcfece2",
"56ed5fd876d94ebbaa8b4e905c348a0d",
"5d461f10bdc44c6b95d070fa9d7425d1",
"f4e0e7a39ad9484f930e6673c35c2f5d",
"baad9118cade4cee9600a7fbf6426e0a",
"16fac1aad22444c4ad42ad0e6e1dfdc9",
"ac35368de2d746b4bed736459d187dbd",
"b03dbbeaeb214c5e9ccc6e6264bd69c2",
"d9322b20f69b4cf6ad44cee0e1d0a74d"
]
},
"id": "95_Nn-89DhsL",
"outputId": "adb8cb5d-0ec3-4b79-83a7-5691873873e8",
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T08:26:27.529299Z",
"iopub.execute_input": "2025-05-27T08:26:27.529606Z",
"iopub.status.idle": "2025-05-27T08:26:33.758433Z",
"shell.execute_reply.started": "2025-05-27T08:26:27.529580Z",
"shell.execute_reply": "2025-05-27T08:26:33.757938Z"
}
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/plain": "Unsloth: Tokenizing [\"text\"]: 0%| | 0/8000 [00:00, ? examples/s]",
"application/vnd.jupyter.widget-view+json": {
"version_major": 2,
"version_minor": 0,
"model_id": "d9322b20f69b4cf6ad44cee0e1d0a74d"
}
},
"metadata": {}
}
],
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"### Memory Usage Monitoring\n",
"\n",
"Track GPU memory usage before training to optimize resource allocation.\n"
],
"metadata": {
"id": "NPTEwu-C3qJi"
}
},
{
"cell_type": "code",
"source": [
"#@title Show current memory stats\n",
"gpu_stats = torch.cuda.get_device_properties(0)\n",
"start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)\n",
"max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)\n",
"print(f\"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.\")\n",
"print(f\"{start_gpu_memory} GB of memory reserved.\")"
],
"metadata": {
"id": "nCbGS8Ab3v7o"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Execute Training\n",
"\n",
"Run the training process and monitor performance metrics.\n"
],
"metadata": {
"id": "cFBqiES437LO"
}
},
{
"cell_type": "code",
"source": [
"trainer_stats = trainer.train()"
],
"metadata": {
"id": "yqxqAZ7KJ4oL",
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T08:26:37.050070Z",
"iopub.execute_input": "2025-05-27T08:26:37.050302Z",
"iopub.status.idle": "2025-05-27T08:44:25.852944Z",
"shell.execute_reply.started": "2025-05-27T08:26:37.050286Z",
"shell.execute_reply": "2025-05-27T08:44:25.852329Z"
},
"scrolled": true
},
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"### Training Statistics\n",
"\n",
"Display final memory usage and training time statistics.\n"
],
"metadata": {
"id": "JNFPULTa4KVT"
}
},
{
"cell_type": "code",
"source": [
"#@title Show final memory and time stats\n",
"used_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)\n",
"used_memory_for_lora = round(used_memory - start_gpu_memory, 3)\n",
"used_percentage = round(used_memory /max_memory*100, 3)\n",
"lora_percentage = round(used_memory_for_lora/max_memory*100, 3)\n",
"print(f\"{trainer_stats.metrics['train_runtime']} seconds used for training.\")\n",
"print(f\"{round(trainer_stats.metrics['train_runtime']/60, 2)} minutes used for training.\")\n",
"print(f\"Peak reserved memory = {used_memory} GB.\")\n",
"print(f\"Peak reserved memory for training = {used_memory_for_lora} GB.\")\n",
"print(f\"Peak reserved memory % of max memory = {used_percentage} %.\")\n",
"print(f\"Peak reserved memory for training % of max memory = {lora_percentage} %.\")"
],
"metadata": {
"cellView": "form",
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "pCqnaKmlO1U9",
"outputId": "ff1b0842-5966-4dc2-bd98-c20832526b31",
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T08:44:25.853871Z",
"iopub.execute_input": "2025-05-27T08:44:25.854140Z",
"iopub.status.idle": "2025-05-27T08:44:25.860039Z",
"shell.execute_reply.started": "2025-05-27T08:44:25.854119Z",
"shell.execute_reply": "2025-05-27T08:44:25.859318Z"
}
},
"outputs": [
{
"name": "stdout",
"text": "1063.9641 seconds used for training.\n17.73 minutes used for training.\nPeak reserved memory = 3.744 GB.\nPeak reserved memory for training = 2.316 GB.\nPeak reserved memory % of max memory = 25.399 %.\nPeak reserved memory for training % of max memory = 15.711 %.\n",
"output_type": "stream"
}
],
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"---\n",
"\n",
"## 6. Evaluation {#evaluation}\n",
"\n",
"We evaluate the trained model on the validation set using batched inference for efficiency.\n",
"\n",
"### Model Preparation for Inference\n"
],
"metadata": {
"id": "ekOmTR1hSNcr"
}
},
{
"cell_type": "code",
"source": [
"FastLanguageModel.for_inference(model) # Enable native 2x faster inference\n",
"print()"
],
"metadata": {
"id": "kg-kRZcZ7SIZ",
"outputId": "513a3d14-e95b-47df-e382-f8de7ebf8f1e",
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T08:44:25.860781Z",
"iopub.execute_input": "2025-05-27T08:44:25.860987Z",
"iopub.status.idle": "2025-05-27T08:44:25.878834Z",
"shell.execute_reply.started": "2025-05-27T08:44:25.860971Z",
"shell.execute_reply": "2025-05-27T08:44:25.878186Z"
}
},
"outputs": [
{
"name": "stdout",
"text": "\n",
"output_type": "stream"
}
],
"execution_count": null
},
{
"cell_type": "code",
"source": [
"# Save the fine-tuned model\n",
"model.save_pretrained(\"./qwen-classification-model\")\n",
"tokenizer.save_pretrained(\"./qwen-classification-model\")\n",
"\n",
"# If using LoRA, save the adapter\n",
"if hasattr(model, 'save_pretrained'):\n",
" model.save_pretrained(\"./qwen-lora-adapter\")\n"
],
"metadata": {
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T08:44:25.879475Z",
"iopub.execute_input": "2025-05-27T08:44:25.880100Z",
"iopub.status.idle": "2025-05-27T08:44:26.455524Z",
"shell.execute_reply.started": "2025-05-27T08:44:25.880076Z",
"shell.execute_reply": "2025-05-27T08:44:26.454749Z"
},
"id": "iIpcJg6C0PgM"
},
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"### Single Text Classification Function\n",
"\n",
"Create a function for classifying individual text samples.\n"
],
"metadata": {
"id": "0y-NPudU4U1-"
}
},
{
"cell_type": "code",
"source": [
"# Load the saved model\n",
"from unsloth import FastLanguageModel\n",
"\n",
"model, tokenizer = FastLanguageModel.from_pretrained(\n",
" model_name = \"./qwen-classification-model\",\n",
" max_seq_length = max_seq_length,\n",
" dtype = dtype,\n",
")\n",
"\n",
"# Test function\n",
"def classify_text(text_sample):\n",
" test_prompt = f\"\"\"Here is a text sample:\n",
"{text_sample}\n",
"\n",
"Classify this text into one of the following:\n",
"class 0: Human\n",
"class 1: AI\n",
"\n",
"SOLUTION\n",
"The correct answer is: class \"\"\"\n",
"\n",
" inputs = tokenizer(test_prompt, return_tensors=\"pt\")\n",
"\n",
" # Move inputs to the same device as the model\n",
" device = next(model.parameters()).device\n",
" inputs = {k: v.to(device) for k, v in inputs.items()}\n",
"\n",
" with torch.no_grad():\n",
" outputs = model(**inputs)\n",
" logits = outputs.logits[0, -1, :NUM_CLASSES] # Get last token logits for your classes\n",
" predicted_class = torch.argmax(logits).item()\n",
"\n",
" return predicted_class\n",
"\n",
"\n",
"# Test examples\n",
"test_texts = [\n",
" \"This is a sample human-written text about daily life.\",\n",
" \"The algorithm processes data through multiple neural network layers.\"\n",
"]\n",
"\n",
"for text in test_texts:\n",
" prediction = classify_text(text)\n",
" print(f\"Text: {text[:50]}...\")\n",
" print(f\"Prediction: {'AI' if prediction == 1 else 'Human'}\\n\")\n"
],
"metadata": {
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T11:01:56.023218Z",
"iopub.execute_input": "2025-05-27T11:01:56.023679Z",
"iopub.status.idle": "2025-05-27T11:02:05.730262Z",
"shell.execute_reply.started": "2025-05-27T11:01:56.023655Z",
"shell.execute_reply": "2025-05-27T11:02:05.729611Z"
},
"id": "V_JGJnrv0PgN",
"outputId": "7dd908c9-1ebf-4320-e652-46a560c9f596"
},
"outputs": [
{
"name": "stdout",
"text": "==((====))== Unsloth 2025.5.7: Fast Qwen3 patching. Transformers: 4.51.3.\n \\\\ /| Tesla T4. Num GPUs = 2. Max memory: 14.741 GB. Platform: Linux.\nO^O/ \\_/ \\ Torch: 2.6.0+cu124. CUDA: 7.5. CUDA Toolkit: 12.4. Triton: 3.2.0\n\\ / Bfloat16 = FALSE. FA [Xformers = 0.0.29.post3. FA2 = False]\n \"-____-\" Free license: http://github.com/unslothai/unsloth\nUnsloth: Fast downloading is enabled - ignore downloading bars which are red colored!\nText: This is a sample human-written text about daily li...\nPrediction: Human\n\nText: The algorithm processes data through multiple neur...\nPrediction: Human\n\n",
"output_type": "stream"
}
],
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"### Comprehensive Batch Evaluation\n",
"\n",
"Perform comprehensive evaluation on the entire validation set using efficient batched inference.\n"
],
"metadata": {
"id": "tz1V7bZr4oip"
}
},
{
"cell_type": "code",
"source": [
"# If you have a test dataset\n",
"def evaluate_model(test_df):\n",
" predictions = []\n",
" true_labels = []\n",
"\n",
" for idx, row in test_df.iterrows():\n",
" pred = classify_text(row['text'])\n",
" predictions.append(pred)\n",
" true_labels.append(row['label'])\n",
"\n",
" from sklearn.metrics import accuracy_score, classification_report\n",
"\n",
" accuracy = accuracy_score(true_labels, predictions)\n",
" report = classification_report(true_labels, predictions,\n",
" target_names=['Human', 'AI'])\n",
"\n",
" print(f\"Accuracy: {accuracy:.4f}\")\n",
" print(\"\\nClassification Report:\")\n",
" print(report)\n",
"\n",
" return predictions\n",
"\n",
"# Run evaluation\n",
"# predictions = evaluate_model(val_df.iloc[-1])\n"
],
"metadata": {
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T11:02:28.583681Z",
"iopub.execute_input": "2025-05-27T11:02:28.583952Z",
"iopub.status.idle": "2025-05-27T11:02:28.589101Z",
"shell.execute_reply.started": "2025-05-27T11:02:28.583932Z",
"shell.execute_reply": "2025-05-27T11:02:28.588333Z"
},
"id": "nAu-zZZc0PgO"
},
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"# Batched Inference on Validation Set"
],
"metadata": {
"id": "Q-c11XfC7SIZ"
}
},
{
"cell_type": "code",
"source": [
"import torch\n",
"import torch.nn.functional as F\n",
"from tqdm import tqdm\n",
"import random\n",
"\n",
"# Prepare inference prompt template using your existing prompt structure\n",
"inference_prompt_template = prompt.split(\"class {}\")[0] + \"class \"\n",
"\n",
"# Sort validation set by length for efficient batching\n",
"val_df['token_length'] = val_df['text'].apply(lambda x: len(tokenizer.encode(x, add_special_tokens=False)))\n",
"val_df_sorted = val_df.sort_values(by='token_length').reset_index(drop=True)\n",
"\n",
"# Parameters\n",
"display = 50\n",
"batch_size = 4\n",
"device = next(model.parameters()).device # More robust device detection\n",
"correct = 0\n",
"results = []\n",
"\n",
"# Evaluation loop with inference mode\n",
"with torch.inference_mode():\n",
" for i in tqdm(range(0, len(val_df_sorted), batch_size), desc=\"Evaluating\"):\n",
" batch = val_df_sorted.iloc[i:i+batch_size]\n",
" prompts = [inference_prompt_template.format(text) for text in batch['text']]\n",
"\n",
" # Tokenize and move to device\n",
" inputs = tokenizer(\n",
" prompts,\n",
" return_tensors=\"pt\",\n",
" padding=True,\n",
" truncation=True,\n",
" max_length=max_seq_length\n",
" ).to(device)\n",
"\n",
" # Get model predictions\n",
" logits = model(**inputs).logits\n",
" last_idxs = inputs.attention_mask.sum(1) - 1\n",
" last_logits = logits[torch.arange(len(batch)), last_idxs, :]\n",
"\n",
" # Apply softmax and extract probabilities for number tokens only\n",
" probs_all = F.softmax(last_logits, dim=-1)\n",
" probs = probs_all[:, number_token_ids] # Keep only logits for number tokens\n",
" preds = torch.argmax(probs, dim=-1).cpu().numpy()\n",
"\n",
" # Calculate accuracy\n",
" true_labels = batch['label'].tolist()\n",
" correct += sum([p == t for p, t in zip(preds, true_labels)])\n",
"\n",
" # Store results for analysis\n",
" for j in range(len(batch)):\n",
" results.append({\n",
" \"text\": batch['text'].iloc[j][:200], # Truncate for display\n",
" \"true\": true_labels[j],\n",
" \"pred\": preds[j],\n",
" \"probs\": probs[j].float().cpu().numpy(), # All class probabilities\n",
" \"ok\": preds[j] == true_labels[j]\n",
" })\n",
"\n",
"# Calculate and display accuracy\n",
"accuracy = 100 * correct / len(val_df_sorted)\n",
"print(f\"\\nValidation accuracy: {accuracy:.2f}% ({correct}/{len(val_df_sorted)})\")\n",
"\n",
"# Display random sample results\n",
"print(f\"\\n--- Random samples (showing {min(display, len(results))} out of {len(results)}) ---\")\n",
"for s in random.sample(results, min(display, len(results))):\n",
" print(f\"\\nText: {s['text']}\")\n",
" print(f\"True: {s['true']} ({'Human' if s['true'] == 0 else 'AI'}) \"\n",
" f\"Pred: {s['pred']} ({'Human' if s['pred'] == 0 else 'AI'}) \"\n",
" f\"{'✅' if s['ok'] else '❌'}\")\n",
" print(\"Probs:\", \", \".join([f\"class {k}: {v:.3f}\" for k, v in enumerate(s['probs'])]))\n",
"\n",
"# Additional metrics for better evaluation\n",
"correct_by_class = {0: 0, 1: 0}\n",
"total_by_class = {0: 0, 1: 0}\n",
"\n",
"for result in results:\n",
" true_label = result['true']\n",
" total_by_class[true_label] += 1\n",
" if result['ok']:\n",
" correct_by_class[true_label] += 1\n",
"\n",
"print(f\"\\n--- Per-class accuracy ---\")\n",
"for class_id in [0, 1]:\n",
" class_name = 'Human' if class_id == 0 else 'AI'\n",
" class_acc = 100 * correct_by_class[class_id] / total_by_class[class_id] if total_by_class[class_id] > 0 else 0\n",
" print(f\"Class {class_id} ({class_name}): {class_acc:.2f}% ({correct_by_class[class_id]}/{total_by_class[class_id]})\")\n",
"\n",
"# Clean up\n",
"if 'token_length' in val_df:\n",
" del val_df['token_length']\n"
],
"metadata": {
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T11:16:56.013928Z",
"iopub.execute_input": "2025-05-27T11:16:56.014447Z",
"iopub.status.idle": "2025-05-27T11:18:28.001408Z",
"shell.execute_reply.started": "2025-05-27T11:16:56.014418Z",
"shell.execute_reply": "2025-05-27T11:18:28.000564Z"
},
"id": "9YIkHSoY0PgP",
"outputId": "aca7ae4b-e0b6-40b2-d2bf-1bda33dffc5f"
},
"outputs": [
{
"name": "stderr",
"text": "Evaluating: 100%|██████████| 500/500 [01:29<00:00, 5.58it/s]",
"output_type": "stream"
},
{
"name": "stdout",
"text": "\nValidation accuracy: 91.50% (1830/2000)\n\n--- Random samples (showing 50 out of 2000) ---\n\nText: Ingredients (servings 4): 1 pound chorizio sausage; 2 tablespoons olive oil divided use ; 3 cloves garlic minced or crushed with presser tool in kitchen set, salt and pepper to taste if desired. Direc\nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 0.999\n\nText: A seemingly shy and humble country boy named Luther Sellers is discovered to have a magnificent voice and mesmerizing stage presence. He is given the stage name Stag Preston and after a short time on\nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.134, class 1: 0.000\n\nText: The story opens with Bathsheba Everdeen, an independent and capable young woman who inherits her uncle's farm in the English countryside after his death. With no prior experience managing such a large\nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 0.998\n\nText: Ukraine has agreed to pay 30% more for natural gas supplied by Turkmenistan.\nThe deal was sealed three days after Turkmenistan cut off gas supplies in a price dispute that threatened the Ukrainian eco\nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.136, class 1: 0.000\n\nText: You'll probably think this sounds like an overstatement, but it isn't- 'This film changed my life'. Yes in fact! At least where female looks and sex appeal go... The opening scene shows Ben Stiller wa\nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 0.998\n\nText: Okay! Here's a poem with that title:\n\nI used to dream of finding my ideal\nA love that would be eternal and real\nBut now I'm stuck in this dating game\nAlways ending up with a fool or a shame\n\nMy friend\nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 0.999\n\nText: The mother lay in a stupor filled\nWith alcohol and drugs,\nThe twins lay wet in the carry-cot\nAnd screamed at the top of their lungs,\nThe boyfriend of the moment sat\nAt a bar in a nearby town,\nDrinking\nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.101, class 1: 0.000\n\nText: I started an online friendship 7 months ago and we have spoken on the phone almost everyday since. About 3 months into knowing him, he actually told me that he was having feelings for me, which was a \nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.113, class 1: 0.000\n\nText: Tory leader Michael Howard says his party can save £35bn in government spending by tackling waste.\nThe money would be ploughed back into frontline services like the NHS and schools with the rest used \nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.098, class 1: 0.000\n\nText: Kobe Beef Steak Recipe\n\nIngredients:\n- 1 1/2 lbs Kobe beef steak\n- Salt and freshly ground black pepper\n- 2 tablespoons vegetable oil\n- 2 cloves garlic, minced\n- 1 small onion, thinly sliced\n- 1 cup s\nTrue: 1 (AI) Pred: 0 (Human) ❌\nProbs: class 0: 0.002, class 1: 0.000\n\nText: I really want to draw attention to the title of the review above. I'm sure many die-hard potterheads would want every review on this site to score this movie 10/10 and say it is a masterpiece. 
Well I \nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.117, class 1: 0.000\n\nText: Marie Ussing Nylen is a Danish-American biologist, dentist, microscopist, and badminton player known for her research on the morphology of tooth enamel and her contributions to refining the electron m\nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.100, class 1: 0.000\n\nText: Moisés Sánchez Parra (born September 21, 1980 in Palma de Mallorca) is a retired amateur Spanish Greco-Roman wrestler, who competed in the men's welterweight category. He represented his nation Spain \nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.135, class 1: 0.000\n\nText: 3/4 cup 'Super fine' brown rice flour\n6 tablespoons tapioca flour\n1/2 teaspoon salt\n1/2 teaspoon baking soda\n1 1/2 teaspoons baking powder\n1/2 teaspoon xanthan gum\n2 eggs\n2/3 cup sugar\n1/3 cup 'mayo' \nTrue: 0 (Human) Pred: 1 (AI) ❌\nProbs: class 0: 0.004, class 1: 0.053\n\nText: 5 large egg yolks\n1 tablespoon Asian (toasted) sesame oil\n3 tablespoons sesame seeds\n3/4 cup sugar\n1 cup whipping cream\n1 3/4 cups milk\nIn a large metal bowl, combine egg yolks and sesame oil; whisk j\nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.110, class 1: 0.000\n\nText: Józef Jerzy Kukuczka (24 March 1948 in Katowice, Poland – 24 October 1989 Lhotse, Nepal) was a Polish alpine and high-altitude climber. Born in Katowice, his family origin is Silesian Goral. On 18 Sep\nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.109, class 1: 0.000\n\nText: \"The Remains of the Day\" is a novel by Nobel Prize-winning British author Kazuo Ishiguro that was published in 1989. It is a psychological drama about Stevens, a butler at a large English country hou\nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 0.002\n\nText: People's War refers to a military strategy that emphasizes the use of unconventional tactics and mobilization of large numbers of people, particularly civilians, in order to achieve strategic goals. T\nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 1.000\n\nText: Pinched Nerveback Problem - Personal - A Warning\nYou know how it is, we do something everyday without thinking of consequences, like not sitting up straighter, or not exercising enough, and that, over\nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 0.998\n\nText: Many diffusion MRI researchers, including the Human Connectome Project (HCP),\nacquire data using multishell (e.g., WU-Minn consortium) and diffusion spectrum\nimaging (DSI) schemes (e.g., USC-Harvard\nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.103, class 1: 0.000\n\nText: The story is about Mitch Albom, an accomplished journalist who reconnects with his former college professor, Morrie Schwartz, after having not seen him in many years. As they spend Tuesdays together, \nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 0.999\n\nText: Sure. Here is the text:\n\n\"Hello, I am William Pearson, editor of the Globe Newspaper. This is Philip Wynn, assistant editor. Mr. Wells, Mr. Bramwell, and Mr. Jenkins—these are our reporters. 
Tonight \nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 0.006\n\nText: In quiet moments, when I close my eyes,\nTrue: 1 (AI) Pred: 0 (Human) ❌\nProbs: class 0: 0.000, class 1: 0.000\n\nText: Caffrey, Egge, Michel, Rubin and Ver Steegh recently introduced snow leopard\npermutations, which are the anti-Baxter permutations that are compatible with\nthe doubly alternating Baxter permutations.\nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.104, class 1: 0.000\n\nText: Granite is born on a snowy April day in Alaska. For the first weeks of his life, he lives in the kennels, playing with his siblings Digger, Cricket, and Nugget. Even though their mother Seppala gets \nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.149, class 1: 0.000\n\nText: US presidential hopeful Ted Kennedy has been campaigning for the Democratic Party in Maryland.\n\nThe 76-year-old senator spoke alongside Maryland Governor Martin O'Malley and Congressman Chris Van Holl\nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 1.000\n\nText: In a groundbreaking move set to revolutionise the television industry, viewers will soon have the power to influence the content they watch on TV. This innovative approach is expected to transform the\nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 0.000\n\nText: The anisotropic nature of membrane curvature is an important factor in cellular processes such as vesicle formation, endocytosis and cytokinesis. Amphipathic peptides are known to sense and respond to\nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 0.008\n\nText: If I could see nothing but the smoke\nFrom the tip of his cigar, I would know everything\nAbout the years before the war.\nIf his face were halved by shadow I would know\nThis was a street where an EATS s\nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.107, class 1: 0.000\n\nText: Upnor Castle is an Elizabethan artillery fort located on the west bank of the River Medway in Kent. It is in the village of Upnor, opposite and a short distance downriver from the Chatham Dockyard, at\nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.134, class 1: 0.000\n\nText: > We study the relation between large dimension operators and oscillator algebra of Young diagrams. We show that the large dimension operators are the generators of the oscillator algebra of Young dia\nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 1.000\n\nText: Introduction\n\nAnalysis of class imbalances are important when training neural networks with deep architectures because they can lead to overfitting (the tendency to learn too much about particular cla\nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 1.000\n\nText: Valeria Francisca Eugenia Leopoldina de María de Guadalupe Souza Saldívar is a Mexican scientist who specializes in evolutionary and microbial ecology. 
She is a senior researcher in the Department of \nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.114, class 1: 0.000\n\nText: \"Prep Time:-15 min\"Cook Time:-30 min (?)Ready In:- 45 min Servings: -24 Inch.\"What You Need\"-2 Cup Butter, Softened\"-1/2 Cup Orange Zest, Grated\"-Granulated Sugar\", 1 teaspoon Salt\",-All Purpose Flour\nTrue: 1 (AI) Pred: 0 (Human) ❌\nProbs: class 0: 0.000, class 1: 0.000\n\nText: 5.4 ounces gluten-free baking and pancake mix (about 1 1/4 cups; such as Pamela's)\n1/2 cup packed light brown sugar\n6 tablespoons fat-free milk\n3 tablespoons butter, melted\n1 tablespoon vanilla extrac\nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.152, class 1: 0.000\n\nText: Voters' \"pent up passion\" could confound predictions of a low turnout in the coming general election, Charles Kennedy has said.\nThe Liberal Democrat leader predicted concerns over Iraq and other inter\nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.117, class 1: 0.000\n\nText: Saw a chiropractor for \"sore neck from heavy lifting\" with no injury or trauma. I specifically said I wanted only a single adjustment. Without even examining me, the Chiro led me to the x-ray room. \nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.161, class 1: 0.000\n\nText: The Nakajima Ki-87 (\"Jiro\") was a fighter aircraft used by the Imperial Japanese Army Air Service during World War II. It was designed as an answer to increasing Allied airpower in Asia, particularly \nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 0.003\n\nText: Carl Frederick Wittke, (13 November 1892 – 24 May 1971) was an American historian and academic administrator. He was a specialist on ethnic history in America, especially regarding the German American\nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.000, class 1: 0.000\n\nText: \n\nRomantic racism is a form of racism characterized by beliefs in the natural superiority of one race over another and the inherent abilities of the former to create great civilizations while the latt\nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 0.001\n\nText: > Former California governor and actor Arnold Schwarzenegger has been congratulated by state governor Jerry Brown after he won an Oscar for best supporting actor in Terminator Genisys.He accepted his \nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 0.998\n\nText: \"It's probably the best chocolate chip cookie recipe in the world! You need to follow this recipe exactly! There is one extra step which is probably the most important of all. After you have made the \nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.002, class 1: 0.007\n\nText: I'm not sure if this has happened to you, but I've had my account banned for no reason at all. It's happened twice now and it's really frustrating. The first time was when I was in college (and still \nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 0.004\n\nText: I waited for this as a FF and Dragon quest and chrono trigger. I have always been a fan of square Enix since chrono trigger. Mist walker created chrono trigger. It was my first FF type game. I fell in\nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.163, class 1: 0.000\n\nText: Kathleen \"Kay\" Walsh (15 November 1911 – 16 April 2005) was an English actress, dancer, and screenwriter. 
Her film career prospered after she met her future husband film director David Lean, with whom\nTrue: 0 (Human) Pred: 1 (AI) ❌\nProbs: class 0: 0.000, class 1: 0.000\n\nText: My Q spouse has been on the Trump Train since he came down the escalator, and has taken on every single aspect of the beliefs of Trumpism and Q that go with it. She’s a reader on the internet, not a p\nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.111, class 1: 0.000\n\nText: The British Prime Minister, Tony Blair, and the Chancellor, Gordon Brown, have clashed over tax policy.\nMr Blair said he would not be bound by a pledge to cut taxes, made by Mr Brown in the run-up to \nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 1.000\n\nText: 100 g butter, chopped\n1 teaspoon vanilla essence\n14 cup powdered sugar icing\n1 egg\n12 cup almond meal\n34 cup flour\n14 cup cocoa powder\n1 liter strawberry ice cream\nBeat the butter, essence, icing suga\nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.118, class 1: 0.000\n\nText: 1 tablespoon chili powder\n1 teaspoon ground cumin\n1 teaspoon freshly ground black pepper\n1/8 teaspoon ground allspice\n1/2 teaspoon sugar\n1 1/2 teaspoons finely chopped garlic, minced and mashed into a\nTrue: 0 (Human) Pred: 0 (Human) ✅\nProbs: class 0: 0.103, class 1: 0.000\n\nText: Ingredients:\n\n- 2 small golden beets, trimmed and peeled\n- 2 small red beets, trimmed and peeled\n- 2 tablespoons extra-virgin olive oil, divided\n- 1 tablespoon white balsamic or white wine vinegar\n- 1\nTrue: 1 (AI) Pred: 1 (AI) ✅\nProbs: class 0: 0.000, class 1: 0.003\n\n--- Per-class accuracy ---\nClass 0 (Human): 94.10% (941/1000)\nClass 1 (AI): 88.90% (889/1000)\n",
"output_type": "stream"
},
{
"name": "stderr",
"text": "\n",
"output_type": "stream"
}
],
"execution_count": null
},
{
"cell_type": "code",
"source": [
"# stop running all cells\n",
"1/0"
],
"metadata": {
"id": "ldcY8HxK7SIa",
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T08:46:12.475940Z",
"iopub.execute_input": "2025-05-27T08:46:12.476643Z",
"iopub.status.idle": "2025-05-27T08:46:12.493647Z",
"shell.execute_reply.started": "2025-05-27T08:46:12.476624Z",
"shell.execute_reply": "2025-05-27T08:46:12.492867Z"
}
},
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"---\n",
"\n",
"## 7. Model Deployment {#deployment}\n",
"\n",
"Deploy the trained model to Hugging Face Hub for easy access and sharing.\n",
"\n",
"### Upload to Hugging Face Hub\n"
],
"metadata": {
"id": "R3KKSQjQ463R"
}
},
{
"cell_type": "code",
"source": [
"# Step 3: Push to Hugging Face (replace with your username and token)\n",
"print(\"Uploading to Hugging Face...\")\n",
"# Only save LoRA adapter (no base model)\n",
"model.push_to_hub(\"subhashbs36/qwen3-0.6-ai-detector-merged\", tokenizer, save_method = \"merged_16bit\", token=\"\")\n",
"# tokenizer.push_to_hub(\"subhashbs36/qwen3-0.6-ai-detector-lora\", token=\"\")\n",
"\n",
"print(\"Model saved successfully!\")"
],
"metadata": {
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T11:08:57.409836Z",
"iopub.execute_input": "2025-05-27T11:08:57.410487Z",
"iopub.status.idle": "2025-05-27T11:09:00.240765Z",
"shell.execute_reply.started": "2025-05-27T11:08:57.410464Z",
"shell.execute_reply": "2025-05-27T11:09:00.239933Z"
},
"id": "kzz4tZ950PgP",
"outputId": "d7ca945c-2272-4530-af13-48d4e4673d48"
},
"outputs": [
{
"name": "stdout",
"text": "Uploading to Hugging Face...\nSaved model to https://huggingface.co/subhashbs36/qwen3-0.6-ai-detector-merged\nModel saved successfully!\n",
"output_type": "stream"
}
],
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"# Inference"
],
"metadata": {
"id": "u5U9I7df7SIa"
}
},
{
"cell_type": "markdown",
"source": [
"### Load and Test Deployed Model\n",
"\n",
"Load the deployed model from Hugging Face Hub and test its functionality.\n"
],
"metadata": {
"id": "Jpm1ZU4e4-JT"
}
},
{
"cell_type": "code",
"source": [
"from unsloth import FastLanguageModel\n",
"import torch\n",
"\n",
"# Load base model first\n",
"model, tokenizer = FastLanguageModel.from_pretrained(\n",
" model_name=\"subhashbs36/qwen3-0.6-ai-detector-merged\",\n",
" max_seq_length=4096,\n",
" dtype=torch.float16,\n",
" load_in_4bit=False,\n",
")\n",
"\n",
"# Load your LoRA adapter\n",
"# model.load_adapter(\"subhashbs36/qwen3-0.6-ai-detector-lora\")\n",
"\n",
"# Enable inference mode\n",
"FastLanguageModel.for_inference(model)"
],
"metadata": {
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T12:46:13.692670Z",
"iopub.execute_input": "2025-05-27T12:46:13.693374Z",
"iopub.status.idle": "2025-05-27T12:46:21.727915Z",
"shell.execute_reply.started": "2025-05-27T12:46:13.693349Z",
"shell.execute_reply": "2025-05-27T12:46:21.727298Z"
},
"id": "74UBqSCS0PgQ",
"outputId": "0f3f02a6-a6ee-4084-e2ad-fc4ad801716d"
},
"outputs": [
{
"name": "stdout",
"text": "==((====))== Unsloth 2025.5.7: Fast Qwen3 patching. Transformers: 4.51.3.\n \\\\ /| Tesla T4. Num GPUs = 2. Max memory: 14.741 GB. Platform: Linux.\nO^O/ \\_/ \\ Torch: 2.6.0+cu124. CUDA: 7.5. CUDA Toolkit: 12.4. Triton: 3.2.0\n\\ / Bfloat16 = FALSE. FA [Xformers = 0.0.29.post3. FA2 = False]\n \"-____-\" Free license: http://github.com/unslothai/unsloth\nUnsloth: Fast downloading is enabled - ignore downloading bars which are red colored!\n",
"output_type": "stream"
},
{
"execution_count": 99,
"output_type": "execute_result",
"data": {
"text/plain": "PeftModelForCausalLM(\n (base_model): LoraModel(\n (model): Qwen3ForCausalLM(\n (model): Qwen3Model(\n (embed_tokens): Embedding(151936, 1024, padding_idx=151654)\n (layers): ModuleList(\n (0-27): 28 x Qwen3DecoderLayer(\n (self_attn): Qwen3Attention(\n (q_proj): lora.Linear(\n (base_layer): Linear(in_features=1024, out_features=2048, bias=False)\n (lora_dropout): ModuleDict(\n (default): Identity()\n )\n (lora_A): ModuleDict(\n (default): Linear(in_features=1024, out_features=16, bias=False)\n )\n (lora_B): ModuleDict(\n (default): Linear(in_features=16, out_features=2048, bias=False)\n )\n (lora_embedding_A): ParameterDict()\n (lora_embedding_B): ParameterDict()\n (lora_magnitude_vector): ModuleDict()\n )\n (k_proj): lora.Linear(\n (base_layer): Linear(in_features=1024, out_features=1024, bias=False)\n (lora_dropout): ModuleDict(\n (default): Identity()\n )\n (lora_A): ModuleDict(\n (default): Linear(in_features=1024, out_features=16, bias=False)\n )\n (lora_B): ModuleDict(\n (default): Linear(in_features=16, out_features=1024, bias=False)\n )\n (lora_embedding_A): ParameterDict()\n (lora_embedding_B): ParameterDict()\n (lora_magnitude_vector): ModuleDict()\n )\n (v_proj): lora.Linear(\n (base_layer): Linear(in_features=1024, out_features=1024, bias=False)\n (lora_dropout): ModuleDict(\n (default): Identity()\n )\n (lora_A): ModuleDict(\n (default): Linear(in_features=1024, out_features=16, bias=False)\n )\n (lora_B): ModuleDict(\n (default): Linear(in_features=16, out_features=1024, bias=False)\n )\n (lora_embedding_A): ParameterDict()\n (lora_embedding_B): ParameterDict()\n (lora_magnitude_vector): ModuleDict()\n )\n (o_proj): lora.Linear(\n (base_layer): Linear(in_features=2048, out_features=1024, bias=False)\n (lora_dropout): ModuleDict(\n (default): Identity()\n )\n (lora_A): ModuleDict(\n (default): Linear(in_features=2048, out_features=16, bias=False)\n )\n (lora_B): ModuleDict(\n (default): Linear(in_features=16, out_features=1024, bias=False)\n )\n (lora_embedding_A): ParameterDict()\n (lora_embedding_B): ParameterDict()\n (lora_magnitude_vector): ModuleDict()\n )\n (q_norm): Qwen3RMSNorm((128,), eps=1e-06)\n (k_norm): Qwen3RMSNorm((128,), eps=1e-06)\n (rotary_emb): LlamaRotaryEmbedding()\n )\n (mlp): Qwen3MLP(\n (gate_proj): lora.Linear(\n (base_layer): Linear(in_features=1024, out_features=3072, bias=False)\n (lora_dropout): ModuleDict(\n (default): Identity()\n )\n (lora_A): ModuleDict(\n (default): Linear(in_features=1024, out_features=16, bias=False)\n )\n (lora_B): ModuleDict(\n (default): Linear(in_features=16, out_features=3072, bias=False)\n )\n (lora_embedding_A): ParameterDict()\n (lora_embedding_B): ParameterDict()\n (lora_magnitude_vector): ModuleDict()\n )\n (up_proj): lora.Linear(\n (base_layer): Linear(in_features=1024, out_features=3072, bias=False)\n (lora_dropout): ModuleDict(\n (default): Identity()\n )\n (lora_A): ModuleDict(\n (default): Linear(in_features=1024, out_features=16, bias=False)\n )\n (lora_B): ModuleDict(\n (default): Linear(in_features=16, out_features=3072, bias=False)\n )\n (lora_embedding_A): ParameterDict()\n (lora_embedding_B): ParameterDict()\n (lora_magnitude_vector): ModuleDict()\n )\n (down_proj): lora.Linear(\n (base_layer): Linear(in_features=3072, out_features=1024, bias=False)\n (lora_dropout): ModuleDict(\n (default): Identity()\n )\n (lora_A): ModuleDict(\n (default): Linear(in_features=3072, out_features=16, bias=False)\n )\n (lora_B): ModuleDict(\n (default): Linear(in_features=16, out_features=1024, 
bias=False)\n )\n (lora_embedding_A): ParameterDict()\n (lora_embedding_B): ParameterDict()\n (lora_magnitude_vector): ModuleDict()\n )\n (act_fn): SiLU()\n )\n (input_layernorm): Qwen3RMSNorm((1024,), eps=1e-06)\n (post_attention_layernorm): Qwen3RMSNorm((1024,), eps=1e-06)\n )\n )\n (norm): Qwen3RMSNorm((1024,), eps=1e-06)\n (rotary_emb): LlamaRotaryEmbedding()\n )\n (lm_head): Linear(in_features=1024, out_features=151936, bias=False)\n )\n )\n)"
},
"metadata": {}
}
],
"execution_count": null
},
{
"cell_type": "code",
"source": [
"import os\n",
"import torch\n",
"import torch.nn.functional as F\n",
"\n",
"# Enable CUDA debugging for accurate stack trace\n",
"# os.environ['CUDA_LAUNCH_BLOCKING'] = '1'\n",
"\n",
"def classify_text_fixed(text_sample):\n",
" prompt = f\"\"\"Here is a text sample:\n",
"{text_sample}\n",
"\n",
"Classify this text into one of the following:\n",
"class 0: Human\n",
"class 1: AI\n",
"\n",
"SOLUTION\n",
"The correct answer is: class \"\"\"\n",
"\n",
" inputs = tokenizer(prompt, return_tensors=\"pt\")\n",
" device = next(model.parameters()).device\n",
" inputs = {k: v.to(device) for k, v in inputs.items()}\n",
"\n",
" with torch.no_grad():\n",
" outputs = model(**inputs)\n",
"\n",
" # Fix: Get the last token index as a scalar, not tensor\n",
" last_token_idx = (inputs['attention_mask'].sum(1) - 1).item()\n",
" last_logits = outputs.logits[0, last_token_idx, :]\n",
"\n",
" # Debug information\n",
" print(f\"Logits shape: {last_logits.shape}\")\n",
" print(f\"Number token ids: {number_token_ids}\")\n",
" print(f\"Vocab size: {last_logits.shape[0]}\")\n",
"\n",
" # Check if any index is out of bounds\n",
" vocab_size = last_logits.shape[0]\n",
" for i, idx in enumerate(number_token_ids):\n",
" if idx >= vocab_size:\n",
" print(f\"ERROR: Index {idx} (class {i}) is out of bounds for vocab size {vocab_size}\")\n",
" return None, None\n",
"\n",
" probs_all = F.softmax(last_logits, dim=-1)\n",
" probs = probs_all[number_token_ids]\n",
" predicted_class = torch.argmax(probs).item()\n",
" confidence = probs[predicted_class].item()\n",
"\n",
" return predicted_class, confidence"
],
"metadata": {
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T12:52:15.555013Z",
"iopub.execute_input": "2025-05-27T12:52:15.555629Z",
"iopub.status.idle": "2025-05-27T12:52:15.562313Z",
"shell.execute_reply.started": "2025-05-27T12:52:15.555607Z",
"shell.execute_reply": "2025-05-27T12:52:15.561564Z"
},
"id": "Dt5KMR5W0PgQ"
},
"outputs": [],
"execution_count": null
},
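{
"cell_type": "markdown",
"source": [
"### Batched Inference (Sketch)\n",
"\n",
"`classify_text_fixed` runs one forward pass per sample. The cell below is a minimal, hedged sketch of a batched variant. It assumes `model`, `tokenizer`, and `number_token_ids` are in scope as defined in this notebook and that the tokenizer has a pad token; it reads the class-token probabilities at each sequence's last non-padding position.\n"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"import torch\n",
"import torch.nn.functional as F\n",
"\n",
"# Sketch: batched variant of classify_text_fixed (not part of the original run).\n",
"# Assumes the globals model, tokenizer, and number_token_ids from this notebook.\n",
"def classify_batch(text_samples, batch_size=8):\n",
"    device = next(model.parameters()).device\n",
"    results = []\n",
"    for start in range(0, len(text_samples), batch_size):\n",
"        batch = text_samples[start:start + batch_size]\n",
"        prompts = [\n",
"            f\"Here is a text sample:\\n{t}\\n\\n\"\n",
"            \"Classify this text into one of the following:\\n\"\n",
"            \"class 0: Human\\nclass 1: AI\\n\\n\"\n",
"            \"SOLUTION\\nThe correct answer is: class \"\n",
"            for t in batch\n",
"        ]\n",
"        inputs = tokenizer(prompts, return_tensors=\"pt\", padding=True, truncation=True)\n",
"        inputs = {k: v.to(device) for k, v in inputs.items()}\n",
"        with torch.no_grad():\n",
"            logits = model(**inputs).logits\n",
"        # Locate each sequence's last real token; the index depends on padding side\n",
"        if tokenizer.padding_side == \"right\":\n",
"            last_idx = inputs[\"attention_mask\"].sum(1) - 1\n",
"        else:  # left padding: the last real token sits at the final position\n",
"            last_idx = torch.full((logits.shape[0],), logits.shape[1] - 1, device=device)\n",
"        rows = torch.arange(logits.shape[0], device=device)\n",
"        last_logits = logits[rows, last_idx, :]\n",
"        # Softmax over the full vocab, then keep only the class-digit tokens\n",
"        probs = F.softmax(last_logits, dim=-1)[:, number_token_ids]\n",
"        preds = probs.argmax(dim=-1)\n",
"        for p, row_probs in zip(preds.tolist(), probs):\n",
"            results.append((p, row_probs[p].item()))\n",
"    return results"
],
"metadata": {},
"outputs": [],
"execution_count": null
},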
{
"cell_type": "markdown",
"source": [
"### Final Testing\n",
"\n",
"Test the deployed model with diverse examples to verify its performance across different text types.\n"
],
"metadata": {
"id": "2ii6Fs5Z5IFz"
}
},
{
"cell_type": "code",
"source": [
"\n",
"NUM_CLASSES = 2\n",
"number_token_ids = []\n",
"for i in range(NUM_CLASSES):\n",
" number_token_ids.append(tokenizer.encode(str(i), add_special_tokens=False)[0])\n",
"\n",
"\n",
"test_texts = [\n",
" # AI-generated content examples\n",
" \"In quiet moments, when I close my eyes,\",\n",
" \"They're both sitting next to each other, looking out the window at the rain. One starts to sniff the other's butt, and the other one just kind of tolerates it for a few seconds before finally pushing them away. It's like they're just trying to pass the time until the storm breaks.\",\n",
"\n",
" # Human-written examples\n",
" \"This was the biggest surprise of the year - bar none.
A great comedy - as in laughs, feel good, and just plain enjoyable.
The plot of the loser who makes good at rock'n'roll second time around is very Jack Black and Rainn Wilson does a GREAT job - there is no recourse to gross or sarcastic humor here - rather it plays on its rock roots, chucks in some stupidity, and some kick-ass tunes, and lots of excellent one liners and like I say totally surprised us as to how genuinely funny and warm-hearted this is.
Great cast and a great script - sure it's not perfect, but after all the pseudo-comedy and angst of 2008 it was refreshing just to sit back and enjoy an entertaining movie.
we loved it and are normally really cynical about comedies - but this rocks and would recommend to anyone as a real effort by all involved to stop being smarter than the audience and just enjoy life - in two and a half words? - it rocks!\",\n",
" \"Château Vaudreuil was a stately residence and college in Montreal, Quebec, Canada. It was constructed between 1723 and 1726 for Philippe de Rigaud, Marquis de Vaudreuil, as his private residence by Gaspard-Joseph Chaussegros de Léry. Though the Château Saint-Louis in Quebec City remained the official residence of the Governors General of New France, the Château Vaudreuil was to remain as their official home in Montreal up until the British Conquest in 1763. In 1767, it was purchased by the Marquis de Lotbinière. He sold it in 1773, when it became the Collège Saint-Raphaël. It was destroyed by a fire in 1803. Completed in 1726, it was built in the classical style of the French Hôtel Particulier by King Louis XV's chief engineer in New France, Gaspard-Joseph Chaussegros de Léry. The central building was flanked by two wings with two sets of semi-circular stairs leading up to a terrace and the main entrance. It stood beyond the end of Rue Saint-Paul, which was kept clear of buildings on that side to afford it a clear view, while formal gardens led up to Notre-Dame Street.\",\n",
" ]\n",
"\n",
"# Test the fixed function\n",
"for text in test_texts:\n",
" # pred, conf = classify_text_fixed(val_df.iloc[4][\"text\"])\n",
" pred, conf = classify_text_fixed(text)\n",
" label = 'Human' if pred == 0 else 'AI'\n",
" print(f\"Text: {text[:50]}\")\n",
" print(f\"Prediction: {label} (confidence: {confidence:.3f})\\n\")"
],
"metadata": {
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T12:55:32.590168Z",
"iopub.execute_input": "2025-05-27T12:55:32.590445Z",
"iopub.status.idle": "2025-05-27T12:55:33.043147Z",
"shell.execute_reply.started": "2025-05-27T12:55:32.590423Z",
"shell.execute_reply": "2025-05-27T12:55:33.042422Z"
},
"id": "h3Ew4agi0PgR",
"outputId": "dce963b4-2e4a-4f59-9ec5-dac64a28f08f"
},
"outputs": [
{
"name": "stdout",
"text": "Logits shape: torch.Size([151936])\nNumber token ids: [15, 16]\nVocab size: 151936\nText: In quiet moments, when I close my eyes,\nPrediction: AI (confidence: 0.992)\n\nLogits shape: torch.Size([151936])\nNumber token ids: [15, 16]\nVocab size: 151936\nText: They're both sitting next to each other, looking o\nPrediction: AI (confidence: 0.992)\n\nLogits shape: torch.Size([151936])\nNumber token ids: [15, 16]\nVocab size: 151936\nText: This was the biggest surprise of the year - bar no\nPrediction: Human (confidence: 0.992)\n\nLogits shape: torch.Size([151936])\nNumber token ids: [15, 16]\nVocab size: 151936\nText: Château Vaudreuil was a stately residence and coll\nPrediction: Human (confidence: 0.992)\n\n",
"output_type": "stream"
}
],
"execution_count": null
},
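{
"cell_type": "markdown",
"source": [
"### Quick Validation Accuracy (Sketch)\n",
"\n",
"A hedged sketch of an accuracy check on validation data. It assumes the validation DataFrame `val_df` used earlier in the notebook is in scope with a `text` column; the `label` column name (0 = Human, 1 = AI) is an assumption, so adjust it to your schema.\n"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Sketch: quick accuracy check (not part of the original run).\n",
"# Assumes val_df has a \"text\" column; the \"label\" column name is an assumption.\n",
"sample = val_df.head(100)\n",
"correct = 0\n",
"for text, label in zip(sample[\"text\"], sample[\"label\"]):\n",
"    pred, conf = classify_text_fixed(text)\n",
"    correct += int(pred == label)\n",
"print(f\"Accuracy on {len(sample)} samples: {correct / len(sample):.3f}\")"
],
"metadata": {},
"outputs": [],
"execution_count": null
},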
{
"cell_type": "markdown",
"source": [
"---\n",
"\n",
"## Conclusion\n",
"\n",
"This notebook demonstrates a complete pipeline for fine-tuning a language model for AI content detection using:\n",
"\n",
"1. **Custom Architecture**: Modified classification head with token mapping\n",
"2. **Parameter-Efficient Training**: LoRA for reduced computational requirements\n",
"3. **Balanced Dataset**: Carefully curated RAID dataset samples\n",
"4. **Comprehensive Evaluation**: Batch inference with detailed metrics\n",
"5. **Model Deployment**: Easy sharing via Hugging Face Hub\n",
"\n",
"### Key Results:\n",
"- Achieved high accuracy on validation set\n",
"- Efficient training with minimal GPU memory usage\n",
"- Robust performance across different text types\n",
"- Easy deployment and inference capabilities\n",
"\n",
"### Next Steps:\n",
"- Test on additional domains and text types\n",
"- Experiment with different model sizes\n",
"- Implement adversarial robustness testing\n",
"- Deploy for production use cases\n"
],
"metadata": {
"id": "ElwtVlrl5O7n"
}
},
{
"cell_type": "markdown",
"source": [
"### Saving to float16 for VLLM\n",
"\n",
"We also support saving to `float16` directly. Select `merged_16bit` for float16 or `merged_4bit` for int4. We also allow `lora` adapters as a fallback. Use `push_to_hub_merged` to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens."
],
"metadata": {
"id": "f422JgM9sdVT"
}
},
{
"cell_type": "code",
"source": [
"# # Merge to 16bit\n",
"# if False: model.save_pretrained_merged(\"hf/model\", tokenizer, save_method = \"merged_16bit\",)\n",
"# if False: model.push_to_hub_merged(\"hf/model\", tokenizer, save_method = \"merged_16bit\", token = \"\")\n",
"\n",
"# # Merge to 4bit\n",
"# if False: model.save_pretrained_merged(\"hf/model\", tokenizer, save_method = \"merged_4bit\",)\n",
"# if False: model.push_to_hub_merged(\"hf/model\", tokenizer, save_method = \"merged_4bit\", token = \"\")\n",
"\n",
"# # Just LoRA adapters\n",
"# if False: model.save_pretrained_merged(\"model\", tokenizer, save_method = \"lora\",)\n",
"# if False: model.push_to_hub_merged(\"hf/model\", tokenizer, save_method = \"lora\", token = \"\")"
],
"metadata": {
"id": "iHjt_SMYsd3P",
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T08:46:12.508908Z",
"iopub.execute_input": "2025-05-27T08:46:12.509135Z",
"iopub.status.idle": "2025-05-27T08:46:12.527210Z",
"shell.execute_reply.started": "2025-05-27T08:46:12.509119Z",
"shell.execute_reply": "2025-05-27T08:46:12.526670Z"
}
},
"outputs": [],
"execution_count": null
},
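{
"cell_type": "markdown",
"source": [
"### Serving the Merged Model with vLLM (Sketch)\n",
"\n",
"Once a `merged_16bit` checkpoint has been saved or pushed, it can be loaded with vLLM. The cell below is a hedged sketch and is not executed here; `hf/model` is the placeholder repo id from the cell above, so substitute your own repo.\n"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Sketch: loading the merged float16 checkpoint with vLLM (not executed here).\n",
"# \"hf/model\" is the placeholder repo id from the cell above; replace it.\n",
"from vllm import LLM, SamplingParams\n",
"\n",
"llm = LLM(model=\"hf/model\", dtype=\"float16\")\n",
"params = SamplingParams(temperature=0.0, max_tokens=1)  # only the class digit is needed\n",
"\n",
"prompt = (\n",
"    \"Here is a text sample:\\n\"\n",
"    \"The sun rose over the quiet harbor.\\n\\n\"\n",
"    \"Classify this text into one of the following:\\n\"\n",
"    \"class 0: Human\\nclass 1: AI\\n\\n\"\n",
"    \"SOLUTION\\nThe correct answer is: class \"\n",
")\n",
"outputs = llm.generate([prompt], params)\n",
"print(outputs[0].outputs[0].text)  # expected: \"0\" or \"1\""
],
"metadata": {},
"outputs": [],
"execution_count": null
},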
{
"cell_type": "markdown",
"source": [
"### GGUF / llama.cpp Conversion\n",
"To save to `GGUF` / `llama.cpp`, we support it natively now! We clone `llama.cpp` and we default save it to `q8_0`. We allow all methods like `q4_k_m`. Use `save_pretrained_gguf` for local saving and `push_to_hub_gguf` for uploading to HF."
],
"metadata": {
"id": "TCv4vXHd61i7"
}
},
{
"cell_type": "code",
"source": [
"# # Save to 8bit Q8_0\n",
"# if False: model.save_pretrained_gguf(\"model\", tokenizer,)\n",
"# if False: model.push_to_hub_gguf(\"hf/model\", tokenizer, token = \"\")\n",
"\n",
"# # Save to 16bit GGUF\n",
"# if False: model.save_pretrained_gguf(\"model\", tokenizer, quantization_method = \"f16\")\n",
"# if False: model.push_to_hub_gguf(\"hf/model\", tokenizer, quantization_method = \"f16\", token = \"\")\n",
"\n",
"# # Save to q4_k_m GGUF\n",
"# if False: model.save_pretrained_gguf(\"model\", tokenizer, quantization_method = \"q4_k_m\")\n",
"# if False: model.push_to_hub_gguf(\"hf/model\", tokenizer, quantization_method = \"q4_k_m\", token = \"\")"
],
"metadata": {
"id": "FqfebeAdT073",
"trusted": true,
"execution": {
"iopub.status.busy": "2025-05-27T08:46:12.527843Z",
"iopub.execute_input": "2025-05-27T08:46:12.528025Z",
"iopub.status.idle": "2025-05-27T08:46:12.542714Z",
"shell.execute_reply.started": "2025-05-27T08:46:12.528011Z",
"shell.execute_reply": "2025-05-27T08:46:12.541967Z"
}
},
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"Now, use the `model-unsloth.gguf` file or `model-unsloth-Q4_K_M.gguf` file in `llama.cpp` or a UI based system like `GPT4All`. You can install GPT4All by going [here](https://gpt4all.io/index.html)."
],
"metadata": {
"id": "bDp0zNpwe6U_"
}
},
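{
"cell_type": "markdown",
"source": [
"### Loading the GGUF Locally (Sketch)\n",
"\n",
"Beyond GPT4All, the exported file can also be queried from Python via `llama-cpp-python` (`pip install llama-cpp-python`). The cell below is a hedged sketch and is not executed here; the filename mirrors the naming mentioned above and may differ on your system.\n"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Sketch: running the exported GGUF with llama-cpp-python (not executed here).\n",
"# The model path follows the naming above; adjust it to the file you produced.\n",
"from llama_cpp import Llama\n",
"\n",
"llm = Llama(model_path=\"model-unsloth-Q4_K_M.gguf\", n_ctx=2048)\n",
"out = llm(\n",
"    \"Here is a text sample:\\n\"\n",
"    \"The sun rose over the quiet harbor.\\n\\n\"\n",
"    \"Classify this text into one of the following:\\n\"\n",
"    \"class 0: Human\\nclass 1: AI\\n\\n\"\n",
"    \"SOLUTION\\nThe correct answer is: class \",\n",
"    max_tokens=1,\n",
")\n",
"print(out[\"choices\"][0][\"text\"])  # expected: \"0\" or \"1\""
],
"metadata": {},
"outputs": [],
"execution_count": null
},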
{
"cell_type": "markdown",
"source": [
"And we're done! If you have any questions on Unsloth, we have a [Discord](https://discord.gg/u54VK8m8tk) channel! If you find any bugs or want to keep updated with the latest LLM stuff, or need help, join projects etc, feel free to join our Discord!\n",
"\n",
"Some other links:\n",
"1. Zephyr DPO 2x faster [free Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing)\n",
"2. Llama 7b 2x faster [free Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing)\n",
"3. TinyLlama 4x faster full Alpaca 52K in 1 hour [free Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing)\n",
"4. CodeLlama 34b 2x faster [A100 on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing)\n",
"5. Llama 7b [free Kaggle](https://www.kaggle.com/danielhanchen/unsloth-alpaca-t4-ddp)\n",
"6. We also did a [blog](https://huggingface.co/blog/unsloth-trl) with 🤗 HuggingFace, and we're in the TRL [docs](https://huggingface.co/docs/trl/main/en/sft_trainer#accelerate-fine-tuning-2x-using-unsloth)!\n",
"\n",
"