---
license: apache-2.0
tags:
  - semantic-ids
  - recommendation-system
  - video-games
  - product-embeddings
size_categories:
  - 10K<n<100K
---

Video Games Semantic IDs with Product Titles

This dataset contains semantic ID mappings with product titles for Video Games products from Amazon.

See writeup and demo here: https://eugeneyan.com/writing/semantic-ids/

Dataset Description

A pre-joined dataset that maps semantic IDs to product titles, making it easy to interpret and work with hierarchical product embeddings. Semantic IDs are learned representations that encode product relationships: similar products share similar ID prefixes.

Semantic ID Structure

Each semantic ID follows this format: <|sid_start|><|sid_X|><|sid_Y|><|sid_Z|><|sid_W|><|sid_end|>

Where:

  • <|sid_start|> and <|sid_end|> are boundary tokens
  • <|sid_X|>, <|sid_Y|>, <|sid_Z|>, <|sid_W|> represent the 4-level hierarchy
  • Each level can have values from 0-1023
  • Similar products share prefixes (e.g., products with the same first two levels are more similar)
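
As a concrete illustration, here is a minimal sketch (a hypothetical helper, not part of the dataset) that ignores the boundary tokens and converts the four level tokens into their integer codes:

import re

def codes_from_semantic_id(semantic_id: str) -> list:
    """Return the integer code (0-1023) at each hierarchy level."""
    # Only the numbered level tokens match; the boundary tokens are ignored
    return [int(code) for code in re.findall(r"<\|sid_(\d+)\|>", semantic_id)]

print(codes_from_semantic_id("<|sid_start|><|sid_8|><|sid_454|><|sid_630|><|sid_768|><|sid_end|>"))
# [8, 454, 630, 768]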

Dataset Statistics

  • Category: Video Games
  • Number of products: 66,097
  • Hierarchy levels: 4
  • Tokens per level: 1024
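
These figures can be verified directly from the data. A minimal sanity-check sketch, assuming mapping_df has already been loaded as shown under Usage below:

# Row count should match the number of products
print(f"Products: {len(mapping_df):,}")  # expected: 66,097

# Every semantic ID should have 4 level tokens, each with a code in 0-1023
codes = mapping_df["semantic_id"].str.findall(r"<\|sid_(\d+)\|>")
assert codes.map(len).eq(4).all()
assert codes.explode().astype(int).between(0, 1023).all()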

Columns

  • parent_asin: Amazon Standard Identification Number (primary key)
  • semantic_id: The hierarchical semantic identifier for the product
  • title: Product title/name for human-readable interpretation
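
Because parent_asin is the primary key, a single product can be looked up directly. A minimal sketch, assuming mapping_df is loaded as in the Usage section (the ASIN below is a placeholder, not a real product):

# Look up one product by its parent_asin (placeholder ASIN for illustration)
row = mapping_df[mapping_df["parent_asin"] == "B0EXAMPLE00"]
if not row.empty:
    print(row.iloc[0]["title"], row.iloc[0]["semantic_id"])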

Usage

Basic Loading

from datasets import load_dataset
import pandas as pd

# Load the dataset
dataset = load_dataset("eugeneyan/video-games-semantic-ids-mapping")
mapping_df = dataset['train'].to_pandas()

print(f"Loaded {len(mapping_df)} video games products with semantic IDs and
titles")
print(mapping_df.head())

Parsing Semantic IDs

import re
from typing import List

def parse_semantic_id(semantic_id: str) -> List[str]:
    """
    Parse a semantic ID string into its component levels.

    Example input: '<|sid_start|><|sid_127|><|sid_45|><|sid_89|><|sid_12|><|sid_end|>'
    Returns: ['<|sid_127|>', '<|sid_45|>', '<|sid_89|>', '<|sid_12|>']
    """
    # Remove start and end tokens
    sid = semantic_id.replace("<|sid_start|>", "").replace("<|sid_end|>", "")

    # Extract all sid tokens
    pattern = r"<\|sid_\d+\|>"
    levels = re.findall(pattern, sid)

    return levels
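
A quick usage example with one of the semantic IDs used later in this card:

levels = parse_semantic_id("<|sid_start|><|sid_8|><|sid_454|><|sid_630|><|sid_768|><|sid_end|>")
print(levels)  # ['<|sid_8|>', '<|sid_454|>', '<|sid_630|>', '<|sid_768|>']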

Finding Products with Smart Matching

def map_semantic_id_to_titles(semantic_id_str: str, mapping_df: pd.DataFrame) -> dict:
    """
    Map a semantic ID to titles with 4-token exact match and 3-token fallback.

    Returns:
        dict with 'match_level', 'titles', 'count', and 'match_type' keys
    """
    # Parse the input semantic ID
    levels = parse_semantic_id(semantic_id_str)

    if not levels:
        return {"match_level": 0, "titles": [], "count": 0, "match_type": "none"}

    # First try exact match (all 4 tokens)
    exact_matches = mapping_df[mapping_df["semantic_id"] == semantic_id_str]
    if len(exact_matches) > 0:
        titles = exact_matches["title"].tolist()
        return {"match_level": 4, "titles": titles, "count": len(titles), "match_type":
"exact"}

    # Fallback to prefix matching (3 tokens, then 2, then 1)
    for depth in range(min(3, len(levels)), 0, -1):
        # Build the prefix for this depth
        prefix = "<|sid_start|>" + "".join(levels[:depth])

        # Find matches
        matches = mapping_df[mapping_df["semantic_id"].str.startswith(prefix)]

        if len(matches) > 0:
            # Found matches at this level
            titles = matches["title"].tolist()
            return {
                "match_level": depth,
                "titles": titles[:5],  # Limit to 5 for display
                "count": len(titles),
                "match_type": "prefix",
                "prefix_used": prefix
            }

    # No matches found at any level
    return {"match_level": 0, "titles": [], "count": 0, "match_type": "none"}

# Example usage
sid = "<|sid_start|><|sid_8|><|sid_454|><|sid_630|><|sid_768|><|sid_end|>"
result = map_semantic_id_to_titles(sid, mapping_df)
if result["count"] > 0:
    print(f"Found: {result['titles'][0]} (match level: {result['match_level']})")

Extracting and Replacing Multiple Semantic IDs

def extract_semantic_ids_from_text(text: str) -> List[str]:
    """
    Extract all semantic IDs from a text string.

    Returns list of full semantic IDs found in the text.
    """
    # Pattern to match complete semantic IDs
    pattern = r"<\|sid_start\|>(?:<\|sid_\d+\|>)+<\|sid_end\|>"
    semantic_ids = re.findall(pattern, text)
    return semantic_ids

def replace_semantic_ids_with_titles(text: str, mapping_df: pd.DataFrame,
                                    show_match_level: bool = True) -> str:
    """
    Replace all semantic IDs in text with their corresponding titles.

    Args:
        text: Input text containing semantic IDs
        mapping_df: DataFrame with semantic_id to title mapping
        show_match_level: Whether to append match info after the title

    Returns:
        Text with semantic IDs replaced by titles
    """
    # Find all semantic IDs in the text
    semantic_ids = extract_semantic_ids_from_text(text)

    # Create a copy of the text to modify
    result_text = text

    # Replace each semantic ID with its title(s)
    for sid in semantic_ids:
        # Get matching titles
        match_result = map_semantic_id_to_titles(sid, mapping_df)

        if match_result["count"] > 0:
            # Use the first title if multiple matches
            title = match_result["titles"][0]

            # Add match level if requested
            if show_match_level:
                if match_result["match_type"] == "exact":
                    replacement = f'"{title}"'
                else:
                    replacement = f'"{title}" (L{match_result["match_level"]} match)'
            else:
                replacement = f'"{title}"'

            # If multiple matches, indicate this
            if match_result["count"] > 1:
                replacement += f" [+{match_result['count'] - 1} similar]"
        else:
            # No match found
            replacement = "[Unknown Item]"

        # Replace the semantic ID with the title
        result_text = result_text.replace(sid, replacement)

    return result_text

# Example with model output
model_output = "Recommended: <|sid_start|><|sid_8|><|sid_454|><|sid_630|><|sid_768|><|sid_end|>"
readable_output = replace_semantic_ids_with_titles(model_output, mapping_df)
print(readable_output)  # "Recommended: \"Product Title\""

Finding Similar Products

def find_similar_products(semantic_id: str, mapping_df: pd.DataFrame, depth: int = 2) -> pd.DataFrame:
    """
    Find products with similar semantic IDs by matching prefixes.
    Higher depth = more specific similarity.

    Args:
        semantic_id: The reference semantic ID
        mapping_df: DataFrame with semantic_id and title columns
        depth: Number of hierarchy levels to match (1-4)

    Returns:
        DataFrame with similar products
    """
    # Parse the semantic ID
    levels = parse_semantic_id(semantic_id)

    if not levels or depth < 1:
        return pd.DataFrame()

    # Build prefix for matching
    prefix = "<|sid_start|>" + "".join(levels[:min(depth, len(levels))])

    # Find all products with matching prefix
    similar = mapping_df[mapping_df['semantic_id'].str.startswith(prefix)].copy()

    # Record the prefix depth used for matching (higher = more specific similarity)
    similar['similarity_depth'] = depth

    return similar[['parent_asin', 'semantic_id', 'title', 'similarity_depth']]

# Example: Find products similar at different depths
sid = "<|sid_start|><|sid_8|><|sid_454|><|sid_630|><|sid_768|><|sid_end|>"
similar_at_2 = find_similar_products(sid, mapping_df, depth=2)
print(f"Products in same sub-category (depth 2): {len(similar_at_2)}")

Integration with Model

This dataset is designed to work with the semantic ID recommendation model:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load model
model_name = "eugeneyan/semantic-id-qwen3-8b-video-games"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Generate recommendation
prompt = "User: <|sid_start|><|sid_8|><|sid_454|><|sid_630|><|sid_768|><|sid_end|>\n<|rec|>"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=50, temperature=0.3, do_sample=True)

# Decode and convert to readable format
response = tokenizer.decode(outputs[0], skip_special_tokens=False)
readable = replace_semantic_ids_with_titles(response, mapping_df)
print(readable)

Understanding Semantic Similarity

The hierarchical structure encodes similarity at different levels:

  • Level 1 match: Broad category similarity
  • Level 2 match: Sub-category similarity
  • Level 3 match: Fine-grained similarity
  • Level 4 match: Very similar or variant products

Example:

  • Product A: <|sid_start|><|sid_8|><|sid_454|><|sid_630|><|sid_768|><|sid_end|>
  • Product B: <|sid_start|><|sid_8|><|sid_454|><|sid_599|><|sid_412|><|sid_end|>
  • Product C: <|sid_start|><|sid_8|><|sid_112|><|sid_234|><|sid_567|><|sid_end|>

  • A and B share the first two levels -> similar sub-category
  • A and C share only the first level -> same broad category only
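
The match level between any two products can be computed directly from their semantic IDs. A minimal sketch building on parse_semantic_id from above (the shared_prefix_depth helper is illustrative, not part of the dataset):

def shared_prefix_depth(sid_a: str, sid_b: str) -> int:
    """Count how many leading hierarchy levels two semantic IDs share (0-4)."""
    depth = 0
    for a, b in zip(parse_semantic_id(sid_a), parse_semantic_id(sid_b)):
        if a != b:
            break
        depth += 1
    return depth

product_a = "<|sid_start|><|sid_8|><|sid_454|><|sid_630|><|sid_768|><|sid_end|>"
product_b = "<|sid_start|><|sid_8|><|sid_454|><|sid_599|><|sid_412|><|sid_end|>"
product_c = "<|sid_start|><|sid_8|><|sid_112|><|sid_234|><|sid_567|><|sid_end|>"
print(shared_prefix_depth(product_a, product_b))  # 2 -> similar sub-category
print(shared_prefix_depth(product_a, product_c))  # 1 -> same broad category only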

Related Resources

  • Writeup and demo: https://eugeneyan.com/writing/semantic-ids/
  • Companion recommendation model: eugeneyan/semantic-id-qwen3-8b-video-games

Citation

If you use this dataset, please cite:

@dataset{semantic_ids_video_games_with_titles,
  author    = {Eugene Yan},
  title     = {Video Games Semantic IDs with Product Titles},
  year      = {2024},
  publisher = {Hugging Face}
}