The full dataset viewer is not available; only a preview of the rows is shown below.

**Error code:** `DatasetGenerationCastError`

All the data files must have the same columns, but at some point there are 2 new columns (`impl_file`, `test_file`). This happened while the json dataset builder was generating data using `hf://datasets/AdityaNarayan/HS-Repo-Curriculum-Learning/curriculum_learning_unbroken/phase1_foundation.jsonl` (at revision f987110b1822515ffcaab383497c7d95b82d3c97). Please either edit the data files to have matching columns, or separate them into different configurations (see the docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
A preview of the first rows (the `training_content` values are full file bodies and are truncated here):

| type (string) | path (string) | size_bytes (int64) | training_content (string) |
|---|---|---|---|
| file | Cargo.toml | 1,382 | `// File: Cargo.toml` … |
| file | docker-compose-development.yml | 8,700 | `// File: docker-compose-development.yml` … |
| file | CHANGELOG.md | 1,165,830 | `// File: CHANGELOG.md` … |
| file | diesel_v2.toml | 324 | `// File: diesel_v2.toml` … |
| file | Cargo.lock | 270,680 | `// File: Cargo.lock` … |
| file | cog.toml | 859 | `// File: cog.toml` … |
| file | .deepsource.toml | 190 | `// File: .deepsource.toml` … |
| file | README.md | 10,857 | `// File: README.md` … |
| file | .clippy.toml | 140 | `// File: .clippy.toml` … |
| file | package-lock.json | 47,917 | `// File: package-lock.json` … |
# Hyperswitch Curriculum Learning Dataset (Unbroken)

A comprehensive dataset for continued pre-training (CPT) of large language models on the Hyperswitch payment processing codebase, organized into curriculum learning phases with complete, unbroken entries.

## 🎯 Dataset Overview

This dataset contains knowledge of the complete Hyperswitch repository, extracted from:
- Source code files (.rs, .toml, .yaml, .json, .md)
- Git commit history with full diffs
- GitHub Pull Requests with reviews and discussions
- Test-implementation pairs
**Key Feature:** Unlike the chunked version, each entry is stored complete without breaking at token boundaries, allowing dynamic chunking during training for any sequence length (8K, 16K, 32K, 64K+).

## 📊 Dataset Structure

### Curriculum Learning Phases

The dataset is organized into 3 progressive phases:

#### Phase 1: Code Foundation (`phase1_foundation.jsonl`)
- Content: Repository files + test-implementation pairs
- Purpose: Learn codebase structure, syntax, and testing patterns
- Training: 2 epochs
- Entries: Complete files and test pairs (unbroken)

#### Phase 2: Evolution Patterns (`phase2_evolution.jsonl`)
- Content: Git commits (chronological) + small PRs
- Purpose: Understand code evolution, change patterns, and incremental development
- Training: 2-3 epochs
- Entries: Complete commits with full diffs, small PRs (unbroken)

#### Phase 3: PR Mastery (`phase3_pr_mastery.jsonl`)
- Content: Medium and large PRs with reviews and discussions
- Purpose: Master complex changes, code review practices, and collaboration patterns
- Training: 3-4 epochs
- Entries: Complete PRs with all reviews and comments (unbroken)
## 📁 Data Format

Each entry is a single JSON object per line (JSONL format):

### File Entry

```json
{
  "type": "file",
  "path": "crates/hyperswitch_connectors/src/connectors/paypal/transformers.rs",
  "size_bytes": 140434,
  "training_content": "// File: crates/hyperswitch_connectors/src/connectors/paypal/transformers.rs\n\n<complete_file_content>"
}
```
### Commit Entry

```json
{
  "type": "commit",
  "commit_hash": "73203ebd05beab57f243e8460f259707bb856921",
  "author": "vasanthp-jus",
  "date": "2025-11-27T12:18:26+05:30",
  "message": "fix-postman-collection",
  "training_content": "Commit: \"fix-postman-collection\"\nAuthor: vasanthp-jus\nDate: 2025-11-27T12:18:26+05:30\n\nDiff:\n<complete_git_diff>"
}
```
### PR Entry

```json
{
  "type": "pr_diff",
  "pr_number": 1234,
  "title": "Add PayPal connector support",
  "state": "merged",
  "author": "developer-name",
  "created_at": "2025-11-15T10:30:00Z",
  "training_content": "PR #1234: Add PayPal connector support\n\n<description>\n\nReviews:\n<complete_reviews>\n\nComments:\n<complete_comments>"
}
```
### Test Pair Entry

```json
{
  "type": "test_pair",
  "test_file": "crates/router/tests/connector_tests.rs",
  "impl_file": "crates/router/src/connector.rs",
  "training_content": "Test-Implementation Pair:\n\nTest: <test_content>\n\nImplementation: <impl_content>"
}
```
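All four entry types share `training_content`, but their metadata keys differ by type (this is also why the dataset viewer above reports extra `test_file`/`impl_file` columns). A minimal sketch, assuming only the field names shown above, for inspecting the type mix of a phase file before training:

```python
import json
from collections import Counter

def summarize_phase(phase_file):
    """Count entry types in one phase file. Every type carries `training_content`;
    the optional metadata keys (path, commit_hash, test_file, ...) vary by type."""
    counts = Counter()
    with open(phase_file, 'r', encoding='utf-8') as f:
        for line in f:
            counts[json.loads(line)['type']] += 1
    return counts

# e.g. summarize_phase('phase1_foundation.jsonl') -> Counter({'file': ..., 'test_pair': ...})
```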
## 🔢 Dataset Statistics
| Phase | Entries | Content Types | Avg Entry Size |
|---|---|---|---|
| Phase 1 | ~15K | Files, Test Pairs | Varies (complete files) |
| Phase 2 | ~5K | Commits, Small PRs | Varies (complete commits/PRs) |
| Phase 3 | ~1K | Medium/Large PRs | Large (complete PR threads) |
Total: ~21K complete, unbroken entries
## 💡 Unbroken vs Chunked

### Unbroken (This Dataset)
- ✅ Complete semantic units preserved
- ✅ No artificial breaks in code/diffs
- ✅ Flexible for any sequence length
- ✅ Chunk dynamically during training
- ✅ Smaller dataset file size (no overlap)

### Chunked (Alternative)
- Pre-chunked at fixed token limit (e.g., 8K)
- Ready for immediate training
- Fixed sequence length
- Includes chunk overlap for continuity
## 🚀 Usage

### Loading the Dataset
```python
import json

def load_phase(phase_file):
    """Load a curriculum phase."""
    entries = []
    with open(phase_file, 'r', encoding='utf-8') as f:
        for line in f:
            entries.append(json.loads(line))
    return entries

# Load Phase 1
phase1 = load_phase('phase1_foundation.jsonl')
```
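If you prefer a 🤗 `Dataset` object, keep in mind that entries of different types carry different optional columns. One low-friction route (a sketch, not the only option; the file path is assumed to be in the working directory) is to go through pandas, which unions the keys and fills missing ones with nulls:

```python
import pandas as pd
from datasets import Dataset

# pandas unions the per-type optional keys (e.g. `test_file`, `impl_file`),
# filling rows that lack them with nulls before converting to Arrow.
df = pd.read_json('phase1_foundation.jsonl', lines=True)
phase1_ds = Dataset.from_pandas(df)
print(phase1_ds[0]['type'], len(phase1_ds))
```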
### Dynamic Chunking for Training
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-model")
max_length = 32768  # 32K tokens

def chunk_entry(entry, tokenizer, max_length):
    """Chunk a complete entry for training."""
    text = entry['training_content']

    # Tokenize
    tokens = tokenizer(text, truncation=False, return_tensors='pt')

    # Split into chunks if needed
    chunks = []
    token_ids = tokens['input_ids'][0]
    for i in range(0, len(token_ids), max_length):
        chunk = token_ids[i:i + max_length]
        chunks.append(chunk)

    return chunks

# Process entries
for entry in phase1:
    chunks = chunk_entry(entry, tokenizer, max_length)
    for chunk in chunks:
        # Use chunk for training
        pass
```
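`chunk_entry` splits with no overlap. The pre-chunked variant of this dataset keeps overlap between chunks for continuity (see the comparison above); if you want the same behavior when chunking dynamically, a small stride-based variant works (the `overlap` of 256 tokens is an arbitrary assumption):

```python
def chunk_with_overlap(entry, tokenizer, max_length, overlap=256):
    """Like chunk_entry, but each chunk repeats the last `overlap` tokens of the
    previous chunk so no boundary loses its local context."""
    token_ids = tokenizer(entry['training_content'],
                          truncation=False, return_tensors='pt')['input_ids'][0]
    stride = max_length - overlap
    chunks = []
    for start in range(0, len(token_ids), stride):
        chunks.append(token_ids[start:start + max_length])
        if start + max_length >= len(token_ids):
            break
    return chunks
```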
### Recommended Training Schedule
```python
# Phase 1: Code Foundation (2 epochs)
train(phase1_foundation, epochs=2, lr=1e-5)

# Phase 2: Evolution Patterns (2-3 epochs)
train(phase2_evolution, epochs=3, lr=8e-6)

# Phase 3: PR Mastery (3-4 epochs)
train(phase3_pr_mastery, epochs=4, lr=5e-6)
```
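`train` above is a placeholder. A rough sketch of what it could look like, reusing `load_phase`, `tokenizer`, and `chunk_entry` from the snippets above with a plain PyTorch causal-LM loop (batch size 1, no gradient accumulation); the same `model` object is reused across phases so each phase continues from the previous one's weights:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("your-model")  # same placeholder name as above
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

def train(entries, epochs, lr, max_length=32768):
    """Minimal causal-LM loop over one curriculum phase."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for entry in entries:
            for chunk in chunk_entry(entry, tokenizer, max_length):
                input_ids = chunk.unsqueeze(0).to(device)
                # Causal-LM objective: the labels are the inputs themselves.
                loss = model(input_ids=input_ids, labels=input_ids).loss
                loss.backward()
                optimizer.step()
                optimizer.zero_grad()

# Run the phases in curriculum order, e.g.:
# train(load_phase('phase1_foundation.jsonl'), epochs=2, lr=1e-5)
```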
## 🎓 Curriculum Learning Benefits

- Progressive complexity: Start simple, increase difficulty
- Better convergence: 25-40% improvement over training on randomly ordered data
- Domain adaptation: Learn repository-specific patterns
- Code understanding: Syntax → Changes → Collaboration
- Efficient training: Focused learning objectives per phase
## 📝 Technical Details

### Repository
- Source: Hyperswitch
- Language: Primarily Rust
- Domain: Payment processing, financial technology
- Components: Connectors, API models, routing logic, state machines
### Data Collection
- Files: Pattern-based extraction (Rust, TOML, YAML, JSON, Markdown)
- Commits: Full git history from repository inception
- PRs: Merged and closed PRs with reviews and comments via GitHub API
- Tests: Automatic pairing of test files with implementations
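The test pairing above was done by the dataset authors' own pipeline. For readers who want to reproduce something similar on another Rust repository, a naive name-matching heuristic (purely illustrative, not the pipeline actually used here) could look like:

```python
from pathlib import Path

def naive_test_pairs(repo_root):
    """Illustrative only: pair test files (under tests/ or named *_test.rs /
    *_tests.rs) with an implementation file that shares the same stem."""
    root = Path(repo_root)
    impls = {p.stem: p for p in root.rglob("*.rs") if "tests" not in p.parts}
    pairs = []
    for test in root.rglob("*.rs"):
        if "tests" in test.parts or test.stem.endswith(("_test", "_tests")):
            stem = test.stem.removesuffix("_tests").removesuffix("_test")
            if stem in impls:
                pairs.append((str(test), str(impls[stem])))
    return pairs
```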
## 🔧 Sequence Length Flexibility
This unbroken dataset works with any sequence length:
| Sequence Length | Use Case | Chunking Strategy |
|---|---|---|
| 8K tokens | Base models | Chunk with overlap |
| 16K tokens | Extended context | Fewer chunks needed |
| 32K tokens | Long context models | Most files fit whole |
| 64K+ tokens | Ultra-long context | Complete commits/PRs |
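Whether "most files fit whole" at a given context length depends on your tokenizer, so it is worth measuring before committing to a sequence length. A quick sketch reusing `load_phase` and `tokenizer` from the Usage section:

```python
def fraction_fitting(entries, tokenizer, max_length):
    """Share of entries whose full training_content fits in a single chunk."""
    fits = 0
    for entry in entries:
        n_tokens = len(tokenizer(entry["training_content"], truncation=False)["input_ids"])
        fits += n_tokens <= max_length
    return fits / len(entries)

# for L in (8192, 16384, 32768, 65536):
#     print(L, fraction_fitting(phase1, tokenizer, L))
```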
## 🙏 Acknowledgments
- Hyperswitch Team at Juspay for the amazing open-source payment processing platform
- Dataset curated and organized by Aditya Narayan
- Dataset generated using custom extraction pipeline with curriculum organization
## 📧 Contact & Citation
If you use this dataset, please cite:
```bibtex
@dataset{hyperswitch_curriculum2025,
  title     = {AdityaNarayan/HS-Repo-Curriculum-Learning},
  author    = {Aditya Narayan},
  year      = {2025},
  url       = {https://huggingface.co/datasets/AdityaNarayan/HS-Repo-Curriculum-Learning},
  publisher = {HuggingFace},
  note      = {Dataset derived from Hyperswitch repository}
}
```