Test your strategies
against crashes
that haven't happened yet.
Generate, store, analyze, and visualize synthetic market scenarios — calibrated to real assets, powered by AI. Via browser, Python SDK, or REST API.
Free tier at launch. No credit card required.
Your strategy survived 2008.
But would it survive a different 2008?
Historical data gives you one path through every crisis. One speed. One depth. One recovery shape. If your strategy survives that sequence, you don't know if it's genuinely robust — or just lucky on that particular accident.
The alternative is a quant spending days manually fitting stochastic models, calibrating parameters, and writing simulation code from scratch. We do it in one click or one API call.
Test strategies against scenarios that haven't happened yet
Historical data is one sequence of events. Generate thousands of statistically realistic variations — different crash speeds, depths, recovery shapes — and know if your strategy survives the regime, not just the accident.
→ Strategy backtesting · ML training data · Factor stress tests
Explore tail risk without a quant or a Bloomberg terminal
Simulate flash crashes, rate shocks, and bear markets on demand via the browser. No code required. Built-in Sharpe, VaR, and drawdown analysis — computed natively, no export needed.
→ VaR modeling · Scenario planning · Stress testing
Generate training data for rare market regimes
Real data is sparse for tail events by definition. Build robo-advisors, risk engines, and portfolio tools on top of synthetic data that passes rigorous statistical validity tests — not just random noise.
→ Model training · Regime exposure · Embedded simulation
Three stochastic models
GBM for baselines. Heston for stochastic volatility and fat tails. Bates adds jump diffusion for flash crash behavior.
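GBM is the simplest of the three. As a hedged illustration of what that baseline means, here is a minimal NumPy sketch of GBM path simulation; the drift, volatility, and path counts are arbitrary illustrative values, not the platform's calibrated defaults:

```python
import numpy as np

def simulate_gbm(s0=100.0, mu=0.05, sigma=0.2, days=252, n_paths=1000, seed=42):
    """Simulate geometric Brownian motion price paths (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / days
    # Log-return increments: (mu - sigma^2 / 2) dt + sigma sqrt(dt) Z
    z = rng.standard_normal((n_paths, days))
    increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    log_paths = np.cumsum(increments, axis=1)
    # Prepend the starting point so every path begins at s0
    return s0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

paths = simulate_gbm()
print(paths.shape)  # (1000, 253)
```

Heston and Bates extend this by making sigma itself stochastic and, in Bates, adding a jump component for flash-crash behavior.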
Named market regimes
Pre-built: low_vol_bull, flash_crash, bear_market, rate_shock, high_vol_choppy. Ready to use without parameter tuning.
Natural language scenarios
Describe any market condition in plain text — "a 2008-style crisis with faster recovery" — and the AI translates it to calibrated parameters automatically.
Auto-calibrated to real assets
Pass any ticker and get paths that mirror its real statistical behavior — volatility clustering, tail risk, mean reversion. No manual tuning.
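The platform's calibration also fits volatility clustering and tail behavior; as a much simpler sketch of the underlying idea, matching annualized drift and volatility to a price history looks like this (the function and toy data are illustrative, not the platform's method):

```python
import numpy as np

def calibrate_gbm(prices, periods_per_year=252):
    """Estimate annualized GBM drift and volatility from a price series.
    A deliberately minimal sketch; real calibration fits far more structure."""
    log_returns = np.diff(np.log(prices))
    sigma = log_returns.std(ddof=1) * np.sqrt(periods_per_year)
    # Drift of the price process, recovered from the mean log return
    mu = log_returns.mean() * periods_per_year + 0.5 * sigma**2
    return mu, sigma

# Toy price history (synthetic, stands in for real ticker data)
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(0.0004 + 0.01 * rng.standard_normal(500)))
mu, sigma = calibrate_gbm(prices)
```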
Persistent dataset library
Every generated dataset is stored with a unique ID. Named, tagged, searchable. Retrieve, don't regenerate. Your work accumulates across sessions.
Native financial metrics
Sharpe ratio, VaR, max drawdown, realized volatility, and a statistical quality score — all computed natively. No export, no pandas, no manual work.
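These metrics have standard definitions. As a rough sketch of what the platform computes for you (not its actual implementation), here is how they fall out of a price path with plain NumPy:

```python
import numpy as np

def sharpe_ratio(returns, periods_per_year=252):
    """Annualized Sharpe ratio of per-period returns (risk-free rate assumed zero)."""
    return returns.mean() / returns.std(ddof=1) * np.sqrt(periods_per_year)

def value_at_risk(returns, confidence=0.95):
    """Historical VaR: loss threshold exceeded with probability 1 - confidence."""
    return -np.percentile(returns, 100 * (1 - confidence))

def max_drawdown(prices):
    """Largest peak-to-trough decline along a price path."""
    running_peak = np.maximum.accumulate(prices)
    return ((running_peak - prices) / running_peak).max()

# Toy price path for illustration
prices = np.array([100, 105, 102, 98, 103, 110, 104.0])
returns = np.diff(prices) / prices[:-1]
```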
Interactive fan charts
Fan chart with percentile bands, return distribution histogram, single path explorer, and side-by-side scenario comparison with metric diffs.
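Behind a fan chart, the percentile bands are just per-timestep quantiles across the simulated paths. A sketch of that reduction (the band levels and toy paths here are illustrative choices):

```python
import numpy as np

def fan_chart_bands(paths, percentiles=(5, 25, 50, 75, 95)):
    """Per-timestep percentile bands across simulated paths.
    paths: (n_paths, n_steps) array; returns {percentile: (n_steps,) series}."""
    return {p: np.percentile(paths, p, axis=0) for p in percentiles}

# Toy bundle of simulated paths
rng = np.random.default_rng(1)
paths = 100 * np.exp(np.cumsum(0.001 + 0.02 * rng.standard_normal((500, 252)), axis=1))
bands = fan_chart_bands(paths)
```

Each band series can then be drawn as a shaded region between symmetric percentiles, widening over time as the paths disperse.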
Three ways to access
Browser GUI for non-technical users. Python SDK for developers. Raw REST API for full control. Same platform, same data, same capabilities.
Sign up and get your API key
From signup to first dataset in under 5 minutes. Free tier included, no credit card required.
Generate a scenario dataset
Pick a named scenario, describe one in plain English, or pass a ticker for auto-calibration. The platform generates, runs analysis, and stores the dataset in one step.
Retrieve, analyze, compare
Datasets persist with a unique ID. Pull built-in metrics — Sharpe, VaR, drawdown — or retrieve and compare two scenarios side by side. No regeneration needed.
Your interface, your workflow
Browser GUI for no-code exploration. Python SDK for programmatic access. Raw REST API for full control. Same platform, same data, all three ways.
from qpaths import Client

client = Client(api_key="...")

# Generate and store a dataset
ds = client.generate(
    scenario="flash_crash",
    asset="AAPL",  # auto-calibrate
    paths=1000
)

# Retrieve later — no regeneration
ds = client.datasets.get("ds_abc123")

# Native analysis
ds.sharpe()
ds.var(confidence=0.95)
ds.max_drawdown()
ds.summary()  # all metrics at once

# Compare two scenarios
client.datasets.compare("ds_abc123", "ds_xyz456")

# Drop into pandas
df = ds.to_dataframe()
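For the raw REST route, a request can be assembled along these lines. The base URL, route, and payload fields below are hypothetical placeholders, not the documented API; consult the platform's API reference for the real contract:

```python
import json
import urllib.request

# Hypothetical endpoint, shown for illustration only
API_BASE = "https://api.example.com/v1"

def build_generate_request(api_key, scenario, asset, paths=1000):
    """Construct (without sending) an HTTP request to generate a dataset.
    The route and body shape are assumptions for this sketch."""
    body = json.dumps({"scenario": scenario, "asset": asset, "paths": paths}).encode()
    return urllib.request.Request(
        url=f"{API_BASE}/datasets",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_generate_request("sk_test_...", "flash_crash", "AAPL")
```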
Be first on the platform.
We're building. Join the waitlist and help shape what we ship.