Now in beta: Stop tuning manually, start optimizing

Find the optimal settings for your model, without the infrastructure nightmare.

HyperOptimizer is a fully managed optimization service. You bring your Docker container; we run hundreds of trials to find the optimal settings for your trading strategies, ML models, or simulations.

HyperOptimizer dashboard

You know your model has more potential.

But finding the right settings? Weeks of manual tuning. Endless server provisioning scripts. Trials that fail silently, costing you days of compute time.

HyperOptimizer was built to end that frustration.

We automate the infrastructure and the search, so you can get back to the work that actually matters - building better models.

What is HyperOptimizer?

Built for quants, ML engineers, and anyone running parameter-heavy experiments who would rather spend time on their model than on infrastructure.

It's a service

You don't install software. Sign up, upload your Docker container, and define your parameter ranges in our dashboard.

We run the experiments

Our infrastructure runs hundreds of trials in parallel, using Bayesian optimization to intelligently search for the best combination.

You get the answer

We deliver a clear leaderboard, convergence plots, and the optimal parameter set - ready to deploy.
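Conceptually, the parameter ranges you define in the dashboard form a search space. As a rough illustration (the dictionary layout and field names here are our own sketch, not the platform's actual schema), a space like this could be sampled as follows:

```python
import random

# Hypothetical search space: each parameter gets a range (and
# optionally a step) to explore. Field names are illustrative.
SEARCH_SPACE = {
    "lookback_window": {"type": "int", "low": 10, "high": 100, "step": 10},
    "atr_multiplier": {"type": "float", "low": 1.0, "high": 3.0},
    "max_positions": {"type": "int", "low": 1, "high": 8},
}

def sample_trial(space, rng=random):
    """Draw one random candidate configuration from the search space."""
    params = {}
    for name, spec in space.items():
        if spec["type"] == "int":
            step = spec.get("step", 1)
            params[name] = rng.choice(list(range(spec["low"], spec["high"] + 1, step)))
        else:
            params[name] = rng.uniform(spec["low"], spec["high"])
    return params

candidate = sample_trial(SEARCH_SPACE)  # one trial's parameter set
```

In practice the service replaces this uniform sampling with an informed optimizer, but the inputs and outputs have the same shape: ranges in, one concrete parameter set per trial out.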

Why HyperOptimizer?

You focus on your model. We handle the rest.

Optimizer comparison (Bayesian/TPE)

Grid: 160 trials, 4.2h
Random: 98 trials, 2.7h
Bayes: 38 trials, 1.3h

Best score: 2.32 (same peak, fewer evaluations)
Best params: lookback_window=50, atr_multiplier=1.8, max_positions=4

Smarter Search

Get better results, faster

Our intelligent search focuses on promising regions, not random guesses. Find peak performance in a fraction of the time.
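The intuition behind "focus on promising regions" can be shown with a toy heuristic: after a few random warm-up trials, propose new values near the best score seen so far. This is only an illustration of the idea; real Bayesian/TPE optimizers model the whole score surface rather than just tracking the single best point.

```python
import random

def suggest(history, low, high, rng=random):
    """Toy 'exploit the best region' heuristic.

    history: list of (param_value, score) pairs from completed trials.
    After a short random warm-up, new candidates are drawn from a
    narrow Gaussian around the best value so far, clamped to bounds.
    """
    if len(history) < 3:                    # warm-up: explore randomly
        return rng.uniform(low, high)
    best_x, _ = max(history, key=lambda t: t[1])
    width = (high - low) * 0.1              # concentrate near the best
    return min(high, max(low, rng.gauss(best_x, width)))

# Completed trials: the optimizer's next suggestions cluster near 2.0,
# the best-scoring value seen so far.
history = [(1.0, 0.5), (2.0, 1.9), (2.8, 1.1)]
next_value = suggest(history, 1.0, 3.0)
```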

No SDK lock-in: CLI args in, stdout metrics out
$ python backtest.py \
--hpo-lookback-window=50 \
--hpo-atr-multiplier=1.8
hpo.metrics.sharpe=2.32
hpo.metrics.max_drawdown=0.08

Simple setup

Deploy in minutes, not weeks

No clusters to set up, no scripts to maintain. Push your container, configure parameters in our dashboard, and we handle the rest.

Trial lifecycle (code never stored): container runs → metrics collected → container deleted

Isolation

Your IP stays yours

Your code runs in isolated containers. We only see the metrics you choose to output. Zero access to your source, data, or trading logic.

5 workers, 3 active
trial-0412 running
trial-0413 running
trial-0414 running
trial-0415 queued

Scale

Run more experiments in less time

Trials run in parallel automatically. No cluster setup, no job queues - just faster answers.

Trials leaderboard

#  params          sharpe  max dd
1  w=50 atr=1.8    2.32    0.08
2  w=40 atr=2.0    2.18    0.10
3  w=30 atr=1.6    2.03    0.12

Selected trial: Sharpe 2.32, Max DD 0.08, CAGR 0.35

Clarity

Know exactly which settings win

Ranked leaderboards, convergence plots, and detailed run comparisons - so you can pick the best configuration with confidence, not guesswork.

Bring your own cloud

Your data never leaves your environment

Need complete control? Connect your cloud account and we run optimization trials on your infrastructure. Same powerful optimizer, same dashboard - but your code, data, and compute stay entirely in your hands.

Your virtual private cloud (VPC) / account
Managed trial scheduler
Encrypted metrics + dashboard sync

How it works

Up and running in minutes, not weeks

No SDK, no client library, no lock-in. Two small changes to your code, and we run hundreds of trials for you.

1

Keep your existing code

We pass parameters as --hpo-* CLI flags at runtime. Parse them with argparse or any library you already use - no SDK to install, no vendor lock-in.

$ python backtest.py \
--hpo-lookback-window=50 \
--hpo-atr-multiplier=1.8
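On the receiving end, a hypothetical backtest.py could parse those flags with nothing but the standard library. Note that argparse converts the dashes in flag names to underscores in the resulting attribute names:

```python
import argparse

def parse_hpo_args(argv=None):
    """Parse the --hpo-* flags passed at runtime. No SDK needed:
    argparse (or any CLI library you already use) is enough."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--hpo-lookback-window", type=int, default=20)
    parser.add_argument("--hpo-atr-multiplier", type=float, default=1.5)
    return parser.parse_args(argv)

# Simulating the flags the platform would pass for one trial:
args = parse_hpo_args(["--hpo-lookback-window=50", "--hpo-atr-multiplier=1.8"])
# argparse exposes --hpo-lookback-window as args.hpo_lookback_window
```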
2

Print metrics to stdout

After your run finishes, print your metrics with the hpo.metrics. prefix. We pick them up automatically - no integration code required.

Backtesting 2020–2024...
Trades: 412 Win rate: 54.1%
hpo.metrics.sharpe=2.31
hpo.metrics.max_drawdown=0.08
hpo.metrics.cagr=0.34
3

Watch the best settings emerge

Trials run in parallel and the optimizer learns from each result. Watch your leaderboard update live as the best configuration rises to the top.

# params sharpe max dd
1 w=50 atr=1.8 2.31 0.08
2 w=30 atr=2.0 1.94 0.11
3 w=20 atr=1.5 1.85 0.13

Frequently asked questions

Can't find the answer you're looking for? Reach out to our support team.

Is my code and strategy IP safe?

Your code runs inside fully isolated containers on our infrastructure. We have zero access to your source code, strategy logic, or proprietary data. The only thing we read is the stdout metric lines you explicitly print (e.g. hpo.metrics.sharpe=2.1). Containers are destroyed after each trial completes. We never store, inspect, or log your application's internal state, intermediate outputs, or filesystem.
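The stated contract — only explicitly printed hpo.metrics.* lines are read — can be sketched as a simple filter over stdout. This is our illustration of the contract, not the platform's actual parser:

```python
import re

# Matches lines of the form: hpo.metrics.<name>=<number>
METRIC_RE = re.compile(r"^hpo\.metrics\.(\w+)=([-+0-9.eE]+)$")

def extract_metrics(stdout_text):
    """Collect only explicitly printed hpo.metrics.* lines;
    every other line on stdout is ignored."""
    metrics = {}
    for line in stdout_text.splitlines():
        m = METRIC_RE.match(line.strip())
        if m:
            metrics[m.group(1)] = float(m.group(2))
    return metrics

output = "Backtesting 2020-2024...\nTrades: 412\nhpo.metrics.sharpe=2.1\n"
extract_metrics(output)  # only the sharpe metric is picked up
```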

Can I use my own Docker images?

Yes, that's the whole point. You package your code in a Docker container, and we run it as trials during optimization. You have full control over your objective function, dependencies, and runtime environment.

Can I run trials on my own cloud?

We're building Bring your own cloud: you connect your cloud account and we schedule trials on your infrastructure, so your data and code never leave your environment. Same optimizer and dashboard - we just use your compute. For early access, contact us or see our Bring your own cloud page.

How does billing work?

We're finalizing our pricing model. Join the beta to get early-access pricing.

Do you support parallel optimization?

Yes. Multiple trials run in parallel across our infrastructure. The optimization algorithm suggests new hyperparameter combinations based on results from completed trials.

What optimization algorithms are supported?

We support Bayesian optimization (e.g. TPE) and other search strategies. You configure the parameters, their ranges, and the metrics to optimize; we handle the rest.

What happens if a trial fails or times out?

Each trial runs independently. If one fails, it doesn't affect the others. Results from completed trials are always preserved, and the optimizer continues with the remaining budget.
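The failure-isolation behaviour described above can be sketched with the standard library: each trial runs independently, a failure is recorded and skipped, and completed results are always kept. This is an illustrative sketch, not the platform's actual scheduler:

```python
from concurrent.futures import ThreadPoolExecutor

def run_trial(params):
    """Stand-in for one containerized trial; raises on a bad config."""
    if params["atr"] <= 0:
        raise ValueError("invalid atr")
    return {"params": params, "sharpe": 2.0 * params["atr"]}

def run_all(trials, workers=4):
    """Run trials in parallel. A failing trial does not affect the
    others: its error is caught, and completed results are preserved."""
    completed, failed = [], []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(run_trial, t): t for t in trials}
        for future, trial in futures.items():
            try:
                completed.append(future.result())
            except Exception:
                failed.append(trial)
    return completed, failed

trials = [{"atr": 1.8}, {"atr": -1.0}, {"atr": 2.0}]
done, bad = run_all(trials)  # the bad config fails alone; 2 results survive
```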

Stop tuning. Start shipping.

We're opening 10 free spots for teams ready to find better settings in a fraction of the time. Try the platform, share your feedback, and help shape the future of optimization. Want us to support your framework or stack? Let us know.