# Integration guide

## Getting started
This guide explains how to connect any program to HyperOptimizer. You build a Docker image and define which parameters to optimize in our web dashboard. We run the experiment for you: we execute your container once per trial with different parameter values (chosen by an optimization algorithm such as Bayesian optimization), collect the metrics your program prints to stdout, and surface the results in the dashboard. You never run the trials or the optimization on your machine.
## Your role vs ours
| You | We |
|---|---|
| Build the Docker image and push it (or make it available to us) | Run your image for each trial with different parameter values |
| In the dashboard: configure which parameters to optimize, their ranges, and the objective metric | Use optimization algorithms (e.g. Bayesian) to choose the next parameter set |
| In your code: parse `--hpo-*` CLI args, run your logic, print metrics in our format | Parse container stdout, record metrics per trial, and show results in the dashboard |
## How each trial runs
- We run your container once per trial with one set of `--hpo-*` arguments. You choose the parameter names and ranges in the dashboard; our optimization algorithm (e.g. Bayesian) chooses which values to try next.
- Your program parses those arguments, runs the trial (training, backtest, simulation, etc.), and prints metrics to stdout in a fixed format.
- We parse container stdout, record metrics per trial, and show results in the dashboard.

We schedule trials in parallel (e.g. 5 at a time by default) so experiments finish faster. Each container run is one trial with one parameter set. The exact arguments depend on the algorithm; the following is illustrative:
We run your image — one command per trial (example, 5 in parallel):

```text
Container 1: uv run backtest.py --hpo-lookback-window=20 --hpo-atr-multiplier=1.5
Container 2: uv run backtest.py --hpo-lookback-window=20 --hpo-atr-multiplier=1.8
Container 3: uv run backtest.py --hpo-lookback-window=50 --hpo-atr-multiplier=1.2
Container 4: uv run backtest.py --hpo-lookback-window=100 --hpo-atr-multiplier=2.0
Container 5: uv run backtest.py --hpo-lookback-window=30 --hpo-atr-multiplier=1.6
…then more trials with new parameter sets as the algorithm suggests
```

The algorithm uses completed trial metrics to pick the next parameter sets; you just build the image and print metrics in our format.
Below: a minimal Python example with a Dockerfile, argparse, a placeholder for your logic, and the required output format.
## 1. Dockerfile: image and dependencies
Use any base image and install whatever your app needs. Copy your code and set the default command.
```dockerfile
FROM python:3.12-slim

WORKDIR /app

# Install dependencies (example: your requirements)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application
COPY main.py .

# We run this for each trial; HPO args are appended by the platform
CMD ["python", "main.py"]
```

## 2. Parse HPO parameters with argparse
We inject arguments like `--hpo-<name>=<value>`. Parse them with argparse (or any CLI library) and use the values in your program.
```python
import argparse
import json


def parse_args():
    p = argparse.ArgumentParser(description="HPO trial: run with CLI hyperparameters.")
    p.add_argument("--hpo-lookback-window", type=int, default=20)
    p.add_argument("--hpo-ema-multiplier", type=float, default=1.0)
    return p.parse_args()


def main():
    args = parse_args()

    # argparse maps dashes to underscores: --hpo-lookback-window -> args.hpo_lookback_window
    lookback_window = args.hpo_lookback_window
    ema_multiplier = args.hpo_ema_multiplier

    # Do something with the parameters (your training, backtest, simulation, etc.)
    # result = run_backtest(lookback_window=lookback_window, ema_multiplier=ema_multiplier)

    # Example: pretend we computed some metrics
    metrics = {
        "sharpe": 1.85,
        "max_drawdown": 0.12,
    }

    # Emit metrics so our system can collect them (see step 3)
    for key, value in metrics.items():
        print(f"hpo.metrics.{key}={json.dumps(value)}")


if __name__ == "__main__":
    main()
```

## 3. Emit metrics to stdout
Our metric collector scans container stdout for lines of this form:

```text
hpo.metrics.<key>=<json.dumps(value)>
```

- One line per metric, with the exact prefix `hpo.metrics.`.
- Values must be JSON-serializable. Use `json.dumps(value, default=str)` for non-JSON types (e.g. decimals, timestamps).
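The `default=str` fallback in the second bullet can be sketched as follows (the `Decimal` and `date` metric values here are purely illustrative):

```python
import json
from datetime import date
from decimal import Decimal

# Decimal and date are not natively JSON-serializable;
# default=str converts them to strings before encoding.
metrics = {
    "sharpe": Decimal("1.85"),
    "run_date": date(2024, 1, 31),
}

for key, value in metrics.items():
    print(f"hpo.metrics.{key}={json.dumps(value, default=str)}")
# prints:
#   hpo.metrics.sharpe="1.85"
#   hpo.metrics.run_date="2024-01-31"
```

Note that `default=str` produces JSON strings, so a metric you want treated numerically (e.g. as the objective) should be emitted as a plain float or int instead.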
Example output:

```text
hpo.metrics.sharpe=1.85
hpo.metrics.max_drawdown=0.12
```
That's it. Once your program prints lines like these, we associate them with the trial and show them in the dashboard.
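To sanity-check your output locally before pushing an image, you can mirror the collector's convention in a few lines of Python. This is a sketch of the documented line format, not our actual parser:

```python
import json

METRIC_PREFIX = "hpo.metrics."


def parse_metrics(stdout: str) -> dict:
    """Collect metric lines from trial stdout, following the documented format."""
    metrics = {}
    for line in stdout.splitlines():
        if line.startswith(METRIC_PREFIX):
            key, _, raw = line[len(METRIC_PREFIX):].partition("=")
            metrics[key] = json.loads(raw)  # values are JSON-encoded
    return metrics


sample = "loading data\nhpo.metrics.sharpe=1.85\nhpo.metrics.max_drawdown=0.12\n"
print(parse_metrics(sample))  # -> {'sharpe': 1.85, 'max_drawdown': 0.12}
```

Ordinary log lines are ignored, so you can keep printing progress messages alongside metrics.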
## Summary
- Package your app in Docker (install deps, copy code, set `CMD`) and make the image available to us.
- In the dashboard: create an experiment and configure which parameters to optimize (names, ranges) and the objective metric.
- In your code: parse `--hpo-*` arguments, run your logic, and print each metric as `hpo.metrics.<key>=<json.dumps(value)>` to stdout.

We run the trials, collect metrics, and show results in the dashboard.