foreBlocks is a modular PyTorch toolkit for time-series forecasting, experiment management, and companion utilities.
This repository is structured as two cooperating packages:
- `foreblocks`: forecasting models, training, evaluation, preprocessing, DARTS search, and conformal uncertainty.
- `foretools`: companion utilities for synthetic data, feature engineering, decomposition, and hyperparameter search.
The recommended workflow is:
- start with the stable top-level public API in `foreblocks`
- validate one small training loop end to end
- add preprocessing, search, or specialist tooling only when the baseline path works
This package requires Python 3.10 or newer.
```bash
pip install foreblocks
```

| Extra | Adds |
|---|---|
| `preprocessing` | `TimeSeriesHandler`, windowing, scaling, filtering, imputation, and time-feature generation |
| `darts` | DARTS architecture search, evaluation, and NAS dependencies |
| `mltracker` | experiment tracking API, local dashboard, and CLI TUI |
| `studio` | Studio frontend launcher and bundled server command |
| `vmd` | VMD decomposition, search support, and analysis helpers |
| `wavelets` | wavelet preprocessing and attention-head utilities |
| `benchmark` | external forecasting baselines and spreadsheet readers |
| `foreminer` | changepoint detection, dataset mining, and analysis utilities |
| `all` | all runtime extras above |
Examples:

```bash
pip install "foreblocks[darts]"
pip install "foreblocks[mltracker]"
pip install "foreblocks[studio]"
pip install "foreblocks[vmd,wavelets]"
pip install "foreblocks[all]"
```

For a development install from source:

```bash
git clone https://github.com/lseman/foreblocks.git
cd foreblocks
pip install -e ".[dev]"
```

To launch the Studio frontend:

```bash
foreblocks-studio
```

By default, this opens a browser on 127.0.0.1 or localhost.
Optional flags:

```bash
foreblocks-studio --open
foreblocks-studio --no-open
foreblocks-studio --host 0.0.0.0 --port 8080
```

For detailed guides, examples, and API reference:
- Getting Started - Quickstart with a minimal training loop
- Overview - Architecture and mental model
- Public API - Stable import surface
- DARTS Guide - Architecture search
- Preprocessor Guide - Raw series handling
Full documentation: https://foreblocks.laioseman.com/docs/
- `docs/` - VitePress source for the documentation site
- `web/` - Static landing page assets for the published site
- `examples/` - Runnable demos and notebooks
The smallest reliable path is a direct forecasting model with a custom head. This path avoids extra dependencies and verifies that the public API is wired correctly.
```python
import numpy as np
import torch
import torch.nn as nn

from foreblocks import (
    ForecastingModel,
    ModelEvaluator,
    Trainer,
    TrainingConfig,
    create_dataloaders,
)

# === Configuration ===
# Shapes: X = [N, T, F], y = [N, H]
seq_len = 24      # input sequence length
horizon = 6       # forecast horizon
n_features = 4    # number of input features
batch_size = 16

# === Generate synthetic data ===
rng = np.random.default_rng(0)
X_train = rng.normal(size=(64, seq_len, n_features)).astype("float32")
y_train = rng.normal(size=(64, horizon)).astype("float32")
X_val = rng.normal(size=(16, seq_len, n_features)).astype("float32")
y_val = rng.normal(size=(16, horizon)).astype("float32")

# === Build dataloaders ===
train_loader, val_loader = create_dataloaders(
    X_train, y_train, X_val, y_val, batch_size=batch_size,
)

# === Define a simple head ===
head = nn.Sequential(
    nn.Flatten(),
    nn.Linear(seq_len * n_features, 64),
    nn.GELU(),
    nn.Linear(64, horizon),
)

# === Assemble model ===
model = ForecastingModel(
    head=head,
    forecasting_strategy="direct",
    model_type="head_only",
    target_len=horizon,
)

# === Train ===
trainer = Trainer(
    model,
    config=TrainingConfig(
        num_epochs=5,
        batch_size=batch_size,
        patience=3,
        use_amp=False,
    ),
    auto_track=False,
)
history = trainer.train(train_loader, val_loader)

# === Evaluate ===
evaluator = ModelEvaluator(trainer)
metrics = evaluator.compute_metrics(torch.tensor(X_val), torch.tensor(y_val))

print(f"Final training loss: {history.train_losses[-1]:.4f}")
print(f"Metrics: {metrics}")
```

This quickstart:

- validates that the import surface works
- checks dataloader shapes and model output sizes
- avoids optional subsystems during the first run
- keeps the first success criterion small and confirmable
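Once training completes, a quick inference sanity check is worth running. A minimal sketch, assuming `ForecastingModel` behaves as a standard `nn.Module` whose forward pass maps a `[N, T, F]` tensor to a `[N, H]` forecast (the shapes used in the quickstart):

```python
# Sanity-check inference after training.
# Assumption: ForecastingModel is a standard nn.Module whose forward
# pass maps [N, T, F] inputs to [N, H] forecasts, as in the quickstart.
model.eval()
with torch.no_grad():
    preds = model(torch.tensor(X_val))
print(preds.shape)  # expected: torch.Size([16, 6]) for the shapes above
```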
If you start from a raw `[T, D]` array instead of pre-built windows, use `TimeSeriesHandler` after installing `foreblocks[preprocessing]`:
```python
from foreblocks import TimeSeriesHandler

pre = TimeSeriesHandler(
    window_size=seq_len,
    horizon=horizon,
    normalize=True,
)
X, y, processed, time_feats = pre.fit_transform(raw_data, time_stamps=timestamps)
```

See the Preprocessor Guide for more details.
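Note that `raw_data` and `timestamps` are not defined in the snippet above: it assumes a raw `[T, D]` array plus an aligned vector of datetimes. One hypothetical way to fabricate both for a dry run (using `pandas` timestamps is an assumption about what `time_stamps` accepts):

```python
import numpy as np
import pandas as pd

# Hypothetical stand-ins for real data: a [T, D] series plus aligned timestamps.
T, D = 500, 4
raw_data = np.random.default_rng(0).normal(size=(T, D)).astype("float32")
timestamps = pd.date_range("2024-01-01", periods=T, freq="h").to_numpy()
```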
The most stable first imports are exposed from the top-level foreblocks package:
| Import | Purpose |
|---|---|
| `ForecastingModel` | Core forecasting wrapper for direct, autoregressive, and seq2seq-style models |
| `Trainer` | Training loop with NAS hooks, MLTracker integration, and optional conformal support |
| `ModelEvaluator` | Prediction helpers, metrics, cross-validation, and training-curve plots |
| `TimeSeriesHandler` | Raw-series preprocessing, windowing, scaling, and imputation bridge |
| `TimeSeriesDataset` | Dataset wrapper used by the dataloader helper |
| `create_dataloaders` | Build train/validation PyTorch dataloaders from NumPy arrays |
| `ModelConfig`, `TrainingConfig` | Lightweight configuration dataclasses |
| `LSTMEncoder`, `LSTMDecoder`, `GRUEncoder`, `GRUDecoder` | Recurrent encoder/decoder blocks |
| `TransformerEncoder`, `TransformerDecoder` | Transformer backbones and related advanced features |
| `AttentionLayer` | Attention module for custom architectures |
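A one-file smoke test of this import surface, using only the names from the table above, can catch a broken install early (note that `TimeSeriesHandler` may additionally require the `preprocessing` extra):

```python
# Confirm the stable top-level names resolve in the current environment.
from foreblocks import (
    AttentionLayer,
    ForecastingModel,
    GRUDecoder,
    GRUEncoder,
    LSTMDecoder,
    LSTMEncoder,
    ModelConfig,
    ModelEvaluator,
    TimeSeriesDataset,
    TimeSeriesHandler,
    Trainer,
    TrainingConfig,
    TransformerDecoder,
    TransformerEncoder,
    create_dataloaders,
)

print("foreblocks public API imports OK")
```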
| Path | What it contains |
|---|---|
| `foreblocks/core` | `ForecastingModel`, heads, conformal utilities, sampling |
| `foreblocks/training` | `Trainer`, training loop, quantization utilities |
| `foreblocks/evaluation` | `ModelEvaluator`, benchmarking helpers |
| `foreblocks/ts_handler` | `TimeSeriesHandler`, imputation, filtering, outlier handling |
| `foreblocks/tf` | transformer stack, attention variants, MoE, norms, embeddings |
| `foreblocks/darts` | neural architecture search pipeline and evaluation |
| `foreblocks/mltracker` | experiment tracking server, logging, and TUI integration |
| `foreblocks/kan` | Kolmogorov-Arnold Network backbone |
| `foreblocks/mamba` | Mamba SSM backbone with MoE and positional encoding |
| `foreblocks/hybrid_mamba` | Hybrid Mamba SSM blocks for forecasting |
| `foreblocks/blocks` | Reusable building blocks: dropout, NBeats, popular blocks |
| `foreblocks/wavelets` | Wavelet-based preprocessing and attention utilities |
| `foreblocks/benchmark` | External forecasting baselines and spreadsheet readers |
| `foretools` | synthetic time series, BOHB search, feature engineering, decomposition |
| `examples/` | runnable demos and notebooks |
| `web/` | static landing page assets for the published site root |
| `docs/` | VitePress source for the documentation site |
Start here if you are new to the repository: the Getting Started and Overview guides linked above.
Topic guides:
- Preprocessor Guide
- Custom Blocks Guide
- Transformer Guide
- Mixture of Experts Guide
- Hybrid Mamba Guide
- DARTS Guide
- Evaluation & Metrics
- Uncertainty Quantification
- Web UI
- Troubleshooting
Companion tooling:

- `foretools/`

Examples and notebooks:

- `examples/adaptive_mrmr_demo.py`
- `foretools/tsgen/ts_gen_complete_series.ipynb`
- `foretools/tsgen/ts_gen_doc.ipynb`
There is a repository-local docs navigation file at `docs/.vitepress/config.js`.
- The repository is broad and still evolving. Some subsystems are more mature than others.
- The top-level imports listed above are the safest place to start.
- `Trainer` supports MLTracker and conformal prediction; use `auto_track=False` during local smoke tests.
- Decoder-based seq2seq and transformer workflows have stricter dimension contracts than the direct forecasting path.
- `TrainingConfig` now centralizes trainer, NAS, MLTracker, and conformal settings.
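For instance, a local smoke-test configuration that keeps the optional subsystems off. This sketch uses only the field names shown in the quickstart above and assumes `model` is a `ForecastingModel` built the same way:

```python
from foreblocks import Trainer, TrainingConfig

# Keep AMP off and experiment tracking disabled while validating the baseline path.
config = TrainingConfig(num_epochs=5, batch_size=16, patience=3, use_amp=False)
trainer = Trainer(model, config=config, auto_track=False)
```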
Documentation improvements are especially valuable here because foreblocks spans forecasting models, search, preprocessing, and auxiliary tooling. If you add or change a public API, update:
- this `README.md`
- the relevant guide under `docs/`
- at least one runnable example or notebook