
OKDP/okdp-examples


A collection of hands-on examples, helper utilities, Jupyter notebooks, Airflow DAGs, and data workflows showcasing how to work with the OKDP Platform. This repository is meant to help you explore OKDP capabilities around compute, object storage, data catalog, SQL engines, Spark, workflow orchestration, and analytics.

Over time, these examples will be extended with lakehouse-oriented features, such as:

  • Open table formats (e.g. Apache Iceberg and/or Delta Lake).
  • Shared metadata with stronger schema enforcement and evolution.
  • Snapshot-based table management (time travel, retention, cleanup).
  • Incremental processing and analytics-ready datasets.

Notebooks

The notebooks analyze datasets stored as Parquet on S3-compatible storage (MinIO). The same underlying dataset is queried using Trino and Spark.

An index.ipynb notebook is also provided as an entry point.

Trino notebooks

The following notebooks query data using Trino:

  • Querying data using Trino (Python/SQLAlchemy).
  • Querying data using Trino (SQL engine).

These notebooks use Trino external tables defined over Parquet data stored in object storage and registered via a metadata service.
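Outside the notebooks, the querying pattern looks roughly like the sketch below. The user, host, catalog, schema, and table names are hypothetical placeholders, and it assumes the `sqlalchemy` and `trino` packages are installed; substitute the values from your own OKDP deployment:

```python
# Minimal sketch of querying a Trino external table from Python/SQLAlchemy.
# All connection details below are hypothetical placeholders.

def trino_url(user: str, host: str, port: int, catalog: str, schema: str) -> str:
    """Build a SQLAlchemy connection URL for the Trino dialect."""
    return f"trino://{user}@{host}:{port}/{catalog}/{schema}"

def row_count(url: str, table: str) -> int:
    """Run a simple aggregate over an external table (needs a live Trino)."""
    from sqlalchemy import create_engine, text  # lazy import: requires sqlalchemy + trino dialect
    engine = create_engine(url)
    with engine.connect() as conn:
        return conn.execute(text(f"SELECT count(*) FROM {table}")).scalar()
```

For example, `row_count(trino_url("analyst", "trino", 8080, "hive", "default"), "trips")` would return the number of rows in a hypothetical `trips` table.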

PySpark notebook

A PySpark notebook is included to showcase Spark-native exploratory data analysis on the same dataset.
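The Spark side follows the same idea. A hedged sketch, assuming `pyspark` and the S3A connector are available in the kernel (the bucket, prefix, and application name are illustrative, not the notebook's actual values):

```python
# Sketch of Spark-native exploratory analysis over the same Parquet data.
# Bucket and prefix are hypothetical placeholders.

def s3a_path(bucket: str, prefix: str) -> str:
    """Build the s3a:// URI Spark uses for S3-compatible storage such as MinIO."""
    return f"s3a://{bucket}/{prefix.strip('/')}"

def summarize(path: str) -> None:
    """Load the Parquet dataset and print its schema plus basic statistics."""
    from pyspark.sql import SparkSession  # lazy import: requires pyspark + hadoop-aws jars
    spark = SparkSession.builder.appName("okdp-example-eda").getOrCreate()
    df = spark.read.parquet(path)
    df.printSchema()
    df.describe().show()
```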

Superset

Use Apache Superset (SQL Lab) to query Trino and build visualizations/dashboards on top of the same datasets.

Airflow

The airflow/ directory contains example DAGs orchestrated by Apache Airflow on the OKDP platform. They demonstrate how to:

  • Submit Spark jobs to Spark Operator via SparkApplication custom resources from a DAG.
  • Build daily ETL pipelines reading from and writing to S3-compatible storage (SeaweedFS).
  • Use Airflow gitSync to pull DAGs directly from this repository at runtime.

See airflow/README.md for the full list of DAGs and quick-start instructions.
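As an illustration of the first bullet, a SparkApplication custom resource submitted from a DAG could look roughly like the fragment below. Every name, image, and path here is a hypothetical placeholder; consult the actual DAGs under airflow/ for the real manifests:

```yaml
# Hypothetical SparkApplication manifest for the Kubernetes Spark Operator.
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: daily-etl              # hypothetical job name
  namespace: spark-jobs        # hypothetical namespace
spec:
  type: Python
  mode: cluster
  image: example.registry/spark-etl:latest   # hypothetical image
  mainApplicationFile: s3a://jobs/daily_etl.py
  sparkVersion: "3.5.0"
  driver:
    cores: 1
    memory: 1g
    serviceAccount: spark
  executor:
    instances: 2
    cores: 1
    memory: 2g
```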

Running the examples

Using okdp-ui, deploy the following components:

About the datasets

At deployment time, the Helm chart:

  1. Downloads public datasets.
  2. Uploads them into object storage.
  3. Creates the corresponding Trino external tables.
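The three steps above can be sketched in Python. The URLs, bucket names, and DDL shape are illustrative assumptions, not the chart's actual implementation:

```python
# Illustrative sketch of the chart's bootstrap: download a public dataset,
# upload it to object storage, and register a Trino external table.
import urllib.request

def download(url: str, dest: str) -> str:
    """Step 1: fetch a public dataset to a local file."""
    urllib.request.urlretrieve(url, dest)
    return dest

def upload(path: str, bucket: str, key: str, endpoint: str) -> None:
    """Step 2: push the file to S3-compatible object storage."""
    import boto3  # lazy import: requires boto3 and valid credentials
    boto3.client("s3", endpoint_url=endpoint).upload_file(path, bucket, key)

def external_table_ddl(table: str, columns: str, location: str) -> str:
    """Step 3: DDL registering a Trino external table over the Parquet files."""
    return (
        f"CREATE TABLE IF NOT EXISTS {table} ({columns}) "
        f"WITH (external_location = '{location}', format = 'PARQUET')"
    )
```

For example, `external_table_ddl("hive.default.trips", "id bigint, fare double", "s3a://datasets/trips")` produces the hive-connector DDL for a hypothetical `trips` table.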

ℹ️ NOTE

The datasets are not bundled in this repository and are not baked into container images.
