The multi-tenant lakehouse
built on Apache Iceberg.
DRLS is an operational data platform that unifies open table formats, federated catalogs, real-time feature serving, and ML training. One stack — Iceberg, Trino, Spark, Ray, Feast, MLflow — running across workspaces with first-class isolation, lineage, and governance.
Open lakehouse, federated
Apache Iceberg as the table format, with catalog adapters for Apache Polaris, AWS Glue, and Unity Catalog — so workloads see one logical catalog across vendor silos.
Real-time feature serving
Feast-compatible online and offline stores with multiple backends (Dragonfly and Cassandra), sub-200 ms Kafka-to-online-store streaming latency, and on-demand transforms.
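An in-process sketch of the serving path: a key-value online store fed by the streaming side, with an on-demand transform computed at request time rather than materialized. Dragonfly/Cassandra stand behind the real store; the class and feature names below are illustrative only.

```python
# Illustrative online store: entity key -> feature row, with a derived
# feature computed on demand at read time (never written to the store).
import time


class OnlineStore:
    """Dict-backed stand-in for a Dragonfly/Cassandra online store."""

    def __init__(self):
        self._rows: dict[str, dict] = {}

    def write(self, entity_key: str, features: dict) -> None:
        # In production this write arrives via the Kafka streaming path.
        self._rows[entity_key] = {**features, "_ts": time.time()}

    def read(self, entity_key: str) -> dict:
        return dict(self._rows[entity_key])


def with_on_demand(features: dict) -> dict:
    """On-demand transform: derived at request time, not stored."""
    features["amount_per_item"] = features["total_amount"] / features["item_count"]
    return features


store = OnlineStore()
store.write("user:42", {"total_amount": 120.0, "item_count": 4})
row = with_on_demand(store.read("user:42"))  # amount_per_item = 30.0
```

Keeping the transform out of the store means the derived value is always consistent with its inputs, at the cost of a small amount of request-time compute.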
Training to inference, end-to-end
Ray and Spark on a shared cluster fabric: Megatron pre-training, distributed fine-tuning via Kubeflow Trainer, experiment tracking in MLflow, and serving through vLLM and Ray Serve.
Governance built in
Column-level lineage via Apache AGE, RFC-driven data contracts, business glossary, and graph-native authorization tied to the lineage graph itself.
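A sketch of what "authorization tied to the lineage graph" can mean in practice: a read on a column is allowed only if no upstream ancestor in the column-level lineage graph is restricted. The real graph lives in Apache AGE; here a plain dict and BFS stand in, and all column names and the `can_read` helper are hypothetical.

```python
# Illustrative graph-native authorization check over column-level lineage.
from collections import deque

# Downstream column -> its upstream source columns (hypothetical names).
LINEAGE = {
    "mart.revenue.amount": ["staging.orders.amount"],
    "staging.orders.amount": ["raw.orders.amount"],
    "mart.users.email_hash": ["raw.users.email"],
}

# Columns flagged as restricted at the source, e.g. PII.
RESTRICTED = {"raw.users.email"}


def can_read(column: str) -> bool:
    """BFS upstream through the lineage graph; deny if any ancestor is restricted."""
    queue, seen = deque([column]), {column}
    while queue:
        col = queue.popleft()
        if col in RESTRICTED:
            return False
        for parent in LINEAGE.get(col, []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return True
```

Because the policy walks the same graph that lineage maintains, a restriction applied at a raw source column automatically propagates to every derived column downstream of it.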