Deploy
Deploy Docker images and other packages
Zero downtime failed:
- Unknown error in basic:
Docker compose failed: docker compose -f/dev/fd/3 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-462c045e/materialize/nightly/test/0dt exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@mz_new:6875 --materialize-internal-url=postgres://materialize@mz_new:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=cluster=cluster --default-timeout=300s --seed=1 --persist-blob-url=file:///mzdata/persist/blob --persist-consensus-url=postgres://root@mz_new:26257?options=--search_path=consensus --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-462c045e/materialize/nightly/test/0dt/mzcompose.py:539
^^^ +++
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-462c045e/materialize/nightly/test/0dt/mzcompose.py:539
- Unknown error in ddl:
Docker compose failed: docker compose -f/dev/fd/3 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-462c045e/materialize/nightly/test/0dt exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@mz_new:6875 --materialize-internal-url=postgres://materialize@mz_new:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=cluster=cluster --default-timeout=300s --seed=1 --persist-blob-url=file:///mzdata/persist/blob --persist-consensus-url=postgres://root@mz_new:26257?options=--search_path=consensus --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-462c045e/materialize/nightly/test/0dt/mzcompose.py:1782
^^^ +++
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-462c045e/materialize/nightly/test/0dt/mzcompose.py:1782
- Unknown error in read-only:
Docker compose failed: docker compose -f/dev/fd/4 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-462c045e/materialize/nightly/test/0dt exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@mz_old:6875 --materialize-internal-url=postgres://materialize@mz_old:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=cluster=cluster --default-timeout=300s --seed=1 --persist-blob-url=file:///mzdata/persist/blob --persist-consensus-url=postgres://root@mz_old:26257?options=--search_path=consensus --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-462c045e/materialize/nightly/test/0dt/mzcompose.py:238
^^^ +++
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-462c045e/materialize/nightly/test/0dt/mzcompose.py:238
Test details & reproducer
Explicit deterministic tests for read-only mode and zero downtime deploys (same version, no upgrade).
Reproducer: bin/mzcompose --find 0dt run default
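For orientation, a loose Python sketch of the deploy dance this test exercises (the /api/leader endpoint paths and the status value are assumptions from memory, not verified against the current environmentd API): the new generation boots in read-only mode, is polled until it has caught up, and is then promoted to read-write.

import json
import time
import urllib.request

def wait_and_promote(base_url: str) -> None:
    # Poll the new generation until it reports it has caught up and can
    # take over leadership (status string is an assumption).
    while True:
        with urllib.request.urlopen(f"{base_url}/api/leader/status") as resp:
            if json.load(resp).get("status") == "ReadyToPromote":
                break
        time.sleep(1)
    # Flip the new generation to read-write; clients stay connected through
    # the load balancer, so they observe no downtime.
    urllib.request.urlopen(f"{base_url}/api/leader/promote", data=b"")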
Scalability benchmark (read & write) against merge base or 'latest' failed:
- Unknown error in Workload 'SelectLimitWorkload':
New regression against v0.138.2 (a20722bef)
Regression in workload 'SelectLimitWorkload' at concurrency 2 with MaterializeContainer (None specified as HEAD): 642.73 tps vs. 1040.29 tps (-397.56 tps; -38.22%)
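The reported numbers are internally consistent; a quick Python check of the delta and percentage in the annotation above:

head_tps = 642.73    # HEAD (None specified) at concurrency 2
base_tps = 1040.29   # merge base v0.138.2 (a20722bef)
delta = head_tps - base_tps
print(f"{delta:+.2f} tps; {delta / base_tps:+.2%}")  # -397.56 tps; -38.22%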
Test details & reproducer
Benchmark for how various queries scale; compares against old Materialize versions.
Reproducer: bin/mzcompose --find scalability run default --target HEAD --target common-ancestor --regression-against common-ancestor --workload-group-marker DmlDqlWorkload --max-concurrency 256
Testdrive (before Kafka source versioning) with blob store failed:
- Unknown error in kafka-progress.td:
kafka-progress.td:39:1: non-matching rows: expected:
[[" query timestamp: <> <>\n oracle read timestamp: <> <>\nlargest not in advance of upper: <> <>\n upper:[<> <>]\n since:[<> <>]\n can respond immediately: <>\n timeline: Some(EpochMilliseconds)\n session wall time: <> <>\n\nsource materialize.public.data_progress (<>, storage):\n read frontier:[<> <>]\n write frontier:[<> <>]\n"]]
got:
[[" query timestamp: <> <>\n oracle read timestamp: <> <>\nlargest not in advance of upper: <> <>\n upper:[<> <>]\n since:[<> <>]\n can respond immediately: <>\n timeline: Some(EpochMilliseconds)\n session wall time: <> <>\n\nsource materialize.public.data_progress (<>, storage):\n read frontier:[<> <>]\n write frontier:[<> <>]\n\nbinding constraints:\nlower:\n (IsolationLevel(StrictSerializable)): [<> <>]\n"]]
got raw rows:
[[" query timestamp: 1743120037591 (2025-03-28 00:00:37.591)\n oracle read timestamp: 1743120037591 (2025-03-28 00:00:37.591)\nlargest not in advance of upper: 1743120038000 (2025-03-28 00:00:38.000)\n upper:[1743120038001 (2025-03-28 00:00:38.001)]\n since:[1743120037000 (2025-03-28 00:00:37.000)]\n can respond immediately: true\n timeline: Some(EpochMilliseconds)\n session wall time: 1743120038062 (2025-03-28 00:00:38.062)\n\nsource materialize.public.data_progress (u654, storage):\n read frontier:[1743120037000 (2025-03-28 00:00:37.000)]\n write frontier:[1743120038001 (2025-03-28 00:00:38.001)]\n\nbinding constraints:\nlower:\n (IsolationLevel(StrictSerializable)): [1743120037591 (2025-03-28 00:00:37.591)]\n"]]
Poor diff:
- " query timestamp: <> <>\n oracle read timestamp: <> <>\nlargest not in advance of upper: <> <>\n upper:[<> <>]\n since:[<> <>]\n can respond immediately: <>\n timeline: Some(EpochMilliseconds)\n session wall time: <> <>\n\nsource materialize.public.data_progress (<>, storage):\n read frontier:[<> <>]\n write frontier:[<> <>]\n"
+ " query timestamp: <> <>\n oracle read timestamp: <> <>\nlargest not in advance of upper: <> <>\n upper:[<> <>]\n since:[<> <>]\n can respond immediately: <>\n timeline: Some(EpochMilliseconds)\n session wall time: <> <>\n\nsource materialize.public.data_progress (<>, storage):\n read frontier:[<> <>]\n write frontier:[<> <>]\n\nbinding constraints:\nlower:\n (IsolationLevel(StrictSerializable)): [<> <>]\n"
|
38 | $ set-regex match=(\s{12}0|\d{13,20}|u\d{1,5}|\(\d+-\d\d-\d\d\s\d\d:\d\d:\d\d\.\d\d\d\)|true|false) replacement=<>
39 | > EXPLAIN TIMESTAMP FOR SELECT * FROM data_progress
| ^
- Unknown error in load-generator.td:
load-generator.td:188:1: non-matching rows: expected:
[[" query timestamp: <> <>\n oracle read timestamp: <> <>\nlargest not in advance of upper: <> <>\n upper:[<> <>]\n since:[<> <>]\n can respond immediately: <>\n timeline: Some(EpochMilliseconds)\n session wall time: <> <>\n\nsource materialize.another.auction_house_progress (<>, storage):\n read frontier:[<> <>]\n write frontier:[<> <>]\n"]]
got:
[[" query timestamp: <> <>\n oracle read timestamp: <> <>\nlargest not in advance of upper: <> <>\n upper:[<> <>]\n since:[<> <>]\n can respond immediately: <>\n timeline: Some(EpochMilliseconds)\n session wall time: <> <>\n\nsource materialize.another.auction_house_progress (<>, storage):\n read frontier:[<> <>]\n write frontier:[<> <>]\n\nbinding constraints:\nlower:\n (IsolationLevel(StrictSerializable)): [<> <>]\n"]]
got raw rows:
[[" query timestamp: 1743120667125 (2025-03-28 00:11:07.125)\n oracle read timestamp: 1743120667125 (2025-03-28 00:11:07.125)\nlargest not in advance of upper: 1743120668000 (2025-03-28 00:11:08.000)\n upper:[1743120668001 (2025-03-28 00:11:08.001)]\n since:[1743120667000 (2025-03-28 00:11:07.000)]\n can respond immediately: true\n timeline: Some(EpochMilliseconds)\n session wall time: 1743120668085 (2025-03-28 00:11:08.085)\n\nsource materialize.another.auction_house_progress (u822, storage):\n read frontier:[1743120667000 (2025-03-28 00:11:07.000)]\n write frontier:[1743120668001 (2025-03-28 00:11:08.001)]\n\nbinding constraints:\nlower:\n (IsolationLevel(StrictSerializable)): [1743120667125 (2025-03-28 00:11:07.125)]\n"]]
Poor diff:
- " query timestamp: <> <>\n oracle read timestamp: <> <>\nlargest not in advance of upper: <> <>\n upper:[<> <>]\n since:[<> <>]\n can respond immediately: <>\n timeline: Some(EpochMilliseconds)\n session wall time: <> <>\n\nsource materialize.another.auction_house_progress (<>, storage):\n read frontier:[<> <>]\n write frontier:[<> <>]\n"
+ " query timestamp: <> <>\n oracle read timestamp: <> <>\nlargest not in advance of upper: <> <>\n upper:[<> <>]\n since:[<> <>]\n can respond immediately: <>\n timeline: Some(EpochMilliseconds)\n session wall time: <> <>\n\nsource materialize.another.auction_house_progress (<>, storage):\n read frontier:[<> <>]\n write frontier:[<> <>]\n\nbinding constraints:\nlower:\n (IsolationLevel(StrictSerializable)): [<> <>]\n"
|
13 | $ postgres-execute c ... [rest of line truncated for security]
187 | $ set-regex match=(\s{12}0|\d{13,20}|u\d{1,5}|\(\d+-\d\d-\d\d\s\d\d:\d\d:\d\d\.\d\d\d\)|true|false) replacement=<>
188 | > EXPLAIN TIMESTAMP FOR SELECT * FROM another.auction_house_progress
| ^
- Unknown error in workflow-default:
Docker compose failed: docker compose -f/dev/fd/4 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-8cpu-16gb-751eec39/materialize/nightly/test/testdrive-old-kafka-src-syntax run -eCLUSTER_REPLICA_SIZES testdrive --junit-report=junit_testdrive-old-kafka-src-syntax_0195d9f3-28e9-4b64-a71c-d0ef20bb85b1.xml --var=default-replica-size=4-4 --var=default-storage-size=4-1 *.td
Container testdrive-old-kafka-src-syntax-fivetran-destination-1 Running
importer.proto:2:1: warning: Import empty.proto is unused.
^^^ +++
+++ !!! Error Report
2 errors were encountered during execution
files involved: kafka-progress.td load-generator.td
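Both diffs above show the same single change: the EXPLAIN TIMESTAMP output now ends with a "binding constraints" section that the expected blocks in kafka-progress.td and load-generator.td predate. The set-regex placeholder substitution itself can be reproduced outside testdrive; a minimal Python sketch, assuming Python's re dialect is close enough to testdrive's for this pattern:

import re

# Pattern from kafka-progress.td:38 / load-generator.td:187; testdrive
# replaces every match with the literal placeholder "<>".
pattern = re.compile(
    r"\s{12}0|\d{13,20}|u\d{1,5}|\(\d+-\d\d-\d\d\s\d\d:\d\d:\d\d\.\d\d\d\)|true|false"
)

raw = "query timestamp: 1743120037591 (2025-03-28 00:00:37.591)"
print(pattern.sub("<>", raw))  # -> "query timestamp: <> <>"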
Test details & reproducer
Testdrive is the basic framework and language for defining product tests under the expected-result/actual-result (aka golden testing) paradigm. A query is retried until it produces the desired result.
Reproducer: bin/mzcompose --find testdrive-old-kafka-src-syntax run default --azurite
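The "retried until it produces the desired result" loop is the paradigm in miniature; a hypothetical Python sketch (function and parameter names are illustrative, not testdrive's internals), with the default timeout matching --default-timeout=300s in the commands above:

import time

def retry_until_match(run_query, expected, timeout_s=300.0, interval_s=1.0):
    """Re-run the query until its rows equal the expected (golden) rows,
    failing with a non-matching-rows error once the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while True:
        actual = run_query()
        if actual == expected:
            return actual
        if time.monotonic() >= deadline:
            raise AssertionError(
                f"non-matching rows: expected: {expected!r} got: {actual!r}"
            )
        time.sleep(interval_s)

# e.g. (run_sql is a hypothetical helper):
# retry_until_match(lambda: run_sql("SELECT count(*) FROM t"), [["2"]])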
Parallel Workload (0dt deploy) succeeded with known error logs:
- Known issue parallel-workload: 0dt: thread 'coordinator' panicked at src/storage-controller/src/lib.rs:703:17: dependency since has advanced past dependent (u417) upper (#8425) in services.log:
parallel-workload-materialized2-1 | 2025-03-28T00:05:19.338471Z thread 'coordinator' panicked at src/storage-controller/src/lib.rs:973:17: dependency since has advanced past dependent (u496) upper
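The panic is an invariant check on storage frontiers: a dependency's read frontier (since) must not pass the dependent collection's write frontier (upper), or the dependent can no longer read a consistent snapshot of its input. Simplified to integer timestamps (the real code uses antichains), the check looks roughly like:

def check_dependency_since(dep_since: int, dependent_upper: int, dependent_id: str) -> None:
    # The dependency can only be read at times >= dep_since, but the
    # dependent still needs to read at times < dependent_upper; if since
    # has overtaken upper, those times are gone and the coordinator panics.
    if dep_since > dependent_upper:
        raise RuntimeError(
            f"dependency since has advanced past dependent ({dependent_id}) upper"
        )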
Test details & reproducer
Runs a randomized parallel workload stressing all parts of Materialize; mostly finds panics and unexpected errors. See Zippy for a sequential randomized test which can verify correctness.
Reproducer: bin/mzcompose --find parallel-workload run default --runtime=1500 --scenario=0dt-deploy --threads=16
Postgres CDC tests (before source versioning) failed:
- Unknown error in cdc:
Docker compose failed: docker compose -f/dev/fd/4 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-4cpu-8gb-e518f123/materialize/nightly/test/pg-cdc-old-syntax run -eCLUSTER_REPLICA_SIZES testdrive --var=ssl-ca=<CERTIFICATE> --var=ssl-cert=<CERTIFICATE> --var=ssl-key=<PRIVATE KEY> --var=ssl-wrong-cert=<CERTIFICATE> --var=ssl-wrong-key=<PRIVATE KEY> --var=default-replica-size=4-4 --var=default-storage-size=4-1 pg-cdc.td
^^^ +++
+++ !!! Error Report
1 errors were encountered during execution
files involved: pg-cdc.td
Test details & reproducer
Native Postgres source tests, functional.
Reproducer: bin/mzcompose --find pg-cdc-old-syntax run default
Checks 0dt upgrade across four versions 1 failed:
- Unknown error in workflow-default:
Docker compose failed: docker compose -f/dev/fd/3 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-a9089eb4/materialize/nightly/test/platform-checks exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=statement_timeout='300s' --default-timeout=300s --seed=1 --persist-blob-url=s3://minioadmin:minioadmin@persist/persist?endpoint=http://minio:9000/&region=minio --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --var=replicas=1 --var=default-replica-size=4-4 --var=default-storage-size=4-1 --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-a9089eb4/materialize/nightly/misc/python/materialize/checks/all_checks/materialized_views.py:255 --materialize-url=postgres://materialize@mz_3:6875 --materialize-internal-url=postgres://mz_system@mz_3:6877 --persist-consensus-url=postgres://root@mz_3:26257?options=--search_path=consensus
^^^ +++
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-a9089eb4/materialize/nightly/misc/python/materialize/checks/all_checks/materialized_views.py:255
Test details & reproducer
Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery and failure contexts.
Reproducer: BUILDKITE_PARALLEL_JOB=0 BUILDKITE_PARALLEL_JOB_COUNT=2 bin/mzcompose --find platform-checks run default --scenario=ZeroDowntimeUpgradeEntireMzFourVersions --seed=0195d9f1-061a-4523-89b6-2577fa68f6dc
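The shape such a check takes, as a hypothetical Python sketch (class and method names are assumptions, not the exact framework API): each phase yields a .td fragment, and the scenario decides at which points of the upgrade/restart timeline each phase runs.

from textwrap import dedent

class MaterializedViewsCheck:
    # initialize() runs once, up front; manipulate() fragments are spread
    # across the scenario's events (e.g. one before and one after an
    # upgrade); validate() runs at the end and must still hold.
    def initialize(self) -> str:
        return dedent(
            """
            > CREATE TABLE mv_data (a int)
            > CREATE MATERIALIZED VIEW mv AS SELECT count(*) FROM mv_data
            """
        )

    def manipulate(self) -> list[str]:
        return [
            "> INSERT INTO mv_data VALUES (1)",
            "> INSERT INTO mv_data VALUES (2)",
        ]

    def validate(self) -> str:
        return dedent(
            """
            > SELECT * FROM mv
            2
            """
        )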
Checks preflight-check and roll back upgrade 1 failed:
- Unknown error in workflow-default:
Docker compose failed: docker compose -f/dev/fd/4 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-5209e8f8/materialize/nightly/test/platform-checks exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=statement_timeout='300s' --default-timeout=300s --seed=1 --persist-blob-url=s3://minioadmin:minioadmin@persist/persist?endpoint=http://minio:9000/&region=minio --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --var=replicas=1 --var=default-replica-size=4-4 --var=default-storage-size=4-1 --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-5209e8f8/materialize/nightly/misc/python/materialize/checks/all_checks/materialized_views.py:255
^^^ +++
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-5209e8f8/materialize/nightly/misc/python/materialize/checks/all_checks/materialized_views.py:255
Test details & reproducer
Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery and failure contexts.
Reproducer: BUILDKITE_PARALLEL_JOB=0 BUILDKITE_PARALLEL_JOB_COUNT=2 bin/mzcompose --find platform-checks run default --scenario=PreflightCheckRollback --seed=0195d9f1-061a-4523-89b6-2577fa68f6dc
Checks 0dt upgrade, whole-Mz restart 2 succeeded with known error logs:
platform-checks-mz_3-1 | 2025-03-27T23:54:03.974126Z thread 'coordinator' panicked at src/compute-client/src/as_of_selection.rs:392:25: failed to apply hard as-of constraint (id=u424, bounds=[[] .. []], constraint=Constraint { type_: Hard, bound_type: Upper, frontier: Antichain { elements: [1743119641088] }, reason: "storage export u424 write frontier" })
Test details & reproducer
Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery and failure contexts.
Reproducer: BUILDKITE_PARALLEL_JOB=1 BUILDKITE_PARALLEL_JOB_COUNT=2 bin/mzcompose --find platform-checks run default --scenario=ZeroDowntimeUpgradeEntireMz --seed=0195d9f1-061a-4523-89b6-2577fa68f6dc
bin/ci-builder run nightly ci/deploy/devsite.sh
Waited 20s
Ran in 15m 21s
bin/ci-builder run stable bin/pyactivate -m ci.deploy.docker
Waited 12s
Ran in 57s
bin/ci-builder run stable bin/pyactivate -m ci.deploy.pypi
Waited 57s
Ran in 3m 0s
bin/ci-builder run stable bin/pyactivate -m ci.deploy.npm
Waited 1m 24s
Ran in 1m 2s
Total Job Run Time: 21m 45s