Nightly
Public
Tests that are too slow or non-deterministic for the regular Test pipeline
Scheduled build
Failed in 8h 5m
Feature benchmark against merge base or 'latest' 2 failed, main history:
- Unknown error in Scenario 'ManyKafkaSourcesOnSameCluster':
New regression against v0.141.2
NAME                          | TYPE            |     THIS |    OTHER | UNIT | THRESHOLD | Regression? | 'THIS' is
-----------------------------------------------------------------------------------------------------------------------------
ManyKafkaSourcesOnSameCluster | wallclock       |   29.864 |   26.436 | s    | 10%       | !!YES!!     | worse: 13.0% slower
ManyKafkaSourcesOnSameCluster | memory_mz       | 2548.218 | 2549.171 | MB   | 20%       | no          | better: 0.0% less
ManyKafkaSourcesOnSameCluster | memory_clusterd |   70.591 |   37.899 | MB   | 50%       | !!YES!!     | worse: 86.3% more
Test details & reproducer
Simple benchmark of mostly individual queries using testdrive. Can find wallclock/memory regressions in single-connection query executions; not suitable for concurrency testing.
BUILDKITE_PARALLEL_JOB=1 BUILDKITE_PARALLEL_JOB_COUNT=8 bin/mzcompose --find feature-benchmark run default --other-tag common-ancestor
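The Regression? column in the table above follows from a per-metric threshold check. A minimal sketch of that check, assuming it simply compares the relative change of THIS against OTHER with the listed THRESHOLD (the real feature-benchmark code may differ):

# Assumed logic, for illustration only: flag a regression when THIS is worse
# than OTHER (merge base or 'latest') by more than the per-metric threshold.
# For wallclock and memory metrics, larger values are worse.
def is_regression(this: float, other: float, threshold: float) -> bool:
    relative_change = (this - other) / other
    return relative_change > threshold

assert is_regression(29.864, 26.436, 0.10)          # wallclock: +13.0%, over the 10% threshold
assert not is_regression(2548.218, 2549.171, 0.20)  # memory_mz: marginally lower, no regression
assert is_regression(70.591, 37.899, 0.50)          # memory_clusterd: +86.3%, over the 50% threshold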
Scalability benchmark (read & write) against merge base or 'latest' failed, main history:
- Unknown error in Workload 'SelectStarWorkload':
New regression against v0.141.2 (e783ed422)
Regression in workload 'SelectStarWorkload' at concurrency 2 with MaterializeContainer (None specified as HEAD): 606.77 tps vs. 1059.59 tps (-452.82 tps; -42.74%)
Test details & reproducer
Benchmark for how various queries scale; compares against old Materialize versions.
bin/mzcompose --find scalability run default --target HEAD --target common-ancestor --regression-against common-ancestor --workload-group-marker DmlDqlWorkload --max-concurrency 256
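A quick check of the reported numbers for 'SelectStarWorkload' at concurrency 2 (a sketch of the arithmetic only, not the scalability framework's code):

head_tps, ancestor_tps = 606.77, 1059.59  # HEAD vs. common ancestor (v0.141.2)
delta = head_tps - ancestor_tps
print(f"{delta:.2f} tps; {delta / ancestor_tps * 100:.2f}%")  # -452.82 tps; -42.74%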
Parallel Workload (0dt deploy) succeeded with known error logs, main history:
- Known issue parallel-workload: 0dt: thread 'coordinator' panicked at src/storage-controller/src/lib.rs:703:17: dependency since has advanced past dependent (u417) upper (#8425) in services.log:
parallel-workload-materialized2-1 | 2025-04-12T23:49:11.578231Z thread 'coordinator' panicked at src/storage-controller/src/lib.rs:974:17: dependency since has advanced past dependent (u143) upper
Test details & reproducer
Runs a randomized parallel workload stressing all parts of Materialize; it mostly finds panics and unexpected errors. See Zippy for a sequential randomized test that can verify correctness.
bin/mzcompose --find parallel-workload run default --runtime=1500 --scenario=0dt-deploy --threads=16
Checks 0dt upgrade across two versions 1 succeeded with known error logs, main history:
platform-checks-mz_4-1 | 2025-04-12T23:55:43.312508Z thread 'coordinator' panicked at src/compute-client/src/as_of_selection.rs:392:25: failed to apply hard as-of constraint (id=u510, bounds=[[] .. []], constraint=Constraint { type_: Hard, bound_type: Upper, frontier: Antichain { elements: [1744502139592] }, reason: "storage export u510 write frontier" })
Test details & reproducer
Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery and failure contexts.
BUILDKITE_PARALLEL_JOB=0 BUILDKITE_PARALLEL_JOB_COUNT=2 bin/mzcompose --find platform-checks run default --scenario=ZeroDowntimeUpgradeEntireMzTwoVersions --seed=01962c56-bd09-412b-bbe3-a4e119913759
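For context, these checks follow a write-then-verify pattern. The sketch below is hypothetical (class and method names are assumed, not the actual platform-checks API): a check contributes testdrive (.td) fragments that the framework replays around upgrade, restart, recovery, and failure scenarios.

class ExampleCheck:
    # Hypothetical shape of a check; names are illustrative, not the real API.
    def initialize(self) -> str:
        # Runs once, typically against the older version: create the objects under test.
        return """
            > CREATE TABLE example_check_table (f1 INT)
            > INSERT INTO example_check_table VALUES (1)
        """

    def manipulate(self) -> list[str]:
        # Runs in phases that the scenario may interleave with upgrades or restarts.
        return ["> INSERT INTO example_check_table VALUES (2)"]

    def validate(self) -> str:
        # Runs after all disruptions: verify the feature still works and data is intact.
        return """
            > SELECT count(*) FROM example_check_table
            2
        """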
Checks 0dt restart of the entire Mz with forced migrations 1 succeeded with known error logs, main history:
platform-checks-mz_4-1 | 2025-04-12T23:55:15.870629Z thread 'coordinator' panicked at src/compute-client/src/as_of_selection.rs:392:25: failed to apply hard as-of constraint (id=u510, bounds=[[] .. []], constraint=Constraint { type_: Hard, bound_type: Upper, frontier: Antichain { elements: [1744502113435] }, reason: "storage export u510 write frontier" })
Test details & reproducer
Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery and failure contexts.
BUILDKITE_PARALLEL_JOB=0 BUILDKITE_PARALLEL_JOB_COUNT=2 bin/mzcompose --find platform-checks run default --scenario=ZeroDowntimeRestartEntireMzForcedMigrations --seed=01962c56-bd09-412b-bbe3-a4e119913759
Checks 0dt upgrade, whole-Mz restart 1 succeeded with known error logs, main history:
- Known issue parallel-workload: 0dt: thread 'coordinator' panicked at src/storage-controller/src/lib.rs:703:17: dependency since has advanced past dependent (u417) upper (#8425) in services.log:
platform-checks-mz_3-1 | 2025-04-12T23:54:43.751229Z thread 'coordinator' panicked at src/storage-controller/src/lib.rs:974:17: dependency since has advanced past dependent (u494) upper
Test details & reproducer
Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery and failure contexts.
BUILDKITE_PARALLEL_JOB=0 BUILDKITE_PARALLEL_JOB_COUNT=2 bin/mzcompose --find platform-checks run default --scenario=ZeroDowntimeUpgradeEntireMz --seed=01962c56-bd09-412b-bbe3-a4e119913759
Total Job Run Time: 5d 23h