Pull Request validation tests

Zero downtime failed, main history: :bk-status-failed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

Docker compose failed: docker compose -f/dev/fd/3 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-462c045e/materialize/nightly/test/0dt exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@mz_new:6875 --materialize-internal-url=postgres://materialize@mz_new:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=cluster=cluster --default-timeout=300s --seed=1 --persist-blob-url=file:///mzdata/persist/blob --persist-consensus-url=postgres://root@mz_new:26257?options=--search_path=consensus --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-462c045e/materialize/nightly/test/0dt/mzcompose.py:539
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-462c045e/materialize/nightly/test/0dt/mzcompose.py:539

Docker compose failed: docker compose -f/dev/fd/3 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-462c045e/materialize/nightly/test/0dt exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@mz_new:6875 --materialize-internal-url=postgres://materialize@mz_new:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=cluster=cluster --default-timeout=300s --seed=1 --persist-blob-url=file:///mzdata/persist/blob --persist-consensus-url=postgres://root@mz_new:26257?options=--search_path=consensus --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-462c045e/materialize/nightly/test/0dt/mzcompose.py:1782
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-462c045e/materialize/nightly/test/0dt/mzcompose.py:1782

Docker compose failed: docker compose -f/dev/fd/4 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-462c045e/materialize/nightly/test/0dt exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@mz_old:6875 --materialize-internal-url=postgres://materialize@mz_old:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=cluster=cluster --default-timeout=300s --seed=1 --persist-blob-url=file:///mzdata/persist/blob --persist-consensus-url=postgres://root@mz_old:26257?options=--search_path=consensus --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-462c045e/materialize/nightly/test/0dt/mzcompose.py:238
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-462c045e/materialize/nightly/test/0dt/mzcompose.py:238

Test details & reproducer: Explicit deterministic tests for read-only mode and zero downtime deploys (same version, no upgrade).
bin/mzcompose --find 0dt run default
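
For context, a zero-downtime (0dt) deploy in these tests amounts to bringing up a second Materialize instance in read-only mode against the same persisted state, waiting until it has caught up, and only then promoting it while the old instance is retired. The sketch below is a minimal, hypothetical illustration of that sequence in Python; the helpers (`start_read_only`, `caught_up`, `promote`, `retire`) are placeholders for illustration, not the mzcompose API.

```python
# Hypothetical sketch of a zero-downtime deploy sequence (not the real mzcompose API).
import time


def start_read_only(name: str) -> None:
    # Placeholder: the real test starts a second Materialize container
    # pointed at the same persist blob / consensus state.
    print(f"starting {name} in read-only mode")


def caught_up(name: str) -> bool:
    # Placeholder: the real test polls the new instance until its frontiers
    # have caught up with the old deployment.
    return True


def promote(name: str) -> None:
    print(f"promoting {name} to serve reads and writes")


def retire(name: str) -> None:
    print(f"retiring {name}")


def zero_downtime_deploy(old: str, new: str, timeout_s: float = 300.0) -> None:
    start_read_only(new)
    deadline = time.monotonic() + timeout_s
    while not caught_up(new):
        if time.monotonic() > deadline:
            raise TimeoutError(f"{new} did not catch up within {timeout_s}s")
        time.sleep(1.0)
    promote(new)  # new instance takes over; no write downtime
    retire(old)   # old instance is shut down only after promotion


if __name__ == "__main__":
    zero_downtime_deploy("mz_old", "mz_new")
```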

Scalability benchmark (read & write) against merge base or 'latest' failed, main history: :bk-status-failed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

New regression against v0.138.2 (a20722bef)
Regression in workload 'SelectLimitWorkload' at concurrency 2 with MaterializeContainer (None specified as HEAD): 642.73 tps vs. 1040.29 tps (-397.56 tps; -38.22%)
Test details & reproducer: Benchmark for how various queries scale; compares against old Materialize versions.
bin/mzcompose --find scalability run default --target HEAD --target common-ancestor --regression-against common-ancestor --workload-group-marker DmlDqlWorkload --max-concurrency 256
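
For reference, the reported regression figures are internally consistent: the delta and percentage follow directly from the two throughput numbers. A quick check in plain Python (not part of the benchmark tooling):

```python
# Verify the reported SelectLimitWorkload regression numbers.
head_tps = 642.73   # throughput of this revision (HEAD)
base_tps = 1040.29  # throughput of the merge base (v0.138.2)

delta = head_tps - base_tps
pct = delta / base_tps * 100

print(f"{delta:+.2f} tps ({pct:+.2f}%)")  # -397.56 tps (-38.22%)
```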

Testdrive (before Kafka source versioning) with :azure: blob store failed, main history: :bk-status-failed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

kafka-progress.td:39:1: non-matching rows: expected:
[["                query timestamp: <> <>\n          oracle read timestamp: <> <>\nlargest not in advance of upper: <> <>\n                          upper:[<> <>]\n                          since:[<> <>]\n        can respond immediately: <>\n                       timeline: Some(EpochMilliseconds)\n              session wall time: <> <>\n\nsource materialize.public.data_progress (<>, storage):\n                  read frontier:[<> <>]\n                 write frontier:[<> <>]\n"]]
got:
[["                query timestamp: <> <>\n          oracle read timestamp: <> <>\nlargest not in advance of upper: <> <>\n                          upper:[<> <>]\n                          since:[<> <>]\n        can respond immediately: <>\n                       timeline: Some(EpochMilliseconds)\n              session wall time: <> <>\n\nsource materialize.public.data_progress (<>, storage):\n                  read frontier:[<> <>]\n                 write frontier:[<> <>]\n\nbinding constraints:\nlower:\n  (IsolationLevel(StrictSerializable)): [<> <>]\n"]]
got raw rows:
[["                query timestamp: 1743120037591 (2025-03-28 00:00:37.591)\n          oracle read timestamp: 1743120037591 (2025-03-28 00:00:37.591)\nlargest not in advance of upper: 1743120038000 (2025-03-28 00:00:38.000)\n                          upper:[1743120038001 (2025-03-28 00:00:38.001)]\n                          since:[1743120037000 (2025-03-28 00:00:37.000)]\n        can respond immediately: true\n                       timeline: Some(EpochMilliseconds)\n              session wall time: 1743120038062 (2025-03-28 00:00:38.062)\n\nsource materialize.public.data_progress (u654, storage):\n                  read frontier:[1743120037000 (2025-03-28 00:00:37.000)]\n                 write frontier:[1743120038001 (2025-03-28 00:00:38.001)]\n\nbinding constraints:\nlower:\n  (IsolationLevel(StrictSerializable)): [1743120037591 (2025-03-28 00:00:37.591)]\n"]]
Poor diff:
- "                query timestamp: <> <>\n          oracle read timestamp: <> <>\nlargest not in advance of upper: <> <>\n                          upper:[<> <>]\n                          since:[<> <>]\n        can respond immediately: <>\n                       timeline: Some(EpochMilliseconds)\n              session wall time: <> <>\n\nsource materialize.public.data_progress (<>, storage):\n                  read frontier:[<> <>]\n                 write frontier:[<> <>]\n"
+ "                query timestamp: <> <>\n          oracle read timestamp: <> <>\nlargest not in advance of upper: <> <>\n                          upper:[<> <>]\n                          since:[<> <>]\n        can respond immediately: <>\n                       timeline: Some(EpochMilliseconds)\n              session wall time: <> <>\n\nsource materialize.public.data_progress (<>, storage):\n                  read frontier:[<> <>]\n                 write frontier:[<> <>]\n\nbinding constraints:\nlower:\n  (IsolationLevel(StrictSerializable)): [<> <>]\n"

     |
  38 | $ set-regex match=(\s{12}0|\d{13,20}|u\d{1,5}|\(\d+-\d\d-\d\d\s\d\d:\d\d:\d\d\.\d\d\d\)|true|false) replacement=<>
  39 | > EXPLAIN TIMESTAMP FOR SELECT * FROM data_progress
     | ^

load-generator.td:188:1: non-matching rows: expected:
[["                query timestamp: <> <>\n          oracle read timestamp: <> <>\nlargest not in advance of upper: <> <>\n                          upper:[<> <>]\n                          since:[<> <>]\n        can respond immediately: <>\n                       timeline: Some(EpochMilliseconds)\n              session wall time: <> <>\n\nsource materialize.another.auction_house_progress (<>, storage):\n                  read frontier:[<> <>]\n                 write frontier:[<> <>]\n"]]
got:
[["                query timestamp: <> <>\n          oracle read timestamp: <> <>\nlargest not in advance of upper: <> <>\n                          upper:[<> <>]\n                          since:[<> <>]\n        can respond immediately: <>\n                       timeline: Some(EpochMilliseconds)\n              session wall time: <> <>\n\nsource materialize.another.auction_house_progress (<>, storage):\n                  read frontier:[<> <>]\n                 write frontier:[<> <>]\n\nbinding constraints:\nlower:\n  (IsolationLevel(StrictSerializable)): [<> <>]\n"]]
got raw rows:
[["                query timestamp: 1743120667125 (2025-03-28 00:11:07.125)\n          oracle read timestamp: 1743120667125 (2025-03-28 00:11:07.125)\nlargest not in advance of upper: 1743120668000 (2025-03-28 00:11:08.000)\n                          upper:[1743120668001 (2025-03-28 00:11:08.001)]\n                          since:[1743120667000 (2025-03-28 00:11:07.000)]\n        can respond immediately: true\n                       timeline: Some(EpochMilliseconds)\n              session wall time: 1743120668085 (2025-03-28 00:11:08.085)\n\nsource materialize.another.auction_house_progress (u822, storage):\n                  read frontier:[1743120667000 (2025-03-28 00:11:07.000)]\n                 write frontier:[1743120668001 (2025-03-28 00:11:08.001)]\n\nbinding constraints:\nlower:\n  (IsolationLevel(StrictSerializable)): [1743120667125 (2025-03-28 00:11:07.125)]\n"]]
Poor diff:
- "                query timestamp: <> <>\n          oracle read timestamp: <> <>\nlargest not in advance of upper: <> <>\n                          upper:[<> <>]\n                          since:[<> <>]\n        can respond immediately: <>\n                       timeline: Some(EpochMilliseconds)\n              session wall time: <> <>\n\nsource materialize.another.auction_house_progress (<>, storage):\n                  read frontier:[<> <>]\n                 write frontier:[<> <>]\n"
+ "                query timestamp: <> <>\n          oracle read timestamp: <> <>\nlargest not in advance of upper: <> <>\n                          upper:[<> <>]\n                          since:[<> <>]\n        can respond immediately: <>\n                       timeline: Some(EpochMilliseconds)\n              session wall time: <> <>\n\nsource materialize.another.auction_house_progress (<>, storage):\n                  read frontier:[<> <>]\n                 write frontier:[<> <>]\n\nbinding constraints:\nlower:\n  (IsolationLevel(StrictSerializable)): [<> <>]\n"

     |
  13 | $ postgres-execute c ... [rest of line truncated for security]
 187 | $ set-regex match=(\s{12}0|\d{13,20}|u\d{1,5}|\(\d+-\d\d-\d\d\s\d\d:\d\d:\d\d\.\d\d\d\)|true|false) replacement=<>
 188 | > EXPLAIN TIMESTAMP FOR SELECT * FROM another.auction_house_progress
     | ^

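Both failures have the same shape: the EXPLAIN TIMESTAMP output now ends with a "binding constraints" section that the golden expectations in kafka-progress.td and load-generator.td do not contain, so the comparison fails even after the set-regex normalization. To make that normalization step concrete, here is the regex from the two tests applied with Python's re module (a standalone illustration, not part of testdrive itself):

```python
import re

# The set-regex pattern used by both .td files: it blanks out raw timestamps,
# object ids (u123), wall-clock renderings, and booleans so the golden output
# stays stable across runs.
PATTERN = re.compile(
    r"(\s{12}0|\d{13,20}|u\d{1,5}|\(\d+-\d\d-\d\d\s\d\d:\d\d:\d\d\.\d\d\d\)|true|false)"
)

raw = "write frontier:[1743120038001 (2025-03-28 00:00:38.001)]"
print(PATTERN.sub("<>", raw))
# -> "write frontier:[<> <>]"
```
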
Docker compose failed: docker compose -f/dev/fd/4 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-8cpu-16gb-751eec39/materialize/nightly/test/testdrive-old-kafka-src-syntax run -eCLUSTER_REPLICA_SIZES testdrive --junit-report=junit_testdrive-old-kafka-src-syntax_0195d9f3-28e9-4b64-a71c-d0ef20bb85b1.xml --var=default-replica-size=4-4 --var=default-storage-size=4-1 *.td
Container testdrive-old-kafka-src-syntax-fivetran-destination-1  Running
importer.proto:2:1: warning: Import empty.proto is unused.
+++ !!! Error Report
2 errors were encountered during execution
files involved: kafka-progress.td load-generator.td

Test details & reproducer: Testdrive is the basic framework and language for defining product tests under the expected-result/actual-result (aka golden testing) paradigm. A query is retried until it produces the desired result.
bin/mzcompose --find testdrive-old-kafka-src-syntax run default --azurite
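
The retry-until-match behaviour is the key property of this framework: an expected result is not compared once but polled until it appears or a timeout expires. A minimal sketch of that idea in Python (the `run_query` helper is hypothetical; this is not testdrive's actual implementation):

```python
import time


def run_query(sql: str) -> list[tuple]:
    # Hypothetical stand-in for executing a query against Materialize.
    return [("hello", 1)]


def retry_until(sql: str, expected: list[tuple],
                timeout_s: float = 300.0, interval_s: float = 0.5) -> None:
    """Poll a query until it returns the expected rows (golden testing)."""
    deadline = time.monotonic() + timeout_s
    while True:
        actual = run_query(sql)
        if sorted(actual) == sorted(expected):
            return
        if time.monotonic() > deadline:
            raise AssertionError(f"expected {expected!r}, last saw {actual!r}")
        time.sleep(interval_s)


retry_until("SELECT 'hello', 1", [("hello", 1)])
```
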
Parallel Workload (0dt deploy) succeeded with known error logs, main history: :bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:
parallel-workload-materialized2-1    | 2025-03-28T00:05:19.338471Z  thread 'coordinator' panicked at src/storage-controller/src/lib.rs:973:17: dependency since has advanced past dependent (u496) upper 
Test details & reproducer: Runs a randomized parallel workload stressing all parts of Materialize; it can mostly find panics and unexpected errors. See Zippy for a sequential randomized test which can verify correctness.
bin/mzcompose --find parallel-workload run default --runtime=1500 --scenario=0dt-deploy --threads=16
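
A randomized parallel workload of this kind is conceptually simple: several threads repeatedly pick a random action (create, insert, query, drop, ...) against a shared instance, and any panic or non-allowlisted error surfaces as a test failure. A minimal, hypothetical sketch in Python (the `execute` helper and the action list are illustrative, not the real workload's code):

```python
import random
import threading

ACTIONS = [
    "CREATE TABLE IF NOT EXISTS t (a int)",
    "INSERT INTO t VALUES (1)",
    "SELECT count(*) FROM t",
    "DROP TABLE IF EXISTS t",
]


def execute(sql: str) -> None:
    # Hypothetical stand-in for running a statement against Materialize;
    # errors outside an allowlist would fail the test.
    pass


def worker(stop: threading.Event, rng: random.Random) -> None:
    while not stop.is_set():
        execute(rng.choice(ACTIONS))


def run_workload(threads: int = 16, runtime_s: float = 5.0) -> None:
    stop = threading.Event()
    workers = [
        threading.Thread(target=worker, args=(stop, random.Random(i)))
        for i in range(threads)
    ]
    for w in workers:
        w.start()
    stop.wait(runtime_s)  # let the workload run for the configured time
    stop.set()
    for w in workers:
        w.join()


if __name__ == "__main__":
    run_workload()
```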

Postgres CDC tests (before source versioning) failed, main history: :bk-status-failed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

Docker compose failed: docker compose -f/dev/fd/4 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-4cpu-8gb-e518f123/materialize/nightly/test/pg-cdc-old-syntax run -eCLUSTER_REPLICA_SIZES testdrive --var=ssl-ca=<CERTIFICATE>  --var=ssl-cert=<CERTIFICATE>  --var=ssl-key=<PRIVATE KEY>  --var=ssl-wrong-cert=<CERTIFICATE>  --var=ssl-wrong-key=<PRIVATE KEY>  --var=default-replica-size=4-4 --var=default-storage-size=4-1 pg-cdc.td
+++ !!! Error Report
1 errors were encountered during execution
files involved: pg-cdc.td

Test details & reproducer: Native Postgres source tests, functional.
bin/mzcompose --find pg-cdc-old-syntax run default

Checks 0dt upgrade across four versions 1 failed, main history: :bk-status-failed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

Docker compose failed: docker compose -f/dev/fd/3 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-a9089eb4/materialize/nightly/test/platform-checks exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=statement_timeout='300s' --default-timeout=300s --seed=1 --persist-blob-url=s3://minioadmin:minioadmin@persist/persist?endpoint=http://minio:9000/&region=minio --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --var=replicas=1 --var=default-replica-size=4-4 --var=default-storage-size=4-1 --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-a9089eb4/materialize/nightly/misc/python/materialize/checks/all_checks/materialized_views.py:255 --materialize-url=postgres://materialize@mz_3:6875 --materialize-internal-url=postgres://mz_system@mz_3:6877 --persist-consensus-url=postgres://root@mz_3:26257?options=--search_path=consensus
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-a9089eb4/materialize/nightly/misc/python/materialize/checks/all_checks/materialized_views.py:255

Test details & reproducer: Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery and failure contexts.
BUILDKITE_PARALLEL_JOB=0 BUILDKITE_PARALLEL_JOB_COUNT=2 bin/mzcompose --find platform-checks run default --scenario=ZeroDowntimeUpgradeEntireMzFourVersions --seed=0195d9f1-061a-4523-89b6-2577fa68f6dc
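
The .td fragments for such a check are typically split into an initialize phase, one or more manipulate phases, and a validate phase, so the scenario driver can interleave upgrades or restarts between them. A rough sketch of that shape in Python (hypothetical class and method names, assuming an initialize/manipulate/validate split; not the actual checks framework):

```python
from dataclasses import dataclass


@dataclass
class TdFragment:
    """A testdrive (.td) fragment handed to the scenario driver."""
    contents: str


class MaterializedViewsCheck:
    """Hypothetical check: create state, mutate it, then verify it, with the
    scenario free to upgrade or restart Materialize between the phases."""

    def initialize(self) -> TdFragment:
        return TdFragment(
            "> CREATE TABLE mv_src (a int)\n"
            "> CREATE MATERIALIZED VIEW mv AS SELECT sum(a) FROM mv_src\n"
            "> INSERT INTO mv_src VALUES (1)\n"
        )

    def manipulate(self) -> list[TdFragment]:
        return [
            TdFragment("> INSERT INTO mv_src VALUES (2)\n"),
            TdFragment("> INSERT INTO mv_src VALUES (3)\n"),
        ]

    def validate(self) -> TdFragment:
        return TdFragment("> SELECT * FROM mv\n6\n")


if __name__ == "__main__":
    check = MaterializedViewsCheck()
    # A scenario would run initialize(), upgrade/restart Materialize between
    # the manipulate() fragments, and finally run validate().
    for frag in [check.initialize(), *check.manipulate(), check.validate()]:
        print(frag.contents)
```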

Checks preflight-check and roll back upgrade 1 failed, main history: :bk-status-failed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

Docker compose failed: docker compose -f/dev/fd/4 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-5209e8f8/materialize/nightly/test/platform-checks exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=statement_timeout='300s' --default-timeout=300s --seed=1 --persist-blob-url=s3://minioadmin:minioadmin@persist/persist?endpoint=http://minio:9000/&region=minio --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --var=replicas=1 --var=default-replica-size=4-4 --var=default-storage-size=4-1 --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-5209e8f8/materialize/nightly/misc/python/materialize/checks/all_checks/materialized_views.py:255
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-5209e8f8/materialize/nightly/misc/python/materialize/checks/all_checks/materialized_views.py:255

Test details & reproducer: Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery and failure contexts.
BUILDKITE_PARALLEL_JOB=0 BUILDKITE_PARALLEL_JOB_COUNT=2 bin/mzcompose --find platform-checks run default --scenario=PreflightCheckRollback --seed=0195d9f1-061a-4523-89b6-2577fa68f6dc

Checks 0dt upgrade, whole-Mz restart 2 succeeded with known error logs, main history: :bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:
platform-checks-mz_3-1              | 2025-03-27T23:54:03.974126Z  thread 'coordinator' panicked at src/compute-client/src/as_of_selection.rs:392:25: failed to apply hard as-of constraint (id=u424, bounds=[[] .. []], constraint=Constraint { type_: Hard, bound_type: Upper, frontier: Antichain { elements: [1743119641088] }, reason: "storage export u424 write frontier" })
Test details & reproducer: Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery and failure contexts.
BUILDKITE_PARALLEL_JOB=1 BUILDKITE_PARALLEL_JOB_COUNT=2 bin/mzcompose --find platform-checks run default --scenario=ZeroDowntimeUpgradeEntireMz --seed=0195d9f1-061a-4523-89b6-2577fa68f6dc

:pipeline: expeditor buildkite trigger-pipeline .expeditor/verify.pipeline.yml (Waited 4s, Ran in 9s)
run-lint-and-specs-ruby-3.1: .expeditor/run_linux_tests.sh rake (Waited 10s, Ran in 1m 53s)
:pipeline: generate-test-build-steps: expeditor buildkite trigger-pipeline test/generate_steps.rb (Waited 4s, Ran in 9s)

test-build steps, each running: docker-compose run --rm -e SOFTWARE=openssl -e VERSION=<version> -e CI builder
  openssl 1.0.2zg   Waited 4s      Ran in 5m 6s
  openssl 3.2.4     Waited 2s      Ran in 10m 42s
  openssl 3.3.3     Waited 3s      Ran in 10m 46s
  openssl 3.4.1     Waited 3s      Ran in 11m 7s
  openssl 3.0.15    Waited 4s      Ran in 8m 19s
  openssl 3.0.12    Waited 8s      Ran in 7m 41s
  openssl 3.0.11    Waited 8s      Ran in 7m 25s
  openssl 3.0.9     Waited 1m 59s  Ran in 7m 39s
  openssl 3.0.5     Waited 1m 59s  Ran in 7m 39s
  openssl 3.0.4     Waited 2m 0s   Ran in 7m 24s
  openssl 3.0.3     Waited 2m 0s   Ran in 7m 33s
  openssl 3.0.1     Waited 2m 0s   Ran in 8m 0s
  openssl 1.1.1t    Waited 2m 0s   Ran in 5m 8s
  openssl 1.1.1q    Waited 2m 1s   Ran in 5m 2s
  openssl 1.1.1p    Waited 2m 0s   Ran in 5m 22s
  openssl 1.1.1o    Waited 2m 1s   Ran in 5m 15s
  openssl 1.1.1m    Waited 2m 1s   Ran in 5m 30s
  openssl 1.1.1l    Waited 2m 1s   Ran in 5m 21s
  openssl 1.1.1w    Waited 2m 1s   Ran in 5m 5s
  openssl 1.0.2zb   Waited 2m 2s   Ran in 4m 52s
  openssl 1.0.2za   Waited 2m 2s   Ran in 4m 56s
  openssl 1.0.2ze   Waited 2m 3s   Ran in 4m 49s
  openssl 1.0.2zf   Waited 2m 3s   Ran in 5m 8s
  openssl 1.0.2zi   Waited 2m 12s  Ran in 6m 41s

Total Job Run Time: 2h 44m