https://github.com/PTsolvers/JustRelax.jl

Merge branch 'main' into adm/sinkingballs

Failed in 10h 35m
Parallel Workload (0dt deploy) succeeded with known error logs, main history: :bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:
parallel-workload-materialized2-1    | 2025-04-02T23:48:26.014846Z  thread 'coordinator' panicked at src/storage-controller/src/lib.rs:973:17: dependency since has advanced past dependent (u172) upper 
Test details & reproducer: Runs a randomized parallel workload stressing all parts of Materialize; it can mostly find panics and unexpected errors. See Zippy for a sequential randomized test which can verify correctness.
bin/mzcompose --find parallel-workload run default --runtime=1500 --scenario=0dt-deploy --threads=16 
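The panic enforces a frontier invariant: a dependency's compaction frontier (`since`) must never advance past a dependent's write frontier (`upper`), or the dependent can no longer read its input at a valid time. A minimal sketch of that invariant with integer timestamps (names are hypothetical, not Materialize's storage-controller API):

```python
def check_dependency_frontiers(dep_since: int, dependent_upper: int) -> None:
    """Raise if a dependency's since frontier has passed the dependent's upper."""
    if dep_since > dependent_upper:
        raise AssertionError(
            f"dependency since has advanced past dependent upper "
            f"({dep_since} > {dependent_upper})"
        )

# A healthy state: the compaction frontier is at or behind the write frontier.
check_dependency_frontiers(5, 10)
```

The real check operates on multi-dimensional frontiers (antichains), but the failure mode logged above is the same shape: a `since` that overtook an `upper`.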

SQLsmith explain failed, main history: :bk-status-failed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

sqlsmith-mz_1-1  | 2025-04-02T23:43:45.853702Z  thread 'tokio:work-37' panicked at src/expr/src/scalar/func.rs:2154:31: Datum::unwrap_string called on Float64(10.0)
sqlsmith-mz_1-1  | 2025-04-02T23:52:22.761235Z  thread 'tokio:work-6' panicked at src/expr/src/scalar/func.rs:2154:31: Datum::unwrap_string called on Float64(1e-307)
sqlsmith-mz_1-1  | 2025-04-02T23:52:56.697187Z  thread 'tokio:work-6' panicked at src/expr/src/scalar/func.rs:2154:31: Datum::unwrap_string called on True
sqlsmith-mz_2-1  | 2025-04-02T23:55:38.895692Z  thread 'tokio:work-1' panicked at src/expr/src/scalar/func.rs:2154:31: Datum::unwrap_string called on UInt16(0)
sqlsmith-mz_2-1  | 2025-04-03T00:02:32.869962Z  thread 'tokio:work-2' panicked at src/expr/src/scalar/func.rs:2154:31: Datum::unwrap_string called on False
sqlsmith-mz_1-1  | 2025-04-03T00:04:24.551997Z  thread 'tokio:work-17' panicked at src/expr/src/scalar/func.rs:2154:31: Datum::unwrap_string called on Float32(1.0)
sqlsmith-mz_1-1  | 2025-04-03T00:04:55.346727Z  thread 'tokio:work-7' panicked at src/expr/src/scalar/func.rs:2154:31: Datum::unwrap_string called on False
Test details & reproducer: Use SQLsmith to generate random queries (AST/code-based) and run them against Materialize: https://github.com/MaterializeInc/sqlsmith. The queries can be complex, but we can't verify correctness or performance.
bin/mzcompose --find sqlsmith run default --max-joins=15 --explain-only --runtime=1500 
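The repeated panic is a checked downcast firing: `Datum::unwrap_string` asserts that the datum is a string variant, and SQLsmith found expressions that reach that code path with floats, booleans, and unsigned integers instead. An illustrative Python analogue of the pattern (a stand-in, not Materialize's actual `Datum` type):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Datum:
    """Illustrative stand-in for the Rust Datum enum in src/expr."""
    value: Union[str, float, bool, int]

    def unwrap_string(self) -> str:
        # Mirrors the failing assertion: callers must only reach this path
        # with a string datum; anything else indicates a planning/eval bug.
        if not isinstance(self.value, str):
            raise TypeError(f"Datum::unwrap_string called on {self.value!r}")
        return self.value
```

The panics above are therefore not crashes in `unwrap_string` itself but evidence that some generated scalar expression was typed as string while evaluating to a non-string value.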

Canary Deploy in Staging Cloud failed, main history: :bk-status-failed::bk-status-failed::bk-status-failed::bk-status-failed::bk-status-failed:

builtins.ValueError: Redpanda API call failed: 400 {"code":"INVALID_ARGUMENT","message":"throughput tier tier-1-aws-v2-arm is not available in region us-east-1 for cluster type TYPE_DEDICATED","details":[{"@type":"google.rpc.ErrorInfo","reason":"REASON_THROUGHPUT_TIER_NOT_AVAILABLE_IN_REGION","domain":"redpanda.com/controlplane","metadata":{"cluster_type":"TYPE_DEDICATED","region":"us-east-1","throughput_tier_name":"tier-1-aws-v2-arm"}}]}
Test details & reproducer: Deploy the current version on a real Staging Cloud, and run some basic verifications, like ingesting data from Kafka and Redpanda Cloud using AWS PrivateLink. Runs only on main and release branches.
bin/mzcompose --find cloud-canary run default 
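The 400 response (`REASON_THROUGHPUT_TIER_NOT_AVAILABLE_IN_REGION`) means the control plane rejected the cluster request because the hard-coded tier no longer exists in that region. A hedged sketch of a client-side guard that fails with the same message before issuing the API call; the tier/region table here is made up for illustration, not real Redpanda availability data:

```python
# Hypothetical availability table; in practice this would be fetched from
# the Redpanda control-plane API rather than hard-coded.
AVAILABLE_TIERS = {
    ("us-east-1", "TYPE_DEDICATED"): {"tier-1-aws-v2-x86"},
    ("us-east-2", "TYPE_DEDICATED"): {"tier-1-aws-v2-arm", "tier-1-aws-v2-x86"},
}

def pick_tier(region: str, cluster_type: str, wanted: str) -> str:
    """Validate a throughput tier against the (assumed) availability table."""
    tiers = AVAILABLE_TIERS.get((region, cluster_type), set())
    if wanted not in tiers:
        raise ValueError(
            f"throughput tier {wanted} is not available in region {region} "
            f"for cluster type {cluster_type}"
        )
    return wanted
```

Validating up front (or falling back to another tier in the same region) would turn this recurring canary failure into a configuration fix rather than a late API error.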

💡 SQL logic tests 8 failed, main history: :bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

OutputFailure:test/sqllogictest/cardinality.slt:433
        expected: Values(["Explained Query:\n  Project (#0{x}, #1{y}, #0{x}, #3{y}, #0{x}, #5{y}, #0{x}, #7{y}, #0{x}, #9{y}, #0{x}, #11{y}, #0{x}, #13{y}, #0{x}, #15{y}, #0{x}, #17{y}, #0{x}, #19{y})\n    Join on=(#0{x} = #2{x} = #4{x} = #6{x} = #8{x} = #10{x} = #12{x} = #14{x} = #16{x} = #18{x}) type=delta\n      implementation\n        %0:t » %1:t2[#0]KA|10000| » %2:t3[#0]K|169| » %9:t10[#0]K|260| » %8:t9[#0]K|273| » %5:t6[#0]K|299| » %6:t7[#0]K|299| » %7:t8[#0]K|299| » %4:t5[#0]K|494| » %3:t4[#0]K|611|\n        %1:t2 » %0:t[#0]KA|4| » %2:t3[#0]K|169| » %9:t10[#0]K|260| » %8:t9[#0]K|273| » %5:t6[#0]K|299| » %6:t7[#0]K|299| » %7:t8[#0]K|299| » %4:t5[#0]K|494| » %3:t4[#0]K|611|\n        %2:t3 » %0:t[#0]KA|4| » %1:t2[#0]KA|10000| » %9:t10[#0]K|260| » %8:t9[#0]K|273| » %5:t6[#0]K|299| » %6:t7[#0]K|299| » %7:t8[#0]K|299| » %4:t5[#0]K|494| » %3:t4[#0]K|611|\n        %3:t4 » %0:t[#0]KA|4| » %1:t2[#0]KA|10000| » %2:t3[#0]K|169| » %9:t10[#0]K|260| » %8:t9[#0]K|273| » %5:t6[#0]K|299| » %6:t7[#0]K|299| » %7:t8[#0]K|299| » %4:t5[#0]K|494|\n        %4:t5 » %0:t[#0]KA|4| » %1:t2[#0]KA|10000| » %2:t3[#0]K|169| » %9:t10[#0]K|260| » %8:t9[#0]K|273| » %5:t6[#0]K|299| » %6:t7[#0]K|299| » %7:t8[#0]K|299| » %3:t4[#0]K|611|\n        %5:t6 » %0:t[#0]KA|4| » %1:t2[#0]KA|10000| » %2:t3[#0]K|169| » %9:t10[#0]K|260| » %8:t9[#0]K|273| » %6:t7[#0]K|299| » %7:t8[#0]K|299| » %4:t5[#0]K|494| » %3:t4[#0]K|611|\n        %6:t7 » %0:t[#0]KA|4| » %1:t2[#0]KA|10000| » %2:t3[#0]K|169| » %9:t10[#0]K|260| » %8:t9[#0]K|273| » %5:t6[#0]K|299| » %7:t8[#0]K|299| » %4:t5[#0]K|494| » %3:t4[#0]K|611|\n        %7:t8 » %0:t[#0]KA|4| » %1:t2[#0]KA|10000| » %2:t3[#0]K|169| » %9:t10[#0]K|260| » %8:t9[#0]K|273| » %5:t6[#0]K|299| » %6:t7[#0]K|299| » %4:t5[#0]K|494| » %3:t4[#0]K|611|\n        %8:t9 » %0:t[#0]KA|4| » %1:t2[#0]KA|10000| » %2:t3[#0]K|169| » %9:t10[#0]K|260| » %5:t6[#0]K|299| » %6:t7[#0]K|299| » %7:t8[#0]K|299| » %4:t5[#0]K|494| » %3:t4[#0]K|611|\n        %9:t10 » %0:t[#0]KA|4| » %1:t2[#0]KA|10000| » 
%2:t3[#0]K|169| » %8:t9[#0]K|273| » %5:t6[#0]K|299| » %6:t7[#0]K|299| » %7:t8[#0]K|299| » %4:t5[#0]K|494| » %3:t4[#0]K|611|\n      ArrangeBy keys=[[#0{x}]]\n        ReadIndex on=t t_x=[delta join 1st input (full scan)]\n      ArrangeBy keys=[[#0{x}]]\n        ReadIndex on=t2 tt_x=[delta join lookup]\n      ArrangeBy keys=[[#0{x}]]\n        ReadStorage materialize.public.t3\n      ArrangeBy keys=[[#0{x}]]\n        ReadStorage materialize.public.t4\n      ArrangeBy keys=[[#0{x}]]\n        ReadStorage materialize.public.t5\n      ArrangeBy keys=[[#0{x}]]\n        ReadStorage materialize.public.t6\n      ArrangeBy keys=[[#0{x}]]\n        ReadStorage materialize.public.t7\n      ArrangeBy keys=[[#0{x}]]\n        ReadStorage materialize.public.t8\n      ArrangeBy keys=[[#0{x}]]\n        ReadStorage materialize.public.t9\n      ArrangeBy keys=[[#0{x}]]\n        ReadStorage materialize.public.t10\n\nSource materialize.public.t3\nSource materialize.public.t4\nSource materialize.public.t5\nSource materialize.public.t6\nSource materialize.public.t7\nSource materialize.public.t8\nSource materialize.public.t9\nSource materialize.public.t10\n\nUsed Indexes:\n  - materialize.public.t_x (delta join 1st input (full scan))\n  - materialize.public.tt_x (delta join lookup)\n\nTarget cluster: quickstart\n"])
        actually: Values(["Explained Query:\n  Project (#0{x}, #1{y}, #0{x}, #3{y}, #0{x}, #5{y}, #0{x}, #7{y}, #0{x}, #9{y}, #0{x}, #11{y}, #0{x}, #13{y}, #0{x}, #15{y}, #0{x}, #17{y}, #0{x}, #19{y})\n    Join on=(#0{x} = #2{x} = #4{x} = #6{x} = #8{x} = #10{x} = #12{x} = #14{x} = #16{x} = #18{x}) type=delta\n      implementation\n        %0:t » %1:t2[#0]KA » %2:t3[#0]K » %3:t4[#0]K » %4:t5[#0]K » %5:t6[#0]K » %6:t7[#0]K » %7:t8[#0]K » %8:t9[#0]K » %9:t10[#0]K\n        %1:t2 » %0:t[#0]KA » %2:t3[#0]K » %3:t4[#0]K » %4:t5[#0]K » %5:t6[#0]K » %6:t7[#0]K » %7:t8[#0]K » %8:t9[#0]K » %9:t10[#0]K\n        %2:t3 » %0:t[#0]KA » %1:t2[#0]KA » %3:t4[#0]K » %4:t5[#0]K » %5:t6[#0]K » %6:t7[#0]K » %7:t8[#0]K » %8:t9[#0]K » %9:t10[#0]K\n        %3:t4 » %0:t[#0]KA » %1:t2[#0]KA » %2:t3[#0]K » %4:t5[#0]K » %5:t6[#0]K » %6:t7[#0]K » %7:t8[#0]K » %8:t9[#0]K » %9:t10[#0]K\n        %4:t5 » %0:t[#0]KA » %1:t2[#0]KA » %2:t3[#0]K » %3:t4[#0]K » %5:t6[#0]K » %6:t7[#0]K » %7:t8[#0]K » %8:t9[#0]K » %9:t10[#0]K\n        %5:t6 » %0:t[#0]KA » %1:t2[#0]KA » %2:t3[#0]K » %3:t4[#0]K » %4:t5[#0]K » %6:t7[#0]K » %7:t8[#0]K » %8:t9[#0]K » %9:t10[#0]K\n        %6:t7 » %0:t[#0]KA » %1:t2[#0]KA » %2:t3[#0]K » %3:t4[#0]K » %4:t5[#0]K » %5:t6[#0]K » %7:t8[#0]K » %8:t9[#0]K » %9:t10[#0]K\n        %7:t8 » %0:t[#0]KA » %1:t2[#0]KA » %2:t3[#0]K » %3:t4[#0]K » %4:t5[#0]K » %5:t6[#0]K » %6:t7[#0]K » %8:t9[#0]K » %9:t10[#0]K\n        %8:t9 » %0:t[#0]KA » %1:t2[#0]KA » %2:t3[#0]K » %3:t4[#0]K » %4:t5[#0]K » %5:t6[#0]K » %6:t7[#0]K » %7:t8[#0]K » %9:t10[#0]K\n        %9:t10 » %0:t[#0]KA » %1:t2[#0]KA » %2:t3[#0]K » %3:t4[#0]K » %4:t5[#0]K » %5:t6[#0]K » %6:t7[#0]K » %7:t8[#0]K » %8:t9[#0]K\n      ArrangeBy keys=[[#0{x}]]\n        ReadIndex on=t t_x=[delta join 1st input (full scan)]\n      ArrangeBy keys=[[#0{x}]]\n        ReadIndex on=t2 tt_x=[delta join lookup]\n      ArrangeBy keys=[[#0{x}]]\n        ReadStorage materialize.public.t3\n      ArrangeBy keys=[[#0{x}]]\n        ReadStorage materialize.public.t4\n  
    ArrangeBy keys=[[#0{x}]]\n        ReadStorage materialize.public.t5\n      ArrangeBy keys=[[#0{x}]]\n        ReadStorage materialize.public.t6\n      ArrangeBy keys=[[#0{x}]]\n        ReadStorage materialize.public.t7\n      ArrangeBy keys=[[#0{x}]]\n        ReadStorage materialize.public.t8\n      ArrangeBy keys=[[#0{x}]]\n        ReadStorage materialize.public.t9\n      ArrangeBy keys=[[#0{x}]]\n        ReadStorage materialize.public.t10\n\nSource materialize.public.t3\nSource materialize.public.t4\nSource materialize.public.t5\nSource materialize.public.t6\nSource materialize.public.t7\nSource materialize.public.t8\nSource materialize.public.t9\nSource materialize.public.t10\n\nUsed Indexes:\n  - materialize.public.t_x (delta join 1st input (full scan))\n  - materialize.public.tt_x (delta join lookup)\n\nTarget cluster: quickstart\n"])
        actual raw: [Row { columns: [Column { name: "Optimized Plan", table_oid: None, column_id: None, type: Text }] }]
Test details & reproducer: Run SQL tests using an instance of Mz that is embedded in the sqllogictest binary itself. Good for basic SQL tests, but it can't interact with sources like MySQL/Kafka; see Testdrive for that.
BUILDKITE_PARALLEL_JOB=7 BUILDKITE_PARALLEL_JOB_COUNT=10 bin/mzcompose --find sqllogictest run slow-tests 
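The expected and actual plans above differ only in the per-input cardinality annotations (`|10000|`, `|169|`, …) that appear in the expected delta-join implementation lines but not in the actual output, plus the resulting input ordering. A small sketch of how a harness could normalize such annotations before diffing; the regex is a guess based only on the syntax visible in this output:

```python
import re

def strip_cardinalities(plan: str) -> str:
    """Drop |N| cardinality annotations from an EXPLAIN plan string."""
    return re.sub(r"\|\d+\|", "", plan)
```

This would make the test insensitive to whether cardinality estimates are printed, though not to the join-order change they drive, which is the actual behavioral difference here.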
Matrix
CUDA Julia 1.10
julia -e 'println("--- :julia: Instantiating project"); using Pkg; Pkg.develop(; path=pwd())' || exit 3 && julia -e 'println("+++ :julia: Running tests"); using Pkg; Pkg.test("JustRelax"; test_args=["--backend=CUDA"], coverage=true)'
Waited 10s
Ran in 29m 55s
Matrix
CUDA Julia 1.11
julia -e 'println("--- :julia: Instantiating project"); using Pkg; Pkg.develop(; path=pwd())' || exit 3 && julia -e 'println("+++ :julia: Running tests"); using Pkg; Pkg.test("JustRelax"; test_args=["--backend=CUDA"], coverage=true)'
Waited 6s
Ran in 43m 29s
Matrix
AMDGPU Julia 1.10
julia -e 'println("--- :julia: Instantiating project"); using Pkg; Pkg.develop(; path=pwd())' || exit 3 && julia -e 'println("+++ :julia: Running tests"); using Pkg; Pkg.test("JustRelax"; test_args=["--backend=AMDGPU"], coverage=true)'
Waited 2h 54m
Ran in 13m 14s
Matrix
AMDGPU Julia 1.11
julia -e 'println("--- :julia: Instantiating project"); using Pkg; Pkg.develop(; path=pwd())' || exit 3 && julia -e 'println("+++ :julia: Running tests"); using Pkg; Pkg.test("JustRelax"; test_args=["--backend=AMDGPU"], coverage=true)'
Waited 9h 57m
Ran in 38m 10s
Total Job Run Time: 2h 4m
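The four jobs above are the cross product of two backends and two Julia versions, each invoking `Pkg.test` with a `--backend` test arg. A small sketch of how such a matrix could be enumerated (illustrative only, not the actual Buildkite pipeline configuration):

```python
backends = ["CUDA", "AMDGPU"]
versions = ["1.10", "1.11"]

# Each matrix entry pairs a (backend, Julia version) with the test command
# it runs; the backend is forwarded to the test suite via test_args.
jobs = [
    (backend, version,
     "julia -e 'using Pkg; Pkg.develop(; path=pwd()); "
     f"Pkg.test(\"JustRelax\"; test_args=[\"--backend={backend}\"], coverage=true)'")
    for backend in backends
    for version in versions
]
```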