Test

Run fast unit and integration tests

tests: 2 replicas in more places

Failed in 46m 54s
mkpipeline
Cargo test
Restart test
Yugabyte CDC tests
Short Zippy
Feature benchmark (Kafka only)
Persistence tests
Cluster isolation test
chbench smoke test
Metabase smoke test
dbt-materialize tests
Storage Usage Table Test
Tracing Fast Path
mz-debug tool

Cluster tests 1 failed, main history: 5× passed

builtins.AssertionError: unexpected create_instance count: 1.0
Test details & reproducer: Functional tests which require separate clusterd containers (instead of the usual clusterd included in the materialized container).
BUILDKITE_PARALLEL_JOB=0 BUILDKITE_PARALLEL_JOB_COUNT=4 bin/mzcompose --find cluster run default 

Restart test failed, main history: 5× passed

Docker compose failed: docker compose -f/dev/fd/3 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-8b79e8b1/materialize/test/test/restart exec -T testdrive_no_reset testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --default-timeout=360s --persist-blob-url=file:///mzdata/persist/blob --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-8b79e8b1/materialize/test/test/restart/mzcompose.py:645
!!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-8b79e8b1/materialize/test/test/restart/mzcompose.py:645

Test details & reproducer: Testdrive-based tests involving restarting materialized (including its clusterd processes). See the cluster tests for separate clusterds, and platform-checks for further restart scenarios.
bin/mzcompose --find restart run default 

Testdrive 4 failed, main history: 5× passed

session.td:18:1: non-matching rows: expected:
[["DateStyle", "ISO, MDY", "Sets the display format for date and time values (PostgreSQL)."], ["IntervalStyle", "postgres", "Sets the display format for interval values (PostgreSQL)."], ["TimeZone", "UTC", "Sets the time zone for displaying and interpreting time stamps (PostgreSQL)."], ["allowed_cluster_replica_sizes", "", "The allowed sizes when creating a new cluster replica (Materialize)."], ["application_name", "", "Sets the application name to be reported in statistics and logs (PostgreSQL)."], ["auto_route_catalog_queries", "on", "Whether to force queries that depend only on system tables, to run on the mz_catalog_server cluster (Materialize)."], ["client_encoding", "UTF8", "Sets the client's character set encoding (PostgreSQL)."], ["client_min_messages", "notice", "Sets the message levels that are sent to the client (PostgreSQL)."], ["cluster", "<VARIES>", "Sets the current cluster (Materialize)."], ["cluster_replica", "", "Sets a target cluster replica for SELECT queries (Materialize)."], ["current_object_missing_warnings", "on", "Whether to emit warnings when the current database, schema, or cluster is missing (Materialize)."], ["database", "materialize", "Sets the current database (CockroachDB)."], ["emit_introspection_query_notice", "on", "Whether to print a notice when querying per-replica introspection sources."], ["emit_plan_insights_notice", "off", "Boolean flag indicating whether to send a NOTICE with JSON-formatted plan insights before executing a SELECT statement (Materialize)."], ["emit_timestamp_notice", "off", "Boolean flag indicating whether to send a NOTICE with timestamp explanations of queries (Materialize)."], ["emit_trace_id_notice", "off", "Boolean flag indicating whether to send a NOTICE specifying the trace id when available (Materialize)."], ["enable_consolidate_after_union_negate", "on", "consolidation after Unions that have a Negated input (Materialize)."], ["enable_rbac_checks", "on", "User facing global boolean flag indicating 
whether to apply RBAC checks before executing statements (Materialize)."], ["enable_reduce_reduction", "on", "split complex reductions in to simpler ones and a join (Materialize)."], ["enable_session_rbac_checks", "off", "User facing session boolean flag indicating whether to apply RBAC checks before executing statements (Materialize)."], ["extra_float_digits", "3", "Adjusts the number of digits displayed for floating-point values (PostgreSQL)."], ["failpoints", "<omitted>", "Allows failpoints to be dynamically activated."], ["force_source_table_syntax", "off", "Force use of new source model (CREATE TABLE .. FROM SOURCE) and migrate existing sources"], ["idle_in_transaction_session_timeout", "2 min", "Sets the maximum allowed duration that a session can sit idle in a transaction before being terminated. If this value is specified without units, it is taken as milliseconds. A value of zero disables the timeout (PostgreSQL)."], ["integer_datetimes", "on", "Reports whether the server uses 64-bit-integer dates and times (PostgreSQL)."], ["is_superuser", "off", "Reports whether the current session is a superuser (PostgreSQL)."], ["max_aws_privatelink_connections", "0", "The maximum number of AWS PrivateLink connections in the region, across all schemas (Materialize)."], ["max_clusters", "10", "The maximum number of clusters in the region (Materialize)."], ["max_connections", "5000", "The maximum number of concurrent connections (PostgreSQL)."], ["max_continual_tasks", "100", "The maximum number of continual tasks in the region, across all schemas (Materialize)."], ["max_copy_from_size", "1073741824", "The maximum size in bytes we buffer for COPY FROM statements (Materialize)."], ["max_credit_consumption_rate", "1024", "The maximum rate of credit consumption in a region. 
Credits are consumed based on the size of cluster replicas in use (Materialize)."], ["max_databases", "1000", "The maximum number of databases in the region (Materialize)."], ["max_identifier_length", "255", "The maximum length of object identifiers in bytes (PostgreSQL)."], ["max_kafka_connections", "1000", "The maximum number of Kafka connections in the region, across all schemas (Materialize)."], ["max_materialized_views", "100", "The maximum number of materialized views in the region, across all schemas (Materialize)."], ["max_mysql_connections", "1000", "The maximum number of MySQL connections in the region, across all schemas (Materialize)."], ["max_network_policies", "25", "The maximum number of network policies in the region."], ["max_objects_per_schema", "1000", "The maximum number of objects in a schema (Materialize)."], ["max_postgres_connections", "1000", "The maximum number of PostgreSQL connections in the region, across all schemas (Materialize)."], ["max_query_result_size", "1GB", "The maximum size in bytes for a single query's result (Materialize)."], ["max_replicas_per_cluster", "5", "The maximum number of replicas of a single cluster (Materialize)."], ["max_result_size", "1GB", "The maximum size in bytes for an internal query result (Materialize)."], ["max_roles", "1000", "The maximum number of roles in the region (Materialize)."], ["max_rules_per_network_policy", "25", "The maximum number of rules per network policies."], ["max_schemas_per_database", "1000", "The maximum number of schemas in a database (Materialize)."], ["max_secrets", "100", "The maximum number of secrets in the region, across all schemas (Materialize)."], ["max_sinks", "25", "The maximum number of sinks in the region, across all schemas (Materialize)."], ["max_sources", "200", "The maximum number of sources in the region, across all schemas (Materialize)."], ["max_sql_server_connections", "1000", "The maximum number of SQL Server connections in the region, across all schemas 
(Materialize)."], ["max_tables", "200", "The maximum number of tables in the region, across all schemas (Materialize)."], ["mz_version", "<VARIES>", "Shows the Materialize server version (Materialize)."], ["network_policy", "default", "Sets the fallback network policy applied to all users without an explicit policy."], ["optimizer_e2e_latency_warning_threshold", "500 ms", "Sets the duration that a query can take to compile; queries that take longer will trigger a warning. If this value is specified without units, it is taken as milliseconds. A value of zero disables the timeout (Materialize)."], ["real_time_recency", "off", "Feature flag indicating whether real time recency is enabled (Materialize)."], ["real_time_recency_timeout", "10 s", "Sets the maximum allowed duration of SELECTs that actively use real-time recency, i.e. reach out to an external system to determine their most recencly exposed data (Materialize)."], ["search_path", "public", "Sets the schema search order for names that are not schema-qualified (PostgreSQL)."], ["server_version", "9.5.0", "Shows the PostgreSQL compatible server version (PostgreSQL)."], ["server_version_num", "90500", "Shows the PostgreSQL compatible server version as an integer (PostgreSQL)."], ["sql_safe_updates", "off", "Prohibits SQL statements that may be overly destructive (CockroachDB)."], ["standard_conforming_strings", "on", "Causes '...' strings to treat backslashes literally (PostgreSQL)."], ["statement_logging_default_sample_rate", "0.01", "The default value of `statement_logging_sample_rate` for new sessions (Materialize)."], ["statement_logging_max_sample_rate", "0.01", "The maximum rate at which statements may be logged. 
If this value is less than that of `statement_logging_sample_rate`, the latter is ignored (Materialize)."], ["statement_logging_sample_rate", "0.01", "User-facing session variable indicating how many statement executions should be logged, subject to constraint by the system variable `statement_logging_max_sample_rate` (Materialize)."], ["statement_timeout", "1 min", "Sets the maximum allowed duration of INSERT...SELECT, UPDATE, and DELETE operations. If this value is specified without units, it is taken as milliseconds."], ["superuser_reserved_connections", "3", "The number of connections that are reserved for superusers (PostgreSQL)."], ["transaction_isolation", "strict serializable", "Sets the current transaction's isolation level (PostgreSQL)."], ["unsafe_new_transaction_wall_time", "", "Sets the wall time for all new explicit or implicit transactions to control the value of `now()`. If not set, uses the system's clock."], ["welcome_message", "on", "Whether to send a notice with a welcome message after a successful connection (Materialize)."]]
got:
[["DateStyle", "ISO, MDY", "Sets the display format for date and time values (PostgreSQL)."], ["IntervalStyle", "postgres", "Sets the display format for interval values (PostgreSQL)."], ["TimeZone", "UTC", "Sets the time zone for displaying and interpreting time stamps (PostgreSQL)."], ["allowed_cluster_replica_sizes", "", "The allowed sizes when creating a new cluster replica (Materialize)."], ["application_name", "", "Sets the application name to be reported in statistics and logs (PostgreSQL)."], ["auto_route_catalog_queries", "on", "Whether to force queries that depend only on system tables, to run on the mz_catalog_server cluster (Materialize)."], ["client_encoding", "UTF8", "Sets the client's character set encoding (PostgreSQL)."], ["client_min_messages", "notice", "Sets the message levels that are sent to the client (PostgreSQL)."], ["cluster", "<VARIES>", "Sets the current cluster (Materialize)."], ["cluster_replica", "", "Sets a target cluster replica for SELECT queries (Materialize)."], ["current_object_missing_warnings", "on", "Whether to emit warnings when the current database, schema, or cluster is missing (Materialize)."], ["database", "materialize", "Sets the current database (CockroachDB)."], ["default_cluster_replication_factor", "1", "Default cluster replication factor (Materialize)."], ["emit_introspection_query_notice", "on", "Whether to print a notice wh [...]
Test details & reproducer: Testdrive is the basic framework and language for defining product tests under the expected-result/actual-result (aka golden testing) paradigm. A query is retried until it produces the desired result.
BUILDKITE_PARALLEL_JOB=3 BUILDKITE_PARALLEL_JOB_COUNT=8 bin/mzcompose --find testdrive run default 
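As a concrete illustration of the golden-testing paradigm, a testdrive file pairs each query with its expected rows, and the runner retries the query until the output matches or the timeout elapses. A minimal hypothetical `.td` fragment (the table `t` and its rows are made up for the example):

```
> SELECT name FROM t ORDER BY name
apple
banana
```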

Fast SQL logic tests 1 failed, main history: 5× passed

OutputFailure:test/sqllogictest/web-console.slt:118
        expected: Values(["u2", "NULL", "weewoo1", "u2", "weewoo1", "weewoo2"])
        actually: Values(["u4", "NULL", "weewoo1", "u4", "weewoo1", "weewoo2"])
        actual raw: [Row { columns: [Column { name: "id", table_oid: None, column_id: None, type: Text }, Column { name: "previous_name", table_oid: None, column_id: None, type: Text }, Column { name: "new_name", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "id", table_oid: None, column_id: None, type: Text }, Column { name: "previous_name", table_oid: None, column_id: None, type: Text }, Column { name: "new_name", table_oid: None, column_id: None, type: Text }] }]
Bail:test/sqllogictest/github-5717.slt:43 PlanFailure:test/sqllogictest/github-5717.slt:43:
db error: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_arrangement_batches_raw
    mz_arrangement_records_raw
    mz_arrangement_batcher_records_raw
    mz_arrangement_batcher_size_raw
    mz_arrangement_batcher_capacity_raw
    mz_arrangement_batcher_allocations_raw
    mz_arrangement_heap_capacity_raw
    mz_arrangement_heap_allocations_raw
    mz_arrangement_heap_size_raw
    mz_dataflow_operators_per_worker
    mz_dataflow_addresses_per_worker
    mz_dataflow_operators_per_worker
    mz_dataflow_addresses_per_worker
    mz_dataflow_operators_per_worker
    mz_dataflow_addresses_per_worker
    mz_arrangement_batches_raw
    mz_arrangement_records_raw
    mz_arrangement_batcher_records_raw
    mz_arrangement_batcher_size_raw
    mz_arrangement_batcher_capacity_raw
    mz_arrangement_batcher_allocations_raw
    mz_arrangement_heap_capacity_raw
    mz_arrangement_heap_allocations_raw
    mz_arrangement_heap_size_raw
    mz_dataflow_operators_per_worker
    mz_dataflow_addresses_per_worker
    mz_dataflow_operators_per_worker
    mz_dataflow_addresses_per_worker
    mz_scheduling_parks_histogram_raw
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_arrangement_batches_raw
    mz_arrangement_records_raw
    mz_arrangement_batcher_records_raw
    mz_arrangement_batcher_size_raw
    mz_arrangement_batcher_capacity_raw
    mz_arrangement_batcher_allocations_raw
    mz_arrangement_heap_capacity_raw
    mz_arrangement_heap_allocations_raw
    mz_arrangement_heap_size_raw
    mz_dataflow_operators_per_worker
    mz_dataflow_addresses_per_worker
    mz_dataflow_operators_per_worker
    mz_dataflow_addresses_per_worker
    mz_dataflow_operators_per_worker
    mz_dataflow_addresses_per_worker
    mz_arrangement_batches_raw
    mz_arrangement_records_raw
    mz_arrangement_batcher_records_raw
    mz_arrangement_batcher_size_raw
    mz_arrangement_batcher_capacity_raw
    mz_arrangement_batcher_allocations_raw
    mz_arrangement_heap_capacity_raw
    mz_arrangement_heap_allocations_raw
    mz_arrangement_heap_size_raw
    mz_dataflow_operators_per_worker
    mz_dataflow_addresses_per_worker
    mz_dataflow_operators_per_worker
    mz_dataflow_addresses_per_worker
    mz_scheduling_parks_histogram_raw
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.
Bail:test/sqllogictest/unstable.slt:37 PlanFailure:test/sqllogictest/unstable.slt:37:
db error: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_compute_exports_per_worker
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_compute_exports_per_worker
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.
Test details & reproducer: Run SQL tests using an instance of Mz that is embedded in the sqllogic binary itself. Good for basic SQL tests, but it can't interact with sources like MySQL/Kafka; see Testdrive for that.
BUILDKITE_PARALLEL_JOB=0 BUILDKITE_PARALLEL_JOB_COUNT=6 bin/mzcompose --find sqllogictest run fast-tests 
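The hint in the errors above describes the fix for the `log source reads must target a replica` failures: pin the session to one replica before reading per-worker log sources. A minimal sketch (assuming a replica named `r1` exists in the active cluster and that the log source lives in the `mz_introspection` schema):

```sql
-- Target one replica so per-replica log sources become readable.
SET cluster_replica = r1;

SELECT count(*) FROM mz_introspection.mz_dataflow_operators_per_worker;

-- Undo the selection so subsequent queries regain full availability.
RESET cluster_replica;
```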

Fast SQL logic tests 3 failed, main history: 5× passed

InconsistentViewOutcome:test/sqllogictest/show_clusters.slt:29
        expected from query: OutputFailure { expected_output: Values(["bar", "r1 (1), r2 (1)", "foo", "NULL", "mz_analytics", "NULL", "mz_catalog_server", "r1 (2)", "mz_probe", "r1 (2)", "mz_support", "NULL", "mz_system", "r1 (2)", "quickstart", "r1 (2)"]), actual_raw_output: [Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "replicas", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "replicas", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "replicas", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "replicas", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "replicas", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "replicas", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "replicas", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "replicas", table_oid: None, column_id: None, type: Text }] }], actual_output: Values(["bar", "r1 (1), r2 (1)", "foo", "NULL", "mz_analytics", "NULL", "mz_catalog_server", "r1 (2)", "mz_probe", "r1 (2)", "mz_support", "NULL", "mz_system", "r1 (2)", "quickstart", "r1 (2), r2 (2)"]), location: Location { file: "test/sqllogictest/show_clusters.slt", line: 29 } }
        actually from indexed view: PlanFailure { error: db error: ERROR: SHOW commands are not allowed in views

Caused by:
    ERROR: SHOW commands are not allowed in views, location: Location { file: "test/sqllogictest/show_clusters.slt", line: 29 } }
        
Test details & reproducer: Run SQL tests using an instance of Mz that is embedded in the sqllogic binary itself. Good for basic SQL tests, but it can't interact with sources like MySQL/Kafka; see Testdrive for that.
BUILDKITE_PARALLEL_JOB=2 BUILDKITE_PARALLEL_JOB_COUNT=6 bin/mzcompose --find sqllogictest run fast-tests 
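The `SHOW commands are not allowed in views` error above is a hard limitation: `SHOW CLUSTERS` cannot be wrapped in a view. A query over the system catalog can usually stand in for it; the sketch below assumes the `mz_catalog.mz_clusters` / `mz_catalog.mz_cluster_replicas` join columns, so treat the exact schema as an assumption:

```sql
-- Roughly what SHOW CLUSTERS reports, but legal inside a view.
CREATE VIEW clusters_with_replicas AS
SELECT c.name AS cluster, r.name AS replica, r.size
FROM mz_catalog.mz_clusters c
LEFT JOIN mz_catalog.mz_cluster_replicas r ON r.cluster_id = c.id;
```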

Fast SQL logic tests 2 failed, main history: 5× passed

  • Unknown error in test/sqllogictest/transform/normalize_lets.slt:
Bail:test/sqllogictest/transform/normalize_lets.slt:492 PlanFailure:test/sqllogictest/transform/normalize_lets.slt:492:
db error: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_scheduling_elapsed_raw
    mz_compute_import_frontiers_per_worker
    mz_dataflow_operators_per_worker
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_scheduling_elapsed_raw
    mz_compute_import_frontiers_per_worker
    mz_dataflow_operators_per_worker
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.
OutputFailure:test/sqllogictest/cluster.slt:454
        expected: Values(["mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "size_1", "1"])
        actually: Values(["mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "r2", "2", "quickstart", "size_1", "1"])
        actual raw: [Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }]
OutputFailure:test/sqllogictest/cluster.slt:466
        expected: Values(["foo", "size_1", "1", "foo", "size_2", "2", "mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "size_1", "1"])
        actually: Values(["foo", "size_1", "1", "foo", "size_2", "2", "mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "r2", "2", "quickstart", "size_1", "1"])
        actual raw: [Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }]
OutputFailure:test/sqllogictest/cluster.slt:495
        expected: Values(["mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2"])
        actually: Values(["mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "r2", "2"])
        actual raw: [Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }]
OutputFailure:test/sqllogictest/cluster.slt:671
        expected: Values(["r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "size_1", "1", "1", "18446744073709000000", "18446744073709551615", "1", "1", "size_1_8g", "1-8G", "1", "18446744073709000000", "8589934592", "1", "1", "size_2_2", "2-2", "2", "18446744073709000000", "18446744073709551615", "2", "2", "size_32", "32", "1", "18446744073709000000", "18446744073709551615", "32", "1"])
        actually: Values(["r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r2", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "size_1", "1", "1", "18446744073709000000", "18446744073709551615", "1", "1", "size_1_8g", "1-8G", "1", "18446744073709000000", "8589934592", "1", "1", "size_2_2", "2-2", "2", "18446744073709000000", "18446744073709551615", "2", "2", "size_32", "32", "1", "18446744073709000000", "18446744073709551615", "32", "1"])
        actual raw: [Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }, Column { name: "processes", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "cpu_nano_cores", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "memory_bytes", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "workers", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "credits_per_hour", table_oid: None, column_id: None, type: Numeric }] }, Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }, Column { name: "processes", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "cpu_nano_cores", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "memory_bytes", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "workers", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "credits_per_hour", table_oid: None, column_id: None, type: Numeric }] }, Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }, Column { name: "processes", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: 
Simple, schema: "mz_catalog" }) }, Column { name: "cpu_nano_cores", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "memory_bytes", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "workers", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "credits_per_hour", table_oid: None, column_id: None, type: Numeric }] }, Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }, Column { name: "processes", table_oid: None, column_id: None, type: Ot [...]
Test details & reproducer: Run SQL tests using an instance of Mz that is embedded in the sqllogic binary itself. Good for basic SQL tests, but it can't interact with sources like MySQL/Kafka; see Testdrive for that.
BUILDKITE_PARALLEL_JOB=1 BUILDKITE_PARALLEL_JOB_COUNT=6 bin/mzcompose --find sqllogictest run fast-tests 

Fast SQL logic tests 5 failed, main history: :bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

  • Unknown error in test/sqllogictest/introspection/attribution_sources.slt:
PlanFailure:test/sqllogictest/introspection/attribution_sources.slt:35:
db error: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_compute_dataflow_global_ids_per_worker
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.
PlanFailure:test/sqllogictest/introspection/attribution_sources.slt:41:
db error: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_compute_lir_mapping_per_worker
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.
PlanFailure:test/sqllogictest/introspection/attribution_sources.slt:55:
db error: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_compute_operator_durations_histogram_raw
    mz_compute_lir_mapping_per_worker
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.
PlanFailure:test/sqllogictest/introspection/attribution_sources.slt:72:
db error: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_arrangement_batches_raw
    mz_arrangement_records_raw
    mz_arrangement_batcher_records_raw
    mz_arrangement_batcher_size_raw
    mz_arrangement_batcher_capacity_raw
    mz_arrangement_batcher_allocations_raw
    mz_arrangement_heap_capacity_raw
    mz_arrangement_heap_allocations_raw
    mz_arrangement_heap_size_raw
    mz_compute_lir_mapping_per_worker
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.
PlanFailure:test/sqllogictest/introspection/attribution_sources.slt:95:
db error: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_compute_dataflow_global_ids_per_worker
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.
PlanFailure:test/sqllogictest/introspection/attribution_sources.slt:100:
db error: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_compute_lir_mapping_per_worker
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.
PlanFailure:test/sqllogictest/introspection/attribution_sources.slt:120:
db error: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_compute_dataflow_global_ids_per_worker
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.
PlanFailure:test/sqllogictest/introspection/attribution_sources.slt:126:
db error: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_compute_lir_mapping_per_worker
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.
PlanFailure:test/sqllogictest/introspection/attribution_sources.slt:138:
db error: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_compute_operator_durations_histogram_raw
    mz_compute_lir_mapping_per_worker
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.
PlanFailure:test/sqllogictest/introspection/attribution_sources.slt:153:
db error: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_arrangement_batches_raw
    mz_arrangement_records_raw
    mz_arrangement_batcher_records_raw
    mz_arrangement_batcher_size_raw
    mz_arrangement_batcher_capacity_raw
    mz_arrangement_batcher_allocations_raw
    mz_arrangement_heap_capacity_raw
    mz_arrangement_heap_allocations_raw
    mz_arrangement_heap_size_raw
    mz_compute_lir_mapping_per_worker
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.
PlanFailure:test/sqllogictest/github-5174.slt:14:
db error: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_scheduling_elapsed_raw
    mz_dataflow_operators_per_worker
    mz_dataflow_addresses_per_worker
    mz_arrangement_batches_raw
    mz_arrangement_records_raw
    mz_arrangement_batcher_records_raw
    mz_arrangement_batcher_size_raw
    mz_arrangement_batcher_capacity_raw
    mz_arrangement_batcher_allocations_raw
    mz_arrangement_heap_capacity_raw
    mz_arrangement_heap_allocations_raw
    mz_arrangement_heap_size_raw
    mz_dataflow_operators_per_worker
    mz_dataflow_addresses_per_worker
    mz_dataflow_operators_per_worker
    mz_dataflow_addresses_per_worker
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.
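The HINT above spells out the fix these `.slt` files are missing: each query over per-worker log sources has to be bracketed with a replica selection. A minimal sketch, assuming the active cluster has a replica named `r1` and the log source is exposed under `mz_internal` (both are assumptions, not taken from the failure output):

```sql
-- Target a specific replica so per-worker log sources can be read.
SET cluster_replica = r1;
SELECT count(*) FROM mz_internal.mz_compute_lir_mapping_per_worker;
-- Undo the selection so later queries regain full availability.
RESET cluster_replica;
```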
OutputFailure:test/sqllogictest/managed_cluster.slt:125
        expected: Values(["mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2"])
        actually: Values(["mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "r2", "2"])
        actual raw: [Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }]
OutputFailure:test/sqllogictest/managed_cluster.slt:139
        expected: Values(["foo", "r1", "1", "foo", "r2", "1", "foo", "r3", "1", "mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2"])
        actually: Values(["foo", "r1", "1", "foo", "r2", "1", "foo", "r3", "1", "mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "r2", "2"])
        actual raw: [Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }]
OutputFailure:test/sqllogictest/managed_cluster.slt:158
        expected: Values(["foo", "r1", "1", "foo", "r2", "1", "foo", "r3", "1", "mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2"])
        actually: Values(["foo", "r1", "1", "foo", "r2", "1", "foo", "r3", "1", "mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "r2", "2"])
        actual raw: [Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }]
OutputFailure:test/sqllogictest/managed_cluster.slt:186
        expected: Values(["foo", "r1", "1", "foo", "r2", "1", "foo", "r3", "1", "mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2"])
        actually: Values(["foo", "r1", "1", "foo", "r2", "1", "foo", "r3", "1", "mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "r2", "2"])
        actual raw: [Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }]
OutputFailure:test/sqllogictest/managed_cluster.slt:202
        expected: Values(["foo", "r1", "u3", "foo", "r2", "u4", "foo", "r3", "u5", "mz_catalog_server", "r1", "s2", "mz_probe", "r1", "s3", "mz_system", "r1", "s1", "quickstart", "r1", "u1"])
        actually: Values(["foo", "r1", "u5", "foo", "r2", "u6", "foo", "r3", "u7", "mz_catalog_server", "r1", "s2", "mz_probe", "r1", "s3", "mz_system", "r1", "s1", "quickstart", "r1", "u2", "quickstart", "r2", "u3"])
        actual raw: [Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "id", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "id", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "id", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "name", table_oid: None, column_id: Non [...]
Test details & reproducer: Runs SQL tests against an instance of Mz embedded in the sqllogictest binary itself. Good for basic SQL tests, but it can't interact with external sources like MySQL/Kafka; see Testdrive for those.
BUILDKITE_PARALLEL_JOB=4 BUILDKITE_PARALLEL_JOB_COUNT=6 bin/mzcompose --find sqllogictest run fast-tests 

:rust: Cargo test failed, main history: :bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

        FAIL [   1.515s] mz-catalog::open test_persist_open

(2 occurrences)

thread 'test_persist_open' panicked at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/insta-1.42.2/src/runtime.rs:679:13:
snapshot assertion for 'initial_audit_log' failed in line 513
stack backtrace:
   0: rust_begin_unwind
             at /rustc/05f9846f893b09a1be1fc8560e33fc3c815cfecb/library/std/src/panicking.rs:695:5
   1: core::panicking::panic_fmt
             at /rustc/05f9846f893b09a1be1fc8560e33fc3c815cfecb/library/core/src/panicking.rs:75:14
   2: finalize
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/insta-1.42.2/src/runtime.rs:679:13
   3: assert_snapshot
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/insta-1.42.2/src/runtime.rs:848:9
   4: {async_fn#0}
             at ./tests/open.rs:513:9
   5: {async_fn#0}
             at ./tests/open.rs:478:30
   6: {async_block#0}
             at ./tests/open.rs:473:1
   7: poll<&mut dyn core::future::future::Future<Output=()>>
             at /usr/local/lib/rustlib/src/rust/library/core/src/future/future.rs:124:9
   8: poll<core::pin::Pin<&mut dyn core::future::future::Future<Output=()>>>
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tracing-0.1.41/src/instrument.rs:321:9
   9: poll<&mut tracing::instrument::Instrumented<core::pin::Pin<&mut dyn core::future::future::Future<Output=()>>>>
             at /usr/local/lib/rustlib/src/rust/library/core/src/future/future.rs:124:9
  10: {closure#0}<core::pin::Pin<&mut tracing::instrument::Instrumented<core::pin::Pin<&mut dyn core::future::future::Future<Output=()>>>>>
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.44.2/src/runtime/scheduler/current_thread/mod.rs:733:54
  11: with_budget<core::task::poll::Poll<()>, tokio::runtime::scheduler::current_thread::{impl#8}::block_on::{closure#0}::{closure#0}::{closure_env#0}<core::pin::Pin<&mut tracing::instrument::Instrumented<core::pin::Pin<&mut dyn core::future::future::Future<Output=()>>>>>>
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.44.2/src/task/coop/mod.rs:167:5
  12: budget<core::task::poll::Poll<()>, tokio::runtime::scheduler::current_thread::{impl#8}::block_on::{closure#0}::{closure#0}::{closure_env#0}<core::pin::Pin<&mut tracing::instrument::Instrumented<core::pin::Pin<&mut dyn core::future::future::Future<Output=()>>>>>>
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.44.2/src/task/coop/mod.rs:133:5
  13: {closure#0}<core::pin::Pin<&mut tracing::instrument::Instrumented<core::pin::Pin<&mut dyn core::future::future::Future<Output=()>>>>>
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.44.2/src/runtime/scheduler/current_thread/mod.rs:733:25
  14: <tokio::runtime::scheduler::current_thread::Context>::enter::<core::task::poll::Poll<()>, <tokio::runtime::scheduler::current_thread::CoreGuard>::block_on<core::pin::Pin<&mut tracing::instrument::Instrumented<core::pin::Pin<&mut dyn core::future::future::Future<Output = ()>>>>>::{closure#0}::{closure#0}>
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.44.2/src/runtime/scheduler/current_thread/mod.rs:432:19
  15: {closure#0}<core::pin::Pin<&mut tracing::instrument::Instrumented<core::pin::Pin<&mut dyn core::future::future::Future<Output=()>>>>>
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.44.2/src/runtime/scheduler/current_thread/mod.rs:732:36
  16: <tokio::runtime::scheduler::current_thread::CoreGuard>::enter::<<tokio::runtime::scheduler::current_thread::CoreGuard>::block_on<core::pin::Pin<&mut tracing::instrument::Instrumented<core::pin::Pin<&mut dyn core::future::future::Future<Output = ()>>>>>::{closure#0}, core::option::Option<()>>::{closure#0}
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.44.2/src/runtime/scheduler/current_thread/mod.rs:820:68
  17: <tokio::runtime::context::scoped::Scoped<tokio::runtime::scheduler::Context>>::set::<<tokio::runtime::scheduler::current_thread::CoreGuard>::enter<<tokio::runtime::scheduler::current_thread::CoreGuard>::block_on<core::pin::Pin<&mut tracing::instrument::Instrumented<core::pin::Pin<&mut dyn core::future::future::Future<Output = ()>>>>>::{closure#0}, core::option::Option<()>>::{closure#0}, (alloc::boxed::Box<tokio::runtime::scheduler::current_thread::Core>, core::option::Option<()>)>
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.44.2/src/runtime/context/scoped.rs:40:9
  18: tokio::runtime::context::set_scheduler::<(alloc::boxed::Box<tokio::runtime::scheduler::current_thread::Core>, core::option::Option<()>), <tokio::runtime::scheduler::current_thread::CoreGuard>::enter<<tokio::runtime::scheduler::current_thread::CoreGuard>::block_on<core::pin::Pin<&mut tracing::instrument::Instrumented<core::pin::Pin<&mut dyn core::future::future::Future<Output = ()>>>>>::{closure#0}, core::option::Option<()>>::{closure#0}>::{closure#0}
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.44.2/src/runtime/context.rs:180:26
  19: try_with<tokio::runtime::context::Context, tokio::runtime::context::set_scheduler::{closure_env#0}<(alloc::boxed::Box<tokio::runtime::scheduler::current_thread::Core, alloc::alloc::Global>, core::option::Option<()>), tokio::runtime::scheduler::current_thread::{impl#8}::enter::{closure_env#0}<tokio::runtime::scheduler::current_thread::{impl#8}::block_on::{closure_env#0}<core::pin::Pin<&mut tracing::instrument::Instrumented<core::pin::Pin<&mut dyn core::future::future::Future<Output=()>>>>>, core::option::Option<()>>>, (alloc::boxed::Box<tokio::runtime::scheduler::current_thread::Core, alloc::alloc::Global>, core::option::Option<()>)>
             at /usr/local/lib/rustlib/src/rust/library/std/src/thread/local.rs:310:12
  20: <std::thread::local::LocalKey<tokio::runtime::context::Context>>::with::<tokio::runtime::context::set_scheduler<(alloc::boxed::Box<tokio::runtime::scheduler::current_thread::Core>, core::option::Option<()>), <tokio::runtime::scheduler::current_thread::CoreGuard>::enter<<tokio::runtime::scheduler::current_thread::CoreGuard>::block_on<core::pin::Pin<&mut tracing::instrument::Instrumented<core::pin::Pin<&mut dyn core::future::future::Future<Output = ()>>>>>::{closure#0}, core::option::Option<()>>::{closure#0}>::{closure#0}, (alloc::boxed::Box<tokio::runtime::scheduler::current_thread::Core>, core::option::Option<()>)>
             at /usr/local/lib/rustlib/src/rust/library/std/src/thread/local.rs:274:15
  21: tokio::runtime::context::set_scheduler::<(alloc::boxed::Box<tokio::runtime::scheduler::current_thread::Core>, core::option::Option<()>), <tokio::runtime::scheduler::current_thread::CoreGuard>::enter<<tokio::runtime::scheduler::current_thread::CoreGuard>::block_on<core::pin::Pin<&mut tracing::instrument::Instrumented<core::pin::Pin<&mut dyn core::future::future::Future<Output = ()>>>>>::{closure#0}, core::option::Option<()>>::{closure#0}>
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.44.2/src/runtime/context.rs:180:9
  22: <tokio::runtime::scheduler::current_thread::CoreGuard>::enter::<<tokio::runtime::scheduler::current_thread::CoreGuard>::block_on<core::pin::Pin<&mut tracing::instrument::Instrumented<core::pin::Pin<&mut dyn core::future::future::Future<Output = ()>>>>>::{closure#0}, core::option::Option<()>>
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.44.2/src/runtime/scheduler/current_thread/mod.rs:820:27
  23: <tokio::runtime::scheduler::current_thread::CoreGuard>::block_on::<core::pin::Pin<&mut tracing::instrument::Instrumented<core::pin::Pin<&mut dyn core::future::future::Future<Output = ()>>>>>
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.44.2/src/runtime/scheduler/current_thread/mod.rs:720:19
  24: {closure#0}<tracing::instrument::Instrumented<core::pin::Pin<&mut dyn core::future::future::Future<Output=()>>>>
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.44.2/src/runtime/scheduler/current_thread/mod.rs:200:28
  25: tokio::runtime::context::runtime::enter_runtime::<<tokio::runtime::scheduler::current_thread::CurrentThread>::block_on<tracing::instrument::Instrumented<core::pin::Pin<&mut dyn core::future::future::Future<Output = ()>>>>::{closure#0}, ()>
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.44.2/src/runtime/context/runtime.rs:65:16
  26: block_on<tracing::instrument::Instrumented<core::pin::Pin<&mut dyn core::future::future::Future<Output=()>>>>
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.44.2/src/runtime/scheduler/current_thread/mod.rs:188:9
  27: <tokio::runtime::runtime::Runtime>::block_on_inner::<core::pin::Pin<&mut dyn core::future::future::Future<Output = ()>>>
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.44.2/src/runtime/runtime.rs:368:47
  28: block_on<core::pin::Pin<&mut dyn core::future::future::Future<Output=()>>>
             at /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tokio-1.44.2/src/runtime/runtime.rs:342:13
  29: test_persist_open
             at ./tests/open.rs:473:1
  30: open::test_persist_open::{closure#0}
             at ./tests/open.rs:473:29
  31: <open::test_persist_open::{closure#0} as core::ops::function::FnOnce<()>>::call_once
             at /usr/local/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
  32: core::ops::function::FnOnce::call_once
             at /rustc/05f9846f893b09a1be1fc8560e33fc3c815cfecb/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
subprocess.CalledProcessError: Command '['cargo', 'nextest', 'run', '--workspace', '--all-features', '--profile=ci', '--cargo-profile=ci', '--partition=count:1/1']' returned non-zero exit status 100.
Test details & reproducer: Runs the Rust-based unit tests in Debug mode.
bin/mzcompose --find cargo-test run default
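For context, `cargo nextest` exits with status 100 when one or more tests fail, and the Python harness surfaces that as the `subprocess.CalledProcessError` shown above by running the command with `check=True`. A minimal sketch of that failure path (the command below is a stand-in, not the real mzcompose invocation):

```python
import subprocess
import sys

def run_checked(cmd: list[str]) -> None:
    # check=True raises CalledProcessError on any nonzero exit status,
    # which is how the harness turns a failed test run into a Python error.
    subprocess.run(cmd, check=True)

try:
    # Stand-in for the real test command: exit with status 100,
    # nextest's code for "one or more tests failed".
    run_checked([sys.executable, "-c", "raise SystemExit(100)"])
    status = 0
except subprocess.CalledProcessError as err:
    status = err.returncode
    print(f"Command returned non-zero exit status {status}.")
```

When iterating locally, `RUST_BACKTRACE=full` (as the note in the backtrace suggests) gives the unabridged trace for the failing `test_persist_open` run.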

Checks without restart or upgrade 5 failed, main history: :bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

Docker compose failed: docker compose -f/dev/fd/4 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-c424504c/materialize/test/test/platform-checks exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=statement_timeout='300s' --default-timeout=300s --seed=1 --persist-blob-url=s3://minioadmin:minioadmin@persist/persist?endpoint=http://minio:9000/&region=minio --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --var=replicas=1 --var=default-replica-size=4-4 --var=default-storage-size=4-1 --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-c424504c/materialize/test/misc/python/materialize/checks/all_checks/webhook.py:140
^^^ +++
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-c424504c/materialize/test/misc/python/materialize/checks/all_checks/webhook.py:140

Test details & reproducer: Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery, and failure contexts.
BUILDKITE_PARALLEL_JOB=4 BUILDKITE_PARALLEL_JOB_COUNT=6 bin/mzcompose --find platform-checks run default --scenario=NoRestartNoUpgrade --default-replication-factor=1 --seed=01963883-35d2-4300-b782-ecdb93d2fabf

Checks + restart of environmentd & storage clusterd 5 failed, main history: :bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

Docker compose failed: docker compose -f/dev/fd/4 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-56eb1148/materialize/test/test/platform-checks exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=statement_timeout='300s' --default-timeout=300s --seed=1 --persist-blob-url=s3://minioadmin:minioadmin@persist/persist?endpoint=http://minio:9000/&region=minio --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --var=replicas=1 --var=default-replica-size=4-4 --var=default-storage-size=4-1 --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-56eb1148/materialize/test/misc/python/materialize/checks/all_checks/webhook.py:140
^^^ +++
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-56eb1148/materialize/test/misc/python/materialize/checks/all_checks/webhook.py:140

Test details & reproducer: Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery, and failure contexts.
BUILDKITE_PARALLEL_JOB=4 BUILDKITE_PARALLEL_JOB_COUNT=6 bin/mzcompose --find platform-checks run default --scenario=RestartEnvironmentdClusterdStorage --default-replication-factor=1 --seed=01963883-35d2-4300-b782-ecdb93d2fabf
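The `BUILDKITE_PARALLEL_JOB` / `BUILDKITE_PARALLEL_JOB_COUNT` variables in the reproducers above shard one suite across several agents. A round-robin sketch of how such sharding can work (hypothetical helper and check names; not the actual mzcompose partitioning logic):

```python
def shard(items, job: int, count: int):
    # Job N of COUNT keeps every COUNT-th item, starting at index N,
    # so all shards together cover the list exactly once.
    return [item for i, item in enumerate(items) if i % count == job]

# Hypothetical check names, for illustration only.
checks = ["webhook", "kafka", "sink", "rename", "alter", "debezium"]
# Shard 4 of 6 (zero-indexed), as in BUILDKITE_PARALLEL_JOB=4:
print(shard(checks, job=4, count=6))  # -> ['alter']
```

Because the seed and shard index fix which checks a job runs, rerunning with the same `--seed` and `BUILDKITE_PARALLEL_JOB` values reproduces the same selection.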
:rust: Cargo test: Waited 2m 13s, Ran in 26m 20s
Testdrive 4 (shard 4/8): Waited 1m 0s, Ran in 23m 15s
Cluster tests 1 (shard 1/4): Waited 1m 15s, Ran in 23m 38s
Fast SQL logic tests 1 (shard 1/6): Waited 1m 4s, Ran in 20m 24s
Fast SQL logic tests 2 (shard 2/6): Waited 1m 0s, Ran in 20m 12s
Fast SQL logic tests 3 (shard 3/6): Waited 1m 6s, Ran in 20m 9s
Fast SQL logic tests 4 (shard 4/6): Waited 1m 9s, Ran in 11m 37s
Fast SQL logic tests 5 (shard 5/6): Waited 1m 8s, Ran in 18m 53s
Restart test: Waited 1m 9s, Ran in 23m 43s
Checks + restart of environmentd & storage clusterd 5 (shard 5/6): Waited 1m 3s, Ran in 11m 14s
Checks without restart or upgrade 5 (shard 5/6): Waited 1m 13s, Ran in 11m 15s
Total Job Run Time: 17h 30m