Test (public pipeline): Run fast unit and integration tests

Merge pull request #31116 from MaterializeInc/dependabot/pip/ci/builder/pytest-8.3.4

Passed in 1h 17m
:pipeline:
bootstrap stable x86_64
bootstrap nightly x86_64
bootstrap min x86_64
bootstrap stable aarch64
bootstrap nightly aarch64
bootstrap min aarch64
mkpipeline
:rust: Cargo test
Restart test
Yugabyte CDC tests
Short Zippy
Feature benchmark (Kafka only)
Persistence tests
Cluster isolation test
chbench smoke test
Metabase smoke test
dbt-materialize tests
Storage Usage Table Test
Tracing Fast Path
Skip Version Upgrade
Deploy website
Deploy

Cluster tests 2 failed, main history: :bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

Docker compose failed: docker compose -f/dev/fd/3 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-3a6173ea/materialize/test/test/cluster exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --default-timeout=360s --persist-blob-url=file:///mzdata/persist/blob --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-3a6173ea/materialize/test/test/cluster/mzcompose.py:3838
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-3a6173ea/materialize/test/test/cluster/mzcompose.py:3838

builtins.AssertionError: got 2.0
Test details & reproducer: Functional tests which require separate clusterd containers (instead of the usual clusterd included in the materialized container).
BUILDKITE_PARALLEL_JOB=1 BUILDKITE_PARALLEL_JOB_COUNT=4 bin/mzcompose --find cluster run default 

Restart test failed, main history: :bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

Docker compose failed: docker compose -f/dev/fd/3 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-e637fc46/materialize/test/test/restart exec -T testdrive_no_reset testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --default-timeout=360s --persist-blob-url=file:///mzdata/persist/blob --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-e637fc46/materialize/test/test/restart/mzcompose.py:645
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-e637fc46/materialize/test/test/restart/mzcompose.py:645

Test details & reproducer: Testdrive-based tests involving restarting materialized (including its clusterd processes). See cluster tests for separate clusterds, and platform-checks for further restart scenarios.
bin/mzcompose --find restart run default 

Cluster tests 3 failed, main history: :bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

psycopg.errors.FeatureNotSupported: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_dataflow_operators_per_worker
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.
Test details & reproducer: Functional tests which require separate clusterd containers (instead of the usual clusterd included in the materialized container).
BUILDKITE_PARALLEL_JOB=2 BUILDKITE_PARALLEL_JOB_COUNT=4 bin/mzcompose --find cluster run default 
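The HINT in the error above spells out the workaround for reading per-replica log sources. A minimal SQL sketch of that session flow (the replica name `r1` is an assumption; the actual replicas can be listed with `SHOW CLUSTER REPLICAS`):

```sql
-- Target a specific replica so per-replica log sources can be read.
SET cluster_replica = r1;
SELECT count(*) FROM mz_dataflow_operators_per_worker;
-- Undo the selection so later queries regain full availability.
RESET cluster_replica;
```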

Cluster tests 1 failed, main history: :bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

builtins.AssertionError: unexpected create_instance count: 1.0
Test details & reproducer: Functional tests which require separate clusterd containers (instead of the usual clusterd included in the materialized container).
BUILDKITE_PARALLEL_JOB=0 BUILDKITE_PARALLEL_JOB_COUNT=4 bin/mzcompose --find cluster run default 

Testdrive 4 failed, main history: :bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

session.td:18:1: non-matching rows: expected:
[["DateStyle", "ISO, MDY", "Sets the display format for date and time values (PostgreSQL)."], ["IntervalStyle", "postgres", "Sets the display format for interval values (PostgreSQL)."], ["TimeZone", "UTC", "Sets the time zone for displaying and interpreting time stamps (PostgreSQL)."], ["allowed_cluster_replica_sizes", "", "The allowed sizes when creating a new cluster replica (Materialize)."], ["application_name", "", "Sets the application name to be reported in statistics and logs (PostgreSQL)."], ["auto_route_catalog_queries", "on", "Whether to force queries that depend only on system tables, to run on the mz_catalog_server cluster (Materialize)."], ["client_encoding", "UTF8", "Sets the client's character set encoding (PostgreSQL)."], ["client_min_messages", "notice", "Sets the message levels that are sent to the client (PostgreSQL)."], ["cluster", "<VARIES>", "Sets the current cluster (Materialize)."], ["cluster_replica", "", "Sets a target cluster replica for SELECT queries (Materialize)."], ["current_object_missing_warnings", "on", "Whether to emit warnings when the current database, schema, or cluster is missing (Materialize)."], ["database", "materialize", "Sets the current database (CockroachDB)."], ["emit_introspection_query_notice", "on", "Whether to print a notice when querying per-replica introspection sources."], ["emit_plan_insights_notice", "off", "Boolean flag indicating whether to send a NOTICE with JSON-formatted plan insights before executing a SELECT statement (Materialize)."], ["emit_timestamp_notice", "off", "Boolean flag indicating whether to send a NOTICE with timestamp explanations of queries (Materialize)."], ["emit_trace_id_notice", "off", "Boolean flag indicating whether to send a NOTICE specifying the trace id when available (Materialize)."], ["enable_consolidate_after_union_negate", "on", "consolidation after Unions that have a Negated input (Materialize)."], ["enable_rbac_checks", "on", "User facing global boolean flag indicating 
whether to apply RBAC checks before executing statements (Materialize)."], ["enable_reduce_reduction", "on", "split complex reductions in to simpler ones and a join (Materialize)."], ["enable_session_rbac_checks", "off", "User facing session boolean flag indicating whether to apply RBAC checks before executing statements (Materialize)."], ["extra_float_digits", "3", "Adjusts the number of digits displayed for floating-point values (PostgreSQL)."], ["failpoints", "<omitted>", "Allows failpoints to be dynamically activated."], ["force_source_table_syntax", "off", "Force use of new source model (CREATE TABLE .. FROM SOURCE) and migrate existing sources"], ["idle_in_transaction_session_timeout", "2 min", "Sets the maximum allowed duration that a session can sit idle in a transaction before being terminated. If this value is specified without units, it is taken as milliseconds. A value of zero disables the timeout (PostgreSQL)."], ["integer_datetimes", "on", "Reports whether the server uses 64-bit-integer dates and times (PostgreSQL)."], ["is_superuser", "off", "Reports whether the current session is a superuser (PostgreSQL)."], ["max_aws_privatelink_connections", "0", "The maximum number of AWS PrivateLink connections in the region, across all schemas (Materialize)."], ["max_clusters", "10", "The maximum number of clusters in the region (Materialize)."], ["max_connections", "5000", "The maximum number of concurrent connections (PostgreSQL)."], ["max_continual_tasks", "100", "The maximum number of continual tasks in the region, across all schemas (Materialize)."], ["max_copy_from_size", "1073741824", "The maximum size in bytes we buffer for COPY FROM statements (Materialize)."], ["max_credit_consumption_rate", "1024", "The maximum rate of credit consumption in a region. 
Credits are consumed based on the size of cluster replicas in use (Materialize)."], ["max_databases", "1000", "The maximum number of databases in the region (Materialize)."], ["max_identifier_length", "255", "The maximum length of object identifiers in bytes (PostgreSQL)."], ["max_kafka_connections", "1000", "The maximum number of Kafka connections in the region, across all schemas (Materialize)."], ["max_materialized_views", "100", "The maximum number of materialized views in the region, across all schemas (Materialize)."], ["max_mysql_connections", "1000", "The maximum number of MySQL connections in the region, across all schemas (Materialize)."], ["max_network_policies", "25", "The maximum number of network policies in the region."], ["max_objects_per_schema", "1000", "The maximum number of objects in a schema (Materialize)."], ["max_postgres_connections", "1000", "The maximum number of PostgreSQL connections in the region, across all schemas (Materialize)."], ["max_query_result_size", "1GB", "The maximum size in bytes for a single query's result (Materialize)."], ["max_replicas_per_cluster", "5", "The maximum number of replicas of a single cluster (Materialize)."], ["max_result_size", "1GB", "The maximum size in bytes for an internal query result (Materialize)."], ["max_roles", "1000", "The maximum number of roles in the region (Materialize)."], ["max_rules_per_network_policy", "25", "The maximum number of rules per network policies."], ["max_schemas_per_database", "1000", "The maximum number of schemas in a database (Materialize)."], ["max_secrets", "100", "The maximum number of secrets in the region, across all schemas (Materialize)."], ["max_sinks", "25", "The maximum number of sinks in the region, across all schemas (Materialize)."], ["max_sources", "200", "The maximum number of sources in the region, across all schemas (Materialize)."], ["max_sql_server_connections", "1000", "The maximum number of SQL Server connections in the region, across all schemas 
(Materialize)."], ["max_tables", "200", "The maximum number of tables in the region, across all schemas (Materialize)."], ["mz_version", "<VARIES>", "Shows the Materialize server version (Materialize)."], ["network_policy", "default", "Sets the fallback network policy applied to all users without an explicit policy."], ["optimizer_e2e_latency_warning_threshold", "500 ms", "Sets the duration that a query can take to compile; queries that take longer will trigger a warning. If this value is specified without units, it is taken as milliseconds. A value of zero disables the timeout (Materialize)."], ["real_time_recency", "off", "Feature flag indicating whether real time recency is enabled (Materialize)."], ["real_time_recency_timeout", "10 s", "Sets the maximum allowed duration of SELECTs that actively use real-time recency, i.e. reach out to an external system to determine their most recencly exposed data (Materialize)."], ["search_path", "public", "Sets the schema search order for names that are not schema-qualified (PostgreSQL)."], ["server_version", "9.5.0", "Shows the PostgreSQL compatible server version (PostgreSQL)."], ["server_version_num", "90500", "Shows the PostgreSQL compatible server version as an integer (PostgreSQL)."], ["sql_safe_updates", "off", "Prohibits SQL statements that may be overly destructive (CockroachDB)."], ["standard_conforming_strings", "on", "Causes '...' strings to treat backslashes literally (PostgreSQL)."], ["statement_logging_default_sample_rate", "0.01", "The default value of `statement_logging_sample_rate` for new sessions (Materialize)."], ["statement_logging_max_sample_rate", "0.01", "The maximum rate at which statements may be logged. 
If this value is less than that of `statement_logging_sample_rate`, the latter is ignored (Materialize)."], ["statement_logging_sample_rate", "0.01", "User-facing session variable indicating how many statement executions should be logged, subject to constraint by the system variable `statement_logging_max_sample_rate` (Materialize)."], ["statement_timeout", "1 min", "Sets the maximum allowed duration of INSERT...SELECT, UPDATE, and DELETE operations. If this value is specified without units, it is taken as milliseconds."], ["superuser_reserved_connections", "3", "The number of connections that are reserved for superusers (PostgreSQL)."], ["transaction_isolation", "strict serializable", "Sets the current transaction's isolation level (PostgreSQL)."], ["unsafe_new_transaction_wall_time", "", "Sets the wall time for all new explicit or implicit transactions to control the value of `now()`. If not set, uses the system's clock."], ["welcome_message", "on", "Whether to send a notice with a welcome message after a successful connection (Materialize)."]]
got:
[["DateStyle", "ISO, MDY", "Sets the display format for date and time values (PostgreSQL)."], ["IntervalStyle", "postgres", "Sets the display format for interval values (PostgreSQL)."], ["TimeZone", "UTC", "Sets the time zone for displaying and interpreting time stamps (PostgreSQL)."], ["allowed_cluster_replica_sizes", "", "The allowed sizes when creating a new cluster replica (Materialize)."], ["application_name", "", "Sets the application name to be reported in statistics and logs (PostgreSQL)."], ["auto_route_catalog_queries", "on", "Whether to force queries that depend only on system tables, to run on the mz_catalog_server cluster (Materialize)."], ["client_encoding", "UTF8", "Sets the client's character set encoding (PostgreSQL)."], ["client_min_messages", "notice", "Sets the message levels that are sent to the client (PostgreSQL)."], ["cluster", "<VARIES>", "Sets the current cluster (Materialize)."], ["cluster_replica", "", "Sets a target cluster replica for SELECT queries (Materialize)."], ["current_object_missing_warnings", "on", "Whether to emit warnings when the current database, schema, or cluster is missing (Materialize)."], ["database", "materialize", "Sets the current database (CockroachDB)."], ["default_cluster_replication_factor", "1", "Default cluster replication factor (Materialize)."], ["emit_introspection_query_notice", "on", "Whether to print a notice wh [...]
Test details & reproducer: Testdrive is the basic framework and language for defining product tests under the expected-result/actual-result (aka golden testing) paradigm. A query is retried until it produces the desired result.
BUILDKITE_PARALLEL_JOB=3 BUILDKITE_PARALLEL_JOB_COUNT=8 bin/mzcompose --find testdrive run default 
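The retry-until-match behavior described above can be sketched in plain Python. This is an illustrative loop, not testdrive's actual API; the function and parameter names here are invented for the sketch:

```python
import time

def retry_until(expected, query_fn, timeout=10.0, interval=0.1):
    """Re-run query_fn until its result equals expected, or raise on timeout.

    A minimal sketch of the golden-testing retry loop; a real harness
    would also handle backoff and transient-error classification.
    """
    deadline = time.monotonic() + timeout
    while True:
        actual = query_fn()
        if actual == expected:
            # The desired result appeared; the assertion passes.
            return actual
        if time.monotonic() >= deadline:
            # Timed out: report the mismatch, like testdrive's
            # "non-matching rows: expected ... got ..." output.
            raise AssertionError(f"expected {expected!r}, got {actual!r}")
        time.sleep(interval)
```

A failure such as the `session.td` mismatch above corresponds to this loop exhausting its timeout without the expected rows ever appearing.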

Checks + restart of environmentd & storage clusterd 3 failed, main history: :bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

Docker compose failed: docker compose -f/dev/fd/3 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-07b1ac9a/materialize/test/test/platform-checks exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=statement_timeout='300s' --default-timeout=300s --seed=1 --persist-blob-url=s3://minioadmin:minioadmin@persist/persist?endpoint=http://minio:9000/&region=minio --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --var=replicas=1 --var=default-replica-size=4-4 --var=default-storage-size=4-1 --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-07b1ac9a/materialize/test/misc/python/materialize/checks/all_checks/cluster.py:37
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-07b1ac9a/materialize/test/misc/python/materialize/checks/all_checks/cluster.py:37

Test details & reproducer: Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery and failure contexts.
BUILDKITE_PARALLEL_JOB=2 BUILDKITE_PARALLEL_JOB_COUNT=6 bin/mzcompose --find platform-checks run default --scenario=RestartEnvironmentdClusterdStorage --seed=0196383b-f390-40b8-bc2b-5d7ced8599a8 

Checks without restart or upgrade 3 failed, main history: :bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

Docker compose failed: docker compose -f/dev/fd/4 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-8cf661c5/materialize/test/test/platform-checks exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=statement_timeout='300s' --default-timeout=300s --seed=1 --persist-blob-url=s3://minioadmin:minioadmin@persist/persist?endpoint=http://minio:9000/&region=minio --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --var=replicas=1 --var=default-replica-size=4-4 --var=default-storage-size=4-1 --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-8cf661c5/materialize/test/misc/python/materialize/checks/all_checks/cluster.py:37
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-8cf661c5/materialize/test/misc/python/materialize/checks/all_checks/cluster.py:37

Test details & reproducer: Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery and failure contexts.
BUILDKITE_PARALLEL_JOB=2 BUILDKITE_PARALLEL_JOB_COUNT=6 bin/mzcompose --find platform-checks run default --scenario=NoRestartNoUpgrade --seed=0196383b-f390-40b8-bc2b-5d7ced8599a8 

Checks + restart of environmentd & storage clusterd 5 failed, main history: :bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

Docker compose failed: docker compose -f/dev/fd/4 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-957ddb9d/materialize/test/test/platform-checks exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=statement_timeout='300s' --default-timeout=300s --seed=1 --persist-blob-url=s3://minioadmin:minioadmin@persist/persist?endpoint=http://minio:9000/&region=minio --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --var=replicas=1 --var=default-replica-size=4-4 --var=default-storage-size=4-1 --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-957ddb9d/materialize/test/misc/python/materialize/checks/all_checks/webhook.py:140
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-957ddb9d/materialize/test/misc/python/materialize/checks/all_checks/webhook.py:140

Test details & reproducer: Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery and failure contexts.
BUILDKITE_PARALLEL_JOB=4 BUILDKITE_PARALLEL_JOB_COUNT=6 bin/mzcompose --find platform-checks run default --scenario=RestartEnvironmentdClusterdStorage --seed=0196383b-f390-40b8-bc2b-5d7ced8599a8 

Checks without restart or upgrade 5 failed, main history: :bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

Docker compose failed: docker compose -f/dev/fd/4 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-473e37d1/materialize/test/test/platform-checks exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=statement_timeout='300s' --default-timeout=300s --seed=1 --persist-blob-url=s3://minioadmin:minioadmin@persist/persist?endpoint=http://minio:9000/&region=minio --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --var=replicas=1 --var=default-replica-size=4-4 --var=default-storage-size=4-1 --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-473e37d1/materialize/test/misc/python/materialize/checks/all_checks/webhook.py:140
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-473e37d1/materialize/test/misc/python/materialize/checks/all_checks/webhook.py:140

Test details & reproducer: Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery and failure contexts.
BUILDKITE_PARALLEL_JOB=4 BUILDKITE_PARALLEL_JOB_COUNT=6 bin/mzcompose --find platform-checks run default --scenario=NoRestartNoUpgrade --seed=0196383b-f390-40b8-bc2b-5d7ced8599a8 

Fast SQL logic tests 2 failed, main history: :bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed::bk-status-passed:

OutputFailure:test/sqllogictest/cluster.slt:454
        expected: Values(["mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "size_1", "1"])
        actually: Values(["mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "r2", "2", "quickstart", "size_1", "1"])
        actual raw: [Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }]
OutputFailure:test/sqllogictest/cluster.slt:466
        expected: Values(["foo", "size_1", "1", "foo", "size_2", "2", "mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "size_1", "1"])
        actually: Values(["foo", "size_1", "1", "foo", "size_2", "2", "mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "r2", "2", "quickstart", "size_1", "1"])
        actual raw: [Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }]
OutputFailure:test/sqllogictest/cluster.slt:495
        expected: Values(["mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2"])
        actually: Values(["mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "r2", "2"])
        actual raw: [Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }]
OutputFailure:test/sqllogictest/cluster.slt:671
        expected: Values(["r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "size_1", "1", "1", "18446744073709000000", "18446744073709551615", "1", "1", "size_1_8g", "1-8G", "1", "18446744073709000000", "8589934592", "1", "1", "size_2_2", "2-2", "2", "18446744073709000000", "18446744073709551615", "2", "2", "size_32", "32", "1", "18446744073709000000", "18446744073709551615", "32", "1"])
        actually: Values(["r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r2", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "size_1", "1", "1", "18446744073709000000", "18446744073709551615", "1", "1", "size_1_8g", "1-8G", "1", "18446744073709000000", "8589934592", "1", "1", "size_2_2", "2-2", "2", "18446744073709000000", "18446744073709551615", "2", "2", "size_32", "32", "1", "18446744073709000000", "18446744073709551615", "32", "1"])
        actual raw: [Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }, Column { name: "processes", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "cpu_nano_cores", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "memory_bytes", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "workers", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "credits_per_hour", table_oid: None, column_id: None, type: Numeric }] }, … (further Rows with the identical column layout; output truncated) [...]
  • Unknown error in test/sqllogictest/transform/normalize_lets.slt:
Bail:test/sqllogictest/transform/normalize_lets.slt:492 PlanFailure:test/sqllogictest/transform/normalize_lets.slt:492:
db error: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
    mz_scheduling_elapsed_raw
    mz_compute_import_frontiers_per_worker
    mz_dataflow_operators_per_worker
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.
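The workaround the HINT describes can be sketched as a SQL session. The replica name `r1` below is a placeholder for an actual replica in the active cluster, and the introspection relations are left unqualified because their schema varies by Materialize version:

```sql
-- Target one replica so per-replica log sources can be read
-- (assumes the active cluster has a replica named r1):
SET cluster_replica = r1;

-- Queries over log sources such as mz_dataflow_operators_per_worker
-- are now answered only by that replica.
SELECT * FROM mz_dataflow_operators_per_worker LIMIT 5;

-- Undo the selection so later queries regain full availability:
RESET cluster_replica;
```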
Test details & reproducer: Run SQL tests using an instance of Mz embedded in the sqllogictest binary itself. Good for basic SQL tests, but it can't interact with sources like MySQL/Kafka; see Testdrive for that.
BUILDKITE_PARALLEL_JOB=1 BUILDKITE_PARALLEL_JOB_COUNT=6 bin/mzcompose --find sqllogictest run fast-tests 
:pipeline: (ci/test/mkpipeline.sh): waited 1m 37s, ran in 53s
bootstrap stable x86_64 (bin/ci-builder push stable): waited 44s, ran in 25m 39s
bootstrap nightly x86_64 (bin/ci-builder push nightly): waited 44s, ran in 25m 36s
bootstrap min x86_64 (bin/ci-builder push min): waited 46s, ran in 2m 22s
bootstrap stable aarch64 (bin/ci-builder push stable): waited 51s, ran in 26m 33s
bootstrap nightly aarch64 (bin/ci-builder push nightly): waited 57s, ran in 26m 21s
bootstrap min aarch64 (bin/ci-builder push min): waited 57s, ran in 2m 11s
mkpipeline (bin/ci-builder run min bin/pyactivate -m ci.mkpipeline test): waited 7s, ran in 1m 22s
:bazel: Build x86_64 (bin/ci-builder run min bin/pyactivate -m ci.test.build): waited 2s, ran in 16m 59s
:bazel: Build aarch64 (bin/ci-builder run min bin/pyactivate -m ci.test.build): waited 3s, ran in 21m 0s
Build WASM (bin/ci-builder run stable bin/pyactivate -m ci.deploy.npm --no-release): waited 1m 19s, ran in 8m 32s
Tag development docker images (bin/ci-builder run stable bin/pyactivate -m ci.test.dev_tag): waited 46s, ran in 3m 51s
Lint and rustfmt (bin/ci-builder run stable ci/test/lint-fast.sh): waited 1m 1s, ran in 5m 16s
Clippy and doctests (bin/ci-builder run stable ci/test/lint-slow.sh): waited 8s, ran in 19m 8s
:rust: macOS Clippy (cargo clippy --all-targets -- -D warnings): waited 10s, ran in 26s
Lint dependencies (bin/ci-builder run stable ci/test/lint-deps.sh): waited 11s, ran in 7m 39s
Lint docs (bin/ci-builder run stable ci/test/lint-docs.sh): waited 4m 36s, ran in 8m 3s
Preview docs (bin/ci-builder run stable ci/test/preview-docs.sh): waited 1m 34s, ran in 4m 3s
Run Docs JS Widgets Tests (bin/ci-builder run stable ci/test/docs-widgets/docs-widgets.sh): waited 1s, ran in 7m 2s
:rust: Cargo test: waited 8s, ran in 36m 21s
Testdrive 1 (1/8): waited 7s, ran in 13m 28s
Testdrive 2 (2/8): waited 8s, ran in 13m 40s
Testdrive 3 (3/8): waited 4s, ran in 12m 23s
Testdrive 4 (4/8): waited 9s, ran in 19m 12s
Testdrive 5 (5/8): waited 2m 57s, ran in 8m 47s
Testdrive 6 (6/8): waited 3m 0s, ran in 20m 28s
Testdrive 7 (7/8): waited 3m 0s, ran in 20m 38s
Testdrive 8 (8/8): waited 3m 6s, ran in 23m 49s
Cluster tests 1 (1/4): waited 3m 24s, ran in 19m 7s
Cluster tests 2 (2/4): waited 3m 26s, ran in 16m 37s
Cluster tests 3 (3/4): waited 3m 28s, ran in 18m 25s
Cluster tests 4 (4/4): waited 3m 29s, ran in 17m 57s
Fast SQL logic tests 1 (1/5): waited 1s, ran in 9m 30s
Fast SQL logic tests 2 (2/5): waited 2s, ran in 13m 37s
Fast SQL logic tests 3 (3/5): waited 6s, ran in 12m 8s
Fast SQL logic tests 4 (4/5): waited 9s, ran in 9m 20s
Fast SQL logic tests 5 (5/5): waited 7s, ran in 11m 58s
Restart test: waited 3s, ran in 14m 38s
Legacy upgrade tests (last version from docs, ignore missing) 1 (1/2): waited 6s, ran in 12m 42s
Legacy upgrade tests (last version from docs, ignore missing) 2 (2/2): waited 6s, ran in 12m 39s
Debezium Postgres tests: waited 22s, ran in 9m 46s
Debezium SQL Server tests: waited 5s, ran in 18m 13s
Debezium MySQL tests: waited 22s, ran in 7m 33s
MySQL CDC tests 1 (1/2): waited 32s, ran in 14m 59s
MySQL CDC tests 2 (2/2): waited 34s, ran in 13m 24s
MySQL CDC resumption tests 1 (1/4): waited 38s, ran in 10m 45s
MySQL CDC resumption tests 2 (2/4): waited 3m 51s, ran in 21m 13s
MySQL CDC resumption tests 3 (3/4): waited 3m 52s, ran in 17m 10s
MySQL CDC resumption tests 4 (4/4): waited 3m 53s, ran in 14m 33s
MySQL RTR tests: waited 3m 55s, ran in 6m 56s
Postgres CDC tests 1 (1/2): waited 3m 56s, ran in 14m 16s
Postgres CDC tests 2 (2/2): waited 3m 57s, ran in 13m 21s
Postgres CDC resumption tests 1 (1/5): waited 3m 32s, ran in 8m 39s
Postgres CDC resumption tests 2 (2/5): waited 3m 33s, ran in 13m 42s
Postgres CDC resumption tests 3 (3/5): waited 3m 33s, ran in 11m 18s
Postgres CDC resumption tests 4 (4/5): waited 3m 33s, ran in 12m 38s
Postgres CDC resumption tests 5 (5/5): waited 3m 34s, ran in 14m 55s
Postgres RTR tests: waited 3m 58s, ran in 7m 14s
Yugabyte CDC tests: waited 3m 59s, ran in 10m 13s
SSH connection tests: waited 3m 35s, ran in 15m 47s
Fivetran Destination tests: waited 3m 59s, ran in 10m 31s
Copy to S3: waited 4m 9s, ran in 7m 13s
Kafka resumption tests: waited 3m 35s, ran in 15m 50s
Kafka auth test: waited 4m 3s, ran in 13m 21s
Kafka exactly-once tests: waited 4m 6s, ran in 8m 45s
Kafka RTR tests 1 (1/2): waited 3m 35s, ran in 8m 59s
Kafka RTR tests 2 (2/2): waited 3m 36s, ran in 11m 15s
Short Zippy: waited 4m 6s, ran in 12m 0s
Checks + restart of environmentd & storage clusterd 1 (1/6): waited 3m 35s, ran in 14m 37s
Checks + restart of environmentd & storage clusterd 2 (2/6): waited 3m 36s, ran in 19m 2s
Checks + restart of environmentd & storage clusterd 3 (3/6): waited 3m 36s, ran in 14m 25s
Checks + restart of environmentd & storage clusterd 4 (4/6): waited 3m 38s, ran in 14m 1s
Checks + restart of environmentd & storage clusterd 5 (5/6): waited 3m 41s, ran in 17m 58s
Checks + restart of environmentd & storage clusterd 6 (6/6): waited 3m 42s, ran in 18m 21s
Checks without restart or upgrade 1 (1/6): waited 3m 44s, ran in 16m 32s
Checks without restart or upgrade 2 (2/6): waited 3m 45s, ran in 14m 43s
Checks without restart or upgrade 3 (3/6): waited 3m 45s, ran in 14m 34s
Checks without restart or upgrade 4 (4/6): waited 3m 45s, ran in 12m 37s
Checks without restart or upgrade 5 (5/6): waited 3m 45s, ran in 15m 11s
Checks without restart or upgrade 6 (6/6): waited 3m 45s, ran in 13m 28s
Source/Sink Error Reporting 1 (1/2): waited 4m 6s, ran in 13m 22s
Source/Sink Error Reporting 2 (2/2): waited 4m 6s, ran in 10m 56s
Feature benchmark (Kafka only): waited 1m 50s, ran in 23m 21s
Persistence tests: waited 4m 7s, ran in 11m 29s
Cluster isolation test: waited 4m 7s, ran in 12m 32s
chbench smoke test: waited 4m 7s, ran in 10m 39s
Metabase smoke test: waited 8s, ran in 4m 47s
dbt-materialize tests: waited 4m 8s, ran in 14m 15s
Storage Usage Table Test: waited 4m 8s, ran in 7m 32s
Tracing Fast Path: waited 47s, ran in 5m 18s
Skip Version Upgrade: waited 4m 9s, ran in 8m 0s
Deploy website: triggered build of main / 3d14785c0e on Deploy website, Build #8189 (asynchronous); build will continue even if previous stage fails
Deploy: triggered build of main / 3d14785c0e on Deploy, Build #17398 (asynchronous); build will continue even if previous stage fails
Total Job Run Time: 20h 2m