Test
Public
Run fast unit and integration tests
Merge pull request #31116 from MaterializeInc/dependabot/pip/ci/builder/pytest-8.3.4
Passed in 1h 17m

bootstrap stable x86_64
bootstrap nightly x86_64
bootstrap min x86_64
bootstrap stable aarch64
bootstrap nightly aarch64
bootstrap min aarch64
mkpipeline

Restart test
Yugabyte CDC tests
Short Zippy
Feature benchmark (Kafka only)
Persistence tests
Cluster isolation test
chbench smoke test
Metabase smoke test
dbt-materialize tests
Storage Usage Table Test
Tracing Fast Path
Skip Version Upgrade
Deploy website
Deploy

Cluster tests 2 failed, main history:
- Unknown error in test-refresh-mv-warmup:
Docker compose failed: docker compose -f/dev/fd/3 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-3a6173ea/materialize/test/test/cluster exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --default-timeout=360s --persist-blob-url=file:///mzdata/persist/blob --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-3a6173ea/materialize/test/test/cluster/mzcompose.py:3838
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-3a6173ea/materialize/test/test/cluster/mzcompose.py:3838
- Unknown error in test-storage-controller-metrics:
builtins.AssertionError: got 2.0
Test details & reproducer
Functional tests which require separate clusterd containers (instead of the usual clusterd included in the materialized container).
BUILDKITE_PARALLEL_JOB=1 BUILDKITE_PARALLEL_JOB_COUNT=4 bin/mzcompose --find cluster run default
Restart test failed, main history:
- Unknown error in bound-size-mz-status-history:
Docker compose failed: docker compose -f/dev/fd/3 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-e637fc46/materialize/test/test/restart exec -T testdrive_no_reset testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --default-timeout=360s --persist-blob-url=file:///mzdata/persist/blob --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-e637fc46/materialize/test/test/restart/mzcompose.py:645
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-e637fc46/materialize/test/test/restart/mzcompose.py:645
Test details & reproducer
Testdrive-based tests involving restarting materialized (including its clusterd processes). See cluster tests for separate clusterds, see platform-checks for further restart scenarios.
bin/mzcompose --find restart run default
Cluster tests 3 failed, main history:
- Unknown error in test-workload-class-in-metrics:
psycopg.errors.FeatureNotSupported: log source reads must target a replica DETAIL: The query references the following log sources: mz_dataflow_operators_per_worker HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.
Test details & reproducer
Functional tests which require separate clusterd containers (instead of the usual clusterd included in the materialized container).
BUILDKITE_PARALLEL_JOB=2 BUILDKITE_PARALLEL_JOB_COUNT=4 bin/mzcompose --find cluster run default
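The hint in the test-workload-class-in-metrics error above can be exercised directly in a SQL session; a minimal sketch, assuming the active cluster has a replica named r1 (check with SHOW CLUSTER REPLICAS):

-- Pin queries to one replica so per-worker log sources may be read
-- (replica name r1 is an assumption; schema-qualify the log source,
-- e.g. mz_introspection.mz_dataflow_operators_per_worker, if needed).
SET cluster_replica = r1;
SELECT count(*) FROM mz_dataflow_operators_per_worker;
-- Drop the pinning afterwards to restore full availability.
RESET cluster_replica;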
Cluster tests 1 failed, main history:
- Unknown error in test-replica-metrics:
builtins.AssertionError: unexpected create_instance count: 1.0
Test details & reproducer
Functional tests which require separate clusterd containers (instead of the usual clusterd included in the materialized container).
BUILDKITE_PARALLEL_JOB=0 BUILDKITE_PARALLEL_JOB_COUNT=4 bin/mzcompose --find cluster run default
Testdrive 4 failed, main history:
- Unknown error in session.td:
session.td:18:1: non-matching rows: expected:
[["DateStyle", "ISO, MDY", "Sets the display format for date and time values (PostgreSQL)."], ["IntervalStyle", "postgres", "Sets the display format for interval values (PostgreSQL)."], ["TimeZone", "UTC", "Sets the time zone for displaying and interpreting time stamps (PostgreSQL)."], ["allowed_cluster_replica_sizes", "", "The allowed sizes when creating a new cluster replica (Materialize)."], ["application_name", "", "Sets the application name to be reported in statistics and logs (PostgreSQL)."], ["auto_route_catalog_queries", "on", "Whether to force queries that depend only on system tables, to run on the mz_catalog_server cluster (Materialize)."], ["client_encoding", "UTF8", "Sets the client's character set encoding (PostgreSQL)."], ["client_min_messages", "notice", "Sets the message levels that are sent to the client (PostgreSQL)."], ["cluster", "<VARIES>", "Sets the current cluster (Materialize)."], ["cluster_replica", "", "Sets a target cluster replica for SELECT queries (Materialize)."], ["current_object_missing_warnings", "on", "Whether to emit warnings when the current database, schema, or cluster is missing (Materialize)."], ["database", "materialize", "Sets the current database (CockroachDB)."], ["emit_introspection_query_notice", "on", "Whether to print a notice when querying per-replica introspection sources."], ["emit_plan_insights_notice", "off", "Boolean flag indicating whether to send a NOTICE with JSON-formatted plan insights before executing a SELECT statement (Materialize)."], ["emit_timestamp_notice", "off", "Boolean flag indicating whether to send a NOTICE with timestamp explanations of queries (Materialize)."], ["emit_trace_id_notice", "off", "Boolean flag indicating whether to send a NOTICE specifying the trace id when available (Materialize)."], ["enable_consolidate_after_union_negate", "on", "consolidation after Unions that have a Negated input (Materialize)."], ["enable_rbac_checks", "on", "User facing global boolean flag indicating whether to apply RBAC checks before executing statements (Materialize)."], ["enable_reduce_reduction", "on", "split complex reductions in to simpler ones and a join (Materialize)."], ["enable_session_rbac_checks", "off", "User facing session boolean flag indicating whether to apply RBAC checks before executing statements (Materialize)."], ["extra_float_digits", "3", "Adjusts the number of digits displayed for floating-point values (PostgreSQL)."], ["failpoints", "<omitted>", "Allows failpoints to be dynamically activated."], ["force_source_table_syntax", "off", "Force use of new source model (CREATE TABLE .. FROM SOURCE) and migrate existing sources"], ["idle_in_transaction_session_timeout", "2 min", "Sets the maximum allowed duration that a session can sit idle in a transaction before being terminated. If this value is specified without units, it is taken as milliseconds. 
A value of zero disables the timeout (PostgreSQL)."], ["integer_datetimes", "on", "Reports whether the server uses 64-bit-integer dates and times (PostgreSQL)."], ["is_superuser", "off", "Reports whether the current session is a superuser (PostgreSQL)."], ["max_aws_privatelink_connections", "0", "The maximum number of AWS PrivateLink connections in the region, across all schemas (Materialize)."], ["max_clusters", "10", "The maximum number of clusters in the region (Materialize)."], ["max_connections", "5000", "The maximum number of concurrent connections (PostgreSQL)."], ["max_continual_tasks", "100", "The maximum number of continual tasks in the region, across all schemas (Materialize)."], ["max_copy_from_size", "1073741824", "The maximum size in bytes we buffer for COPY FROM statements (Materialize)."], ["max_credit_consumption_rate", "1024", "The maximum rate of credit consumption in a region. Credits are consumed based on the size of cluster replicas in use (Materialize)."], ["max_databases", "1000", "The maximum number of databases in the region (Materialize)."], ["max_identifier_length", "255", "The maximum length of object identifiers in bytes (PostgreSQL)."], ["max_kafka_connections", "1000", "The maximum number of Kafka connections in the region, across all schemas (Materialize)."], ["max_materialized_views", "100", "The maximum number of materialized views in the region, across all schemas (Materialize)."], ["max_mysql_connections", "1000", "The maximum number of MySQL connections in the region, across all schemas (Materialize)."], ["max_network_policies", "25", "The maximum number of network policies in the region."], ["max_objects_per_schema", "1000", "The maximum number of objects in a schema (Materialize)."], ["max_postgres_connections", "1000", "The maximum number of PostgreSQL connections in the region, across all schemas (Materialize)."], ["max_query_result_size", "1GB", "The maximum size in bytes for a single query's result (Materialize)."], ["max_replicas_per_cluster", "5", "The maximum number of replicas of a single cluster (Materialize)."], ["max_result_size", "1GB", "The maximum size in bytes for an internal query result (Materialize)."], ["max_roles", "1000", "The maximum number of roles in the region (Materialize)."], ["max_rules_per_network_policy", "25", "The maximum number of rules per network policies."], ["max_schemas_per_database", "1000", "The maximum number of schemas in a database (Materialize)."], ["max_secrets", "100", "The maximum number of secrets in the region, across all schemas (Materialize)."], ["max_sinks", "25", "The maximum number of sinks in the region, across all schemas (Materialize)."], ["max_sources", "200", "The maximum number of sources in the region, across all schemas (Materialize)."], ["max_sql_server_connections", "1000", "The maximum number of SQL Server connections in the region, across all schemas (Materialize)."], ["max_tables", "200", "The maximum number of tables in the region, across all schemas (Materialize)."], ["mz_version", "<VARIES>", "Shows the Materialize server version (Materialize)."], ["network_policy", "default", "Sets the fallback network policy applied to all users without an explicit policy."], ["optimizer_e2e_latency_warning_threshold", "500 ms", "Sets the duration that a query can take to compile; queries that take longer will trigger a warning. If this value is specified without units, it is taken as milliseconds. 
A value of zero disables the timeout (Materialize)."], ["real_time_recency", "off", "Feature flag indicating whether real time recency is enabled (Materialize)."], ["real_time_recency_timeout", "10 s", "Sets the maximum allowed duration of SELECTs that actively use real-time recency, i.e. reach out to an external system to determine their most recencly exposed data (Materialize)."], ["search_path", "public", "Sets the schema search order for names that are not schema-qualified (PostgreSQL)."], ["server_version", "9.5.0", "Shows the PostgreSQL compatible server version (PostgreSQL)."], ["server_version_num", "90500", "Shows the PostgreSQL compatible server version as an integer (PostgreSQL)."], ["sql_safe_updates", "off", "Prohibits SQL statements that may be overly destructive (CockroachDB)."], ["standard_conforming_strings", "on", "Causes '...' strings to treat backslashes literally (PostgreSQL)."], ["statement_logging_default_sample_rate", "0.01", "The default value of `statement_logging_sample_rate` for new sessions (Materialize)."], ["statement_logging_max_sample_rate", "0.01", "The maximum rate at which statements may be logged. If this value is less than that of `statement_logging_sample_rate`, the latter is ignored (Materialize)."], ["statement_logging_sample_rate", "0.01", "User-facing session variable indicating how many statement executions should be logged, subject to constraint by the system variable `statement_logging_max_sample_rate` (Materialize)."], ["statement_timeout", "1 min", "Sets the maximum allowed duration of INSERT...SELECT, UPDATE, and DELETE operations. If this value is specified without units, it is taken as milliseconds."], ["superuser_reserved_connections", "3", "The number of connections that are reserved for superusers (PostgreSQL)."], ["transaction_isolation", "strict serializable", "Sets the current transaction's isolation level (PostgreSQL)."], ["unsafe_new_transaction_wall_time", "", "Sets the wall time for all new explicit or implicit transactions to control the value of `now()`. If not set, uses the system's clock."], ["welcome_message", "on", "Whether to send a notice with a welcome message after a successful connection (Materialize)."]]
got:
[["DateStyle", "ISO, MDY", "Sets the display format for date and time values (PostgreSQL)."], ["IntervalStyle", "postgres", "Sets the display format for interval values (PostgreSQL)."], ["TimeZone", "UTC", "Sets the time zone for displaying and interpreting time stamps (PostgreSQL)."], ["allowed_cluster_replica_sizes", "", "The allowed sizes when creating a new cluster replica (Materialize)."], ["application_name", "", "Sets the application name to be reported in statistics and logs (PostgreSQL)."], ["auto_route_catalog_queries", "on", "Whether to force queries that depend only on system tables, to run on the mz_catalog_server cluster (Materialize)."], ["client_encoding", "UTF8", "Sets the client's character set encoding (PostgreSQL)."], ["client_min_messages", "notice", "Sets the message levels that are sent to the client (PostgreSQL)."], ["cluster", "<VARIES>", "Sets the current cluster (Materialize)."], ["cluster_replica", "", "Sets a target cluster replica for SELECT queries (Materialize)."], ["current_object_missing_warnings", "on", "Whether to emit warnings when the current database, schema, or cluster is missing (Materialize)."], ["database", "materialize", "Sets the current database (CockroachDB)."], ["default_cluster_replication_factor", "1", "Default cluster replication factor (Materialize)."], ["emit_introspection_query_notice", "on", "Whether to print a notice wh [...]
Test details & reproducer
Testdrive is the basic framework and language for defining product tests under the expected-result/actual-result (aka golden testing) paradigm. A query is retried until it produces the desired result.
BUILDKITE_PARALLEL_JOB=3 BUILDKITE_PARALLEL_JOB_COUNT=8 bin/mzcompose --find testdrive run default
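As a concrete illustration of that paradigm, a hypothetical minimal .td fragment (not taken from session.td): the rows listed under the query are the expected result, and testdrive retries the query until the actual output matches.

# Hypothetical .td fragment; expected values are whitespace-separated columns.
> SELECT 1, 'ok'
1 ok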
Checks + restart of environmentd & storage clusterd 3 failed, main history:
- Unknown error in workflow-default:
Docker compose failed: docker compose -f/dev/fd/3 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-07b1ac9a/materialize/test/test/platform-checks exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=statement_timeout='300s' --default-timeout=300s --seed=1 --persist-blob-url=s3://minioadmin:minioadmin@persist/persist?endpoint=http://minio:9000/&region=minio --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --var=replicas=1 --var=default-replica-size=4-4 --var=default-storage-size=4-1 --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-07b1ac9a/materialize/test/misc/python/materialize/checks/all_checks/cluster.py:37
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-07b1ac9a/materialize/test/misc/python/materialize/checks/all_checks/cluster.py:37
Test details & reproducer
Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery and failure contexts.
BUILDKITE_PARALLEL_JOB=2 BUILDKITE_PARALLEL_JOB_COUNT=6 bin/mzcompose --find platform-checks run default --scenario=RestartEnvironmentdClusterdStorage --seed=0196383b-f390-40b8-bc2b-5d7ced8599a8
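Per the description above, each check ultimately reduces to .td fragments whose validation must pass identically before and after the scenario's restarts; a hypothetical validate-phase fragment (the view name is invented, not taken from cluster.py or webhook.py):

# Hypothetical validate fragment: whatever the check created earlier must
# still read back correctly after RestartEnvironmentdClusterdStorage.
> SELECT count(*) FROM some_check_mv
3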
Checks without restart or upgrade 3 failed, main history:
- Unknown error in workflow-default:
Docker compose failed: docker compose -f/dev/fd/4 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-8cf661c5/materialize/test/test/platform-checks exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=statement_timeout='300s' --default-timeout=300s --seed=1 --persist-blob-url=s3://minioadmin:minioadmin@persist/persist?endpoint=http://minio:9000/&region=minio --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --var=replicas=1 --var=default-replica-size=4-4 --var=default-storage-size=4-1 --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-8cf661c5/materialize/test/misc/python/materialize/checks/all_checks/cluster.py:37
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-8cf661c5/materialize/test/misc/python/materialize/checks/all_checks/cluster.py:37
Test details & reproducer
Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery and failure contexts.
BUILDKITE_PARALLEL_JOB=2 BUILDKITE_PARALLEL_JOB_COUNT=6 bin/mzcompose --find platform-checks run default --scenario=NoRestartNoUpgrade --seed=0196383b-f390-40b8-bc2b-5d7ced8599a8
Checks + restart of environmentd & storage clusterd 5 failed, main history:
- Unknown error in workflow-default:
Docker compose failed: docker compose -f/dev/fd/4 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-957ddb9d/materialize/test/test/platform-checks exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=statement_timeout='300s' --default-timeout=300s --seed=1 --persist-blob-url=s3://minioadmin:minioadmin@persist/persist?endpoint=http://minio:9000/&region=minio --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --var=replicas=1 --var=default-replica-size=4-4 --var=default-storage-size=4-1 --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-957ddb9d/materialize/test/misc/python/materialize/checks/all_checks/webhook.py:140
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-957ddb9d/materialize/test/misc/python/materialize/checks/all_checks/webhook.py:140
Test details & reproducer
Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery and failure contexts.
BUILDKITE_PARALLEL_JOB=4 BUILDKITE_PARALLEL_JOB_COUNT=6 bin/mzcompose --find platform-checks run default --scenario=RestartEnvironmentdClusterdStorage --seed=0196383b-f390-40b8-bc2b-5d7ced8599a8
Checks without restart or upgrade 5 failed, main history:
- Unknown error in workflow-default:
Docker compose failed: docker compose -f/dev/fd/4 --project-directory /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-473e37d1/materialize/test/test/platform-checks exec -T testdrive testdrive --kafka-addr=kafka:9092 --schema-registry-url=http://schema-registry:8081 --materialize-url=postgres://materialize@materialized:6875 --materialize-internal-url=postgres://materialize@materialized:6877 --aws-endpoint=http://minio:9000 --var=aws-endpoint=http://minio:9000 --aws-access-key-id=minioadmin --var=aws-access-key-id=minioadmin --aws-secret-access-key=minioadmin --var=aws-secret-access-key=minioadmin --no-reset --materialize-param=statement_timeout='300s' --default-timeout=300s --seed=1 --persist-blob-url=s3://minioadmin:minioadmin@persist/persist?endpoint=http://minio:9000/&region=minio --persist-consensus-url=postgres://root@materialized:26257?options=--search_path=consensus --var=replicas=1 --var=default-replica-size=4-4 --var=default-storage-size=4-1 --source=/var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-473e37d1/materialize/test/misc/python/materialize/checks/all_checks/webhook.py:140
+++ !!! Error Report
1 errors were encountered during execution
source: /var/lib/buildkite-agent/builds/hetzner-aarch64-16cpu-32gb-473e37d1/materialize/test/misc/python/materialize/checks/all_checks/webhook.py:140
Test details & reproducer
Write a single set of .td fragments for a particular feature or functionality and then have Zippy execute them in upgrade, 0dt-upgrade, restart, recovery and failure contexts.
BUILDKITE_PARALLEL_JOB=4 BUILDKITE_PARALLEL_JOB_COUNT=6 bin/mzcompose --find platform-checks run default --scenario=NoRestartNoUpgrade --seed=0196383b-f390-40b8-bc2b-5d7ced8599a8
Fast SQL logic tests 2 failed, main history:
- Unknown error in test/sqllogictest/cluster.slt:
OutputFailure:test/sqllogictest/cluster.slt:454
expected: Values(["mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "size_1", "1"])
actually: Values(["mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "r2", "2", "quickstart", "size_1", "1"])
actual raw: [Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }]
OutputFailure:test/sqllogictest/cluster.slt:466
expected: Values(["foo", "size_1", "1", "foo", "size_2", "2", "mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "size_1", "1"])
actually: Values(["foo", "size_1", "1", "foo", "size_2", "2", "mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "r2", "2", "quickstart", "size_1", "1"])
actual raw: [Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }]
OutputFailure:test/sqllogictest/cluster.slt:495
expected: Values(["mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2"])
actually: Values(["mz_catalog_server", "r1", "2", "mz_probe", "r1", "2", "mz_system", "r1", "2", "quickstart", "r1", "2", "quickstart", "r2", "2"])
actual raw: [Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }, Row { columns: [Column { name: "cluster", table_oid: None, column_id: None, type: Text }, Column { name: "replica", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }] }]
OutputFailure:test/sqllogictest/cluster.slt:671
expected: Values(["r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "size_1", "1", "1", "18446744073709000000", "18446744073709551615", "1", "1", "size_1_8g", "1-8G", "1", "18446744073709000000", "8589934592", "1", "1", "size_2_2", "2-2", "2", "18446744073709000000", "18446744073709551615", "2", "2", "size_32", "32", "1", "18446744073709000000", "18446744073709551615", "32", "1"])
actually: Values(["r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r1", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "r2", "2", "1", "18446744073709000000", "18446744073709551615", "2", "1", "size_1", "1", "1", "18446744073709000000", "18446744073709551615", "1", "1", "size_1_8g", "1-8G", "1", "18446744073709000000", "8589934592", "1", "1", "size_2_2", "2-2", "2", "18446744073709000000", "18446744073709551615", "2", "2", "size_32", "32", "1", "18446744073709000000", "18446744073709551615", "32", "1"])
actual raw: [Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }, Column { name: "processes", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "cpu_nano_cores", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "memory_bytes", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "workers", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "credits_per_hour", table_oid: None, column_id: None, type: Numeric }] }, Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }, Column { name: "processes", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "cpu_nano_cores", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "memory_bytes", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "workers", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "credits_per_hour", table_oid: None, column_id: None, type: Numeric }] }, Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }, Column { name: "processes", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "cpu_nano_cores", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "memory_bytes", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "workers", table_oid: None, column_id: None, type: Other(Other { name: "uint8", oid: 16464, kind: Simple, schema: "mz_catalog" }) }, Column { name: "credits_per_hour", table_oid: None, column_id: None, type: Numeric }] }, Row { columns: [Column { name: "name", table_oid: None, column_id: None, type: Text }, Column { name: "size", table_oid: None, column_id: None, type: Text }, Column { name: "processes", table_oid: None, column_id: None, type: Ot [...]
- Unknown error in test/sqllogictest/transform/normalize_lets.slt:
Bail:test/sqllogictest/transform/normalize_lets.slt:492 PlanFailure:test/sqllogictest/transform/normalize_lets.slt:492:
db error: ERROR: log source reads must target a replica
DETAIL: The query references the following log sources:
mz_scheduling_elapsed_raw
mz_compute_import_frontiers_per_worker
mz_dataflow_operators_per_worker
HINT: Use `SET cluster_replica = <replica-name>` to target a specific replica in the active cluster. Note that subsequent queries will only be answered by the selected replica, which might reduce availability. To undo the replica selection, use `RESET cluster_replica`.
Test details & reproducer
Run SQL tests using an instance of Mz that is embedded in the sqllogic binary itself. Good for basic SQL tests, but can't interact with sources like MySQL/Kafka, see Testdrive for that.
BUILDKITE_PARALLEL_JOB=1 BUILDKITE_PARALLEL_JOB_COUNT=6 bin/mzcompose --find sqllogictest run fast-tests
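For context on the OutputFailure entries above: an .slt test declares expected rows after a ---- separator, and the runner reports expected vs. actually on mismatch. A hypothetical fragment in that style, approximating the kind of replica listing cluster.slt:454 checks (the actual statement and expected rows may differ):

query TTT
SELECT c.name, r.name, r.size
FROM mz_clusters c JOIN mz_cluster_replicas r ON r.cluster_id = c.id
ORDER BY 1, 2
----
quickstart r1 2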
Build WASM
bin/ci-builder run stable bin/pyactivate -m ci.deploy.npm --no-release
Waited 1m 19s
Ran in 8m 32s
Tag development docker images
bin/ci-builder run stable bin/pyactivate -m ci.test.dev_tag
Waited 46s
Ran in 3m 51s
Run Docs JS Widgets Tests
bin/ci-builder run stable ci/test/docs-widgets/docs-widgets.sh
Waited 1s
Ran in 7m 2s
Deploy website
Triggered build of main / 3d14785c0e on Deploy website: Build #8189 (asynchronous)
Build will continue even if previous stage fails
Deploy
Triggered build of main / 3d14785c0e on Deploy: Build #17398 (asynchronous)
Total Job Run Time: 20h 2m