PXB-3218 - Merge MySQL 8.3 #1541
Commits on Nov 1, 2023
Bug#35967676: Compiling 5.7 fails with VS2022
Add the missing STL headers to allow compiling on VS2022 Change-Id: I2e87cc97fa4365d22ddd18f1d5373c389fda4c86
Commit 4611cc2
Commit d83813a
Commit ec07543
Bug#35965815 Make run_ndbapitest.inc and run_java.inc show output on …
…error to mtr output. MTR tests that use run_ndbapitest.inc or run_java.inc were intended to record the last 512 or 200 lines of test output in the result file on error, so that it eventually shows up in mtr output. But mtr only showed the last 20 lines of difference between the test result file and the pre-recorded test results. An empty file, include/full_result_diff.inc, is introduced for inclusion in test files that do not want the 20-line limitation on result diff output. The file is included by run_ndbapitest.inc and run_java.inc. This makes it easier to see, for example, why ndb.test_mgmd fails in PB2, without needing to download the tar file with all test failures, and helps for PB2 builds such as gcov that do not provide such a tar file. Change-Id: I457d0f9430bfe0960ebd60fcbe5eb25e3b71a986
Commit 3afce06
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: Idefbe610f8bd2f435ed4cc0bac32898cfe9ff2fc
Commit d4ce6f1
Bug#35686098 Assertion `n < size()' failed in Element_type& Mem_root_…
…array_YY This patch fixes three issues:

- For hashed set operations we compute the number of chunk files as a multiple of two. If the statistics for the number of rows in the left set operand are wrong (in this case -1), we would get a calculated number of chunk files of zero. When the bug report was made, this led to us not having any chunk files, since the ceiling method used (to get a multiple of two) returned 0 for 0 input. This ceiling function has since been changed (commit 88d716a - Bug#35813111: Use C++ standard library for more bit operations [my_round_up_to_next_power, noclose]), so we now get 1 on zero input and end up with just one chunk file. This makes the crash seen with the old optimizer go away; we would quickly run out of space again and revert to de-duplicating via a tmp table index. This is fixed by sanity checking the number-of-rows estimate and making a wild guess at the number of rows in the result set, currently set to 8 * rows already in the hash table when we overflow memory.
- The estimate of -1 for the windowed left operand is obviously wrong, and we changed the code to propagate the number of rows of the child table. The added tests show the new behavior for this case.
- Even if the estimate is not -1 but a reasonable number, fixing the above exposed another crash (using the repro given for the hypergraph optimizer, but with the old optimizer enabled): we ran out of space in the dedicated hash table upon re-reading one of the on-disk chunks into the hash table. If the estimate is too low, we may end up in a situation where one of the chunks just barely fits into the dedicated mem_root when processing the left operand. Later, when processing operands 2..n, we have to re-read the chunks into the hash table in preparation for matching with right operand rows. We asserted that this should always succeed, since these rows already fit for the left operand. However, the order in which the rows are entered into the hash table for operand two will in general differ from that of operand one. This will in general lead to different fragmentation of the dedicated mem_root - notably if we have blobs of different sizes, as in the repro - and the chunk that previously *just* fit may not fit this time around. In the example, the last row in the chunk file did not fit (row #290 out of 290 rows in the chunk file). We fix this by falling back on the thread's main mem_root if this happens, which should be very rare. Note that in most cases, if the estimate is too low, we would run out of space when processing the left operand and fall back on index-based de-duplication using a tmp table.

Change-Id: Ie2b299ff6f106df727866b3cd3631b7a273c1c9b
Dag Wanvik committed Nov 1, 2023 (commit d96f912)
Bug#35390341 Assertion `m_count > 0 && m_count > m_frame_null_count' …
…failed. This issue involves setting a user variable inside the argument of a window function, which in turn is evaluated using the window frame buffer (Window::m_needs_frame_buffering) and is row optimizable (Window::m_row_optimizable). Setting user variables inside expressions is deprecated, in part because the semantics are hard to define, but this patch at least avoids the assert. Normally, when evaluating a function argument for a window function, e.g. AVG(2 * c1), the function (here the multiplication) is evaluated just before writing a row to the frame buffer, whereas functions *containing* a window function are evaluated later, i.e. after the value of the window function is ready, cf. split_sum_func2. Now, for the first case, when creating the window's output and frame buffer tmp tables, we replace the function (here the multiplication) with a result field in the window's tmp table, cf. change_to_use_tmp_fields, so that the window function, when evaluated, picks up its input (the result of the multiplication) from the window frame buffer. However, there is an explicit exception for the setting of user variables (which is implemented as a function), cf. this comment in change_to_use_tmp_fields:

  /*
    Replace "@:=<expression>" with "@:=<tmp table column>". Otherwise,
    we would re-evaluate <expression>, and if expression were a subquery,
    this would access already-unlocked tables.
  */

which stems from Bug#11764371 at commit 6a2402b, involving locking issues when executing a subquery while evaluating the setting of a user variable, a join and a group by. The exception mentioned is a problem for evaluating window functions, because the window function (AVG) will try to evaluate Item_func_set_user_var when reading from the frame buffer. But the underlying column is taken from the input file, not from the output/frame buffer (it is not replaced by an Item_ref subject to the slice indirection). In the repro, the first two rows are null and the third row is non-null. When trying to evaluate the window function for the third row, we invert the first (null) row, but this time, since the third input row has been read, the Item_func_set_user_var's argument points into the input buffer, which holds a non-null row, whereas AVG expects to invert a null row (it has counters indicating that rows 1 and 2 are both null) from the frame buffer. This causes the assert failure seen. The fix is to *not* apply the mentioned exception for setting user variables when windowing: that way, the user variable is only set when we read the input result set for the windowing action, and the frame buffer will contain that value. As for the locking issue of Bug#11764371, I tried windowing on the query mentioned for that issue (included in the test) and saw no problem. Change-Id: I75dc538bea55d5888783586d31ced5e9643d6fa3
Dag Wanvik committed Nov 1, 2023 (commit 9317c9c)
Bug#35899768 Api failure handling during node restart leaves subscrip…
…tions behind During a data node restart, the starting node asks nodegroup peers to copy the details of the current event subscribers to the starting node, so that it can take over responsibility for part of the event forwarding and buffering work when it is started. This implies that the starting node must also monitor for subscribing API node failures from this point forward. As the copy of subscribers happens before the API nodes are directly connected to the starting node, the starting data node must handle API failures even when those APIs are not directly connected during restarts. This was already handled to some extent in QMGR, but appears to have regressed over time. This could result in e.g.:
- Leaked subscribers at SUMA
- Wasted effort forwarding events to disconnected subscribers
- Wasted effort forwarding events to reconnected subscribers
- Crashes / undefined behaviour if reconnected subscribers reuse NdbApi object identities

test_event_mysqld is created to give some coverage of the MySQLD ha_ndbcluster plugin stack event consumption behaviour during data node restarts and MySQLD asynchronous disconnects:
  test_event_mysqld -n MySQLDEventsRestarts
  test_event_mysqld -n MySQLDEventsDisconnects
  test_event_mysqld -n MySQLDEventsRestartsDisconnects
The third testcase is invoked from a new MTR testcase, ndb_binlog_testevent_rd, to give simple automated coverage. Change-Id: Icef0fd0972fb5646bd27dbf2b9ff194938a30ba9
Commit 7740c72
Bug#35663761 NdbApi fails to find container for latest epoch in NODE_…
…FAILREP handling

Background: NdbApi events are buffered in NdbApi to match the rate of production with the rate of consumption by user code. A maximum buffer size can be set to avoid unbounded memory usage when the rates are mismatched for a long time. When the maximum size is reached, event buffering stops until the buffer usage drops below a lower threshold. A TE_OUT_OF_MEMORY event is added to the buffer to inform the consumer that events may be missing.

Problem: Testing and experience show some issues with this mechanism, particularly that it stops the buffering of both 'table data' events (TE_INSERT, TE_UPDATE, TE_DELETE) and other meta events (TE_DROP_TABLE, TE_NODE_FAILURE, TE_CLUSTER_FAILURE, ...). Discarding the meta events causes issues in NdbApi and in consumer code, as the state of the event stream is not clear. One specific example is where the cluster is disconnected while the event buffer is full - in this case, the local ClusterMgr thread should add TE_NODE_FAILURE and TE_CLUSTER_FAILURE events to the buffered events for each active EventOperation, but this was not possible and resulted in a crash. Other undesirable behaviours may be possible.

Solution: In an event buffer overload situation, only the table data events TE_INSERT, TE_UPDATE, TE_DELETE are discarded. Meta events are buffered and eventually processed as normal.

Effects: This means that the buffer usage is not necessarily as tightly bounded as in the original implementation, though the rate of production of meta events is expected to be minimal. Additionally, from the consumer's point of view, when they iterate to a position in the event stream that corresponds to an out-of-memory situation, they may still observe some meta events 'during' the out-of-memory period. These events will not include data, and specifically will not include TE_EMPTY_EPOCH events, which could be misleading. The consumer can therefore safely assume that the out-of-memory situation persists until it receives TE_EMPTY_EPOCH (if enabled) or data-carrying events after a TE_OUT_OF_MEMORY event. An additional effect of this change is that the event API's latest epoch, as returned by getLatestGCI(), is no longer frozen while the event buffer producer is in an epoch gap - it continues to climb. This behaviour was not documented as an indication of event buffer OOM handling, but was used by some existing test_event testcases. These are modified to instead observe the buffer fill level to detect when an OOM gap is being buffered.

Testing: Four new tests added to test_event_mysqld:
  test_event_mysqld -n MySQLDEventsEventBufferOverload
  test_event_mysqld -n MySQLDEventsEventBufferOverloadRestarts
  test_event_mysqld -n MySQLDEventsEventBufferOverloadDisconnects
  test_event_mysqld -n MySQLDEventsEventBufferOverloadRestartsDisconnects
A new MTR test is added invoking test_event_mysqld -n MySQLDEventsEventBufferOverloadRestartsDisconnects: ndb_binlog_testevent_ord. Change-Id: If904fb60be2798b53cc4cb25ea3607c4289739b9
Commit dab182c
Bug#35655162 NdbApi event buffer overflow can cause timeout waiting f…
…or drop table

Problem: During NdbApi event buffer overflow, buffering of data and metadata events is paused, resulting in data and metadata events being discarded. This means that event subscribers waiting for metadata events such as TE_DROP_TABLE may never receive them and will time out. Specifically, MySQL Cluster schema distribution uses the arrival of a TE_DROP_TABLE event on an NdbEventOperation to know when it is required and safe to drop the NdbEventOperation. If this event does not arrive within a bounded time, the NdbEventOperation is not dropped, leaking resources and potentially causing later problems.

Solution: The fix for Bug#35663761 (NdbApi fails to find container for latest epoch in NODE_FAILREP handling) modified the NdbApi event buffer overload handling to pause only the buffering of data events while continuing to buffer metadata events. This means that in event buffer overload situations, TE_DROP events will not be lost. This patch adds a testcase covering this scenario:
  test_event_mysqld -n MySQLDEventsEventBufferOverloadDDL
This test is invoked from MTR as ndb_binlog_testevent_os. Change-Id: I654fd0aa0c83d35956d4d01632f16f610951bd0c
Commit d52022e
Bug#35899768 Api failure handling during node restart leaves subscrip…
…tions behind During a data node restart, the starting node asks nodegroup peers to copy the details of the current event subscribers to the starting node, so that it can take over responsibility for part of the event forwarding and buffering work when it is started. This implies that the starting node must also monitor for subscribing API node failures from this point forward. As the copy of subscribers happens before the API nodes are directly connected to the starting node, the starting data node must handle API failures even when those APIs are not directly connected during restarts. This was already handled to some extent in QMGR, but appears to have regressed over time. This could result in e.g.:
- Leaked subscribers at SUMA
- Wasted effort forwarding events to disconnected subscribers
- Wasted effort forwarding events to reconnected subscribers
- Crashes / undefined behaviour if reconnected subscribers reuse NdbApi object identities

test_event_mysqld is created to give some coverage of the MySQLD ha_ndbcluster plugin stack event consumption behaviour during data node restarts and MySQLD asynchronous disconnects:
  test_event_mysqld -n MySQLDEventsRestarts
  test_event_mysqld -n MySQLDEventsDisconnects
  test_event_mysqld -n MySQLDEventsRestartsDisconnects
The third testcase is invoked from a new MTR testcase, ndb_binlog_testevent_rd, to give simple automated coverage. Change-Id: Icef0fd0972fb5646bd27dbf2b9ff194938a30ba9
Commit d3d2f1d
Bug#35663761 NdbApi fails to find container for latest epoch in NODE_…
…FAILREP handling

Background: NdbApi events are buffered in NdbApi to match the rate of production with the rate of consumption by user code. A maximum buffer size can be set to avoid unbounded memory usage when the rates are mismatched for a long time. When the maximum size is reached, event buffering stops until the buffer usage drops below a lower threshold. A TE_OUT_OF_MEMORY event is added to the buffer to inform the consumer that events may be missing.

Problem: Testing and experience show some issues with this mechanism, particularly that it stops the buffering of both 'table data' events (TE_INSERT, TE_UPDATE, TE_DELETE) and other meta events (TE_DROP_TABLE, TE_NODE_FAILURE, TE_CLUSTER_FAILURE, ...). Discarding the meta events causes issues in NdbApi and in consumer code, as the state of the event stream is not clear. One specific example is where the cluster is disconnected while the event buffer is full - in this case, the local ClusterMgr thread should add TE_NODE_FAILURE and TE_CLUSTER_FAILURE events to the buffered events for each active EventOperation, but this was not possible and resulted in a crash. Other undesirable behaviours may be possible.

Solution: In an event buffer overload situation, only the table data events TE_INSERT, TE_UPDATE, TE_DELETE are discarded. Meta events are buffered and eventually processed as normal.

Effects: This means that the buffer usage is not necessarily as tightly bounded as in the original implementation, though the rate of production of meta events is expected to be minimal. Additionally, from the consumer's point of view, when they iterate to a position in the event stream that corresponds to an out-of-memory situation, they may still observe some meta events 'during' the out-of-memory period. These events will not include data, and specifically will not include TE_EMPTY_EPOCH events, which could be misleading. The consumer can therefore safely assume that the out-of-memory situation persists until it receives TE_EMPTY_EPOCH (if enabled) or data-carrying events after a TE_OUT_OF_MEMORY event. An additional effect of this change is that the event API's latest epoch, as returned by getLatestGCI(), is no longer frozen while the event buffer producer is in an epoch gap - it continues to climb. This behaviour was not documented as an indication of event buffer OOM handling, but was used by some existing test_event testcases. These are modified to instead observe the buffer fill level to detect when an OOM gap is being buffered.

Testing: Four new tests added to test_event_mysqld:
  test_event_mysqld -n MySQLDEventsEventBufferOverload
  test_event_mysqld -n MySQLDEventsEventBufferOverloadRestarts
  test_event_mysqld -n MySQLDEventsEventBufferOverloadDisconnects
  test_event_mysqld -n MySQLDEventsEventBufferOverloadRestartsDisconnects
A new MTR test is added invoking test_event_mysqld -n MySQLDEventsEventBufferOverloadRestartsDisconnects: ndb_binlog_testevent_ord. Change-Id: If904fb60be2798b53cc4cb25ea3607c4289739b9
Commit a51ae07
Bug#35655162 NdbApi event buffer overflow can cause timeout waiting f…
…or drop table

Problem: During NdbApi event buffer overflow, buffering of data and metadata events is paused, resulting in data and metadata events being discarded. This means that event subscribers waiting for metadata events such as TE_DROP_TABLE may never receive them and will time out. Specifically, MySQL Cluster schema distribution uses the arrival of a TE_DROP_TABLE event on an NdbEventOperation to know when it is required and safe to drop the NdbEventOperation. If this event does not arrive within a bounded time, the NdbEventOperation is not dropped, leaking resources and potentially causing later problems.

Solution: The fix for Bug#35663761 (NdbApi fails to find container for latest epoch in NODE_FAILREP handling) modified the NdbApi event buffer overload handling to pause only the buffering of data events while continuing to buffer metadata events. This means that in event buffer overload situations, TE_DROP events will not be lost. This patch adds a testcase covering this scenario:
  test_event_mysqld -n MySQLDEventsEventBufferOverloadDDL
This test is invoked from MTR as ndb_binlog_testevent_os. Change-Id: I654fd0aa0c83d35956d4d01632f16f610951bd0c
Commit 4961d5d
Null merge branch 'mysql-8.0' into mysql-trunk
Change-Id: I5488e06be61992c7d817d6febfdbb0b8d84d2e2c
Commit 85b8cc1
Commits on Nov 2, 2023
Bug#32764586 : MEMORY LEAK WHEN ACCESSING VIEW
Description:
------------
Each SELECT on a view causes a small memory leak on the Windows platform.

Analysis:
---------
In the select-view case, a Security_context (sctx) is allocated in the MEM_ROOT. Memory allocated by the members of sctx is not freed until the MEM_ROOT goes out of scope; as a result, memory keeps accumulating. Security_context employed a logout() mechanism to avoid such memory accumulation, but for it to work correctly we need to own the life cycle of the sctx members, keeping in mind that this is a frequently accessed code path, so we must avoid frequent memory allocations/deallocations. DB_restrictions is owned by the Security_context (sctx). The former has an associative container member, a std::unordered_map (m_restrictions), that is allocated on the stack. We need to clear the memory allocated to this member at the time of logout(). We had a similar memory growth reported in Bug#31919448. At that time we used the temporary-object-and-swap() trick to free the contents of m_restrictions. Unfortunately, that trick worked on Linux but not on Windows. This is because constructors of associative containers can throw, which means compilers are free to allocate memory on the heap in the constructor of an associative container. As turned out to be the case on Windows, std::unordered_map allocates memory on the heap, and it is not released unless the object is destroyed. That means the swap() trick does not work in such cases, because the memory allocated by the temporary object is transferred to m_restrictions, so memory growth is still seen. We had the following options to fix this situation:
- Refactor the code in the select-view area. This may have wider repercussions, so it is not the preferred solution.
- Create the m_restrictions map on the heap rather than on the stack. This is the preferred option.

Fix:
----
- Change m_restrictions object creation from stack to heap.
- Allocate memory for m_restrictions only when required.
- Handle the unsafe usage of the partial_revokes APIs.
- Got rid of the swap() trick in DB_restrictions::clear().
- Removed is_not_empty(), as we already have is_empty().
Change-Id: Iee828909d23991db86227f9664eb7f130a3d6d56
Sai Tharun Ambati committed Nov 2, 2023 (commit 75c3b0e)
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: I84cfb3b47dfdf4002d1ee9c3c690f8788f84bad2
Sai Tharun Ambati committed Nov 2, 2023 (commit e2a37ed)
NULL merge branch 'mysql-8.0' into mysql-trunk
Change-Id: I2c0899e95d9cb25d4942557351e209244931f77a
Commit 8fdb9f1
Bug#35049440: Mysqld assert in Item_typecast_signed::val_int
The problem is in Item_field::replace_equal_field, where we try to replace fields with equivalent ones in order to push some of these predicates down. We did not take nullability into consideration, so some non-nullable fields were replaced with nullable ones. The fix was to add a function, allow_replacement, which determines whether a non-nullable field can be replaced with a nullable one (when UNKNOWN results can be treated as false); otherwise such replacements are skipped. Change-Id: Id4370babd6d6e29e36a05ed46e4ca951507670d6
Catalin Besleaga committed Nov 2, 2023 (commit da9db8e)
Bug#35963172: HLL query crashes mysqld at setup_fields(assertion)
The assertion occurs because error checking is missing in udf_handler::call_init_func() after evaluating the arguments to the UDF. Fixed by adding error checks. Change-Id: Icbbd256bdd4c6b21f4ec2c8b8b8ce856ebe68a47
Commit 047a80e
WL#9582 Implements the use of distance scan of R-tree indexes in orde…
…r to execute k nearest neighbors (kNN) queries. Works with the InnoDB distance scan and the hypergraph optimizer. Change-Id: I3c25f586b5cd6b1ae5fd74ebb0897071f23336d4
Commit bfc1afb
Bug#35968195 mysql_config --libs reports -lzlib instead of -lz
An earlier patch (Bug#35057542: Create INTERFACE libraries for bundled/system zlib/zstd/lz4) added abstraction layers between physical libraries (bundled or system) and their names on disk vs. their names in cmake code. When generating the files mysqlclient.pc (for pkg-config) and mysql_config (a shell script), we need to go through these abstraction layers and produce library names as required by the linker of the host platform. This is done by the cmake macro EXTRACT_LINK_LIBRARIES. The fix is to store these explicitly as "-lz" and "-lzstd" respectively. The lz4 library is not used by our client library, so there is no need to store a linker option for it. Change-Id: Ief70f66900bec2d07fdb3daa88caf60a138c953b
Tor Didriksen committed Nov 2, 2023 (commit d54160f)
The fix for bug#35865438 'ndb_mgm hangs without --ndb-tls-search-path' misused the ndb_mgm_get_clusterlog_severity_filter() API. Change-Id: I0ef255c83b2ff3efaf0ebc1ad502e61e0e483575
Commit 4666434
Commits on Nov 3, 2023
WL#12899 : Remove slave-rows-search-algorithms
This worklog removes the system variable `slave-rows-search-algorithms`. Change-Id: Ic8264bce904d1102c90eb83cbb48ea78cdeef264
Arpit Goswami committed Nov 3, 2023 (commit b1ceeaf)
Removed the option. Removed the tests which use the option. Change-Id: I5316b2c49eb4cf1dc583d97891e7b43a5c472eca
V S Murthy Sidagam committed Nov 3, 2023 (commit 8d5dab4)
WL#15752: Add statement type "DDL" to transaction tracking facility -…
… missing DDL statements

WL#15497 lets group replication determine whether any session is running a DDL statement that would make it inadvisable to change the primary server at this point. This changeset adds support for additional DDL statements as well as some DCL statements:
- 13.1.19 CREATE SPATIAL REFERENCE SYSTEM Statement
- 13.1.31 DROP SPATIAL REFERENCE SYSTEM Statement
- 13.1.12 CREATE DATABASE Statement
- 13.1.2 ALTER DATABASE Statement
- 13.1.24 DROP DATABASE Statement
- 13.1.18 CREATE SERVER Statement
- 13.1.8 ALTER SERVER Statement
- 13.1.30 DROP SERVER Statement
- 13.1.5 ALTER INSTANCE Statement
- 13.1.36 RENAME TABLE Statement
- 13.1.23 CREATE VIEW Statement
- 13.1.11 ALTER VIEW Statement
- 13.1.35 DROP VIEW Statement
- 13.1.22 CREATE TRIGGER Statement
- 13.1.34 DROP TRIGGER Statement
- 13.1.21 CREATE TABLESPACE Statement
- 13.1.10 ALTER TABLESPACE Statement
- 13.1.33 DROP TABLESPACE Statement
- 13.1.17 CREATE PROCEDURE and CREATE FUNCTION Statements
- 13.1.7 ALTER PROCEDURE Statement
- 13.1.4 ALTER FUNCTION Statement
- 13.1.29 DROP PROCEDURE and DROP FUNCTION Statements
- 13.1.14 CREATE FUNCTION Statement (SONAME)
- 13.1.26 DROP FUNCTION Statement (SONAME)
- 13.1.13 CREATE EVENT Statement
- 13.1.3 ALTER EVENT Statement
- 13.1.25 DROP EVENT Statement
- 13.1.15 CREATE INDEX Statement (WL#15497)
- 13.1.27 DROP INDEX Statement (WL#15497)
- 13.1.20 CREATE TABLE Statement (WL#15497)
- 13.1.9 ALTER TABLE Statement (WL#15497)
- 13.1.37 TRUNCATE TABLE Statement (WL#15497)
- 13.1.32 DROP TABLE Statement (WL#15497)
- 13.7.1.3 CREATE USER Statement
- 13.7.1.1 ALTER USER Statement
- 13.7.1.5 DROP USER Statement
- 13.7.1.2 CREATE ROLE Statement
- 13.7.1.9 SET DEFAULT ROLE Statement
- 13.7.1.11 SET ROLE Statement
- 13.7.1.4 DROP ROLE Statement
- 13.7.1.6 GRANT Statement
- 13.7.1.8 REVOKE Statement
- 13.7.1.7 RENAME USER Statement
- 13.7.1.10 SET PASSWORD Statement

Change-Id: Ia472d9ce1b4ac9bfe00d3ec619baa5a32f719857
Tatiana Azundris Nuernberg committed Nov 3, 2023 (commit d86c2b3)
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: I3d7820d23a7f4f80136eab131fd9e59e7c9175d9
Tor Didriksen committed Nov 3, 2023 (commit 87340a7)
Bug#35968054 routertest_integration_routing_sharing_restart fails aft…
…er WL#13448

After WL#13448 (Remove COM_XXX commands which are deprecated), the test ...classic_protocol_reconnect_all_commands fails as the error codes changed.

Change
======
- updated the expected error codes for COM_PROCESSKILL, COM_LIST_FIELDS and COM_REFRESH

Change-Id: Icb09aa6253e850353b54a2ba4b90c7c23c9f3965
Commit f162375
In the past, the pattern

  std::shared_ptr<void> exit_guard(nullptr, [this](void *) { ... });

has been used as an ad hoc Scope_guard.

Change
======
- use Scope_guard explicitly

Change-Id: I7593730da0f90a028231b580c443c3d0062b2cf6
Commit b37438a
Bug#35968321 use std::lock_guard
In the past,

  std::shared_ptr<void> exit_trigger(nullptr, [&](void *) { ... });

has been used as an ad hoc scope guard. In this case, the guard is used around a std::recursive_mutex, for which a std::lock_guard exists.

Change
======
- use a std::lock_guard instead of the ad hoc scope guard

Change-Id: Iae88f4d60db70894620cd3d108056588182c15d0
Commit 84a917c
Bug#35795161 Remove references to unsupported platforms
Additional patch, removing references to Ubuntu 16 and Ubuntu 18. Change-Id: I91ca91e91b4609d542d034f60fc1ec9e62ce4a10
Tor Didriksen committed Nov 3, 2023 (commit a82adf6)
Bug#35595808 tablespace_first_page_unrecoverable test failing intermittently

This is a follow-up patch to fix the intermittent failure of this recently added test. There were the following problems:

1. The test restarted the server with an unknown option '--nocore'. Apparently this option is known to the mysqltest_safe_process that is forked by mtr to monitor the mysqld process. When a test wants to skip producing the core file, the mtr framework expects it to pass --skip-core-file to set this option. We also know that the server produces a core file only if the --core-file option is specified. The mtr scripts specify this option if the server is started through them, but in this case the server is started without the mtr scripts, so --core-file remains false and there was no need to pass any option to the server.
2. The test tried to evict page#0 belonging to tablespace t1 from the double-write buffer by inserting 512 records into another tablespace t2. This was not enough; there could still be situations where page#0 of tablespace t1 remained in the double-write buffer.
3. On Windows a different error code is returned.
4. The script corrupt_page.inc doesn't wait for the server to close/die completely. This leads to sporadic error messages. Ideally this should be the responsibility of the caller of the script, but it is safe to wait for the server to close inside corrupt_page.inc.

Fixes:
======
1. Removed the unknown option '--nocore'. By default the server does not produce core files anyway.
2. To reliably simulate the situation where the corrupt page is not found in the double-write buffer, a backup of the double-write buffer files is taken before creating the table to be corrupted, and restored when starting the server with the corrupt tablespace. This makes the test reliable. Also reorganized the test: it now corrupts the first page of tablespace t2.ibd as opposed to t1.ibd.
3. Added the error code returned on Windows.
4. The test now uses the var/tmp dir for temporary files: my_restart.err and t1.ibd, a better choice than before.
5. corrupt_page.inc calls wait_until_disconnected.inc. Also improved the error messages.

Change-Id: I4184bcac8d9aee030360dc23940eeb752b78acc0
SHA 1dbd7a2
WL#9681 Remove skip-host-cache option

Removed the option and the tests which use it.

Change-Id: Ife2ee01abc7a5f13efac4c2faf41bce86a5e846c
V S Murthy Sidagam committed Nov 3, 2023 - SHA 581f1f9
WL#13229 Remove --old-style-user-limits

Removed the option and the tests which use it.

Change-Id: I626c1f7d6d45dd13bd8695f34c3ee5932be1e911
V S Murthy Sidagam committed Nov 3, 2023 - SHA 4a7fefc
Bug#35750394 Remove theNodeIdTransporter[] state invariant

Refactoring patch removing the invariant where a MultiTransporter could be created but not yet set up in theNodeIdTransporter[]:

1) When a data node is restarted, the MultiTransporter is created already when reading the config. However, it is not inserted into theNodeIdTransporter[] until after QMGR has 'switched' to use the MultiTransporter.
2) When we are reconnecting to a data node where a MultiTransporter was previously used, it already existed in theNodeIdTransporter[], even if the reconnect process had not yet reached the 'switch' point.

As the same code in QMGR, TRPMAN and TransporterRegistry handles both the initial connect and the reconnect, it is complicated by the fact that a connecting MultiTransporter may or may not exist in theNodeIdTransporter[] when setting up such a connection.

This patch inserts the MultiTransporter into theNodeIdTransporter[] as soon as it is created. Note that if the QMGR handshake with the neighbour node later concludes that multiple transporters can not be used, we simply continue to communicate over the single base transporter, which is always connected first.

This allows us to simplify get_node_multi_transporter() and remove a couple of other getters for multi_transporter.

Change-Id: I152b3a2821e0e1eac087f6b864fa97962f98dacf
Ole John Aske committed Nov 3, 2023 - SHA 22b3ff3
Bug#35750394 Introduce get_node_base_transporter(NodeId)

Refactoring patch introducing the get_node_base_transporter(NodeId) method, replacing some common code patterns with calls to this method instead.

Change-Id: Ibba5bbd8f75687ee1e13d41ebc3c75486fc63d52
Ole John Aske committed Nov 3, 2023 - SHA 0f86c73
Bug#35750394 Refactor Transporter interface to use TrpId over NodeId

Refactoring patch required to allow the transporter CONNECTING/DISCONNECTING protocol to be used for the individual PartsOfMultiTransporter's. The usage of a 'NodeId' to identify a Transporter in the interface is replaced by using its 'TrpId' instead. NodeBitmask is also replaced by TrpBitmask.

Naming of methods now taking modified arguments is mostly kept unchanged. The exception is where variants taking both a NodeId and a TrpId are needed; the latter is then named with a *_trp suffix.

The format of EventLogger messages as well as various DEBUG output has been adapted such that it should be clear whether a NodeId or a TrpId is reported in the logs (sometimes both as well).

Some comments starting with 'OJA' are added by the patch, pointing out where MultiTransporters do not follow the correct protocol to disconnect or connect. These comments will be removed again by the following patches, which are the real fixes.

Change-Id: I02ab4daa46e4b19d162c234bf4377cf3e918372a
Ole John Aske committed Nov 3, 2023 - SHA 02c5d9c
Bug#35750394 Use refactored TrpId-states to simplify ::connect_server

TransporterRegistry::connect_server() handled connect of a multi transporter instance as a special case of a Transporter connect(). With the refactorings in previous patches, a multi Transporter instance now has its own states available and is in most aspects just a plain Transporter, which enables some cleanup and simplifications. In particular:

- For a non-multi Transporter we used to check that performStates[] was 'CONNECTING'.
- As a multi Transporter instance used to not have its own performStates[], we instead checked !isConnected().

This patch introduces the utility method get_node_transporter_instance, which is now used to locate the Transporter instance to connect. The performStates[TrpId] is used to check for the CONNECTING state, independent of whether a multi Transporter is used or not.

-> All special handling of a multi Transporter is removed. A multi Transporter instance is just a Transporter.

Change-Id: I4ae73b89846206e7afa499ff340930a230cf2745
Ole John Aske committed Nov 3, 2023 - SHA 6acaf46
Bug#35750394 Rename connect/disconnect methods

'reconnect' and 'disconnect' seem overused in method names, and the naming might not always help to clarify the intended usage of each method. In particular, some of these methods do an immediate connect and wait for the connection with the other peer to be established/torn down, while others start the asynchronous CONNECTING/DISCONNECTING protocol.

Part of the root cause of this bug might be that the naming of the methods doing the different steps in connect/disconnect of transporters was confusing. Many of them were named 'do connect', with different variants of upper case and '_' in their naming, and some simply 'connect'. In these cases 'do' might be misleading and indicate an immediate connect/disconnect taking place. That made it unclear whether the particular method did an immediate connect or just initiated the CONNECTING/DISCONNECTING transporter protocol, which would eventually result in the transporter being reported as CONNECTED/DISCONNECTED, but relied on start_clients_threads to do the real connect/disconnect and on the final CONNECTED/DISCONNECTED state being set by report_connect/report_disconnect.

This patch renames such 'do connect/disconnect' methods to 'start connecting/disconnecting', in particular:

  TransporterRegistry::do_connect        -> start_connecting
  TransporterRegistry::do_connect_trp    -> start_connecting_trp
  TransporterRegistry::do_disconnect     -> start_disconnecting
  TransporterRegistry::do_disconnect_trp -> start_disconnecting_trp
  Transporter::do_disconnect             -> start_disconnecting
  TransporterFacade::doConnect           -> startConnecting
  TransporterFacade::doDisconnect        -> startDisconnecting

Change-Id: I02577788ff66cfaee50f30019ad18329ec1f246e
Ole John Aske committed Nov 3, 2023 - SHA ff95a63
Bug#35750394 Use transporter protocol to connect 'PartsOfMultiTransporter'

We used to connect the PartsOfMultiTransporter when start_clients_thread discovered that the base transporter had CONNECTED, by calling Transporter::connect_client() directly. It basically did some of the same steps as if the transporters were in a CONNECTING state. However, as the CONNECTING protocol was not used, all the different state transitions, enable_send_buffers, epoll_add and such had to be patched up as an afterthought by QMGR.

Generally the protocol to set up connections was not followed either, where there are strict rules for where the performState[] transitions can be done. E.g. the CONNECTED state should only be set by the receive thread responsible for the specific TrpId.

The patch is to let Qmgr::connect_multi_transporter() use start_connecting_trp(TrpId) to initiate the CONNECTING protocol. It will then connect like any other transporter. Also note that the 'hack' to set_s_port() on the transporter is now obsolete - it will be done by start_clients_thread when seeing the CONNECTING transporter.

Removed epoll_add functions and other code now obsolete.

Change-Id: I8937d96a2a1b7440a84b0dc3a0c8d2b857b7a2b7
Ole John Aske committed Nov 3, 2023 - SHA 070ee50
Bug#35750394 Use transporter protocol to disconnect 'PartsOfMultiTransporter'

We used to disconnect the MultiTransporters when report_disconnect of *one* of the MultiTransporters was observed, by calling Transporter::forceUnsafeDisconnect(). Even if the disconnect being 'reported' followed the disconnect protocol, the other disconnects of the remaining transporters *did not*. Besides the protocol breakage, this could also cause real-time execution problems, as closing of sockets could stall, waiting for the other peer or for timeouts.

The patch fixes this by starting an asynchronous disconnect instead with start_disconnecting_trp(). Transporter::forceUnsafeDisconnect is now obsolete and is removed. Transporter::start_disconnecting can be simplified as there should be no difference between a disconnecting multiTransporter and a plain transporter any more.

Change-Id: I3101603a66b7f712d2e51f2c9c0ae963d67edddb
Ole John Aske committed Nov 3, 2023 - SHA 68f7307
Bug#35750394 Replace shutdown() with start_disconnecting_trp()

When QMGR has concluded that all the PartsOfMultiTransporter have CONNECTED, it will disconnect the base transporter, which is not needed any more. This used the Transporter::shutdown() method, which forcefully just closed theSocket, not following the transporter protocol of first setting the performState[] to DISCONNECTING and letting the asynchronous disconnect happen out of the way of the executing block / QMGR.

The patch replaces usage of ::shutdown() with start_disconnecting_trp(). ::shutdown becomes obsolete and is removed.

Change-Id: Iee22f2c48df0e650fd29e5e3d12d6f0d06b5059b
Ole John Aske committed Nov 3, 2023 - SHA 343e836
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: Ibc0ce929215a4e21907396ec9d08ab6080a90c94
Ole John Aske committed Nov 3, 2023 - SHA b4c21d2
Bug#35938286 Eliminate usage of Transporter::isMultiTransporter()

Removes one of the few dependencies requiring Multi_Transporter to inherit from Transporter.

Change-Id: Ic106614281e47ea32099131a0c79b8663ea294b7
Ole John Aske committed Nov 3, 2023 - SHA 6e0b312
Bug#35938286 Eliminate Transporter::is_encrypted as a method inherited and overridden by Multi_Transporter

Step#2 in breaking out Multi_Transporter's dependencies on subclassing Transporter.

Change-Id: I0804840f3fff21d52b9a02f0457ac9861a8712b4
Ole John Aske committed Nov 3, 2023 - SHA a4efc50
Bug#35938286 Identify more Multi_transporter methods which are overridden from Transporter, but never used

Insert a 'require(false)' in the implementation of these methods in order to prove that they are never used -> ran PB2 tests.

Change-Id: I2ce2ee479d365c4d3c43a59a2e0e5532d7870390
Ole John Aske committed Nov 3, 2023 - SHA e9219c2
Bug#35938286 Remove ::get_bytes_sent and ::get_bytes_received as methods overridden by Multi_Transporter

Changes the implementation of TransporterRegistry::get_bytes_sent and ::get_bytes_received() to collect the bytes sent and received over all the node transporters.

Change-Id: I2cebb5db6a3e98f5b72dea6643415857b15aafad
Ole John Aske committed Nov 3, 2023 - SHA f4b2ce0
Bug#35938286 Make get_send_transporter() a method only implemented by Multi_Transporter

Note: With all patches up to and including this one, all Transporter methods previously overridden by Multi_Transporter have now been eliminated, or implemented as methods with a 'require(false)' to prove that they are never used.

Change-Id: Ia1f774300168b968942e3705986c92fd6676282b
Ole John Aske committed Nov 3, 2023 - SHA bf99b0b
Bug#35938286 Introduce Transporter::lock_send_transporter() and ::unlock_send_transporter()

Use these to eliminate 'lock' / 'unlock' on the Multi_Transporter.

Change-Id: I124339222839aee600c31d8da78eefeb665acc72
Ole John Aske committed Nov 3, 2023 - SHA 232f457
Bug#35938286 Prepare TransporterRegistry for a Multi_Transporter not being a Transporter any more

As the Multi-Transporter is no longer a sub class of Transporter, it can not be stored in theNodeIdTransporters[] any more. -> theNodeIdTransporters[] is changed to always store the initial base transporter to a specific NodeId. This also enables us to simplify the implementation of ::get_node_base_transporter(), as we no longer need to handle that theNodeIdTransporters[] may contain a Multi_Transporter.

In case a Multi_Transporter is created for a NodeId, it is now stored in theNodeIdMultiTransporters[] instead. This array replaces theMultiTransporters[] with slightly changed semantics - indexed by NodeId instead of 0..nMultiTransporters. ::get_node_multi_transporter() is also changed to look up the Multi_Transporter directly from theNodeIdMultiTransporters[].

Change-Id: I8d9ad9dde262380da8dc5555a310eeffbdf0c487
Ole John Aske committed Nov 3, 2023 - SHA ea98a37
Bug#35938286 class Multi_Transporter should not be a 'Transporter' any more

Class Multi_Transporter no longer inherits from class Transporter. Remove unused overridden methods inherited from class Transporter.

Change-Id: I87feb2673f02d1eeaac8331058cddc1a24d2ca77
Ole John Aske committed Nov 3, 2023 - SHA 75fcd2a
Bug#35938286 Eliminate usage of Transporter::isMultiTransporter()

Removes one of the few dependencies requiring Multi_Transporter to inherit from Transporter.

Change-Id: Ic106614281e47ea32099131a0c79b8663ea294b7
(cherry picked from commit 5faec76d06469f66bf17cc1adb2a225717919059)
Ole John Aske committed Nov 3, 2023 - SHA 6dad814
Bug#35938286 Eliminate Transporter::is_encrypted as a method inherited and overridden by Multi_Transporter

Step#2 in breaking out Multi_Transporter's dependencies on subclassing Transporter.

Change-Id: I0804840f3fff21d52b9a02f0457ac9861a8712b4
(cherry picked from commit d6b3cd5ff5ce3cc49e07c6a4e3993bc99baa6dd3)
Ole John Aske committed Nov 3, 2023 - SHA 628ffcd
Bug#35938286 Identify more Multi_transporter methods which are overridden from Transporter, but never used

Insert a 'require(false)' in the implementation of these methods in order to prove that they are never used -> ran PB2 tests.

Change-Id: I2ce2ee479d365c4d3c43a59a2e0e5532d7870390
(cherry picked from commit 974ec94ef4a67ff7257b92935fe3af6bbf26b76c)
Ole John Aske committed Nov 3, 2023 - SHA 22ebe56
Bug#35938286 Remove ::get_bytes_sent and ::get_bytes_received as methods overridden by Multi_Transporter

Changes the implementation of TransporterRegistry::get_bytes_sent and ::get_bytes_received() to collect the bytes sent and received over all the node transporters.

Change-Id: I2cebb5db6a3e98f5b72dea6643415857b15aafad
(cherry picked from commit 22716326ce545e11494f897c20f96d23b4747450)
Ole John Aske committed Nov 3, 2023 - SHA f40beee
Bug#35938286 Make get_send_transporter() a method only implemented by Multi_Transporter

Note: With all patches up to and including this one, all Transporter methods previously overridden by Multi_Transporter have now been eliminated, or implemented as methods with a 'require(false)' to prove that they are never used.

Change-Id: Ia1f774300168b968942e3705986c92fd6676282b
(cherry picked from commit ee86144475a85481a315d6593e536b4d5dc0fb61)
Ole John Aske committed Nov 3, 2023 - SHA 8c29e4c
Bug#35938286 Introduce Transporter::lock_send_transporter() and ::unlock_send_transporter()

Use these to eliminate 'lock' / 'unlock' on the Multi_Transporter.

Change-Id: I124339222839aee600c31d8da78eefeb665acc72
(cherry picked from commit de0565ffa027e8e0b1790daacdb7c426349ea77a)
Ole John Aske committed Nov 3, 2023 - SHA 114a821
Bug#35938286 Prepare TransporterRegistry for a Multi_Transporter not being a Transporter any more

As the Multi-Transporter is no longer a sub class of Transporter, it can not be stored in theNodeIdTransporters[] any more. -> theNodeIdTransporters[] is changed to always store the initial base transporter to a specific NodeId. This also enables us to simplify the implementation of ::get_node_base_transporter(), as we no longer need to handle that theNodeIdTransporters[] may contain a Multi_Transporter.

In case a Multi_Transporter is created for a NodeId, it is now stored in theNodeIdMultiTransporters[] instead. This array replaces theMultiTransporters[] with slightly changed semantics - indexed by NodeId instead of 0..nMultiTransporters. ::get_node_multi_transporter() is also changed to look up the Multi_Transporter directly from theNodeIdMultiTransporters[].

Change-Id: I8d9ad9dde262380da8dc5555a310eeffbdf0c487
(cherry picked from commit cb6f19b151cb5f2731e43874d1804d7a3856656b)
Ole John Aske committed Nov 3, 2023 - SHA c93ad8f
Bug#35938286 class Multi_Transporter should not be a 'Transporter' any more

Class Multi_Transporter no longer inherits from class Transporter. Remove unused overridden methods inherited from class Transporter.

Change-Id: I87feb2673f02d1eeaac8331058cddc1a24d2ca77
(cherry picked from commit a93039734304105d3d23de49ffea5e280a412a0b)
Ole John Aske committed Nov 3, 2023 - SHA 04f56be
Change-Id: Ie49c590f21f9479e1c8c1fffa305f54e36242683
Ole John Aske committed Nov 3, 2023 - SHA 4f4150a
Bug#35839977 testNodeRestart -n Bug34216 fails randomly

The test executes a number of updates against a table it created. Some of the updates are executed as 'NoCommit', with a final 'Commit' for the entire transaction. ERROR_INSERT is used to crash the node on the commit (as opposed to the intermediate execute-NoCommit's).

The intention of the test case is to check for regression against Bug#34216, where 'During TC-take-over (during NF) commit messages can come out of order to TUP' - thus presumably hitting the ERROR_INSERT on execute-commit. Such failures are now randomly detected by the failing test case.

The root cause is that there is other background activity going on on the data nodes as well. In particular, the update operations may trigger ndb_index_stat updates performing READ operations on the table being updated, as well as accessing the system tables.

The patch enhances the ERROR_INSERT 5048/5049 code to also require that it is a ZUPDATE being committed, as well as that the table is a UserTable.

Change-Id: I4061fa4521b8f670b3783a3aa0b6256bca15a7d0
Ole John Aske committed Nov 3, 2023 - SHA 20946be
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: I2f471fdb6936244c923bc7506afc4acf745b29ba
Ole John Aske committed Nov 3, 2023 - SHA 1fdbdba
Bug#33800633 startChangeNeighbour problem

When setting up a new set of neighbour transporters, the old set of neighbours may still have pending data awaiting to be sent. In order to ensure that available data on these transporters will still be sent, we need to ensure that the transporters becoming non-neighbours are inserted into the list of transporters we need to send on.

The patch enhances startChangeNeighbour() such that it will check whether any of the old set of neighbour transporters has 'm_data_available', and use insert_trp() to insert them into the list of non-neighbour transporters. Note that we now also need to clear the m_neighbour_trp flag before doing such inserts, else insert_trp() would not have inserted the TrpId into the non-neighbour list.

The patch also redeclares some local variables referring to a 'transporter id' from Uint32 to TrpId. A few asserts are also added to ensure the consistency of the Transporter list structures.

Change-Id: I87bb539868ef33eb19e37ed90b30f1a3e1e07680
Ole John Aske committed Nov 3, 2023 - SHA 41a9f5d
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: Ie7255da6abb25906e71b2b19af2d066da59f3259
Ole John Aske committed Nov 3, 2023 - SHA 49b56e6
Commits on Nov 5, 2023
-
Bug#35898221 Hypergraph: too low row estimate for semijoin that is not an equijoin

This commit fixes a regression introduced by the fix for bug#34764211 "Too high row estimate for semijoin" (Change-Id: I231cd0c8ef504d64cd835184a39f9975066b61bf).

Some semijoins may be transformed into an inner join between the right hand side aggregated on the join fields, and the original left hand side. That fix utilized this transform to make better row estimates, by estimating the number of output rows as:

  CARD(left_hand_relation) * inner_join_selectivity * CARD(d)

where:
* inner_join_selectivity is the cardinality of an inner join on the same predicate, divided by the cardinality of a cross join.
* 'd' is the set of distinct rows from right_hand_relation, when only looking at those columns that appear in the join predicate.

The regression happens for a semijoin that is not a pure equijoin, that is, the join predicate is something other than a conjunction of 'left_table.field=right_table.field' terms. In that case JoinPredicate.semijoin_group is empty, because the semijoin to inner join transform will never happen, and the number of aggregated rows was wrongly set to 1. This fix corrects that.

This fix adds a new function EstimateSemijoinFanOut() that collects the fields from the right_hand_relation that appear in the join predicate. Unlike JoinPredicate.semijoin_group, this works for both semijoin and antijoin, and for arbitrary predicates, not just conjunctions of field=field. That function then estimates CARD(d), using the same apparatus that we use for e.g. DISTINCT.

Change-Id: I02224ff4f64315f2b0b92d6b883fe08e2f6a9975
Jan Wedvik committed Nov 5, 2023 - SHA c8dd1af
Commits on Nov 6, 2023
-
Bug#35208990 - MySQL 8.0.32U1/Cloud reports wrong results (select)

PROBLEM
-------
1. When innodb_validate_tablespace_paths=off, innodb does not validate the tablespaces during startup.
2. Since the tablespace is not validated, we fail to initialize the in-memory filesystem hash map which maps space id to tablespace.
3. If ibuf entries are present during startup, a background thread tries to merge these into the appropriate tablespace.
4. The background thread will search for the tablespace in the hash map; since it cannot find the tablespace, it assumes that the tablespace is deleted and drops the ibuf entries silently.
5. This leads to corruption in the tablespace because the numbers of primary and secondary entries differ.

FIX
---
1. If ibuf entries are present during startup, irrespective of the innodb_validate_tablespace_paths setting, validate all the tablespaces.

Change-Id: I41bf5f39f654ce50c9fa47b6dd3d0153ba829308
Aditya A committed Nov 6, 2023 - SHA 2646b4b
Merge branch 'mysql-8.0' into mysql-trunk
Aditya A committed Nov 6, 2023 - SHA 1321217
Bug#35898221 Hypergraph: too low row estimate for semijoin that is not an equijoin

Post-push fix: This commit fixes the following warning: "sql/join_optimizer/cost_model.h:228:1: error: control reaches end of non-void function [-Werror=return-type]".

Change-Id: I41fbdd94b6442458a906d12bc176bb72c6b9000a
Jan Wedvik committed Nov 6, 2023 - SHA 24aaea8
Bug#35928350 Contribution: NDB STORED USER fails with replication filter

Problem: The MySQL Server fails when replicating "GRANT NDB_STORED_USER ..." with a replication filter turned on. This occurs since the replication filter causes all non-updating queries to return an error - due to the assumption that only changes need to be replicated.

Analysis: To handle the GRANT statement a `SELECT ... FROM information_schema.user_privileges` is used; this is a non-updating query and thus triggers an error in the MySQL Server when replication filters are in use.

Solution: Install an empty replication filter while running distributed privilege queries, thus successfully getting a result returned.

Thanks to Mikael Ronström for providing the steps necessary to reproduce this problem.

Contributed by: Mikael Ronström
Change-Id: I2c171f64b7776ac2410d2d918f591b16bc8c1a04
SHA 706cfba
Bug#34551954 [noclose] Slave SQL thread crashes on CREATE USER
In the Acl_change_notification constructor, remove the ambiguity between a function parameter and a member variable both called "users". Change-Id: I8fba20f562026606411eb8b67cd11d56af09e11e
SHA 62bf1c1
Bug#34551954 Slave SQL Thread crashes the instance when create user

Problem: Crash in the replication applier when handling NDB synchronized privileges.

Analysis: The fatal failure occurs when the query return code indicates success but no result set is available. Such a problem scenario can be seen in the replication applier, for example in the problem described in BUG#35928350.

Solution: Handle the case when the expected result set is not available by logging an error message and returning failure. This should make for more stable behaviour when the queries do not return the expected result set.

Change-Id: I4623ff8be5e59cbc3edcbd1627cbedda6dbab5ce
SHA b88cc5f
SHA 7c81e73
Bug#35767731 Replace all use of 'utf8' and '_utf8' [.inc noclose]
Post-push fix: revert bad .result file. Change-Id: I78f6aa63e4dd3d98be94b98d410ef4e598a23718
Tor Didriksen committed Nov 6, 2023 - SHA 338c8ab
Bug#35826171 BACKPORT ROLLUP BUGFIXES Bug#35211828 AND Bug#35498378 T…
…O 8.0 Backport from mysql-trunk. 1) Bug#35211828: Derived condition pushdown with rollup gives wrong results 2) Bug#35498378: MYSQLD CRASH - ASSERTION NULLPTR != DYNAMIC_CAST<TARGET>(ARG) FAILED Change-Id: I26833f6cc240bc0484e689021a5346029419a0cc
Priyanka Sangam committed Nov 6, 2023
Commit: 4cc1e27
Change-Id: If4b103cff903f3994f8f38410d7c0f6bfeabeabb
Priyanka Sangam committed Nov 6, 2023
Commit: 1793c9c
WL#14056 Support GSSAPI/Kerberos authentication on Windows using
authentication_ldap_sasl_client plug-in
So far, LDAP authentication has been limited depending on the authentication mechanism and the platform the server or client runs on. This worklog improves both the client- and server-side Windows plugins so that they support the SASL SCRAM and SASL GSSAPI authentication mechanisms.
Change-Id: I60c2ce4925f8d6c18e202b59c54a96303ea8e2ba
Michal Jankowski committed Nov 6, 2023
Commit: 8f8a85a
Bug#35064211: Transaction_monitor_thread::start can stay waiting for the thread to start
Problem: When the Transaction_monitor_thread is created, the thread that requested its creation waits until the Transaction_monitor_thread starts running. However, upon running, the Transaction_monitor_thread does not unlock the mutex, causing the creator of the thread to wait a long time to read the thread's running status.
Analysis: All other threads were observed to unlock the mutex upon running. In Transaction_monitor_thread the lock was never released after setting the thread's status to running.
Fix: The code has been improved to unlock the mutex upon thread creation after setting the thread's status to running. The scope of the lock has been reduced.
Change-Id: I07f961346b99a740c76b1e315850921a2f4fcdb8
Jaideep Karande committed Nov 6, 2023
Commit: e1a5f77
WL#14056 Support GSSAPI/Kerberos authentication on Windows using
authentication_ldap_sasl_client plug-in Post-push fix: auth_ldap_sasl_mechanism.cc:111:2: error: extra ';' outside of a function is incompatible with C++98 [-Werror,-Wc++98-compat-extra-semi] Change-Id: Ida99c78d538f1422e32b8fe4b5caca24fae7f28b
Tor Didriksen committed Nov 6, 2023
Commit: a126a5a
Bug#35877452: mysqld crash at LogicalOrderings::AddHomogenizedOrderingIfPossible

A query with redundant elements in the ORDER BY clause could fail if the query optimization took so long that the secondary engine requested a restart of the optimization with a different set of parameters to restrict the search space.

BuildInterestingOrders() got confused because Query_block::order_list was in an inconsistent state when the optimization was restarted. It was inconsistent because the optimizer had modified the intrusive list pointers in JOIN::order to remove redundant elements. JOIN::order shares some data with Query_block::order_list, so some of the underlying data of Query_block::order_list was modified as a result without the parent object's knowledge, and the parent object became inconsistent.

Query_block::order_list is restored before the next execution of a prepared statement for this exact reason; see Query_block::restore_cmd_properties() and the doxygen comment for Query_block::order_list. But it is not restored when the optimization is restarted. Since it's not safe to access Query_block::order_list after such modifications, the optimizer should use JOIN::order instead.

Fixed by making BuildInterestingOrders() inspect JOIN::order instead of Query_block::order_list.

For completeness and consistency, the patch also replaces usages of Query_block::group_list in the hypergraph optimizer with the corresponding data structures in JOIN, even though the hypergraph optimizer doesn't currently mutate the group list in a similar way. BuildInterestingOrders() now uses JOIN::group_list instead of Query_block::group_list, and EstimateAggregateRows() uses JOIN::group_fields. (EstimateAggregateRows() already used JOIN::group_fields for some cases, but only when called by the old optimizer. It could not use it for the hypergraph optimizer, because the hypergraph optimizer didn't populate group_fields until after EstimateAggregateRows(). The patch moved the hypergraph optimizer's call to make_group_fields() a little earlier so that EstimateAggregateRows() could use the same code for both optimizers.)

Change-Id: I4790482416e8935b7918cccd36f316d88b1d5700
Commit: 79487fc
Bug#35938286 Eliminate Transporter::is_encrypted ...
The original patch broke the PB2 tests basic_tls.test and tls_required.test. Post-push patch fixing TransporterRegistry::is_encrypted_link(NodeId). The root cause was an incorrect transfer of the Multi_Transporter::is_encrypted method into TransporterRegistry::is_encrypted_link(). We need to get the 'is_encrypted' property from the first active multi-transporter, not from the base transporter, which may already have been closed if we have switched to using the multi-transporters.
Change-Id: If77531ca151839499e1c89358a5f9e6ea336a7c1
Ole John Aske committed Nov 6, 2023
Commit: bb2ba83
Bug#35750394 Refactor Transporter interface to use TrpId over NodeId
Post-push fix: Broken build for clang on windows: error: unused variable 'trp_id' Change-Id: I9f637a53267280c90fbcb6266a17b3bb7f7650bf
Tor Didriksen committed Nov 6, 2023
Commit: 4e13d9d
Bug#35443773 error logged when certificate verification fails.
When the client aborts a TLS handshake because the certificate can't be verified (unknown CA), router logs:
ERROR ... classic::loop() processor failed: error:0A000418:SSL routines::tlsv1 alert unknown ca (tls_err:167773208)
That error shouldn't be logged as ERROR.
Change
------
- close the connection without raising a "processor failed" error if the TLS handshake with the client fails
- log a message at INFO level explaining why the TLS handshake failed
- close the connection without raising a "processor failed" error if it is closed without a COM_QUIT
- decode more TLS alert values for debugging
Change-Id: I3a492189288d2c430744ec2a32dc40b70ffd0f11
Commit: 3a41466
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: I5fcdc6fe6e06b030f42422627d794b4267b2384c
Commit: 3dd0ec4
Bug#35976922 Get character sets/collations by name rather than by global pointers
Fix this TODO:
// TODO(tdidriks) check name rather than address:
extern MYSQL_STRINGS_EXPORT CHARSET_INFO my_charset_gb18030_chinese_ci;
extern MYSQL_STRINGS_EXPORT CHARSET_INFO my_charset_utf16le_general_ci;
Character sets/collations should always be loaded/initialized properly with get_collation_number() or some other defined function in the mysys/strings API.
Change-Id: Ifa89308481d9a3db7428c1f3ed4ba375d1681209
Tor Didriksen committed Nov 6, 2023
Commit: 1b8063d
Bug#35912698 ndb_rpl.ndb_rpl_log_updates fails in PB2
Problem: Test `ndb_rpl.ndb_rpl_log_updates` fails occasionally on PB2.
Analysis: The errors are mostly related to synchronization issues, namely:
- Some inserts are not yet applied in the replica cluster in time (but the updates are in the relay log)
- The table definition may not have been coordinated throughout the cluster
Solution: Make each source change wait until it has been applied in the binlog.
Change-Id: I2e79de9d784c87a2b0151b6a9a19413dd5f81b96
Commit: e4c2cdc
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: I544eb2c5281860ae524294f7d783875da6ac4efd
Commit: 34c8a31
WL#15515: Add TP connection information to PFS
Problem: Inadequate P_S instrumentation for TP.
Solution: Add a new P_S plugin table for TP called tp_connections which displays the information in the tp_client_low_level_t struct for each connection. In addition, the existing P_S tables tp_thread_group_state and tp_thread_state are extended with additional columns for existing data-structure members that were missing, and with new information useful for performance tuning/diagnostics. Some limited refactoring was also done to simplify information retrieval for, and modification of the schema of, the TP P_S tables.
Change-Id: I94414afd00b5a6702ee03ec341c9c0b773a4b94c
Dyre Tjeldvoll authored and committed Nov 6, 2023
Commit: 30ed48a
Commits on Nov 7, 2023
Bug#35925503 Message "Metadata: Failed to submit table 'mysql.ndb_apply_status' for synchronization" is submitted every minute
Problem: On MySQL Cluster 8.0.34, messages like the below are submitted to the error log every minute:
> Metadata: Failed to submit table 'mysql.ndb_apply_status' for synchronization
This is essentially useless and fills the error log unnecessarily.
Analysis: The mysql.ndb_apply_status table is a util table managed by the binlog thread; as such there is no need to check whether it has changed.
Solution: Remove the table from the list of tables in DD, in a similar fashion to how it's removed from the list of tables in NDB. This means the table will no longer be submitted for change detection.
Change-Id: I7af9579bcd98b0e7db5b2acccf0b0a7df89a7265
Commit: d97437d
Commit: ca021ea
Bug#35902058 innodb-purge thread can take long time to complete and systemd isn't notified
Problem: During shutdown, the InnoDB purge thread can sometimes take a couple of hours to complete its work (depending on the data), which can make the user feel that the shutdown is stuck.
Solution: Add an externally visible systemd notification indicating the step.
Note: The message is kept general to accommodate pre_dd_shutdown handlertons of other SEs (if there are any).
Change-Id: Ibb3f4ab8cb1da45f2fbd374c8ef88c8657ea7b68
Commit: ef04f23
mysql-builder@oracle.com committed Nov 7, 2023
Commit: 4fbef9e
BUG#35918849 Automatic thread_end in SqlClient
When the SqlClient uses the MySQL library, it allocates resources which need to be released using mysql_thread_end(). Such resource leakage is visible with ASan or Valgrind. This is fixed by changing SqlClient to activate a thread_local guard that releases the MySQL thread resources when the thread ends. Also make sure to initialize the MySQL C API library only once; this is important when running with multiple threads.
Change-Id: I6f49b2725c725c809f3cacb1300619e078e449c7
Commit: 56743a8
Commit: dcdd8dd
Bug#35907828 testNodeRestart -n LCP_with_many_parts_drop_table fails when Partial LCP is OFF
Patch for 7.6 only.
Problem: LCP_with_many_parts_drop_table is intended to cover a scenario where an LCP is restored from ~2048 partial LCPs with 1 part each. To increase the number of partial LCPs needed to restore an LCP, error insert 10048 is used to force each one to use only 1 part. This works when partial LCP is enabled, but for full LCP the error insert forces the number of parts calculated to be 1 instead of 2048 (the default). This causes the assertion:
ndbassert(is_partial_lcp_enabled() || num_change_parts == 0)
to fail due to the wrong number of parts.
Solution: Disable LCP_with_many_parts_drop_table when EnablePartialLcp = 0, since the test is useless in this case. Also, to prevent future issues caused by error insert 10048 when partial LCP is disabled, the error insert 10048 handler was changed to set the number of parts to 1 only when partial LCP is enabled. For full LCP, error insert 10048 does nothing.
Change-Id: I7008d4e3f6ec45621e4a5f811fc0c935f21cc496
Commit: c6f5a9b
Bug#35963172: HLL query crashes mysqld at setup_fields(assertion)
Changing test case from using function HLL to using METAPHON, since HLL cannot be loaded from a default installation. Change-Id: I96e5d7f702a7be6ff63a0e2f0db44130a4560d34
Commit: c997fe1
Bug#35982510 Add mysqlclient_ername.h to clang_tidy_prerequisites
Add dependency to generate mysqlclient_ername.h Also add indentation to entries in top level .clang-tidy file (it was rejected by clang-16) Remove .clang-tidy and source_downloads from .gitignore Fix bad comment about clang 10 vs. clang 12. Change-Id: Idbc6b02a6c7f0018c54a43202d31b40c790596b2
Tor Didriksen committed Nov 7, 2023
Commit: d6052b5
WL#13959 Remove relay_log_info_repository and master_info_repository
POST-PUSH-FIX: After the error messages were changed in the scope of WL#13959, the mysql_57_inplace_upgrade test started to fail because it expected the obsolete error message:
Error in checking mysql.slave_master_info repository info type of TABLE
Changed mysql_57_inplace_upgrade.test and removed unnecessary suppressions related to the OBSOLETE_ER_RPL_ERROR_CHECKING_REPOSITORY and OBSOLETE_ER_RPL_CHANNELS_REQUIRE_TABLES_AS_INFO_REPOSITORIES error types.
Change-Id: I881d3b89c2f21162bf8ccc6ff034ecfd84c188ef
Karolina Szczepankiewicz committed Nov 7, 2023
Commit: be80245
Null-merge from mysql-5.7-cluster-7.6 ..
Change-Id: I9ad641191ce7385e0fa37f241d67c2bf7fc385cb
Commit: 30bc5c1
Change-Id: I01c7c06d016b1b1df4a0f1493a0b540b081240ac
Commit: 5b0e14a
Bug#35907828 testNodeRestart -n LCP_with_many_parts_drop_table fails when Partial LCP is OFF
Post-push fix: Removed the usage of nodeId as an index into the configuration; indexes and node ids can differ. Instead, the index of the nodeId's section is searched for and then used to retrieve the values of the desired parameters.
Change-Id: I64b55ee96d471d17049072060ced9a0f8b09f36c
Commit: 366bb93
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: Ib74cdd246055604d3f3d72e5d0420ddff872e65f
Commit: 00c2b02
Bug#35717365 [noclose] PB2: ndb_tls.large_txn_non_mt can fail
In suite ndb_tls: Remove large_txn.cnf, since large_txn.test has been removed. Remove SendBufferMemory & related params from generic my.cnf. Change-Id: I64e72bb4410246b9faf4c80bb18bdb98a37d018d
Commit: 9c697e4
Bug#35942104 Certificate::open_one() leaks the other certificates
Any additional certificates in the stack must be freed before open_one() returns. Change-Id: I363f7c93fc68affa02e2ab7054d808ae5b91111e
Commit: 684ff17
Bug#35945822: SIGABRT Mysqld crash from operator() at sql_executor.cc:4399

A query using GROUP BY WITH ROLLUP fails because it is not able to map an expression in the HAVING clause to the corresponding grouping column. The reason it doesn't find the grouping column is that Item::eq() gets confused by Item_cache items being added to one of the items being compared for equivalence, but not to the other.

Item::eq() does try to see through Item_cache and other wrappers, but the current logic works only if the wrappers are applied in a given order. For example, if a cache is on top of a rollup group wrapper, it is able to see through both the cache and the rollup group wrapper. However, if the rollup group wrapper is on top of the cache, it sees through the rollup group wrapper, but not through the cache.

Fixed by making the unwrapping logic keep going as long as there is something more to unwrap.

Change-Id: I0359f8a56f23ea93c80d8256f02ddbdb582e7f69
Commit: 68cc9c2
Bug#35686098 Assertion `n < size()' failed in Element_type& Mem_root_array_YY [follow-up]
Instability in the canonical optimizer trace for set operations: if run with ASAN enabled, the number of hashing overflow chunk files is eight, not four, presumably due to space overhead/alignment differing under ASAN.
Solution: allow both 4 and 8 chunks.
Change-Id: I2be6dceb7d4f24444e3846758a166c321bed341d
Dag Wanvik committed Nov 7, 2023
Commit: f983506
Commits on Nov 8, 2023
Bug#35846221: Assertion Failure in /mysql-8.0.34/sql/field.cc:7119
Problem is due to missing implementation of Item_func_make_set::fix_after_pullout(), which makes this particular MAKE_SET function be regarded as const and may thus be evaluated during resolving. Fixed by implementing a proper fix_after_pullout() function. Change-Id: I7094869588ce4133c4a925e1a237a37866a5bb3c
Commit: 0c2c07f
Bug#35846221: Assertion Failure in /mysql-8.0.34/sql/field.cc:7119
Problem is due to missing implementation of Item_func_make_set::fix_after_pullout(), which makes this particular MAKE_SET function be regarded as const and may thus be evaluated during resolving. Fixed by implementing a proper fix_after_pullout() function. Change-Id: I7094869588ce4133c4a925e1a237a37866a5bb3c
Commit: ddfa195
Change-Id: I7cf22d902423e75c9eed25c04eefa7700a5c983a
Commit: 9cf80d9
Commit: ca55b38
WL#9582 Fix a test in gis_bugs_crashes that was returning a different result on macOS after pushing WL#9582.
The input geometry of that test is (geometrically) invalid, so the result is unexpected. The test now checks whether the input geometries are valid instead of performing operations on them.
Change-Id: Ie66aefca95ba724eafa83751e40c3a5991e329b6
Commit: 8e29ae3
Bug#34959356 Poor performance when using HASH field to check unique [back-port]

GROUP BY with hashing due to a long key. In the repro, all rows have two consecutive GROUP BY items referencing two columns with the same contents, plus two NULL fields, e.g. GROUP BY f1, f2, f3, f4. Both keys for the first two fields are of type HA_KEYTYPE_VARTEXT2 (same behavior for HA_KEYTYPE_TEXT, HA_KEYTYPE_VARTEXT1). For this case, when computing the hash of the field, an XOR function is used. This causes the first two fields (with the same contents) to cancel each other, so the resulting hash was zero for all rows after the first two fields, even though the rows' grouped items differ in the first two fields, e.g. f1 f2 f3 f4:

row 1: "a", "a", NULL, NULL
row 2: "b", "b", NULL, NULL

It happens like this (in calc_field_hash):

hash_val = 0
hash_val(f1) = 0 ^ func("a")
hash_val(f2) = hash_val(f1) ^ func("a") = 0 ^ func("a") ^ func("a") = 0

Next, the zero is modified by the two NULLs, but the end hash is the same for all rows. This yields very bad performance, i.e. 100% collision in the hash table.

Solved by modifying the hash computation to avoid this effect.

Change-Id: If9b2059a761be912350400456b029f9fbc3e77ea
Dag Wanvik committed Nov 8, 2023
Commit: afd9c65
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: I6fd8ff1165fba98e52788125961724c21df42a5e
Dag Wanvik committed Nov 8, 2023
Commit: 3055f92
Bug#35968017 Unknown character set '' with c/j and rwsplitting
When Connector/J is used to connect to the rw-splitting port, the connection fails and router logs:
Unknown character set ''
The error is triggered by Connector/J as it runs:
SET character_set_results = NULL
to disable conversion of the character sets from server to client, but the session tracker reports the NULL as '' to the router. Replaying SET character_set_results = '' leads to the above error, as the empty string isn't a known character-set name.
Change
======
- store NULL instead of "" when the session tracker reports { character_set_results: "" }
Change-Id: Idcfc9b7668e034e0f0fd3efedac810ad87035ff3
Commit: a0be654
WL#15966 default server_ssl_mode PREFERRED
Recent advanced Router features
- Connection Reuse
- Connection Sharing
- Read-Write Splitting
require that the Router accepts the client connection first and then selects a server to route the connection to. The current default combination of client_ssl_mode/server_ssl_mode PREFERRED/AS_CLIENT first connects to the server and then sends the server's handshake to the client, which means users need to adapt the configuration before they can use the new features.
Change
======
- At bootstrap, changed the default of "--server-ssl-mode" from "AS_CLIENT" to "PREFERRED" if "--client-ssl-mode" is not "PASSTHROUGH"
Change-Id: I97173891139b4d68c8f9f1752e20e71f17e7165f
Commit: 970eea0
WL#15684 System variable to select JSON EXPLAIN format
Add system variable explain_json_format_version to change between the different JSON formats Change-Id: Ib0361109bb764ddc606d31ede90d73d1643380b4
Commit: 021f6d6
WL#15684 System variable to select JSON EXPLAIN format
Post-push fix: broken test on rebase Change-Id: Ib0361109bb764ddc606d31ede90d73d1643380b4
Commit: ecd5893
Bug#35728291 Autotest testNodeRestart -n TransStallTimeoutNF T1 fails occasionally
Test fixes:
1. Improve failure logging in the test.
2. Cause the transaction under test to access the randomly generated rowNumber rather than row number 1.
3. Modify the error insert so that it does not stall transaction commit on SYSTAB_0, which can affect GSL locking of any attached MySQLDs.
Change-Id: I53604d2e7afe147a8dbfde4279c20086c3388f02
Commit: f08b14c
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: I4b004b9fcb6a6e44bacb56ae450336ccb95dd6e1
Commit: d3e0f23
Bug#35728291 Autotest testNodeRestart -n TransStallTimeoutNF T1 fails occasionally
Test fixes:
1. Improve failure logging in the test.
2. Cause the transaction under test to access the randomly generated rowNumber rather than row number 1.
3. Modify the error insert so that it does not stall transaction commit on SYSTAB_0, which can affect GSL locking of any attached MySQLDs.
Change-Id: I53604d2e7afe147a8dbfde4279c20086c3388f02
Commit: 143764e
Null merge branch 'mysql-5.7-cluster-7.6' into mysql-8.0
Change-Id: Ia0991115bcfa52418bcd6b339dc1dc0dce4690f5
Commit: 54d5f8f
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: Icc666df85e1b129137635d0ee8f950ee015d653d
Commit: 8676cab
Commits on Nov 9, 2023
Bug#35889669 bulkload doesn't provide innodb stats hence autodb gives incorrect estimates
The table statistics were not updated by the bulk load statement. This patch populates the columns TABLE_ROWS and DATA_LENGTH of the information schema table INFORMATION_SCHEMA.TABLES.
Change-Id: I74ef9042dafbc2bd1f72dfe459ff47ce68abb98a
Commit: fecdc94
WL#15508 Group index skip scans in the hypergraph optimizer [1/3]
Code refactoring only, no functional changes 1) Refactored out aggregate-handling code into separate function. 2) Refactored range optimizer code for group index skip scans to prepare for collecting all possible group index skip scans instead of only the best one. 3) Re-recorded trace outputs for tests which have changed to use group index skip scan trace strings. Change-Id: Ib18b24e61d616ad255f18c8e4945940e726b99a6
Priyanka Sangam committed Nov 9, 2023
Commit: 8ec7d40
WL#15508 Group index skip scans in the hypergraph optimizer [2/3]
Added tests for group index skip scans in the hypergraph optimizer:
1) Modified group_min_max.test for reuse by factoring out queries into group_skip_scan_test.inc, and renamed the test to group_skip_scan.test.
2) Added group_skip_scan_hypergraph.test using the new include file group_skip_scan_test.inc.
3) Renamed the group_min_max_ext test and associated include files to use the prefix group_skip_scan_ext. Added a group_skip_scan_ext_hypergraph test by reusing these include files.
These tests show that hypergraph does not choose group skip scans, as they are not yet supported.
Change-Id: I0a4e05d97e77c37a43e643b1867457894d5b9be3
Priyanka Sangam committed Nov 9, 2023
Commit: 3043592
WL#15508 Group index skip scans in the hypergraph optimizer [3/3]
Added support for group index skip scans in the hypergraph optimizer:
1) Added functions in the range optimizer to collect and propose all possible group index skip scans.
2) Modified the hypergraph optimizer to use the added range optimizer functions to collect and propose all group index skip scans while processing candidate index range scans.
3) Modified hypergraph code to populate aggregators for use by group index skip scans in addition to the existing use by aggregate paths.
4) Added a cost dimension named 'has_group_skip_scan' to access paths, set to true for access paths which have a group index skip scan. This flag is propagated to all access paths added on top of group skip scans.
5) Used the has_group_skip_scan cost dimension to skip adding aggregation paths when the group skip scan has already done the aggregation. Also modified access-path comparison to prefer paths which are already aggregated.
6) Skipped rowcount-consistency checks for access paths which are already aggregated/deduplicated by group skip scans when compared to paths which are not aggregated/deduplicated.
7) Implemented unit tests for group index skip scans in hypergraph where multiple candidate group skip scans are proposed. Re-recorded test output for hypergraph tests which have changed to use group index skip scans.
Change-Id: I600f9716f61a9fe7cb2f15090ec0b46e67c7f0cb
Priyanka Sangam committed Nov 9, 2023
Commit: 02fdfa8
Bug#35786041: Same values on "--vardir" and "--comment" are used.
Description: Changing PB2 collection files to use different "--vardir" and "--comment" values.
Change-Id: Id30a5600725bc8243d647aec0af39092f91ed137
Harini T S committed Nov 9, 2023
Commit: 82708f7
WL#15738: Flush Mask Dictionary from table to memory
All components that have tables that can be changed on a primary and that run on replicas need the ability to call the scheduler component and flush the data on the secondary or replica into memory. This worklog implements the ability to periodically refresh the in-memory cache of masking dictionaries from the underlying table. The flush may be requested by the user ad hoc (by calling a UDF) or periodically, leveraging the scheduler component. In particular, the periodic flush is required in MDS scenarios.
Change-Id: Ib57b4f5abdd404e39fb89fd3348245d10f4296c1
Michal Jankowski committed Nov 9, 2023
Commit: f59821b
WL#11007: Remove group_replication_primary_member status variable

This worklog removes the global status variable group_replication_primary_member. It was deprecated in 'WL#10958: Deprecate group_replication_primary_member status variable' in MySQL 8.0.4. The MEMBER_ROLE column of the performance_schema.replication_group_members table can be used to identify whether a member has the PRIMARY or SECONDARY role.

Change-Id: I5b8eec14dc44020e0c83cfd4114aefe031582c64

Hemant Dangi committed Nov 9, 2023 (SHA: ac373b1)
1. Client certificates between client and router
2. Client certificates between router and server
3. Require users to authenticate with certificates against the router

New [routing] options:
- client_ssl_ca and client_ssl_capath (for client->router)
- server_ssl_cert and server_ssl_key (for router->server)
- router_require_enforce

`router_require_enforce` makes the Router enforce the `router_require` attribute of the user:

CREATE USER ... ATTRIBUTES '{"router_require": { "subject": ...}}'

Change-Id: I0ae994807fb4f51de6c6bd9bf72649b11a817142

(SHA: 8a4f379)
Bug#35968017 Unknown character set '' with c/j and rwsplitting

Post-push fix: reworked the test so it does not fail on newly added sys-vars.

Change-Id: I90417df30f359f3b5dd7ecb089fa8a657763b27c

(SHA: 469de85)
Bug#35997600 routertest_component_component_test_framework fails on solaris

routertest_component_component_test_framework's ComponentTestFrameworkTest.wait_for_exit_with_low_timeout_tester fails on slow machines with:

[ RUN ] ComponentTestFrameworkTest.wait_for_exit_with_low_timeout_tester
.../component/test_component_test_framework.cc:277: Failure
Failed
Expected exception of type std::system_error but got none
.../helpers/process_manager.cc:679: Failure
Failed
# Process: (pid=4479) ./runtime_output_directory/routertest_component_component_test_framework --gtest_filter=ComponentTestFrameworkTest.DISABLED_wait_for_exit_with_low_timeout_testee
## Console output:
Note: Google Test filter = ComponentTestFrameworkTest.DISABLED_wait_for_exit_with_low_timeout_testee
[==========] Running 0 tests from 0 test suites.
[==========] 0 tests from 0 test suites ran. (0 ms total)
[ PASSED ] 0 tests.
YOU HAVE 1 DISABLED TEST
[ FAILED ] ComponentTestFrameworkTest.wait_for_exit_with_low_timeout_tester (240 ms)

The test expects that the test-dummy does not exit immediately. The test-dummy is expected to block for 2 seconds, but as seen above the test-dummy isn't actually executed, as gtest skips disabled tests by default.

Change: pass --gtest_also_run_disabled_tests to actually run the test-dummy.

Change-Id: I829bcd2f2b3c8f130772f6f94d7bf0987f9d4ef3

(SHA: 7c3ca93)
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: Id7e1f65c0ae6593f998bb2c9823a9f0bd571f1ef
(SHA: ac0ef44)
Bug#35665084: MySQL sig 11 crash observed in dispatch_sql_command

* Prevent any has_external table check during error propagation in MySQL, as it was causing crashes when MySQL tables were being closed.
* Instead, use set_execute_only_in_secondary_engine to force execution in the secondary engine when there is an external primary engine. This prevents both optimization and execution from considering the primary engine. If there is an error in the secondary engine, execution stops and returns the error.

Change-Id: I5fcbfa7cbb8f06816567ddc8416562878ab0fd00

Stella Giannakopoulou committed Nov 9, 2023 (SHA: 2ad874a)
BUG#35797536 MySQL 8.0 MSI won't install on system with 8.1+ already present

Change-Id: I430d5efbf0edb30b97bf2a501e989adbfbacc935

(SHA: 805922c)
Change-Id: Ib6095515b7d9023432a7dda37a1d2de0c96c7ed0
(SHA: fc873a0)
Bug#35728352 Autotest testNdbApi -nCheckSlowCommit fails occasionally

The testcase was failing on multi-nodegroup clusters because it killed a data node which was not a participant in the transaction being tested, so the TC node-failure handling code under test was not invoked. In a single-nodegroup cluster, all data nodes are participants in any writing transaction.

The fix is to ensure that the node killed is both:
- not the TC node
- a participant in the transaction

Change-Id: Idf60a79918b648f5125a681204b9c168c8de37dc

(SHA: 5f0f139)
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: Ib197a8edbe60633e8e30984fd1cc72a83af0bef0
(SHA: f344a34)
Bug#35728352 Autotest testNdbApi -nCheckSlowCommit fails occasionally

The testcase was failing on multi-nodegroup clusters because it killed a data node which was not a participant in the transaction being tested, so the TC node-failure handling code under test was not invoked. In a single-nodegroup cluster, all data nodes are participants in any writing transaction.

The fix is to ensure that the node killed is both:
- not the TC node
- a participant in the transaction

Change-Id: Idf60a79918b648f5125a681204b9c168c8de37dc

(SHA: 925b988)
Null merge branch 'mysql-5.7-cluster-7.6' into mysql-8.0
Change-Id: Ifbe966948e0d23d032124a9036aa0b783c550d7a
(SHA: 4000643)
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: I3c437b930828e884b2591dac56ca9b401c0260be
(SHA: 5e6a7d1)
Bug#30529132 REVIEW OUTDATED LCPSCANFRAGWATCHDOG AND BUG24664 USING ERROR INSERT 10039

This patch adds an error insert mechanism to allow LCP fragment scans to be stalled once they start (10055), or after scanning at least one batch of rows (10056). Additionally, a table id can be supplied to indicate that the stall should happen on a particular table. To supply the tableId, the error insert 'extra' value is used. This can be specified using e.g. the NdbRestarter test class. The ndb_mgm ERROR command does not currently allow it to be specified, but the Backup block already has a dump code (DUMP 13003) that can be used to set the error and error2 codes in the Backup block. Therefore:

Stall next LCP fragment scan to start:
  ERROR 10055
  DUMP 13003 10055 0
Stall next LCP fragment scan to start on tableid t:
  ERROR 10055 <t>
  DUMP 13003 10055 <t>
Stall next LCP fragment next-batch:
  ERROR 10056
  DUMP 13003 10056 0
Stall next LCP fragment next-batch on tableid t:
  ERROR 10056 <t>
  DUMP 13003 10056 <t>

A subsequent patch will use these error codes from existing tests which are not currently getting LCP coverage.

Change-Id: I24cdceb69782b101953d8a94b0e5539d2eb91e31

(SHA: 600d8ad)
Bug#30529132 REVIEW OUTDATED LCPSCANFRAGWATCHDOG AND BUG24664 USING ERROR INSERT 10039

This fix replaces the use of the old error code 10039 with the new error code 10055 for stalling local checkpoints in the testcases where that is required. Error code 10039 was modified in the 7.6 release so that it no longer stalled LCP, so the testcases were not giving the coverage they were designed for. This fix restores the testcase coverage.

Testcases affected:
  testNodeRestart -n LcpScanFragWatchdog -n LcpScanFragWatchdogDisable -n LcpScanFragWatchdogIsolation
  testSystemRestart -n Bug24664

Notes:
- In 7.6 Partial LCP:
  - The LCP Fragment Scan Watchdog was changed to cover more of the LCP process, including non-scan periods.
  - An undocumented, hard-coded two-minute time limit on each actual fragment scan was implemented. This means that the user-supplied limit is ignored when a fragment scan takes > 2 minutes. This behavioural change is left as-is. The 2-minute hard limit does not affect e.g. testNodeRestart -n LCPScanFragWatchdogDisable, as the scan stall error injection occurs before that timing mechanism starts.
- testSystemRestart -n Bug24664 was using error insert 10040 to resume the stalled LCP. This is no longer necessary; clearing the error is sufficient. Additionally, error insert 10040 has been reused for a different purpose, so the overall effect was incorrect.

Change-Id: I28f6462c101016a50b74bde2375176424428b9fe

(SHA: c27d3ee)
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: I507b61556d19ae81b1528ac67aa28c8c93e87dc0
(SHA: b96b434)
WL#15294 Extending GTID with tags to identify group of transactions - step 1a

Step 1: Serialization/deserialization framework - utility library

The utility library implements utility functions, classes, template helpers, and missing generic functionality. Selected features are:
- a macro deprecating a header
- definition of Error, a base class for (C++) error handling
- enumeration utilities: to_underlying (for standards before C++23), to_enumeration
- template utilities: Is_specialization
- bit operations

Change-Id: Ie5c3985843652e83dfc0b7ec77e3648823c3922f

Karolina Szczepankiewicz committed Nov 9, 2023 (SHA: b44b749)
WL#15294 Extending GTID with tags to identify group of transactions - step 1b

Step 1: Serialization/deserialization framework - algorithms

This step introduces the serialization library. Implemented are basic algorithms to encode/decode primitive fields, including variable-length integers. Also provided are basic definitions (serialization_types.h) and definitions of the errors that may occur during serialization.

Change-Id: I092f95e187199b1dc9034e686efe5f4f4d093d5a

Karolina Szczepankiewicz committed Nov 9, 2023 (SHA: d06b29a)
WL#15294 Extending GTID with tags to identify group of transactions - step 1c

Step 1: Serialization/deserialization framework - serialization library

This step implements the serialization library. The serialization library provides methods for automatic serialization and deserialization of serializable fields defined by the API user. It exposes a simple API for message definition, which involves providing fields and field metadata such as:
- field encode predicate: a function called by the encoder and used to determine whether a field should be encoded in the output stream
- field missing functor: a function called by the decoder in case a field was not found in the input stream
- unknown field policy: a behavior defined by the encoder and applied in case an unknown field is found in the input stream

The implementation defines three basic concepts, implemented using static polymorphism:
- Serializable: base class for serializable types, which provides an interface to be implemented by derived classes
- Serializer: used to define message boundaries in the data stream and the formatting of the message and field metadata. This is a base class for the implemented serializers. It provides means to iterate over serializable fields of unknown types. Serializer classes are responsible for message encoding and decoding. Derived classes should provide methods for a specific implementation of message boundaries and field formatting.
- Archive: base class with two main roles: byte-level formatting of simple types and data storage. The archive chooses the final format of the formatted data (text/binary).

Change-Id: Iedc1adfdad763dab5a95aa0f5df9116eeb3ce10d

Karolina Szczepankiewicz committed Nov 9, 2023 (SHA: 74b03e0)
WL#15294 Extending GTID with tags to identify group of transactions - step 2

Step 2: Changing the representation of the GTID

This step implements changes in the GTID representation. It introduces classes responsible for TAG/TSID processing in the system and changes the GTID / GTID set parsing and serialization methods. It also implements a new event, GTID_TAGGED_LOG_EVENT, implemented in the Gtid_event and Gtid_log_event classes.

Change-Id: Ibedf34baaba3112bc8fd4b6dff3e549a20f31fdb

Karolina Szczepankiewicz committed Nov 9, 2023 (SHA: f7dbc5f)
WL#15294 Extending GTID with tags to identify group of transactions - step 3

Step 3: Changes in GTID handling/generation in the Binlog Group Commit

This step implements the integration of the GTID tag feature with binlog group commit. It also implements an optimization for the locks acquired during the binlog group commit flush stage, described below. Moreover, this step tests the GTID tag feature in various scenarios, such as asynchronous replication, semi-synchronous replication, multi-source replication, and a compressed replication stream. It tests GTID functions, binlogging of tagged GTID log events, the auto-positioning protocol in case GTID tags are present in the system, and forward compatibility of a tagged Gtid_log_event.

Below is described an extension of the SIDNO lock/unlock optimization used during the binlog group commit flush stage. A GTID is assigned automatically during the Binlog Group Commit flush stage. This function acquires locks associated with the global SID map object defined in the current mysql process and the UUID-related SIDNO. The UUID-related SIDNO lock needs to be held while calling the GTID generation code. In case the user specifies the full GTID for the next transaction, the lock related to the defined SIDNO is taken and unlocked in the GTID generation code. An extended locking mechanism was used to reduce the number of lock/unlock calls for the server SIDNO lock. When using the AUTOMATIC:tag option, we may use multiple SIDNO locks and may benefit from the same lock/unlock optimization as introduced for the server SIDNO. This step implements Locked_sidno_set, which keeps track of multiple SIDNO locks and provides a deadlock-free locking mechanism for registered locks. Registered locks are automatically unlocked after the binlog flush stage is over.

Change-Id: Icc57a01d4ee2ce4ed0dc556cbec842882bd83f75

Karolina Szczepankiewicz committed Nov 9, 2023 (SHA: 674df5a)
WL#15294 Extending GTID with tags to identify group of transactions - step 4

Step 4: Changes in GTID handling/generation in the GR plugin

This step performs the integration of the GTID tag feature with the Group Replication plugin. It extends the algorithms for GTID generation used in the Certifier class.

Background: Before this WL, the Certifier kept track of:
- GTID free intervals for the group UUID
- the currently "allocated" GTID interval used for each member

The Certifier allocated GTID intervals in blocks, with a specified block size. In case the GR view changes, the certifier updates the list of available GTID intervals. Allocated blocks are cleared out, the list of available intervals is re-calculated from the current state of the group gtid executed set, then new blocks are allocated. In case the current GTID interval is exhausted, a new block is allocated for the group sidno and the member from which the transaction originates.

Overview of changes: In this step, the functionality is extended to handle all TSIDs which correspond to different SIDNOs (tagged TSIDs with the group replication UUID). The described logic is refactored out into the newly introduced "Gtid_generator" and "Gtid_generator_for_sidno" classes. Gtid_generator keeps track of generators of GTIDs for recorded sidnos (AUTOMATIC tagged or untagged). The certification handler utilizes the new information about the GTID to be generated (tag), propagated in the updated Gtid_log_event.

Change-Id: I7d0a7fc4df975d8772ac97c444216623db5b98d6

Karolina Szczepankiewicz committed Nov 9, 2023 (SHA: 4ca1476)
WL#15294 Extending GTID with tags to identify group of transactions - step 5

Step 5: Protect execution of GTID_NEXT 'tagged' with privileges

Setting GTID_NEXT with a tag is allowed for users with one of the following privilege combinations:
- TRANSACTION_GTID_TAG and SYSTEM_VARIABLES_ADMIN
- TRANSACTION_GTID_TAG and SESSION_VARIABLES_ADMIN
- TRANSACTION_GTID_TAG and REPLICATION_APPLIER

This step implements the necessary checks executed in the applier and in the SET GTID_NEXT command handler.

Change-Id: I7de5c9923058e51e47393f62072a619a829796f4

Karolina Szczepankiewicz committed Nov 9, 2023 (SHA: e534da7)
WL#15294 Extending GTID with tags to identify group of transactions - step 6

Step 6: Allow GTID_NEXT with a tag to run under certain GTID modes; allow changing GTID mode to a compatible mode in case sessions with a tag assigned are running.

This step implements the necessary checks performed during execution of the SET GTID_NEXT command:
- Setting GTID_NEXT to AUTOMATIC:tag is allowed if gtid_mode is ON or ON_PERMISSIVE
- Setting GTID mode to OFF is disallowed in case there is any ongoing session with GTID_NEXT set to AUTOMATIC:tag
- Setting GTID_NEXT to UUID:tag:NUMBER is allowed in case GTID_MODE is ON, ON_PERMISSIVE or OFF_PERMISSIVE
- Setting GTID mode to OFF is disallowed in case there is an ongoing session owning an assigned, tagged GTID

Change-Id: I73254dfcc2c3f256054b9c876e8c26936db7bc00

Karolina Szczepankiewicz committed Nov 9, 2023 (SHA: 215fec6)
WL#15294 Extending GTID with tags to identify group of transactions - step 7

Step 7: Rename remaining '*sid*' related type names, function names and variables into '*tsid*'

Change-Id: I0f28272a5b93ce081623436f9a957aba76f9e91b

Karolina Szczepankiewicz committed Nov 9, 2023 (SHA: b6776e9)
WL#15294 Extending GTID with tags to identify group of transactions - step 8

Step 8: Extend the GTID assignment optimization used in case the UUID of the transaction is the server UUID. The 'next_free_gno' variable is extended to an unordered map, which tracks the next free transaction number for multiple SIDNOs.

Change-Id: I18cc13129733bae76649b6e7be8b7486f8781e57

Karolina Szczepankiewicz committed Nov 9, 2023 (SHA: fc8cbdd)
WL#15294 Extending GTID with tags to identify group of transactions - step 9

Step 9: At connection time, the source checks that the replica doesn't have GTIDs with the source's UUID which the source itself does not have. This step extends the check to also cover tagged GTIDs with the source's UUID.

Change-Id: I81ad1f3befc19a5f8762baca2017fc28a39e823b

Karolina Szczepankiewicz committed Nov 9, 2023 (SHA: fb24ccd)
Merge branch 'mysql-trunk-wl15294-step9' into mysql-trunk
This worklog is aimed at easier identification of groups of transactions that were executed for different purposes. Right now, the user is able to set the GTID for the next transaction by setting GTID_NEXT to UUID:NUMBER. The user is also allowed to set GTID_NEXT to AUTOMATIC, which means that the server will generate GTIDs for transactions executing within the current session scope. The goal of this worklog is to allow the user to specify a name for a group of transactions, so that the user can easily distinguish this group by simply looking at transaction GTIDs.

This worklog extends the GTID definition. The current GTID implementation consists of a unique source UUID and a transaction sequence number. Introduced is a GTID tag, which may be assigned to a single transaction or a group of transactions by the user. Therefore, the definition of a tagged transaction GTID is: UUID:TAG:NUMBER. When the transaction tag is unspecified, the transaction keeps the UUID:NUMBER definition. The user is able to assign a tag to a transaction GTID by executing the SET GTID_NEXT command. The user can assign a tag to a single transaction with a specified UUID and GNO, or to all automatically generated transactions within the current session scope.

The TAG definition accepted in the system is the following: ^[a-zA-Z_][a-zA-Z0-9_]{0,31}$ which means that:
- a tag consists of up to 32 characters (<=32)
- a tag accepts letters with ASCII codes between 'a'-'z' and 'A'-'Z', numbers (0-9), and the underscore character; a tag must start with a letter or underscore
- the tag definition is case-insensitive; after the user provides a tag, the tag is normalized to contain only lower-case letters

Please note that the user may choose to skip the tag definition. In that case, the tag will be empty.

SET gtid_next='AUTOMATIC:<TAG>' is only allowed when gtid_mode is ON or ON_PERMISSIVE. If gtid_mode is OFF or OFF_PERMISSIVE, SET gtid_next='AUTOMATIC:<TAG>' gives an error. In case the current mode is ON or ON_PERMISSIVE, and there is any session ongoing with gtid_next set to 'AUTOMATIC:<TAG>', the user is not able to change the mode to any of the incompatible GTID modes (OFF / OFF_PERMISSIVE). Setting gtid_next to '<UUID>:<TAG>:<NUMBER>' is allowed in case the current GTID mode is ON, ON_PERMISSIVE or OFF_PERMISSIVE. Otherwise, setting a tag produces an error. In case the current mode is ON, ON_PERMISSIVE or OFF_PERMISSIVE, and there is a GTID specified for the next transaction that includes a TAG, the user is not able to change the mode to any of the incompatible GTID modes (OFF).

Change-Id: I3b07ae8f5266a942487802a5c75396aa5c13fd89

Karolina Szczepankiewicz committed Nov 9, 2023 (SHA: 7a0bbd7)
Bug#30529132 REVIEW OUTDATED LCPSCANFRAGWATCHDOG AND BUG24664 USING ERROR INSERT 10039

This patch adds an error insert mechanism to allow LCP fragment scans to be stalled once they start (10055), or after scanning at least one batch of rows (10056). Additionally, a table id can be supplied to indicate that the stall should happen on a particular table. To supply the tableId, the error insert 'extra' value is used. This can be specified using e.g. the NdbRestarter test class. The ndb_mgm ERROR command does not currently allow it to be specified, but the Backup block already has a dump code (DUMP 13003) that can be used to set the error and error2 codes in the Backup block. Therefore:

Stall next LCP fragment scan to start:
  ERROR 10055
  DUMP 13003 10055 0
Stall next LCP fragment scan to start on tableid t:
  ERROR 10055 <t>
  DUMP 13003 10055 <t>
Stall next LCP fragment next-batch:
  ERROR 10056
  DUMP 13003 10056 0
Stall next LCP fragment next-batch on tableid t:
  ERROR 10056 <t>
  DUMP 13003 10056 <t>

A subsequent patch will use these error codes from existing tests which are not currently getting LCP coverage.

Change-Id: I24cdceb69782b101953d8a94b0e5539d2eb91e31

(SHA: 266f712)
Bug#30529132 REVIEW OUTDATED LCPSCANFRAGWATCHDOG AND BUG24664 USING ERROR INSERT 10039

This fix replaces the use of the old error code 10039 with the new error code 10055 for stalling local checkpoints in the testcases where that is required. Error code 10039 was modified in the 7.6 release so that it no longer stalled LCP, so the testcases were not giving the coverage they were designed for. This fix restores the testcase coverage.

Testcases affected:
  testNodeRestart -n LcpScanFragWatchdog -n LcpScanFragWatchdogDisable -n LcpScanFragWatchdogIsolation
  testSystemRestart -n Bug24664

Notes:
- In 7.6 Partial LCP:
  - The LCP Fragment Scan Watchdog was changed to cover more of the LCP process, including non-scan periods.
  - An undocumented, hard-coded two-minute time limit on each actual fragment scan was implemented. This means that the user-supplied limit is ignored when a fragment scan takes > 2 minutes. This behavioural change is left as-is. The 2-minute hard limit does not affect e.g. testNodeRestart -n LCPScanFragWatchdogDisable, as the scan stall error injection occurs before that timing mechanism starts.
- testSystemRestart -n Bug24664 was using error insert 10040 to resume the stalled LCP. This is no longer necessary; clearing the error is sufficient. Additionally, error insert 10040 has been reused for a different purpose, so the overall effect was incorrect.

Change-Id: I28f6462c101016a50b74bde2375176424428b9fe

(SHA: e66a42f)
Null merge branch 'mysql-5.7-cluster-7.6' into mysql-8.0
Change-Id: I10f5532426298b8884f83a4e0f00921311cd8a11
(SHA: e4adf88)
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: I39c50170a614ac432cdd6fe4943f28fd0c2f9bf6
(SHA: f6faa8e)
Bug#35750771 Refactor Transporter and NdbSockets to handle lifecycle issues

Post-push addendum patch, fixing a regression causing slow disconnects.

When the connection over a transporter was 'lost', possibly due to the other peer closing the socket, the TransporterRegistry logic to gracefully take down the connection relied upon ::performReceive() being called on the failed transporter. performReceive() would then detect an error on that socket, resulting in start_disconnecting_trp() being called, which would initiate the disconnect process and eventually report the Transporter as DISCONNECTED.

The patch for Bug#35750771 introduced handling of the EPOLLHUP event. When receiving that event we just unsubscribed from any further events from the HUP'ed socket. Notably, we did *not* set the TrpId bit in m_recv_transporters, which marks transporters with data to be received by ::performReceive(). Thus performReceive() might not detect the failed socket and call start_disconnecting_trp().

This patch fixes the issue by calling start_disconnecting_trp() immediately when the EPOLLHUP event is received. Another minor issue is also fixed, where we counted the EPOLLHUP event as if a transporter had been added to the m_recv_transporters set.

Change-Id: I526887f1f39c74e3c4ae166a45853cb7f9f50096

Ole John Aske committed Nov 9, 2023 (SHA: 9feed3e)
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: Iff150a3abfb281f09970ed2692d8a0cf67a537a7
Ole John Aske committed Nov 9, 2023 (SHA: 0f45e02)
Commits on Nov 10, 2023
Bug#35930469: Fix clang-tidy warnings for mysql.cc
The focus of this bug is to fix as many as possible of the warnings reported by the clang-tidy tool on a single file (client/mysql.cc). Many of the reported issues may not be severe, but they add noise to the clang-tidy results, making it hard to detect new issues. The types of issues fixed:

- function parameter names differ between declaration and definition
  mysql.cc:338:43: warning: function 'com_go' has a definition with different parameter names [readability-inconsistent-declaration-parameter-name]
- redundant "return;" statement at the end of a void function
  mysql.cc:5842:3: warning: redundant return statement at the end of a function with a void return type [readability-redundant-control-flow]
- boolean expressions that can be simplified
  mysql.cc:1288:12: warning: redundant boolean literal in conditional return statement [readability-simplify-boolean-expr]
- possible buffer overflow bugs
  mysql.cc:1506:45: warning: comparison length is too long and might lead to a buffer overflow [bugprone-not-null-terminated-result]
    strncmp(link_name, "/dev/null", 10) == 0) {
- improved const correctness
  mysql.cc:2737:3: warning: variable 'length' of type 'size_t' (aka 'unsigned long') can be declared 'const' [misc-const-correctness]
- redundant function declarations
  mysql.cc:3257:14: warning: redundant 'index' declaration [readability-redundant-declaration]
    extern char *index(const char *, int c), *rindex(const char *, int);
- use "auto" to avoid repeating the type in an expression
  mysql.cc:3766:3: warning: use auto when initializing with a cast to avoid duplicating the type name [hicpp-use-auto,modernize-use-auto]
    client_query_attributes *qa =
- compare function results explicitly with the returned number (!= 0)
  mysql.cc:4656:22: warning: function 'strcmp' is called without explicitly comparing result [bugprone-suspicious-string-compare]
    if (!current_db || cmp_database(charset_info, current_db, tmp)) {
- simplify if/else blocks when the "if" block unconditionally returns
  mysql.cc:2568:5: warning: do not use 'else' after 'return' [llvm-else-after-return,readability-else-after-return]

Change-Id: Ic7df7fbdc964d0350d20c07e8e6600455f2f023f

Miroslav Rajcic committed Nov 10, 2023 (SHA: a60d86c)
Bug#35507763: Assertion `to->field_ptr() != from->field_ptr()' failed.
An assertion failed when inserting data into a table with a zero-length column, such as CHAR(0) or BINARY(0). The purpose of the assertion was to detect if a column was copied into itself, but it could also be triggered when copying between adjacent columns if one of the columns had zero length, since the two columns could have the same position in the record buffer in that case. This is not a problem, since copying zero bytes is a no-op. Fixed by making the assertion less strict and only fail if it detects that a non-zero number of bytes is copied from a source that is identical to the target. Change-Id: Ifc07b13a552c3ef6ce99bcc1d491afeef506e9c1
(SHA: b8a48b0)
Bug#35944739: Assertion failure at CreateIteratorFromAccessPath in
access_path.cc Some queries that used materialization raised assertion failures in debug builds. The failing assertion checked that the fix for bug#32788576 was complete and always lifted composite access paths up from the table path that was attached to the materialize access path. It failed because GetTableAccessPath() created a materialize access path without checking if there were any composite paths that should be lifted up. Fixed by adding the missing call to MoveCompositeIteratorsFromTablePath(). Change-Id: I03abe17a71e6ec6949bc1a269b45dfce1f9dfe12
Commit 61c205b
-
Bug#36001037 routertest_integration_routing_sharing crashes if mysqld…
… fails to start

If mysqld fails to start, routertest_integration_routing_sharing crashes in the first ChangeUserTest test. The test should be SKIPPED, but isn't.

Change
======
- in SetUp(), check if mysqld is running, as in the other test classes.

Change-Id: Ia876e722ba6601204c11c34fe94041f24858ee84
Commit 8fea1c0
-
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: Ida816837a0888112676a2ff4c07bb9e756759c2b
Commit bebe83b
-
Bug#35998744 dead code in test_routing_sharing.cc
test_routing_sharing.cc contains ShareConnectionTestWithRestartedServer, which isn't used in this file. The code has already been copied to test_routing_sharing_restart.cc.

Change
======
- removed ShareConnectionTestWithRestartedServer

Change-Id: I74469cdc44ca9aba4037b3c8afdca76c437439bd
Commit a7b5b29
-
Bug#35998612 routertest_integration_routing_sharing gtest_repeat=1000…
… fails

Running routertest_integration_routing_sharing with --gtest_repeat=1000 fails at ~720 iterations with "out-of-filedescriptors". lsof shows lots of /tmp/mysql-unique-ids/ entries, which hints at the TcpPortPool handing out lots of ports. --gtest_repeat starts and stops routers for each TestSuite. While the "start" gets a new port from the pool, the "stop" doesn't release it; instead, the ports are only released at test end.

Change
======
- for the shared routers, use a tcp-port-pool that's local to the test-suite
- adjust all integration tests that keep the router alive for a test-suite

Change-Id: I7c98a5ff7ff28d536851d4831e503e6f71a72c4d
Commit 75e96be
-
mysql-builder@oracle.com committed Nov 10, 2023
Commit b3274a3
-
Extending GTID with tags to identify group of transactions

Post-push fix: recorded the 'is_statistics_ci' result

Change-Id: Ic937fedd73cee31528d4159bf943ae05194d258d
Karolina Szczepankiewicz committed Nov 10, 2023
Commit ecf4e72
-
Bug#35977181: Contribution: Doc: Update compression protocol information
Fixed the doxygen docs on compression options. Thanks to Daniel van Eeden. Change-Id: I2a10bfe8c0d09f3ea15692065c0419ac9b1383f3
Commit 1d87b0d
-
Bug#35889188 routertest_component_gr_state ClusterSetAccessToPartitio…
…nWithNoQuorum fails

Follow-up fix for Spec/ClusterSetAccessToPartitionWithNoQuorum.Spec/unreachable_quorum_allowed_traffic_none, which rarely fails with:

./router/tests/component/test_gr_state.cc:1154: Failure
Value of: wait_for_transaction_count_increase(http_port, 1)
  Actual: false
Expected: true

The test expects the transaction count to increment from 0 to 1, but on slow machines the test may only ever see transaction-count=1 and never see any increase.

Change
======
- instead of wait_for_transaction_count_increase(..., 1), use wait_for_transaction_count(..., 1)

Change-Id: If02574250abd7c80a07c75cf99fc0d5565c5f8c7
Commit d1c494f
-
WL#15508 Group index skip scans in the hypergraph optimizer
Post-push fix for MTR failures. MTR tests from the RAPID test suite fail because a group skip scan collection is attempted by a secondary engine which does not use indexes. Some unnecessary traces are added to explain why no candidate group skip scans are proposed, resulting in extra trace lines in the MTR test results. This is fixed by detecting the case where group skip scan collection is attempted by a secondary engine which does not support indexes, and doing an early exit. The MTR test group_min_max_innodb is re-recorded to reflect this behaviour. Change-Id: I60a4d72c0e120a9ce9f276e0747c8cd561514984
Priyanka Sangam committed Nov 10, 2023
Commit 491059a
-
Bug#35916912 Performance degradation from 8.0.30 onwards related to p…
…erformance_schema

This fix reduces the performance overhead of the statement instrumentation in the performance_schema implementation.

Problem
=======
1) The new g_telemetry global atomic pointer is not protected against false sharing on the CPU cache line.
2) The statement instrumentation makes unnecessary copies of MESSAGE_TEXT, which uses 512 bytes and is empty most of the time.

Fix
===
1) Declared g_telemetry using a dedicated cache line, to avoid false sharing. This helps because this atomic is expected to be "read only" and will not change during the server lifetime, except when installing or uninstalling a telemetry component. Keeping a shared copy in each CPU cache should reduce the atomic overhead.
2) The diagnostics area has been fixed to keep track of the message text length. The performance schema instrumentation now only copies the message text when it is not empty, saving overhead in memcpy(). The data layout in events_statements_current / history / history_long has changed to improve data locality for other attributes and also reduce overhead in memcpy().

Change-Id: I6a80af3547e98eccbb1e39c806f603462b665cef
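The false-sharing part of the fix can be illustrated with a small sketch (hypothetical names and layout; the real g_telemetry declaration in the server differs): an atomic that is read constantly but written almost never is given its own cache line so neighbouring hot data cannot keep invalidating it.

```cpp
#include <atomic>
#include <cstddef>

// Assume a 64-byte cache line, which is typical for x86-64.
constexpr std::size_t kCacheLine = 64;

// Hypothetical wrapper: alignas() pads the atomic pointer to a full
// cache line, so no other variable can share the line and cause false
// sharing when this pointer is read on every statement.
struct alignas(kCacheLine) PaddedTelemetryPtr {
  std::atomic<void *> ptr{nullptr};
};

PaddedTelemetryPtr g_telemetry_slot;  // stand-in for the real g_telemetry
```

Because the pointer only changes when a telemetry component is installed or uninstalled, each core can keep the line in a shared cache state, making the frequent loads essentially free.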
Commit 1dec2d2
-
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: I80167b568559852474a01e98ef368423d1c319fd
Commit 0851194
-
Bug#35537311 nth_value window function assertion error
In debug builds, an assert fires in read_frame_buffer_row because read position hints are off by more than one row; production builds show no issue. Solution: improve maintenance of the FIRST_IN_FRAME/LAST_IN_FRAME hints for RANGE windows. In this case, we had an empty logical range (RANGE BETWEEN 99 FOLLOWING AND 51 FOLLOWING), which led us to read too many rows trying to find the first row in range while only the LAST_IN_FRAME hint was updated. This should probably be improved in a separate patch. Meanwhile, for the next current row, we would step back and try to read from the first possible first row in range, for which we would then have no good hint, hence the assert. Change-Id: I4c6689c471bbfd261ec301da8a24e92ceed0b5b1
Dag Wanvik committed Nov 10, 2023
Commit 170a583
Commits on Nov 14, 2023
-
Revert "Bug#35998612 routertest_integration_routing_sharing gtest_rep…
…eat=1000 fails" This reverts commit 6b4f838a45e3592a77f1ac805e1cdc069261d3c0. Change-Id: I09f1567d529ce46cf3353e876210e504d8ee62e3
Commit adc037d
-
Bug#35931702 When binlog_format=MIXED, writeset dependency tracking i…
…s broken because statement part is ignored

When MIXED mode is used for binary logging and a transaction contains a part logged as a statement (a safe query) and a part logged in row mode (an unsafe query), only partial write set information exists for the transaction. Using the write set information for conflict detection could lead to false negatives, and for that reason we limit the usage of write sets for conflict checks to row based logging only.

Change-Id: I55f9094d96afc69b5589c4fa33488a4019d7da6d
Commit 3533c67
Commits on Nov 17, 2023
-
Bug#36004838 An overloaded transporter fails to yield the send-thread
A transporter may be set in an 'overloaded' state if a send attempt failed to send anything on it. That could e.g. be due to the OS-internal send buffers being filled up, possibly because the receiver side is not consuming fast enough / has full buffers as well. The handling of such overload is to let the send-thread take a short 200us yield from the CPU if there is no other send work to be found.

Right before the send-threads yield the CPU, the yield method checks the callback method check_available_send_data() for any late-arrived transporters having more data to send. That is effectively implemented by the send-threads having the flag 'bool m_more_trps', which is set every time a new transporter is entered into the queue with insert_trp(). It is also cleared if get_trp() could not find a TrpId to be returned. The special case that get_trp() had found only a delayed transporter is handled in handle_send_trp(). Here we cleared 'm_more_trps' if we found a delay and handle_send_trp() was part of a send-thread (as opposed to an assist-send). Note that failing to clear m_more_trps would result in the send-thread later not being able to yield the CPU.

For the 'delayed-send with overload' case, however, handle_send_trp() did an early 'return false' without ever entering the code section where m_more_trps was cleared for a send-thread. This prevented the CPU yield in the overload situation.

The patch combines the check for overload with the general check for delay to be canceled or m_more_trps to be cleared.

Change-Id: Ied5cca628e10715510d88350140c7f4ce1a17f37
Commit e975333
-
Bug#35934223 [noclose] spurious failures in ndb.test_mgmd
The "Bug45495" test in testMgmd fails under gcov testing. The test results show hundreds of lines of DEBUG log messages being written while a file_exists() call times out. This patch starts the mgmd servers without the --verbose flag, so that the DEBUG level log messages are suppressed. This patch is intended for both mysql-8.0 and mysql-trunk. A separate patch for this bug addresses failures in other tests and is intended for mysql-trunk only. Change-Id: I904de0a1926f3a151ee21c194659b61b2d19aed2
Commit fc7d304
-
Bug#35934223 Spurious failures in ndb.test_mgmd
Use longer timeouts when calling NdbProcess::wait(). Change-Id: Iee4a33453a6dcb33ab0b500ca30d0758fdf9168c
Commit ef3184a
Commits on Nov 22, 2023
-
Bug#35945415 comp_err: Handle incorrect first and last '"' char of th…
…e error message

Bug#34637697 Incorrect handling of escaped apostrophe sign in utilties/comp_err.cc

Handling of the message line is improved. We check:
- for starting and ending `"` chars,
- that escape characters are valid,
- that the line has no garbage after the closing `"`,
- that embedded `"`s are escaped,
- that the line is not too long,
- that messages to the error log don't contain `\n`.

We output meaningful error messages in each case.

Change-Id: I2e350f65ac25f432641541c71ec77955aa26903e
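A couple of these line checks can be sketched like so (an illustrative validator, not the actual comp_err code; `well_quoted` and `no_newline` are made-up names, and the sketch ignores the corner case of an escaped backslash right before the closing quote):

```cpp
#include <string>

// Hypothetical check: the message line must start with '"' and end
// with an unescaped '"'.
inline bool well_quoted(const std::string &line) {
  if (line.size() < 2 || line.front() != '"' || line.back() != '"')
    return false;
  return line[line.size() - 2] != '\\';  // closing quote must not be escaped
}

// Hypothetical check: messages destined for the error log must not
// contain a raw newline.
inline bool no_newline(const std::string &msg) {
  return msg.find('\n') == std::string::npos;
}
```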
Commit 72836c2
-
Bug#36002814 Test failure in testNodeRestart -n LCPScanFragWatchdog T2
The Qmgr GSN_ISOLATE_ORD handling was modified to handle the larger node bitmap size necessary for supporting up to 144 data nodes. For compatibility, QMGR uses the node version of the sending node to determine whether the incoming signal has its node bitmap in a long section or inline in the signal. The senderRef member of the incoming signal is used, which is set by the signal originator. However, in the context of ISOLATE_ORD, the original sender may be shut down when ISOLATE_ORD is processed, in which case its node version may have been reset to zero, causing the inline bitmap path to be taken and resulting in incorrect processing. The signal handler is changed to use the presence of a long section to decide whether the incoming signal uses a long section to represent the nodes to isolate or not. Change-Id: I1efe6c98c3f342770462d81ae00395b5a2eec7d7
Commit fbc6bf5
Commits on Nov 24, 2023
-
Bug#35996824 Avoid SSL send to stall block-threads doing assist-send
NdbSocket::ssl_recv() and ::ssl_send() could both block on the NdbSocket::mutex, which is intended to prevent concurrent SSL sends and receives. Waiting for that mutex also paused the block threads if they were used to do 'assist_send', thus slowing down general signal processing on the data nodes. The patch changes the NdbMutex_Lock to do a trylock instead and returns TLS_BUSY_TRY_AGAIN if the lock could not be taken. Change-Id: I88dbb18aa1420ab8cf81683e1bbba62b97782c4f
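The trylock idea can be sketched with standard C++ primitives (hypothetical names; the actual code uses NdbMutex_Trylock rather than std::mutex):

```cpp
#include <mutex>

enum class TlsSendResult { kOk, kBusyTryAgain };

// Sketch: instead of blocking on the mutex that serializes SSL send
// and receive, try to take it; if it is busy, return immediately so an
// assisting block thread can get back to signal processing instead of
// stalling behind another thread's SSL operation.
TlsSendResult ssl_send_sketch(std::mutex &ssl_mutex) {
  std::unique_lock<std::mutex> guard(ssl_mutex, std::try_to_lock);
  if (!guard.owns_lock()) return TlsSendResult::kBusyTryAgain;
  // ... SSL_write() would run here, under the lock ...
  return TlsSendResult::kOk;
}
```

The caller treats kBusyTryAgain like a transient "would block" condition and retries the send later, just as it would for a full socket buffer.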
Commit 5cc7af5
-
Bug#35997178 - Removed the tool pathfix.py which is deprecated in Fed…
…ora 39 platform for Python 3.12 version Change-Id: I15de72410d30b5893ea070903403dee4e0cbe6a4
Commit 0b345ea
-
Bug#35945223 Errors on Gtid_log_event::do_apply_event might refer an …
…old GTID

Issue description
-----------------
When an event with a GTID is processed, its GTID is checked for format, permissions and more before it is assigned to the THD->gtid_next attribute. During those steps an error can be detected, for example a wrong format or lack of permissions. When that error is reported, gtid_next from a previous transaction is used, as the current one is not yet assigned to THD->gtid_next.

Analysis
--------
Errors are reported by the rpl_rli_pdb.cc Slave_worker::do_report function, which uses the current THD->gtid_next value.

Proposed solution
-----------------
Introduce a way to pass the GTID to the Slave_worker::do_report function, so that the GTID currently being processed can be used in the function before it is assigned to the THD->gtid_next value.

Change-Id: I052c98854781a61bdb4f41e603374a3ed8f8c800
Commit 4e93187
-
Bug#35997178 - Removed the tool pathfix.py which is deprecated in Fed…
…ora 39 platform for Python 3.12 version [post-fix] Change-Id: I8c877921e52d5cb257862828d55ce4467d2169cc
Commit a308906
-
WL#15116 Remove the memcached plugin
The memcached library bundled with MySQL has been deprecated in 8.0. The library is removed in this patch.

This patch removes:
- the plugin/innodb_memcached source directory,
- any CMake reference to it (hence WITH_INNODB_MEMCACHED is no longer a known option),
- all special code paths and workarounds under storage/innodb and sql/ that were marked to exist specifically to support the memcached InnoDB plugin, including parts of the InnoDB API,
- all tests that used memcached.

Some parts of the InnoDB API that are no longer used are not removed, as determining which parts were used only by memcached is made difficult by InnoDB itself using calls to the API internally. This task is left to a more general refactoring -- since there is no other user of the API external to InnoDB, it should probably not be called an API.

Change-Id: Ia60d77d2e898e63d79afae89ddc2c7a09bdb18b8
Commit 82dcbf0
-
BUG#35940509: "memory/group_rpl/Gcs_message_data::m_buffer" reports n…
…egative values in SPM

Group Replication (GR) relies on the Group Communication Service (GCS) to exchange messages with the group and be informed of the group membership. When there is an event from GCS, for example a message or a membership update, the GCS delivery thread calls the respective handler on GR so that GR pursues the required actions.

The GCS delivery thread is not initialized with a MySQL context. Some operations done in the GR event handlers do require a MySQL context on debug builds; the context is required for the debug information provided by the performance schema instrumentation framework. The operations that require a MySQL context on debug builds are:
* GR member actions load and store;
* asynchronous replication channels failover configuration load and store;
* debug flags for conditional execution.

To allow these operations, a temporary context was created, that is, they were surrounded by:
```
my_thread_init();
...
my_thread_end();
```
These operations only happened in single-primary mode. In 8.X.0 it was noticed that this temporary MySQL context was causing side effects; more precisely, it was disabling the memory consumption tracking inside GCS.

To solve the issue on 8.X.0+, the following changes were done:
1) The temporary MySQL context creation was removed from the GR event handlers.
2) A single MySQL context is created for the entire lifetime of the GCS delivery thread if GCS is built together with the MySQL server.
3) Moved the `gms_listener_test` test to run on a thread which already has a MySQL context. The original `gms_listener_test` was also creating a temporary context.

Change-Id: I59c481cde76bc3995e585eeb130b53676c25bd80
Commit 167d4f9
-
Bug#36008340 Serialization library does not allow to write / read unb…
…ounded string

Problem
-------
The serialization library does not allow encoding / decoding an unbounded string.

Analysis / Root-cause analysis
------------------------------
The same write / read function is used to encode / decode both unbounded and bounded strings. The Primitive_type_codec functions should allow writing / reading an unbounded string, whose defined size is 0.

Solution
--------
Relaxed conditions in the Primitive_type_codec methods to allow reading and writing of std::string with field_size equal to 0.

Change-Id: I0052bd8041607c8cf880734f6487811c94a6fed2
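The relaxed size condition can be illustrated as follows (a simplified stand-in; `fits_declared_size` is not the real Primitive_type_codec API):

```cpp
#include <cstddef>
#include <string>

// Hypothetical predicate mirroring the relaxed check: a declared
// field_size of 0 now means "unbounded", so any length is accepted;
// a non-zero field_size still bounds the encoded string.
inline bool fits_declared_size(const std::string &value,
                               std::size_t field_size) {
  return field_size == 0 || value.size() <= field_size;
}
```

Before the fix, a field_size of 0 effectively rejected every non-empty string instead of meaning "no bound".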
Commit 185b0f1
-
WL#15116 Remove the memcached plugin
Post-push fix for build failures. The plugin file called innodb_engine.so was part of the memcached plugin. This patch removes references to it. Also fixes a date in packaging/rpm-fedora/mysql.spec.in. Change-Id: Id1e5c6959b64a0ca9cc2ca85ccd08059b243e7a1
Commit 287dc40
-
Bug#35982564 Heap buffer overflow on NDB_SHARE_KEY DBUG_DUMP
Problem: Running an ASAN build with DBUG_TRACE calls (--debug), a heap buffer overflow is detected in NDB_SHARE::create_key().

Analysis: In NDB_SHARE::create_key(), the DBUG_DUMP of the `m_buffer` field of NDB_SHARE_KEY is given the size `size`, which amounts to sizeof(NDB_SHARE_KEY) + buffer_size. Hence, DBUG_DUMP is called with sizeof(NDB_SHARE_KEY) extra bytes, caught by the ASAN build.

Solution: Since the DBUG_DUMP is not necessary (a DBUG_PRINT of the buffer is already present a few lines before), it is removed.

Change-Id: I649d66e2aa87951ada0483208b95e9757a97ebae
Commit fb8aacd
-
WL#15294 Extending GTID with tags to identify group of transactions -…
… step 1b Post-push fix for broken build with clang 16 variable_length_integers-t.cc:66:31: error: not a Doxygen trailing comment [-Werror,-Wdocumentation] << " bytes, "; //<< std::endl ^~~~ ///< Change-Id: I6ae337b5c18aef7e59cdde9d3b611c9d011f370e
Commit bf2ed34
-
Bug#35839977 testNodeRestart -n Bug34216 fails randomly
Another attempt to fix this test failure. We need to also know which tableId we should crash on when committing a ZUPDATE. The previous (failed!) patch is extended to use insertError2InNode() to insert the error code as well as the tableId to crash on as 'ERROR_INSERT_EXTRA'. Dblqh::execCOMMIT() is enhanced to also check that tableId == ERROR_INSERT_EXTRA before CRASH_INSERTION(5048). Change-Id: If4e453bdf7400a4439465e3e0c0ad9ef7f91a46e
Commit 089f933
-
Bug#35938286 Class Multi_Transporter is-NOT a Transporter
Post push patch, removing Multi_Transporter as a 'friend' Change-Id: I92a810dd04ebad203832332b6da685cfd3660dcb
Commit 8825172
-
Bug#36009860 Ndb Tls : close vs shutdown race
There was a possible race between start_clients_thread() and update_connections(), both seeing the same transporter in state DISCONNECTING. The design intention was that start_clients_thread() should shutdown() the transporter, such that Transporter::isConnected() became 'false'. Upon seeing the !isConnected() state on a DISCONNECTING transporter, update_connections() should 'close' the transporter.

However, as the start_clients_thread() -> doDisconnect() code path sets 'm_connected = false' before doing the disconnectImpl() -> shutdown, that opened up a race where update_connections() could see the transporter as 'not-isConnected()' and thus close the transporter before it was even shut down. With SSL enabled, that resulted in the SSL_context object being released by ::ssl_close() and later referred to by ssl_shutdown(). Without SSL enabled we still broke the intended call order of socket close vs shutdown.

The patch ensures that disconnectImpl() is now completed before we set the m_connected state to 'false'. Thus update_connections() will not be able to close the NdbSocket before it has been completely shutdown().

Note that this raises the theoretical possibility, in the TransporterRegistry code paths ::poll*() and ::performReceive() where we check 'if (is_connected(trp_id) && t->isConnected()) {', of seeing a Transporter as 'isConnected()' when disconnectImpl() has been performed, with 'm_connected' not having been set to 'false' yet. However, it seems that the above check is too strict: when the transporter 'protocol' for doing disconnect is followed, there will not exist cases where 'performState[] == CONNECTED' and 'not isConnected()'. Note that doDisconnect(), which sets 'm_connected = false', is only called from start_clients_thread() when the transporter is in state DISCONNECTING or DISCONNECTED. Thus the above assumption should be safe.

Change-Id: I9444ae74b93149deed8f4197809f6dd8dce44158
Commit 8d4e9fc
-
Bug#33725447: A bug in handler.cc cause mysql-server crash
when input some sql statement When a shared materialized table for a cte is marked for re-materialization, because it is a lateral derived table and the counters for the invalidators do not match for the second reference to the cte, it clears all references to the cte. This resets the cursors positioned for the table which materialized the cte table, leading to problems when reading from that table. More detailed analysis in the bug page and the test file (w.r.t. the failing query). The fix is to not re-materialize the cte table when we are using the shared materialized derived table. We also update the invalidator counters while at it. Change-Id: I5f9daeab03fee93354c0253fd55865f1a6a1e32c
Commit 31f177f
Commits on Nov 27, 2023
-
Bug#35938286 Make get_send_transporter() a method only implemented by…
… Multi_Transporter. Post push patch fixing a regression where TransporterRegistry::get_bytes_received() returned the number of 'sent' bytes instead. Change-Id: Icac3ed5d4014ef6f36b31ee4e4c8795cc2ecc601
Commit 92ee85a
-
Bug#35208990 MySQL 8.0.32U1/Cloud reports wrong results (select)
post push patch to add comments Change-Id: I99ad8a65682eefec6b49f46b022e0e98f64442ba
Commit 85ae9c7
-
Bug#35710213: mysql server 8.1.0 failed at Item_field::used_tables
The problem is in the function JOIN::destroy(), where we call update_used_tables() to reinstate used tables information after execution of a statement that has one or more const tables, and has thus been subject to const table elimination. During this, we may encounter a table, such as a special information schema, that has already been closed and subsequently has had its table pointer (in class Table_ref) set to nullptr. The solution to this is to delay the refreshing of used table information to the start of the next execution, just after tables have been opened and we know that all table objects are in a proper state. The performance impact of this seems to be negligible. Also added some necessary code to prevent that uninitialized command properties were copied in the case of as yet unprepared statements. Change-Id: Ib44ab2cf1e61c3b188e7ceb000adb1445c48979a
Commit 7bab360
-
WL#15294 Extending GTID with tags to identify group of transactions -…
… step 7 Post-push fix for broken RelWithDebInfo build with gcc 12 and 13 inlined from 'test_generated_gtids' at unittest/gunit/group_replication/group_replication_certifier_auto-t.cc:68:22: include/mysql/psi/mysql_rwlock.h:261:45: error: 'tsid_map_lock' may be used uninitialized [-Werror=maybe-uninitialized] Change-Id: Ibd85fbac90c2de1acbfd62ce69aaa7ae493ee2f3
Commit f2638cd
-
Bug#35990372 Wrong index used in ConfigValues::openSection()
Context: Some NDBAPI tests use the ConfigValues::openSection() function in order to get/set config parameters. openSection() takes as arguments the section 'type' (SYSTEM, NODE, CONNECTION ...) and the index of the section we are interested in. Section indexes are sequential, starting from zero, and are ordered by nodeId, BUT nodeId and index can be different.

Problem: In some tests, openSection() is used with wrong arguments (the nodeId is used instead of the index). Depending on the configuration, this can lead openSection() to open a wrong/invalid section.

Solution: Tests changed to always pass the index instead of the nodeId to openSection(). In particular, when iterating over all sections looking for a parameter, iteration now starts from index 0, preventing any section from being ignored.

Change-Id: Ib783490b651de9d4ae2627b5b3a3661530495318
Commit ea96c9c
-
Bug#35400142 - Binary TAR/ZIP files contains duplicated file "INFO_SRC"
Bug#35529968 - Hide the "CPACK_COMPONENT_GROUP_INFO_DISPLAY_NAME" option in the MSI installer

The INFO_SRC file was part of two CMake components, "Readme" and "Documentation", which caused it to be included twice in the TAR and ZIP. With some unpacking tools, this triggered an unnecessary prompt asking whether to overwrite or not. Also, the "Info" component is always to be installed.

Change-Id: I9648b586cdd063f0319ae028ebb3636ed2344159
Commit 23a5d8a
-
Bug#36008973 [HCS]Bulk load from many threads crashes server
- This is a regression caused by the fix for Bug#35889669 (bulkload doesn't provide innodb stats, hence autodb gives incorrect estimates).
- In dict_stats_exec_sql(), don't roll back the trx that was passed to it in error situations.
- Add a retry mechanism if the statistics update fails because of DB_LOCK_WAIT_TIMEOUT.

Change-Id: I6591633cb9a5fd87d2a74972b39287cfadaf31dc
Commit a33beff
-
Bug#35919431: rpl_async_conn_failover_interim_add_sender failing on
PB2 weekly-trunk

Issue:
======
When the asynchronous replication connection failover is enabled, the Monitor IO thread repeatedly tries to connect and sync the source members' details. Before executing queries to fetch group member details, it runs a test query to check the connection, and if the test query fails it logs an error and retries later. The issue for this bug is that the test query fails but mysql_error() returns an empty error message, due to which the rpl_async_conn_failover_interim_add_sender testcase fails with 'Found warnings/errors in error log file'.

Solution:
=========
Log 'Unkown error message' when an empty error message is returned by mysql_error(). Add another mtr suppression in the rpl_async_conn_failover_interim_add_sender testcase to handle the new error message.

Change-Id: Icc0f8d6bac0565c29048eaefb583c00ac64bd7c0
Commit 204b663
-
BUG#36027902 sys.diagnostics uses deprecated syntax
The view sys.diagnostics uses a deprecated SQL syntax, declaring a DECIMAL as UNSIGNED. This fix simplifies the SQL syntax to use only a plain DECIMAL type. Change-Id: I4d0680136a94ba3ac9fefc6c0ab43a342dde80f8
Commit fc98320
-
Combined push of the following:
commit f5a8fec5c87ab6e46b1ca5f80c14b2441f28cce1 Author: Duarte Patricio <duarte.patricio@oracle.com> AuthorDate: Tue Oct 24 16:20:12 2023 Commit: Duarte Patricio <duarte.patricio@oracle.com> CommitDate: Tue Nov 21 13:12:13 2023 Bug#34199339 Replica fail for keyless table using BIT and HASH_SCAN commit 4013169c7f54c6d5ff2262dd1fad24497005663f Author: Duarte Patricio <duarte.patricio@oracle.com> AuthorDate: Mon Nov 6 14:16:27 2023 Commit: Duarte Patricio <duarte.patricio@oracle.com> CommitDate: Tue Nov 21 13:12:13 2023 Bug#34199339 Refactor unpack_record for readability Change-Id: I36ca00f1941b8482d7cd710f8f483fd200eee200
Commit b73de27
-
Bug#35940325 Add support for PGO build of MySQL server on Windows
Problem:
When attempting to build the instrumented version of the MySQL server (i.e. with the -DFPROFILE_USE=1 CMake option), the .obj files produced by the Visual Studio 2019 C++ compiler have grown to the point that linker error LNK1248 is encountered while building the sql_main library. Although it would be possible to work around this issue by excluding individual files from the PGO build (by undoing the use of the /GL compiler option on those files), the number of affected files is becoming unwieldy and excluding them could hurt the overall optimization.

Solution:
The /Z7 compiler option has been used up to now. This option embeds debug symbol information in the .obj files; it can be replaced by the /Zi option, which stores debug symbol information separately from the .obj files, in a .pdb file. Using /Zi (for PGO builds only) reduces the size of the .obj files enough that it is no longer necessary to exclude any library or individual file from the /GL PGO compiler option.

Change-Id: I84d6fad32310bf9dddfd9d93e5fe0d2b9b6ae0ab
Commit: 06092ea
WL#15730: Lakehouse full query support
- Provides random scan and parallel scan implementations for Lakehouse tables so the MySQL query engine can be used as a fallback for queries not supported by the Rapid secondary engine.
- Issues a SELECT query on Rapid every time a scan needs to be performed.
- For random scan support (general MySQL queries), a separate thread is spawned to process the results from the Rapid secondary engine. This enables pipelining between parsing of results and feeding the MySQL query engine with rows. If the MySQL query finishes the random scan, the Rapid query is aborted.
- Implements the modules for basic accesses to a Lakehouse table by processing the next record or a stored record.
- Adds performance schema statistics per query and per Lakehouse table: for each query offloaded to Lakehouse, show how many scans it performed and any failure message; similarly, for each loaded Lakehouse table that participates in a query offloaded to Rapid, show the number of started, successful, failed, and aborted scans.

Change-Id: If54204be287bc689d757dd011f2b5e1bbb124fa4
Commit: 63560bf
Bug#36002364 - Fix missing code coverage issues
List of changes:
1. Add missing linker flags when compiling in code coverage mode.
2. In safe_process.cc, attempt to shut down child processes gracefully with SIGTERM. This allows children to dump their coverage information. A SIGKILL is still issued if the process has not terminated within the given timeout.
3. Dump mysqld code coverage information upon termination with SIGTERM and SIGQUIT.
4. Dump rpdserver code coverage information upon termination, and additionally invoke the signal handler for SIGTERM and SIGINT.
5. Tidy rpdserver.c includes according to 'include-what-you-use'.
6. In rapid.cp_mem_usage, wait until the cluster is formed before finalizing the tests, in order to avoid test failures. This is already done in other tests of a similar nature.

Change-Id: I9e9196294c708128fe5eb47f006f75260d4deaf8
Commit: 1e10a6c
Bug#35654240: A Recursive CTE Causing MySQL Server Crash
Problem:
Right-nested UNIONs are not supported for recursive query blocks in CTEs, and a check for this is missing. In WL#11350 we started allowing right-nested set operations; the missing check resulted in a crash.

Solution:
If there are right-nested UNIONs of recursive query blocks that cannot be safely flattened, we throw an error indicating that the operation is not yet supported.

Change-Id: I8ac01f91bf63e81fe7393179a3d01f9ba1eaa4e2
Commit: 38a9ddf
BUG#36010273: Use standard header <bit> instead of own implementation
* Background
The serialization library created in WL#15294 backported the following bit operation functions from C++20: countr_zero, countr_one, countl_zero, bit_width.

* Problem
Now that we have moved to C++20, we should use the standard implementations instead. There was a small difference between bit_width in C++20 and the custom implementation, and our code in get_size_integer_varlen_unsigned made use of that difference, so it needs to be adjusted.

* Solution
Remove the custom implementation and use the standard one. Adjust get_size_integer_varlen_unsigned to work with the standard implementation. Changed to use C++20 concepts instead of SFINAE in variable_length_integers.h.

Change-Id: I25de12a448e824b4863a529290504f502869675a
Commit: de64173
Bug#34063709: Lack of error while revoking nonexistent privilege.
Post commit fix: Corrected setting original_server_version variable to solve test failures with --mysqld=--skip-log-bin. Change-Id: Iec8fb2326987bf1085ccab0be80c6769a9f07cbe
Commit: e8a0486
Updated copyright year in user visible text
Approved-by: Balasubramanian Kandasamy <balasubramanian.kandasamy@oracle.com> Change-Id: I8e86c48217b407d7690af86c212fc2f95fd78ffe
Commit: 94a9874
Bug#36036725 'DO 1;' forbidden by rw-splitting
"DO 1;" is treated as a multi-statement, which is currently forbidden by rw-splitting.

Change
======
- Allow trailing semicolons after a single query.

Change-Id: If5d389b5381c63313519ca15d5eb8349a4eb9d4c
Commit: 1f3c361
Bug#36029117: MYSQL_FIELD def no longer used
Removed def from MYSQL_FIELD. Removed some left-over internal arguments, data structure members and functions from libmysql. Cleaned up mysql_client_test to not use def. Change-Id: Ie95aa8bd52e051044ace28f1d50d85fda3518a60
Commit: a32b194
BUG#35998554: get_group_member_info_*() does not distinguish member n…
…ot found from error It was discovered that some methods from `Group_member_info_manager` class did return the same value for distinct situations, which could be masquerading errors. One case is ``` Group_member_info * Group_member_info_manager::get_group_member_info_by_member_id( const Gcs_member_identifier &id); ``` that does return a allocated copy of the `member_info` when the member exists on the `Group_member_info_manager`, or returns `nullprt` when the member does not exist. Though `nullptr` can also be returned when the memory allocation of the copy fails, which means that the method caller is unable to distinguish: * member not found; * no memory available to construct the object. On both situations the method caller will interpret `nullptr` as member not found which can leverage incorrect decisions. The same pattern exists on the methods: * Group_member_info_manager::get_group_member_info() * Group_member_info_manager::get_group_member_info_by_index() * Group_member_info_manager::get_group_member_info_by_member_id() * Group_member_info_manager::get_primary_member_info() To solve the above issue, the four methods were refactored to: 1) return a boolean value to state if the member was found; 2) receive a out parameter that is a local reference to the method caller that is updated when the member is found. The use of the reference eliminates the allocation of the memory. Change-Id: Ic10263943fc63f40cdda506a59f10962aa208d92
Commit: cfeaa26
Bug#33800633 startChangeNeighbour problem
Revert of the original patch. Testing with debug binaries has revealed problems hitting several different asserts related to handling of the TrpId queue mechanism. The likely cause is that get_trp() seems to deliver the same TrpId to two different assist-send threads: one where the TrpId is delivered through the neighbour priority mechanism, and another where it is delivered as a non-neighbour. If this happens it breaks the protection mechanisms of the transporter buffers, where only a single thread is assumed to operate on the same TrpId.

The root cause may be that the patch now being reverted called insert_trp() on transporters having data to send as part of startChangeNeighbourNode(), making it a non-neighbour TrpId awaiting sends. However, the later call to setNeighbourNode() may reinstantiate the same NodeId as a neighbour again. Thus the same TrpId could be returned both as a non-neighbour and as a neighbour, and be operated upon by two different threads.

Change-Id: Icada4600e008d551e97af2b31abd070f5e4ac68a
Commit: 7d3791a
Bug#36005903 mgmapi failure to parse bindadress
Bug#36018640 Test UnresolvedHosts2 passes but ndbd aborts and dumps core (Merge mysql-trunk => mysql-8.3.0) Change-Id: I6c65d7b4208a026cfc39e48fe34dbc3dcd7ea002
Commit: 5151820
WL#15752: Add statement type "DDL" to transaction tracking facility -…
… missing DDL statements post-push fixes Change-Id: I1d4135d18a6b06b6b76ce56bdabfb35ebc50dada
Commit: 78c835a
BUG#35949017 Schema dist setup lockup
Bug#35948153 Problem setting up events due to stale NdbApi dictionary cache [#2]
Bug#35948153 Problem setting up events due to stale NdbApi dictionary cache [#1]
Bug#32550019 Missing check for ndb_schema_result leads to schema dist timeout

Change-Id: I4a32197992bf8b6899892f21587580788f828f34
Commit: 54d003b
Bug #35449266 - An ALTER TABLE query corrupted the data dictionary so
mysqld crashes

Background:
Column names are case-insensitive in MySQL. When "ALTER TABLE .. ADD COLUMN .. ALGORITHM=INSTANT" is triggered, a new row version is created for the record. This version is stored on disk with the record, and the column's metadata is updated in the storage engine (se_private_data). When "ALTER TABLE .. CHANGE COLUMN" is triggered, we check whether the column is renamed, and the old column's metadata is copied to the new column.

Issue:
While checking whether the column is renamed, we compare the name of the column in the create table info with that in the alter table info. But this comparison is case-sensitive, which causes c1 to differ from C1. Thus get_renamed_col() returns a nullptr, i.e. the column rename is not detected, which further leads to skipping the update of the physical position of the newly renamed column. This is why an assertion fails in debug builds, where a renamed column should already have its physical position updated.

Fix:
Compare the column names while ignoring case.

Change-Id: Ic30b183666da466a553cea45eb7028be3c2a36bb
Commit: c6de271
Bug#35115601 InnoDB - Page [...] still fixed or dirty crash during
shutdown

Post-push test fix: increased the size of the table to ensure a second call to insert_direct. The second call to insert_direct is required to test the error path in the DDL.

Background:
Builder::insert_direct calls m_btr_load->latch and m_btr_load->release. The call to release will buffer-fix a page. The corresponding call to latch will unfix the above page provided m_n_recs is non-zero. This means that the first call to latch will not do anything. In the first call to insert_direct, latch will do nothing, but release will buffer-fix the page. In the second call to insert_direct, the test will simulate an error in the online log and force the DDL to error out. This way the test checks the cleanup code in the DDL. In the bug, this cleanup had missed the latch, causing the page to remain buffer-fixed - which was caught at shutdown.

Change-Id: I70db8539c42bb4c1761b56cca5dadf814b2c5020
Commit: a9acf66
Bug#35889261: Error when executing prepared statement containing UDF
Item_param::fix_fields(): set data type MYSQL_TYPE_NULL when a parameter has the NULL value, instead of skipping setting the data type.

Item_param::val_str(): set null_value for the result string when the parameter has the NULL value.

udf_handler::fix_fields(): add a shortcut for is_in_prepare and propagate the setting of the data type for a parameter.

In Item_param::copy_param_actual_type(), apply the type and value modifications to clones too.

Change-Id: I1039600ec957df7b0a2becbce8c8bd076a43a8a8
Commit: 2f30f85
Bug#36042078 connection fail after fallback to primary
When the client connects via the rw-splitting port and sends a read-only statement, but the destination isn't @@super_read_only, the router will fail over to the primary. Currently, this results in:

mysql> select @@PORT;
ERROR 2013 (HY000): Lost connection to MySQL server during query
No connection. Trying to reconnect...
Connection id: 0
Current database: *** NONE ***
Query OK, 0 rows affected (0.09 sec)

The SELECT should return a result, but an "OK" is returned instead.

Background
==========
In the fallback-to-primary scenario, the router currently sends _two_ Ok packets where it should send _one_:

/* ( +9 us) */ connect::fetch_user_attrs::done
/* ( +57 us) */ c<-r io::send // Ok - wrong
/* ( +5 us) */ connect::check_read_only::failed
/* ( +119 us) */ connect::pooled
/* ( +5 us) */ connect::fallback_to_write
...
/* ( +6 us) */ connect::fetch_user_attrs::done
/* ( +41 us) */ c<-r io::send // Ok
/* ( +10 us) */ connect::set_server_option::done
/* ( +8 us) */ greeting::auth::done

Change
======
- Don't send the Auth::Ok early; send it only at the end of the connect, when no failover to another host can happen.
- Removed the "super-read-only" check, as PRIMARYs may be reported as "read-only" from metadata (which doesn't have super-read-only set).

Change-Id: I77bf4ca25cd6abf101dcd23d76781a453ea699b5
Commit: b169e60
Bug#35208273 Add base table name to iterator-based EXPLAIN FORMAT=JSON.
Bug#35337193 Add used columns to iterator-based EXPLAIN FORMAT=JSON.
Bug#36027152 Add schema name to iterator-based EXPLAIN FORMAT=JSON.

Other EXPLAIN JSON format changes:
- Change the "table_name" field to be the name of the base table instead of the alias.
- Add an "alias" field to hold the alias that was previously in "table_name".

Change-Id: I1c0ac1c09b3f859f7edbca86c19f2fb0351936a2
Commit: 4b0160b
Bug #36044149 Extend support for RPM builds with PGO to EL8+
Change the rhel condition from == 7 to >= 7. For 8.3+ this condition can be removed since EL6 is not supported.

Add DISABLE_MISSING_PROFILE_WARNING() to libfido2 as it otherwise fails to compile in the second phase. This was not needed before since libfido2 does not build on EL7.

Change-Id: Id2bcf30bdcc073e3257a83aa4edc5367c7f27f2f
Commit: 4b62660
Bug#36017204 MTR tests for SELECT fail with ICP, MRR, possibly other …
…flags

Recent versions of Clang have changed their implementation of std::sort(), and our own 'varlen_sort()' function returns wrong results. The result is that some of our .mtr tests using the MRR strategy are failing.

The fix is to remove the usage of std::sort() and implement our own sorting algorithm instead.

Change-Id: Iec35400503309c026766d5b2f10b1e32e2e7a319
Commit: b3daae0
Approved by: Erlend Dahl <erlend.dahl@oracle.com>
Commit: 8689567
Approved by: Erlend Dahl <erlend.dahl@oracle.com>
Commit: a1b8e20
Bug#36008973 [HCS]Bulk load from many threads crashes server
Post-push fix:

storage/innobase/dict/dict0stats.cc:2241:7: error: ret may be used uninitialized [-Werror=maybe-uninitialized]
 2241 |   if (ret == DB_SUCCESS) {

Change-Id: Id0448b37e9182ef60ff98724cd51177b8e445aa5
Commit: e6f067c
Commits on Dec 8, 2023
Bug#36008133 Assertion `false' failed in
replace_embedded_rollup_references_with_tmp_fields

Regression caused by Bug#35390341 "Assertion `m_count > 0 && m_count > m_frame_null_count' failed". That issue involved setting a user variable inside the argument of a window function, which in turn was evaluated using the window frame buffer (Window::m_needs_frame_buffering) although row optimizable (Window::m_row_optimizable). The first solution had an unfortunate side effect, as seen in the present bug: here, the window function doesn't have an argument containing the setting of a user variable; it is the other way around: the setting of a user variable requires the value of a window function.

Solution: refine the criterion for when we evaluate the setting of a user variable before windowing, to only include the case of a window function containing a setting (Bug#35390341).

Change-Id: Idc6824adf4bd123775a14b92bfe54824acf105c8
Commit: fe73560
WL#16085 Record usage of information_schema.processlist
To get an impression of the usage of i_s.processlist, we add status counters to the mysqld server to keep track of how many times i_s.processlist is used in a query, and the last timestamp when it was used. We also add a thread in the health monitor component that prints the variable values to the error log once an hour, provided the usage is > 0. This enables us to use log searching in MHS to see how frequently i_s.processlist is used.

Change-Id: Ia1060151cd265fd81d37e88c5c9d3d30c703bae9
Commit: e003060
WL#15951: Lakehouse error infrastructure improvements Phase 1
Bug#36021695 - WL#15951: Missing error message when a pattern is returning a csv file to load but "dialect":{"format": "parquet"} is provided

* Added error code to the error messages struct, updated interface functions.
* Added new lakehouse error codes and messages to mysql's messages_to_clients.txt.
* Parquet and Avro now use the same codes and messages for errors they share with CSV.
* Removed the CanLogWarning function and moved the check into AddWarning.
* All datalake tests have been re-recorded to include the new error codes.

Change-Id: Ib17d631cacf843d0ec744ffb2c27d73e842529a5
Commit: 343e91b
Bug#35738548: SEGV (Item_ref::real_item() at sql/item.h:5825)
found by fuzzing tool
Bug#35779012: SEGV (Item_subselect::print() at sql/item_subselect.cc:835) found by fuzzing tool
Bug#35733778: SEGV (Item_subselect::exec() at sql/item_subselect.cc:660) found by fuzzing tool
Bug#35738531: Another SEGV (Item_subselect::exec() at sql/item_subselect.cc:660) found by fuzzing tool

If an Item_ref object is referenced from multiple places in a query block, and the item that it refers to is also referenced from multiple places, there is a chance that while removing redundant expressions we could end up removing the referenced item even though it is still being referred to. E.g. while resolving the ORDER BY clause, if the expression is found in the select list, the expression used in ORDER BY is removed and it starts using the one in the select list. When this happens, while removing the redundant expressions from the query block, if the select expression is an Item_ref object, on the first visit to this expression we mark the object as unlinked. On the second visit, this time because of the ORDER BY, the object is already marked as unlinked, so we exit the traversal without doing anything. However, when the item it refers to is visited, it does not know that the item is still being referred to, so it ends up deleting the referenced item.

The solution is to decrement the ref_count of an item without initiating the cleanup of the underlying item unless it's the last reference (this necessitated changes to all implementations of clean_up_after_removal). Along with this we also remove the m_unlinked member because it is no longer needed. If the underlying item of an Item_ref object is not abandoned, we decrement the ref count and stop looking further.

Change-Id: I4ef3aaf92a8c0961a541dae09c766929d93bb64e
Commit: 386adb9
Bug#36076513: Refactor LEX to use getter/setter instead of public member
Add a getter and a setter to better encapsulate LEX::using_hypergraph_optimizer. Change-Id: I56467ccedbe929fa18961cc37d7990febd77bdc9
Commit: 0241d96
Bug#35208273, Bug#35337193, Bug#36027152
Post-push fix: clean up dangling garbage pointer Change-Id: I1c0ac1c09b3f859f7edbca86c19f2fb0351936a2
Commit: 3270e79
Bug#36086236 Enable and use correct toolset for rpm builds on el* pla…
…tforms
Bug#36054662 With PGO enabled, build fails to produce commercial-debuginfo RPM on EL8
Bug#36072667 EL8 RPM "libmysqlclient.a" exports no symbols

strip and dwz from /usr/bin, as used by rpmbuild, might break object files and binaries produced by GCC Toolset on el* platforms. See for example: https://sourceware.org/bugzilla/show_bug.cgi?id=24195

Point to the newer strip and make sure the corresponding dwz tool is available (set $PATH correctly to let the script use it). Post-processing in rpmbuild by brp_strip_static_archive runs "strip -g", which breaks the static archive on el8 even if the newer strip is used, so disable this processing completely.

Change-Id: I681fd2bc3a7556d09fdc9e77357d779fc8c7d336
Commit: 9fa9ff6
Commits on Dec 12, 2023
Bug#36028828 testNodeRestart -n MixedReadUpdateScan failures
Testcase failure exposed a regression from the fix to Bug#22602898 NDB : CURIOUS STATE OF TC COMMIT_SENT / COMPLETE_SENT TIMEOUT HANDLING.

Node failure handling in TC performs one pass through the local active transactions to find those affected by a node failure. In this pass, all transactions affected by the node failure are queued for processing (e.g. rollback, commit, complete) via e.g. the serial abort/commit or complete protocols. The exceptions are transactions in transient internal states such as CS_PREPARE_TO_COMMIT, CS_COMMITTING, CS_COMPLETING, which are then followed by stable 'wait' states such as CS_COMMIT_SENT, CS_COMPLETE_SENT. Transactions in these states were handled by doing nothing in the node failure handling pass, relying on the timeout handling in the subsequent stable states to queue the transactions for processing.

The fix to Bug#22602898 removed this stable state handling to avoid it accidentally triggering, but also stopped it from triggering when needed in this case, where node failure handling found a transaction in a transient state. This is solved by modifying the CS_COMMIT_SENT and CS_COMPLETE_SENT stable state handling to also perform node failure processing if a timeout has occurred for a transaction with a failure number different from the current latest failure number. This ensures that all transactions involving the failed node are handled eventually.

A new testcase testNodeRestart -n TransientStatesNF T1 is added to the AT testsuite to give coverage.

Change-Id: I0c0d4b6f75a97a3a7ff892cc4eafd2351491a8ff
Commit: 76d2002
Bug#36066725 Regular mgmd hangs when sending it a stop node for ndbmtd
Root cause is that the mutexes 'theMultiTransporterMutex' and 'clusterMgrThreadMutex' are taken in different order in the two respective call chains:

1) ClusterMgr::threadMain() -> lock() -> NdbMutex_Lock(clusterMgrThreadMutex)
   - ::threadMain(), holding clusterMgrThreadMutex -> TransporterFacade::startConnecting()
   - TF::startConnecting -> lockMultiTransporters()  <<<< HANG while holding clusterMgrThreadMutex

2) TransporterRegistry::report_disconnect() -> lockMultiTransporters()
   - ::report_disconnect(), holding theMultiTransporterMutex -> TransporterFacade::reportDisconnect()
   - TF::reportDisconnect -> ClusterMgr::reportDisconnected()
   - ClusterMgr::reportDisconnected() -> lock()
   - lock() -> NdbMutex_Lock(clusterMgrThreadMutex)  <<<< Held by 1)

The patch changes TransporterRegistry::report_disconnect() such that theMultiTransporterMutex is released before calling reportDisconnect(NodeId). It should be sufficient to hold theMultiTransporterMutex while ::report_disconnect checks whether we are disconnecting a multi-transporter, and whether all its Trps are in DISCONNECTED state. When this is finished we have set up 'ready_to_disconnect' and can release theMultiTransporterMutex before -> reportDisconnect().

Change-Id: I19be0d9d92184efb8f20a92aa7189b9b85f069bc
Commit: 3a6acfd
Commits on Dec 14, 2023
Bug#35913841: Queries from stored procedures not offloading to second…
…ary engine when we have OUT arguments

Before this patch, RAPID could not handle queries of the type SELECT ... INTO <list of variables>. The reason was that these kinds of queries were set up with a special Query_result interceptor (Query_dumpvar), whereas regular SELECT queries were set up with Query_result_send, which was substituted on the RAPID side with a special protocol adapter. However, RAPID has also implemented another protocol adapter (an Item wrapper), which is used for INSERT INTO and CREATE TABLE ... SELECT statements. It was noted that this adapter could also be used for SELECT INTO, where the Item wrapper is used as an intermediary for Query_dumpvar.

Two new property functions on class Query_result are implemented: use_protocol_adapter() returns true for Query_result_send and use_protocol_wrapper() returns true for Query_dumpvar. An alternative implementation may check these functions for when to use adapters resp. wrappers instead of the original Query_result classes. The function is_interceptor() is no longer used and is hence removed. We identify this new requirement in IsSupportedProtocol(), where we explicitly allow Query_dumpvar and Query_result_send query results, and in CreateQEP(), where we create the Item wrapper for Query_dumpvar.

Notice that this implements SELECT ... INTO as regular statements, as prepared statements, and as procedure statements. Notice also that the syntax variants SELECT ... INTO OUTFILE and SELECT ... INTO DUMPFILE are still not supported.

Most of the changes are in the test suite, where we have eliminated the earlier ER_SECONDARY_ENGINE_PLUGIN error code. A couple of test lines were removed from the tests rapid.sp and rapid.cp_sp because they gave different results in dict and varlen modes.

Change-Id: Icd56cb6fbc32e121a59599a7e5b7d651747804f5
Commit: 6a66479
Bug#35889990: Setting secondary_engine to OFF causes offload issues w…
…ith executing queries from stored procedures
Bug#35988564: CREATE TEMPORARY TABLE failure after Table_ref::restore_properties

The problem is that when reaching the function RapidOptimize(), the value of LEX::using_hypergraph_optimizer is not consistent with the value of the session variable use_old_optimizer. The cause is a missing setting of LEX::using_hypergraph_optimizer in the execution of a query. It was only synchronized with the hypergraph optimizer setting during preparation of a query, which is sufficient for regular statements, which always perform both preparation and execution, but not for stored procedures, which have separate preparation and execution. The solution is to add this setting.

But this revealed another problem: sometimes execution is out of sync with the current preparation. An optimization with the hypergraph optimizer requires that the preparation was also performed with settings specific to the hypergraph optimizer. This may happen e.g. if the value of the session variable use_secondary_engine is switched from "off" or "on" to "forced", either between a preparation and execution (for prepared statements) or between two executions (for prepared statements and stored procedure statements).

The solution to this is to track the current preparation state versus the one desired (the optimizer switch setting). It is now checked that the value of using_hypergraph_optimizer matches the current optimizer switch setting just after opening tables and before optimizing such statements, in which case we call ask_to_reprepare(). During optimization we set using_hypergraph_optimizer according to the optimizer switch.

The test rapid.pfs_secondary has an increased reprepare count because we now detect more often that an optimization requires a new preparation. In addition, a test case was added to cover the problem described in bug#35988564.

Change-Id: Ibf158576ec4cd1edde655d41f7c8bf2813e208ee
Commit: 29a4e1e
Bug#34904177 Add lines and numbers to callstack
produced by my_print_stacktrace

- Add library backtrace in extra/libbacktrace/sha9ae4f4a. Approved as LIC#99705/BID#154396.
- Implement stacktrace::{full, simple, pcinfo, syminfo} to encapsulate the backtrace state needed for calling the corresponding backtrace_* functions - this state needs to be created once in the lifetime of the process. The stacktrace namespace is in <backtrace/stacktrace.hpp>.
- Add a convenience library backtrace and the alias ext::backtrace.
- Add the CMake option WITH_EXT_BACKTRACE to control if the library will be used, in which case HAVE_EXT_BACKTRACE is defined.
- If HAVE_EXT_BACKTRACE is defined, use <backtrace/stacktrace.hpp> in my_print_stacktrace.

Change-Id: I8e0c5fa30b2dd986d42e008b26d9fd1195871177
Commit: 4c571be
-
Bug#34904177 Add lines and numbers to callstack
produced by my_print_stacktrace

Additional patch, enabling stacktrace for non-glibc Linux platforms.

Change-Id: I8ac697e173cb40810fe37b1685e1495fb2660fa7
Commit: f729fe0
-
Bug#34904177 Add lines and numbers to callstack
produced by my_print_stacktrace

Additional patch, enabling stacktrace for Solaris. Note that man syscall(2) says to #include <sys/syscall.h> on Linux as well.

Change-Id: Ic9e567b9468e3ca3a0b8f39412099a75710022d4
Commit: 58ca915
-
Bug#34904177 Add lines and numbers to callstack
produced by my_print_stacktrace

Additional patch, enabling stacktrace for FreeBSD and all Linux platforms.

Change-Id: Ibc2c83ea7172a2b3af3bd6757ac09ec057c7d1ca
Commit: 9ed16d4
-
Bug#36027494 Add mtr test for my_print_stacktrace
Add an mtr test which is only intended to be run manually:

    ./mtr --no-check-testcases print_stacktrace

Inspect the output in var/log/mysqld.1.err.

Change-Id: Ia308592441df0e4a23a18c590df867a15882cbef
Commit: 824e2b4
Commits on Jan 17, 2024
- Commit: 930b8c5
Commits on Jan 24, 2024
-
Conflict resolutions:

- CMakeLists.txt: 7079e5e removed Wno-format; the merge conflict is only about disabling google unit tests for PXB.
- sql_common.h: a32b194 removed MYSQL_FIELD, and PXB has changed the #define condition from && !defined(MYSQL_COMPONENT) to || XTRABACKUP.
- sql/mysqld.h, sql/mysqld.cc: static changed to non-static in PXB.
- storage/innobase/log/log0recv.cc: adjust recv_scan_log_recs to take the 'to_lsn' parameter.
- sql/server_component/mysql_command_backend.cc: a32b194 removed MYSQL_FIELD; PXB has a #define around this code.
- storage/innobase/xtrabackup/src/backup_mysql.cc: mysql_stmt_bind_param() has been deprecated; use mysql_stmt_bind_named_param() instead.
- storage/innobase/xtrabackup/src/xtrabackup.cc: 41bc027 changed component_infrastructure_deinit() regarding printing shutdown component messages. PXB uses the same API; adjusted to the new signature.
Commit: b2c90ee
Commits on Feb 7, 2024
-
Fix bug1551634.sh test failure. Upstream removed --master-info-repository and --relay-log-info-repository
Commit: a1d1279
Commits on Feb 27, 2024
- Commit: d3ca286