Posted almost 5 years ago by Frederic Descamps
With MySQL Shell 8.0.16, a very interesting new feature was released: the Reporting Framework.
Jesper has already blogged about it, and I recommend reading his articles if you are interested in writing your own reports:
https://mysql.wisborg.dk/2019/04/26/mysql-shell-8-0-16-built-in-reports/
https://mysql.wisborg.dk/2019/04/27/mysql-shell-8-0-16-user-defined-reports/
In this post, I will show you a user-defined report that can be used to monitor your MySQL InnoDB Cluster / Group Replication.
Preparation
Before being able to use the report, you need to download two files. The first one is the addition to the sys schema that I often use to monitor MySQL InnoDB Cluster:
addition_to_sys_GR (download)
And the second one is the report itself:
gr_info (download)
Once downloaded, you can unzip them and install them:
On your Primary-Master run:
mysqlsh --sql clusteradmin@mysql1 < addition_to_sys_GR.sql
Now install the report on your MySQL Shell client’s machine:
$ mkdir -p ~/.mysqlsh/init.d
$ mv gr_info.py ~/.mysqlsh/init.d
Usage
Once installed, you just need to relaunch the Shell and you are ready to call the new report using the \show command:
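For reference, a user-defined report like gr_info is just a Python function registered with shell.register_report(). The following is a minimal sketch only: the column names and query are illustrative, not the actual gr_info implementation.

```python
# Minimal sketch of a MySQL Shell user-defined report (illustrative,
# not the real gr_info code).

def gr_info(session, args=None, options=None):
    # Query Group Replication membership from performance_schema.
    result = session.run_sql(
        "SELECT MEMBER_HOST, MEMBER_STATE, MEMBER_ROLE "
        "FROM performance_schema.replication_group_members"
    )
    rows = [list(row) for row in result.fetch_all()]
    # A 'list' type report returns a dict with a 'report' key:
    # the first entry is the header row, the rest are data rows.
    return {"report": [["host", "state", "role"]] + rows}

# Inside MySQL Shell this registration makes '\show gr_info' available.
# Guarded so the sketch also loads outside mysqlsh, where the global
# 'shell' object does not exist.
try:
    shell.register_report("gr_info", "list", gr_info,
                          {"brief": "Group Replication status"})
except NameError:
    pass  # not running inside mysqlsh
```

Dropping a file like this into ~/.mysqlsh/init.d is exactly what makes the Shell pick the report up at startup.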
Now let’s see the report in action when I block all writes on mysql2 with a FTWRL (FLUSH TABLES WITH READ LOCK) and call the report with \watch:
Conclusion
Yet another nice addition to MySQL Shell. With this report you can see which member still has quorum, how many transactions each node still has to apply, and more.
Don’t hesitate to share your own reports too!
|
Posted almost 5 years ago by Sveta Smirnova
When users experience memory usage issues with any software, including MySQL®, their first response is to think that it’s a symptom of a memory leak. As this story will show, this is not always the case.
This story is about a bug.
All Percona Support customers are eligible for bug fixes, but their options vary. For example, Advanced+ customers are offered a HotFix build prior to the public release of software with the patch. Premium customers do not even have to use Percona software: we may port our patches upstream for them. But for Percona products, all Support levels have the right to a fix.
Even so, this does not mean we will fix every piece of unexpected behavior, even when we accept it as a valid bug. One reason for such a decision might be that, while the behavior is clearly wrong for Percona products, fixing it still amounts to a feature request.
A bug as a case study
A good recent example of such a case is PS-5312 – the bug is repeatable with upstream MySQL and was reported at bugs.mysql.com/95065.
This reports a situation whereby access to InnoDB fulltext indexes leads to growth in memory usage. It starts when someone queries a fulltext index, grows until a maximum, and is not freed for quite a long time.
Yura Sorokin from the Percona Engineering Team investigated if this is a memory leak and found that it is not.
When InnoDB resolves a fulltext query, it creates a memory heap in the function fts_query_phrase_search. This heap may grow up to 80MB. Additionally, it has a large number of blocks (mem_block_t) which are not always used contiguously, and this, in turn, leads to memory fragmentation.
At function exit, the memory heap is freed. InnoDB does this for each of the allocated blocks. At the end of the function, it calls free(), which belongs to one of the memory allocator libraries, such as glibc malloc or jemalloc. From the mysqld point of view, everything is done correctly: there is no memory leak.
However, while free() should release memory when called, it is not required to return it to the operating system. If the memory allocator decides that the same memory blocks will be required soon, it may keep them for the mysqld process. This explains why you might see that mysqld still uses a lot of memory after the job is finished and all de-allocations are done.
In practice this is not a big issue and should not cause any harm. But if you need the memory to be returned to the operating system more quickly, you could try alternative memory allocators, such as jemalloc. The latter was proven to solve the issue in PS-5312.
Another factor which improves memory management is the number of CPU cores: the more we used for the test, the faster the memory was returned to the operating system. This, probably, can be explained by the fact that if you have multiple CPUs, then the memory allocator can dedicate one of them just for releasing memory to the operating system.
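The effect described above — freed memory staying resident in the process — can be observed from any Linux process by sampling its resident set size. A rough Python sketch (Linux-only, since it reads /proc; exact numbers vary by allocator):

```python
import gc

def rss_kb():
    """Resident set size of the current process in kB (Linux-only)."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    raise RuntimeError("VmRSS not found")

# Allocate ~80MB in many small blocks, loosely mimicking the
# fragmented fts_query_phrase_search heap, then free it all.
before = rss_kb()
blocks = [bytearray(8192) for _ in range(10_000)]
during = rss_kb()
del blocks
gc.collect()
after = rss_kb()

# 'after' often stays well above 'before': the allocator has freed the
# blocks but has not necessarily returned the pages to the kernel.
print(before, during, after)
```

This is the same pattern as with mysqld: from the process's point of view everything is freed, but RSS as seen by the OS may not drop immediately.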
The very first implementation of InnoDB full text indexes introduced this flaw. As our engineer Yura Sorokin found:
The very first 5.6 commit which introduced Full-Text Search functionality for InnoDB, WL#5538: InnoDB Full-Text Search Support – https://dev.mysql.com/worklog/task/?id=5538
The merge commit "Implement WL #5538 InnoDB Full-Text Search Support" – https://github.com/mysql/mysql-server/commit/b6169e2d944 – already has this problem.
Options to fix
We have a few options to fix this:
Change implementation of InnoDB fulltext index
Use a custom memory allocator library, such as jemalloc
Both have their advantages and disadvantages.
Option 1 means we are introducing an incompatibility with upstream, which may lead to strange bugs in future versions. This also means a full rewrite of the InnoDB fulltext code which is always risky in GA versions, used by our customers.
Option 2 means we may hit flaws in the jemalloc library, which is designed for performance rather than for the safest memory allocation.
So we have to choose between two less-than-ideal solutions.
Since option 1 may lead to a situation where Percona Server becomes incompatible with upstream, we prefer option 2 and look forward to an upstream fix of this bug.
Conclusion
If you are seeing high memory usage by the mysqld process, it is not always a symptom of a memory leak. You can use memory instrumentation in Performance Schema to find out how allocated memory is used. Try alternative memory allocator libraries for better handling of allocations and freeing of memory. Search the user manual for LD_PRELOAD to find out how to set one up.
|
Posted almost 5 years ago by MySQL Server Dev Team
This is a 3 part blog series:
Part 1: The SYSTEM_USER Dynamic Privilege
Part 2: Partial Revokes from Database Objects
Part 3: How to create multiple accounts for an app?
Have you ever encountered situations where you want to grant a user access to all databases except a few? …
|
Posted almost 5 years ago by MySQL Server Dev Team
This is a 3 part blog series:
Part 1: The SYSTEM_USER Dynamic Privilege
Part 2: Partial Revokes from Database Objects
Part 3: How to create multiple accounts for an app?
To modify users, you must have the CREATE USER privilege or the UPDATE privilege on the mysql schema.…
|
Posted almost 5 years ago by MySQL Server Dev Team
This is a 3 part blog series:
Part 1: The SYSTEM_USER Dynamic Privilege
Part 2: Partial Revokes from Database Objects
Part 3: How to create multiple accounts for an app?
You can now grant CREATE USER so that your web apps can use multiple accounts, without you risking the end user hijacking the database by, for example, changing your root credentials. …
|
Posted almost 5 years ago by MySQL Performance Blog
Percona is glad to announce the release of Percona XtraBackup 2.4.14 on May 1, 2019. You can download it from our download site and apt and yum repositories.
Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, it drives down backup costs while providing unique features for MySQL backups.
Percona XtraBackup 2.4.14 enables saving backups to Amazon S3, MinIO, and Google Cloud Storage (using interoperability mode) when using xbcloud. The following example demonstrates how to use Amazon S3 storage to make a full backup:
$ xtrabackup --backup --stream=xbstream --extra-lsndir=/tmp --target-dir=/tmp | \
  xbcloud put --storage=s3 \
  --s3-endpoint='s3.amazonaws.com' \
  --s3-access-key='YOUR-ACCESSKEYID' \
  --s3-secret-key='YOUR-SECRETACCESSKEY' \
  --s3-bucket='mysql_backups' \
  --parallel=10 \
  $(date -I)-full_backup
All Percona software is open-source and free.
New Features
Amazon S3 is now supported in xbcloud. More information in PXB-1813.
Bugs Fixed
When the row format was changed during the backup, xtrabackup could crash
during the incremental prepare stage. Bug fixed PXB-1824.
If compressed InnoDB undo tablespaces were not removed beforehand, the
incremental backup could crash at the prepare stage. Bug fixed PXB-1552.
Other bugs fixed: PXB-1771, PXB-1809, PXB-1837.
Release notes with all the improvements for version 2.4.14 are available in our online documentation. Please report any bugs to the issue tracker.
|
Posted almost 5 years ago by MySQL Performance Blog
In this research, I wanted to see what kind of performance improvements could be gained by using a ClickHouse data source rather than PostgreSQL. Assuming that I would see performance advantages with ClickHouse, would those advantages be retained if I access ClickHouse from within PostgreSQL using a foreign data wrapper (FDW)? The FDW in question is clickhousedb_fdw – an open source project from Percona!
The database environments under scrutiny are PostgreSQL v11, clickhousedb_fdw and a ClickHouse database. Ultimately, from within PostgreSQL v11, we are going to issue various SQL queries routed through our clickhousedb_fdw to the ClickHouse database. Then we’ll see how the FDW performance compares with those same queries executed in native PostgreSQL and native ClickHouse.
ClickHouse Database
ClickHouse is an open source column-oriented database management system that can achieve performance between 100 and 1,000 times faster than traditional database approaches, capable of processing more than a billion rows in less than a second.
Clickhousedb_fdw
clickhousedb_fdw, a ClickHouse database foreign data wrapper, or FDW, is an open source project from Percona. Here’s a link for the GitHub project repository:
https://github.com/Percona-Lab/clickhousedb_fdw
I wrote a blog in March which tells you more about our FDW: https://www.percona.com/blog/2019/03/29/postgresql-access-clickhouse-one-of-the-fastest-column-dbmss-with-clickhousedb_fdw/
As you’ll see, this provides an FDW for ClickHouse that allows you to SELECT from, and INSERT INTO, a ClickHouse database from within a PostgreSQL v11 server.
The FDW supports advanced features such as aggregate pushdown and join pushdown. These significantly improve performance by utilizing the remote server’s resources for these resource-intensive operations.
Benchmark environment
Supermicro server:
Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
2 sockets / 28 cores / 56 threads
Memory: 256GB of RAM
Storage: Samsung SM863 1.9TB Enterprise SSD
Filesystem: ext4/xfs
OS: Linux smblade01 4.15.0-42-generic #45~16.04.1-Ubuntu
PostgreSQL: version 11
Benchmark tests
Rather than using a machine-generated dataset for this benchmark, we used the “On Time Reporting Carrier On-Time Performance” data from 1987 to 2018. You can access the data using our shell script, available here:
https://github.com/Percona-Lab/ontime-airline-performance/blob/master/download.sh
The size of the database is 85GB, providing a single table of 109 columns.
Benchmark Queries
Here are the queries I used to benchmark ClickHouse, clickhousedb_fdw, and PostgreSQL.
Queries containing aggregates and GROUP BY:
Q1: SELECT DayOfWeek, count(*) AS c FROM ontime WHERE Year >= 2000 AND Year <= 2008 GROUP BY DayOfWeek ORDER BY c DESC;
Q2: SELECT DayOfWeek, count(*) AS c FROM ontime WHERE DepDelay>10 AND Year >= 2000 AND Year <= 2008 GROUP BY DayOfWeek ORDER BY c DESC;
Q3: SELECT Origin, count(*) AS c FROM ontime WHERE DepDelay>10 AND Year >= 2000 AND Year <= 2008 GROUP BY Origin ORDER BY c DESC LIMIT 10;
Q4: SELECT Carrier, count(*) FROM ontime WHERE DepDelay>10 AND Year = 2007 GROUP BY Carrier ORDER BY count(*) DESC;
Q5: SELECT a.Carrier, c, c2, c*1000/c2 as c3 FROM ( SELECT Carrier, count(*) AS c FROM ontime WHERE DepDelay>10 AND Year=2007 GROUP BY Carrier ) a INNER JOIN ( SELECT Carrier, count(*) AS c2 FROM ontime WHERE Year=2007 GROUP BY Carrier ) b on a.Carrier=b.Carrier ORDER BY c3 DESC;
Q6: SELECT a.Carrier, c, c2, c*1000/c2 as c3 FROM ( SELECT Carrier, count(*) AS c FROM ontime WHERE DepDelay>10 AND Year >= 2000 AND Year <= 2008 GROUP BY Carrier ) a INNER JOIN ( SELECT Carrier, count(*) AS c2 FROM ontime WHERE Year >= 2000 AND Year <= 2008 GROUP BY Carrier ) b on a.Carrier=b.Carrier ORDER BY c3 DESC;
Q7: SELECT Carrier, avg(DepDelay) * 1000 AS c3 FROM ontime WHERE Year >= 2000 AND Year <= 2008 GROUP BY Carrier;
Q8: SELECT Year, avg(DepDelay) FROM ontime GROUP BY Year;
Q9: select Year, count(*) as c1 from ontime group by Year;
Q10: SELECT avg(cnt) FROM ( SELECT Year, Month, count(*) AS cnt FROM ontime WHERE DepDel15=1 GROUP BY Year, Month ) a;
Q11: select avg(c1) from ( select Year, Month, count(*) as c1 from ontime group by Year, Month ) a;
Q12: SELECT OriginCityName, DestCityName, count(*) AS c FROM ontime GROUP BY OriginCityName, DestCityName ORDER BY c DESC LIMIT 10;
Q13: SELECT OriginCityName, count(*) AS c FROM ontime GROUP BY OriginCityName ORDER BY c DESC LIMIT 10;
Queries containing joins:
Q14: SELECT a.Year, c1/c2 FROM ( select Year, count(*)*1000 as c1 from ontime WHERE DepDelay>10 GROUP BY Year ) a INNER JOIN ( select Year, count(*) as c2 from ontime GROUP BY Year ) b on a.Year=b.Year ORDER BY a.Year;
Q15: SELECT a."Year", c1/c2 FROM ( select "Year", count(*)*1000 as c1 FROM fontime WHERE "DepDelay">10 GROUP BY "Year" ) a INNER JOIN ( select "Year", count(*) as c2 FROM fontime GROUP BY "Year" ) b on a."Year"=b."Year";
Table-1: Queries used in the benchmark
Query executions
Here are the results from each of the queries when run in the different database setups: PostgreSQL with and without indexes, native ClickHouse, and clickhousedb_fdw. The time is shown in milliseconds.
Q#     PostgreSQL   PostgreSQL (Indexed)   ClickHouse   clickhousedb_fdw
Q1     27920        19634                  23           57
Q2     35124        17301                  50           80
Q3     34046        15618                  67           115
Q4     31632        7667                   25           37
Q5     47220        8976                   27           60
Q6     58233        24368                  55           153
Q7     30566        13256                  52           91
Q8     38309        60511                  112          179
Q9     20674        37979                  31           81
Q10    34990        20102                  56           148
Q11    30489        51658                  37           155
Q12    39357        33742                  186          1333
Q13    29912        30709                  101          384
Q14    54126        39913                  124          1364212
Q15    97258        30211                  245          259
Table-2: Time taken (in milliseconds) to execute the queries used in the benchmark
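From the timings above, the relative speedups can be computed directly. A small sketch, using the Q1 row of the table:

```python
# Execution times in milliseconds, copied from the Q1 row of the
# results table.
times_q1 = {
    "PostgreSQL": 27920,
    "PostgreSQL (Indexed)": 19634,
    "ClickHouse": 23,
    "clickhousedb_fdw": 57,
}

# Speedup of each setup relative to unindexed PostgreSQL.
speedups = {name: times_q1["PostgreSQL"] / t for name, t in times_q1.items()}

# ClickHouse runs Q1 roughly 1214x faster than unindexed PostgreSQL,
# and the FDW path retains most of that advantage (~490x).
print({name: round(s) for name, s in speedups.items()})
```

This puts the "100 to 1,000 times faster" claim in concrete terms, and shows how little of the speedup the FDW layer gives back.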
Reviewing the results
The graph shows the query execution time in milliseconds: the X-axis shows the query number from the tables above, while the Y-axis shows the execution time. The results for ClickHouse, and for the data accessed from PostgreSQL using clickhousedb_fdw, are shown. From the table, you can see there is a huge difference between PostgreSQL and ClickHouse, but a minimal difference between ClickHouse and clickhousedb_fdw.
Clickhouse Vs Clickhousedb_fdw (Shows the overhead of clickhousedb_fdw)
This graph shows the difference between native ClickHouse and clickhousedb_fdw. In most of the queries the FDW overhead is not that great, and barely significant apart from in Q12. This query involves joins and an ORDER BY clause. Because of the ORDER BY clause, the GROUP BY and ORDER BY are not pushed down to ClickHouse.
In Table-2 we can see the spike in time for queries Q12 and Q13. To reiterate, this is caused by the ORDER BY clause. To confirm this, I compared queries Q14 and Q15, which differ only in the ORDER BY clause. Without the ORDER BY clause (Q15) the completion time is 259ms; with the ORDER BY clause (Q14) it is 1364212ms. To debug this, I ran EXPLAIN VERBOSE on both queries; here are the results.
bm=# EXPLAIN VERBOSE SELECT a."Year", c1/c2
FROM (SELECT "Year", count(*)*1000 AS c1 FROM fontime WHERE "DepDelay" > 10 GROUP BY "Year") a
INNER JOIN(SELECT "Year", count(*) AS c2 FROM fontime GROUP BY "Year") b ON a."Year"=b."Year";
QUERY PLAN
Hash Join  (cost=2250.00..128516.06 rows=50000000 width=12)
  Output: fontime."Year", (((count(*) * 1000)) / b.c2)
  Inner Unique: true
  Hash Cond: (fontime."Year" = b."Year")
  ->  Foreign Scan  (cost=1.00..-1.00 rows=100000 width=12)
        Output: fontime."Year", ((count(*) * 1000))
        Relations: Aggregate on (fontime)
        Remote SQL: SELECT "Year", (count(*) * 1000) FROM "default".ontime WHERE (("DepDelay" > 10)) GROUP BY "Year"
  ->  Hash  (cost=999.00..999.00 rows=100000 width=12)
        Output: b.c2, b."Year"
        ->  Subquery Scan on b  (cost=1.00..999.00 rows=100000 width=12)
              Output: b.c2, b."Year"
              ->  Foreign Scan  (cost=1.00..-1.00 rows=100000 width=12)
                    Output: fontime_1."Year", (count(*))
                    Relations: Aggregate on (fontime)
                    Remote SQL: SELECT "Year", count(*) FROM "default".ontime GROUP BY "Year"
(16 rows)
bm=# EXPLAIN VERBOSE SELECT a."Year", c1/c2 FROM(SELECT "Year", count(*)*1000 AS c1 FROM fontime WHERE "DepDelay" > 10 GROUP BY "Year") a
INNER JOIN(SELECT "Year", count(*) as c2 FROM fontime GROUP BY "Year") b ON a."Year"= b."Year"
ORDER BY a."Year";
QUERY PLAN
Merge Join  (cost=2.00..628498.02 rows=50000000 width=12)
  Output: fontime."Year", (((count(*) * 1000)) / (count(*)))
  Inner Unique: true
  Merge Cond: (fontime."Year" = fontime_1."Year")
  ->  GroupAggregate  (cost=1.00..499.01 rows=1 width=12)
        Output: fontime."Year", (count(*) * 1000)
        Group Key: fontime."Year"
        ->  Foreign Scan on public.fontime  (cost=1.00..-1.00 rows=100000 width=4)
              Remote SQL: SELECT "Year" FROM "default".ontime WHERE (("DepDelay" > 10)) ORDER BY "Year" ASC
  ->  GroupAggregate  (cost=1.00..499.01 rows=1 width=12)
        Output: fontime_1."Year", count(*)
        Group Key: fontime_1."Year"
        ->  Foreign Scan on public.fontime fontime_1  (cost=1.00..-1.00 rows=100000 width=4)
              Remote SQL: SELECT "Year" FROM "default".ontime ORDER BY "Year" ASC
(16 rows)
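The key difference between the two plans is whether the Remote SQL line still carries the aggregation. A tiny helper (hypothetical, just operating on EXPLAIN VERBOSE text) can make that check explicit:

```python
def aggregate_pushed_down(plan_text: str) -> bool:
    """Return True if any 'Remote SQL:' line in an EXPLAIN VERBOSE plan
    ships a GROUP BY to the remote server, i.e. aggregate pushdown."""
    for line in plan_text.splitlines():
        line = line.strip()
        if line.startswith("Remote SQL:") and "GROUP BY" in line:
            return True
    return False

# The first plan above pushes the aggregate down to ClickHouse...
pushed = ('Remote SQL: SELECT "Year", (count(*) * 1000) '
          'FROM "default".ontime WHERE (("DepDelay" > 10)) GROUP BY "Year"')
# ...while the second plan only ships a bare, sorted column scan, and
# the aggregation happens locally in PostgreSQL.
not_pushed = 'Remote SQL: SELECT "Year" FROM "default".ontime ORDER BY "Year" ASC'

print(aggregate_pushed_down(pushed), aggregate_pushed_down(not_pushed))
```

This is exactly why the ORDER BY variant is so much slower: all the rows come back to PostgreSQL before being aggregated.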
Conclusion
The results from these experiments show that ClickHouse offers really good performance, and clickhousedb_fdw offers the benefits of ClickHouse performance from within PostgreSQL. While there is some overhead when using clickhousedb_fdw, it is negligible and is comparable to the performance achieved when running natively within the ClickHouse database. This also confirms that the PostgreSQL foreign data wrapper push-down feature provides wonderful results.
|
Posted almost 5 years ago by Jean-François Gagné
War story of the day: do not use — or be very careful when using — the max_connections beta database flag on CloudSQL... because it has many bugs.
|
Posted almost 5 years ago by Jean-François Gagné
War story of the day: do not use — or be very careful when using — the max_connections beta database flag on CloudSQL... because it has many bugs.
I was hit by this today: we set the max_connections flag to 8000 on a primary server a few days ago, we had a failover last night, and the flag was not set on the replica (bug #1).
Update 2019-05-03: there is another bug (bug #1.5). If you set the …
|
Posted almost 5 years ago by Valeriy Kravchuk
We have a public holiday here today and it's been raining outside for the third day in a row, so I hardly have anything better to do than write yet another review of public MySQL bug reports that I've subscribed to recently. I am not sure if these reviews are really considered useful by anyone but a few of my readers, but I am still going to try, in the hope of ending up with some useful conclusions. Last time I stopped at Bug #94903, so let me continue with the next bug on my list:
Bug #94912 - "O_DIRECT_NO_FSYNC possible write hole". In this bug report Janet Campbell shared some concerns related to the way the O_DIRECT_NO_FSYNC (and O_DIRECT) settings for innodb_flush_method work. Check the comments, including those by Sunny Bains, where he agrees that "...this will cause problems where the redo and data are on separate devices." Useful reading for anyone interested in InnoDB internals or using the innodb_dedicated_server setting in MySQL 8.0.14+.
Bug #94971 - "Incorrect key file error during log apply table stage in online DDL". Monty Solomon reported yet another case where an "online" ALTER for an InnoDB table fails in a weird way. The bug is still "Open" and there is no clear test case to just copy/paste, but both the problem and potential workarounds (make sure you have a "big enough" innodb_online_alter_log_max_size, or better, use the pt-online-schema-change or gh-ost tools) were already discussed here.
Bug #94973 - "Wrong result with subquery in where clause and order by". Yet another wrong-results bug with a subquery, on MySQL 5.7.25, was reported by Andreas Kohlbecker. We can only guess if MySQL 8 is also affected (MariaDB 10.3.7 is not, based on my test results shared below), as the Oracle engineer who verified the bug did NOT care to check, or to share the results of such a check. What can be easier than running this (slightly modified) test case on every MySQL major version and copy-pasting the results?

MariaDB [test]> CREATE TABLE `ReferenceB` (
    ->   `id` int(11) NOT NULL,
    ->   `bitField` bit(1) NOT NULL,
    ->   `refType` varchar(255) NOT NULL,
    ->   `externalLink` longtext,
    ->   PRIMARY KEY (`id`)
    -> ) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.170 sec)

MariaDB [test]> INSERT INTO ReferenceB (id, bitField, refType, externalLink) VALUES(1, 0, 'JOU', NULL);
Query OK, 1 row affected (0.027 sec)

MariaDB [test]> INSERT INTO ReferenceB (id, bitField, refType, externalLink) VALUES(2, 0, 'JOU', NULL);
Query OK, 1 row affected (0.002 sec)

MariaDB [test]> SELECT hex(bitField) FROM ReferenceB WHERE id IN (SELECT id AS y0_ FROM ReferenceB WHERE refType='JOU') ORDER BY externalLink ASC;
+---------------+
| hex(bitField) |
+---------------+
| 0             |
| 0             |
+---------------+
2 rows in set (0.028 sec)

But we do not see anything like that in the bug report... This is sad.
Bug #94994 - "Memory leak detect on temptable storage engine". Yet another memory leak (found with ASan) reported by Zhao Jianwei, who had also suggested a patch.
Bug #95008 - "applying binary log doesn't work with blackhole engine tables". This bug was reported by Thomas Benkert. It seems there is a problem applying row-based events to a BLACKHOLE table, and this prevents some nice recovery tricks from working.
Bug #95020 - "select no rows return but check profile process Creating sort index". Interesting finding from cui jacky. I can reproduce this with MariaDB as well. It seems we either have to define some new stage or define "Creating sort index" better than the current manual does. The manual's description, "The thread is processing a SELECT that is resolved using an internal temporary table.", is plain wrong in the case shown in the bug report, IMHO.
Bug #95040 - "Duplicately remove locks from lock_sys->prdt_page_hash in btr_compress". One of those rare cases when Zhai Weixiang does not provide the patch, just suggests the fix based on code review :)
Bug #95045 - "Data Truncation error occurred on a write of column 0Data was 0 bytes long and". This really weird regression bug in MySQL 8.0.14+ was reported by Adarshdeep Cheema. MariaDB 10.3 is surely not affected.
Bug #95049 - "Modified rows are not locked after rolling back to savepoint". Bug reporter, John Lin, found that fine MySQL manual does not describe the real current implementation. Surprise!
Bug #95058 - "Index not used for column with IS TRUE or IS FALSE operators". Take extra care when using BOOLEAN columns in MySQL. As noted by Monty Solomon, a proper index is NOT used when you check BOOLEAN values as the manual suggests, using IS TRUE or IS FALSE conditions. Roy Lyseng explained how such queries are treated internally, but surely there is a better way. MariaDB 10.3.7 is also affected, unfortunately.
Bug #95064 - "slave server may has gaps in Executed_Gtid_Set when a special case happen". Nice bug report from yoga yoga, who also contributed a patch. A parallel slave can easily get out of sync with the master in the case of a lock wait timeout and failed retries. Again, we do NOT see any check whether MySQL 8 is affected, unfortunately.
Bug #95065 - "Strange memory management when using full-text indexes". We all know that the InnoDB FULLTEXT index implementation is far from perfect. Now, thanks to Yura Sorokin, we also know about a verified memory usage bug there that may lead to OOM killing of the MySQL server.
Bug #95070 - "INSERT .. VALUES ( .., (SELECT ..), ..) takes shared lock with READ-COMMITTED". Seunguck Lee found yet another case of InnoDB locking behavior that MySQL manual does not explain. The bug is still "Open" for some reason.
Bug #95115 - "mysqld deadlock of all client threads originating from 3-way deadlock". It took some effort from the bug reporter, Sandeep Dube, and other community users (mostly Jacek Cencek) to attract proper attention to this bug from the right Oracle developer, Dmitry Lenev, before it ended up "Verified" based on code review. We still cannot be sure if MySQL 8 is also affected.
That's all for now. I have a few more new bug reports that I monitor, but I do not plan to continue with this kind of review in this blog over the next few months. I hope I'll soon get a reason to write a different kind of post, with a more in-depth study of various topics... In any case, you may follow me on Twitter for anything related to recent interesting or wrongly handled MySQL bug reports.
This view of Chelsea from our apartment at Chelsea Cloisters reminds me that last year I spent spring holiday season properly - no time was devoted to MySQL bugs :)
To summarize:
Do not use O_DIRECT_NO_FSYNC value for innodb_flush_method if your redo logs are located on different device than your data files. Just don't.
Some Oracle engineers who process bugs still do not care to check if all supported major versions are affected and/or share the results of such checks in public.
There are still many details of InnoDB locking to study, document properly and maybe fix.
I am really concerned with the state of MySQL optimizer. We see all kinds of weird bugs (including regressions) and very few fixes in each maintenance release.
|