Posted 12 days ago by Andy Bang
We are very happy to announce the general availability of MySQL Enterprise Monitor, Version 3.3. MySQL Enterprise Monitor is the best-in-class tool for monitoring and management of your MySQL assets and is included with your MySQL Enterprise Edition and MySQL Enterprise Carrier Grade subscriptions. You’ll find information about what’s new here: (also included at the end of this note for convenience)

Highlights

* 3.3 introduces a new Enterprise Backup Dashboard that brings deeper integration with MySQL Enterprise Backup. View current backup status, history and archive details by group or instance using MySQL Enterprise Backup Advisor collections from existing mysql.backup_history and mysql.backup_progress table data.

* The Backup Group view shows a summary of backup events, details on the last successful full and incremental backups, any failed backups and an indication of whether the group has full backup coverage. Coverage calculations include appropriate checks on replication filters across your replication topologies.

* The Backup view of a single MySQL Instance shows a summary of backup events, details on the last successful full and incremental backups and any failed backups. Sparkline histories graph the total backup time per backup and the total time the MySQL Instance held a global read lock during the backup process.

* Select the Backup History tab to see all known backups in the event you need to perform a restore operation. Each row in the backup history table shows the backup type, status, completion timestamp and the log coordinates when the consistent snapshot was taken.

* Drill down to an individual backup archive to view all its metadata and associated progress logs.

* The 3.3 user interface now makes zooming in on common problems a single click on a named filter. Use these shortcuts to quickly select the search criteria of most interest on the MySQL Instances, Events, Query Analyzer and All Timeseries Graphs pages.
Examples of pre-defined filters include all MySQL Instances using an EOL version, old MySQL Enterprise Monitor Agent installations, currently open Emergency events, and all statements that cause a full table scan, temporary tables on disk or SQL errors. In addition to pre-defined filters, you can assign a name to your own favorite search conditions for reuse across sessions.

Download

You will find binaries for the new release on My Oracle Support: choose the "Patches & Updates" tab, and then choose the "Product or Family (Advanced Search)" side tab in the "Patch Search" portlet. You will also find the binaries on the Oracle Software Delivery Cloud: choose "MySQL Database" as the Product Pack and you will find the Enterprise Monitor along with other MySQL products. If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to, or contact Sales at

Please open a bug or a ticket on My Oracle Support to report problems, request features, or give us general feedback about how this release meets your needs.

Useful URLs
* My Oracle Support -
* What's New in 3.3 -
* Change log -
* Installation documentation -
* Complete documentation -
* Product information -
* Download - (available now) or

Thanks and Happy Monitoring!
- The MySQL Enterprise Tools Development Team

==========
Changes in MySQL Enterprise Monitor 3.3.0

Functionality Added or Changed

* Important Change: The Enterprise Backup dashboard is introduced in this release. This dashboard enables monitoring of Full, Incremental, TTS, and Partial backups performed by MySQL Enterprise Backup on the monitored instances. For more information, see:

* Named filters are introduced in this release. Named filters make it possible to create and save filters on the following pages:
  - MySQL Instances
  - Events
  - Query Analyzer
  - All Timeseries Graphs
  - Advisors
  A number of default named filters have been added in this release.
See System and User-defined Filters for more information. Important: Existing default filters are migrated by the upgrade process and are renamed to User Default.

* The event status is now included in the body of the notification mail. (Bug #24311586)

* The Warnings Not Being Logged advisor text was updated.

* The Tomcat server, bundled with the MySQL Enterprise Service Manager, has been upgraded to 8.0.33.

* The MySQL server, bundled with the MySQL Enterprise Service Manager, has been upgraded to MySQL 5.7.15. Important: If you are upgrading from a previous version, the upgrade installer will handle the upgrade of your existing repository to the new version. All previous versions of MySQL Enterprise Service Manager used MySQL 5.6.x, which must be upgraded to 5.7.14, the version delivered with this release. This only applies to installations using the bundled MySQL Server. If you are using an external MySQL Server as the repository, you must upgrade it manually to at least 5.6.14 before running the MySQL Enterprise Service Manager upgrade installer. MySQL 5.7.14 is the recommended version. As of this release, the MySQL Enterprise Service Manager installer and upgrade installer raise an error if an attempt is made to configure MySQL Enterprise Service Manager to use an unsupported MySQL version for the repository.

* Important Change: As of this release, it is not possible to use MySQL Server 5.1 or 5.5 as the MySQL Enterprise Monitor repository. The installer displays an error and the installation process stops.

Bugs Fixed

* The documentation did not specify the correct format for the ssl-ca-keystore-path parameter. It was listed as a file path, but is a URL. For example: ssl-ca-keystore-path=file:///mysql/enterprise/agent/etc/mykeystore (Bug #24386496)

* Attempting to monitor a group replication topology caused all replication topology discovery to fail. (Bug #24376827)

* The InnoDB Redo Log Pending Writes graph was not displayed for MySQL 5.7 instances. (Bug #23563358)

* LDAP configuration was not validated when saved. As a result, if the LDAP connection information was incorrectly configured, NullPointerExceptions were logged, and it was not possible to connect to the LDAP server. (Bug #23299288)

* The certificate to keystore upgrade process attempted to use the system's openSSL installation instead of the one delivered with the installer. As a result, if openSSL was not installed on the system, the upgrade failed. As of this release, the upgrade only uses the openSSL libraries delivered with the installer. (Bug #22522309)

* The MySQL Enterprise Service Manager utility returned a stack trace for unsupported commands, instead of redirecting the user to the help.

* Custom agent service names were not maintained by the agent upgrade process. The custom name was overwritten by the default agent service name, MySQLEnterpriseMonitorAgent.

* A NullPointerException was logged for replication monitoring. This exception did not affect the performance of replication monitoring.

* The advice page generated by the MySQL Enterprise Backup Health advisor contained broken links to the InnoDB chapters of the MySQL Reference Manual. It also incorrectly referred to the --start-lsn option as the --lsn option.

* The Overview page elements did not resize properly if the browser window was resized.

* On certain browser versions, the symbol << was transformed to Â<.

* Replication filter information was not collected. As a result, the replica's Filter/Delay Status frame of the Source Replication Status page was not correctly populated.
Posted 14 days ago by Mark Callaghan
I used an IO-heavy configuration to determine the impact of zstandard vs zlib compression for MyRocks. There was about 1 read from SSD per transaction, and decompression is done after each page read from the OS page cache and storage. The results are impressive. Zstandard compresses like zlib level 1 but uses much less CPU.

* zstandard reduces CPU by 45% vs zlib level 1 for the load test
* zstandard reduces CPU by 11% vs zlib level 1 for the query test
* zstandard gets 8% more TPS vs zlib level 1 for the query test

Configuration

Configuration for MyRocks is still complex. The templates for the MyRocks my.cnf files for Linkbench and general usage are explained on the wiki. I used no compression for L0, L1 and L2, then lz4 for all but the max level, and then one of zlib level 1, zlib level 6 or zstd for the max level. The tests used an Aug 5 build of MyRocks, so this used kZSTDNotFinalCompression, as the build preceded the 1.0 release of zstandard.

The test host has 50G of RAM available to userland, fast storage (5TB of NVMe MLC) and 24 CPU cores with 48 HW threads. The RocksDB block cache was set to 10G, the binlog was enabled but sync-on-commit was disabled for both the binlog and RocksDB. Linkbench is run with maxid1=1B, the load test uses 2 clients and the query tests use 16 clients. Query tests are run as 24 1-hour loops and I report metrics from the 24th hour. I used my branch of linkbench and support scripts.

Results

The results for zstandard are impressive. I look forward to using this in production.
Thanks Yann.

Legend:
* ips/tps - inserts & transactions per second
* r/i, r/t - iostat reads per insert and per transaction
* wKB/i, wKB/t - iostat KB written per insert and per transaction
* Mcpu/i, Mcpu/t - usecs of CPU time per insert and per transaction
* size - database size in GB
* rss - mysqld RSS size in GB
* un, gn, ul, gl - p99 response time in milliseconds for the most frequent transactions (Update Node, Get Node, Update Link, Get Link List)

Results for the load

ips     r/i     rKB/i   wKB/i   Mcpu/i  size    rss     engine
61543   0       0       0.98    81      324     3.1     zstd
61504   0       0       0.98    146     331     2.0     zlib-1
61457   0       0       0.97    153     312     2.2     zlib-6

Results for the 24th hour of the query test

tps    r/t   rKB/t   wKB/t  Mcpu/t  size  rss   un    gn   ul  gl   engine
39366  1.00  10.38   2.36   878     377   12.2  0.6   0.6  1   0.8  zstd
36524  1.00  10.47   2.45   992     381   12.1  0.7   0.6  1   0.9  zlib-1
37233  0.97  9.76    2.30   1002    360   12.0  0.7   0.7  1   0.9  zlib-6
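The per-level scheme described above (no compression on the upper levels, lz4 in the middle, zstd or zlib only at the max level) maps onto RocksDB's compression_per_level column-family option. A hypothetical my.cnf fragment for a 7-level LSM tree might look like the following; the exact option names and the template used in the tests are on the MyRocks wiki, so treat this as a sketch, not the actual configuration:

```ini
[mysqld]
# Sketch only: cheap/no compression where data is hot and rewritten
# often (L0-L2), lz4 in the middle levels, zstd at the max level
# where most of the data lives.
rocksdb_default_cf_options=compression_per_level=kNoCompression:kNoCompression:kNoCompression:kLZ4Compression:kLZ4Compression:kLZ4Compression:kZSTD
```

The idea behind the split is that upper levels are compacted frequently, so spending heavy CPU there buys little, while the bottom level holds the bulk of the database and benefits most from a strong compressor.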
Posted 14 days ago by Marc Alff
Starting with MySQL release 8.0.0, the source code is now commented using the Doxygen documentation generation tool. This project is long overdue: it has been a well-worn issue, an ever-recurring wish, for the last 10 years. Well, not any more.…
Posted 14 days ago by MySQL Performance Blog
This blog post discusses how you can protect your e-commerce database from a high traffic disaster.

Databases power today’s e-commerce. Whether it’s listing items on your site, contacting your distributor for inventory, tracking shipments, payments, or customer data, your database must be up, running, tuned and available for your business to be successful. There is no time that this is more important than high-volume traffic days. There are specific events that occur throughout the year (such as Black Friday, Cyber Monday, or Singles Day) that you know are going to put extra strain on your database environment. But these are the specific times that your database can’t go down – these are the days that can make or break your year! So what can you do to guarantee that your database environment is up to the challenge of handling high traffic events? Are there ways of preparing for this type of traffic? Yes, there are! In this blog post, we’ll look at some of the factors that can help prepare your database environment to handle large amounts of traffic.

Synchronous versus Asynchronous Applications

Before moving to strategies, we need to discuss the difference between synchronous and asynchronous applications. In most web-based applications, user input starts a number of requests for resources. Once the server answers the requests, communication stops until the next input. This type of communication between a client and server is called synchronous communication. Synchronous communication restricts application updates. Even synchronous applications designed to automatically refresh application server information at regular intervals have consistent periods of delay between data refreshes. While usually such delays aren’t an issue, some applications (for example, stock-trading applications) rely on continuously updated information to provide their users optimum functionality and usability.
Web 2.0-based applications address this issue by using asynchronous communication. Asynchronous applications deliver continuously updated data to users. Asynchronous applications separate client requests from application updates, so multiple asynchronous communications between the client and server can occur simultaneously or in parallel. The strategy you use to scale the two types of applications to meet growing user and traffic demands will differ.

Scaling a Synchronous/Latency-sensitive Application

When it comes to synchronous applications, you really have only one option for scaling performance: sharding. With sharding, the tables are divided and distributed across multiple servers, which reduces the total number of rows in each table. This consequently reduces index size, and generally improves search performance. A shard can also be located on its own hardware, with different shards added to different machines. This database distribution over many machines spreads the load out, also improving performance. Sharding allows you to scale read and write performance when latency is important. Generally speaking, it is better to avoid synchronous applications when possible – they limit your scalability options.

Scaling an Asynchronous Application

When it comes to scaling asynchronous applications, we have many more options than with synchronous applications. You should try and use asynchronous applications whenever possible:

Secondary/Slave hosts. Replication can be used to add more hardware for read traffic. Replication usually employs a master/slave relationship between a designated “original” server and copies of the server. The master logs and then distributes the updates to the slaves. This setup allows you to distribute the read load across more than one machine.

Caching.
Database caching (tables, data, and models – caching summaries of data) improves scalability by distributing the query workload from expensive (overhead-wise) backend processes to multiple cheaper ones. It allows more flexibility for data processing: for example, premium user data can be cached while regular user data isn’t. Caching also improves data availability: applications that don’t depend on backend services keep working even when those services are unavailable. It also allows for improved data access speeds by localizing the data and avoiding roundtrip queries. There are some specific caching strategies you can use:

Pre-Emptive Caching. Ordinarily, an object gets cached the first time it is requested (or if cached data isn’t timely enough). Pre-emptive caching instead generates cached versions before an application requests them. Typically this is done by a cron process.

Hit/Miss Caching. A cache hit occurs when an application or software requests data. First, the central processing unit (CPU) looks for the data in its closest memory location, which is usually the primary cache. If the requested data is found in the cache, it is considered a cache hit. For each new request, the processor searches the primary cache to find that data. If the data is not found, it is considered a cache miss. A cache hit serves data more quickly, as the data can be retrieved by reading the cache memory. The cache hit also can be in disk caches, where the requested data is stored and accessed by the first query. A cache miss slows down the overall process because after a cache miss, the central processing unit (CPU) will look for the data in a higher level cache, such as random access memory (RAM). Further, a new entry is created and copied into the cache before it can be accessed by the processor.

Client-side Caching. Client-side caching allows server data to be copied and cached on the client computer.
Client-side caching reduces load times by several factors.

Queuing Updates. Queues are used to order queries (and other database functions) in a timely fashion. Most websites use queues for asynchronously sending notifications like email and SMS. E-commerce sites have queues for storing, processing and dispatching orders. How your database handles queues can affect your performance:

Batching. Batch processing can be used for efficient bulk database updates and automated transaction processing, as opposed to interactive online transaction processing (OLTP) applications.

Fan-Out Updates. Fan-out duplicates data in the database. Duplicating the data eliminates slow joins and increases read performance.

Efficient Usage of Data at Scale

As you scale up in terms of database workload, you need to be able to avoid bad queries or patterns from your applications.

Moving expensive queries out of the user request path. Even if your database server uses powerful hardware, its performance can be negatively affected by a handful of expensive queries. Even a single bad query can cause serious performance issues for your database. Make sure to use monitoring tools to track down the queries that are taking up the most resources.

Using caching to offload database traffic. Cache data away from the database using something like memcached. This is usually done at the application layer, and is highly effective.

Counters and In-Memory Stores. Use memory counters to monitor performance hits: pages/sec, faults/sec, available bytes, total server memory, target server memory, etc. Percona’s new in-memory storage engine for MongoDB also can help.

Connection Pooling. A connection pool is made up of cached database connections, remembered so that the connections can be reused for future requests to the database. Connection pools can improve the performance of executing commands on a database.
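The connection-pool idea above fits in a few lines of code. In this sketch the `make_conn` factory stands in for a real driver call such as `mysql.connector.connect`; the class and parameter names are illustrative, not from any particular pooling library:

```python
import queue

class ConnectionPool:
    """Keep a fixed set of connections and hand them out on demand,
    so each request reuses an open connection instead of paying the
    connect/teardown cost on every query."""

    def __init__(self, make_conn, size=4):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(make_conn())

    def acquire(self, timeout=5):
        # Blocks when all connections are checked out, which also acts
        # as a crude throttle during traffic spikes.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        # Return the connection for the next request to reuse.
        self._pool.put(conn)
```

A production pool would also validate connections before reuse and reopen ones the server has closed; most drivers and frameworks ship such a pool, so this is mainly to show why pooling helps.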
Scaling Out (Horizontal) Tricks

Scaling horizontally means adding more nodes to a system, such as adding a new server to a distributed database environment or software application. For example, scaling out from one web server to three.

Pre-Sharding Data for Flexibility. Pre-sharding the database across the server instances allows you to have the entire environment's resources available at the start of the event, rather than having to rebalance during peak event traffic.

Using "Kill Switches" to Control Traffic. The idea of a kill switch is a single point where you can stop the flow of data to a particular node. Strategically placed kill switches allow you to stop a destructive workload that begins to impact the entire environment.

Limiting Graph Structures. By limiting the size or complexity of graph structures in the database, you will simplify data lookups and reduce data size.

Scaling with Hardware (Vertical Scaling)

Another option to handle the increased traffic load is adding more hardware to your environment: more servers, more CPUs, more memory, etc. This, of course, can be expensive. One option here is to pre-configure your testing environment to become part of the production environment if necessary. Another is to pre-configure more Database-as-a-Service (DBaaS) instances for the event (if you are using cloud-based services). Whichever method you choose, be sure you verify and test your extra servers and environment before your drop-dead date.

Testing Performance and Capacity

As always, in any situation where your environment is going to be stressed beyond usual limits, testing under real-world conditions is a key factor. This includes not only testing for raw traffic levels, but also the actual workloads that your database will experience, with the same volume and variety of requests.

Knowing Your Application and Questions to Ask at Development Time

Finally, it's important that you understand what applications will be used and querying the database.
This sort of common-sense idea is often overlooked, especially when teams (such as the development team and the database/operations team) get siloed and don't communicate. Get to know who is developing the applications that are using the database, and how they are doing it. As an example, a while back I had the opportunity to speak with a team of developers, mostly to just understand what they were doing. In the process of whiteboarding the app with them, we discovered a simple query issue that – now that we were aware of it – took little effort to fix. These sorts of interactions, early in the process, can save a great deal of headache down the line.

Conclusion

There are many strategies that can help you prepare for high traffic events that will impact your database. I've covered a few here briefly. For an even more thorough look at e-commerce database strategies, attend my webinar "Black Friday and Cyber Monday: How to Avoid an E-Commerce Disaster" on Thursday, September 22, 2016, 10:00 am Pacific Time. Register here.
Posted 14 days ago by Gabriela D'Ávila
This is an unstable release, please don't use it in production. It was rumored that the new MySQL version would be 8 and not 5.8 as a lot of people thought, and it appears the rumors were true. Below are some of the features that caught my eye at first glance:

Roles

Although password expiration was implemented in 5.7, the newer version brings a set of collective privileges as a Role. No need to copy-paste that massive GRANT command you had when creating new users.

UTF-8 as default Charset

This is not yet the default charset coming with the server, but utf8mb4 will be the main charset instead of latin1, and the default collation will change from latin1_swedish_ci to utf8mb4_0900_ai_ci. The plan is to do that before General Availability.

Invisible Indexes

Given an index that already exists, you can make it active or inactive. It is a toggling feature which lets you check, while debugging, whether an index really can be dropped (if it is not being used). This applies to reads only; on write operations the index is still maintained.

IPv6 and UUID Manipulation

MySQL does not support these types natively; instead, it is recommended to store them with the VARBINARY(16) type. MySQL now provides functions to manipulate textual representations of IPv6/UUID and to use bit-wise operations to test, extract or compare them. Having those functions built in, you can use a generated column to index that data.

Source

A more comprehensive list can be found at the MySQL Server Blog. It is worth the read.

MySQL is a trademark of Oracle.
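The VARBINARY(16) recommendation above is easy to see from the byte sizes: both an IPv6 address and a UUID pack into exactly 16 bytes. This Python sketch shows the textual-to-binary conversions that the new MySQL functions perform (Python's standard library here, not the MySQL functions themselves):

```python
import ipaddress
import uuid

# Both types pack into exactly 16 bytes, hence VARBINARY(16).
addr = ipaddress.IPv6Address("2001:db8::1").packed
uid = uuid.UUID("6ccd780c-baba-1026-9564-5b8c656024db").bytes

print(len(addr), len(uid))  # 16 16

# Bit-wise tests work on the packed form, e.g. a network prefix match:
net = ipaddress.ip_network("2001:db8::/32")
print(ipaddress.IPv6Address("2001:db8::1") in net)  # True
```

Storing the compact binary form and indexing a generated column that exposes the textual representation gives you both fast comparisons and readable query results.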
Posted 14 days ago by Joao Osorio
How InnoDB initializes AUTO_INCREMENT counters is actually not a bug, but a documented mechanism. There were some complaints, and even people who lost data over this. To initialize an auto-increment counter after a server restart, InnoDB executes the equivalent of the following statement on the first insert into a table containing an AUTO_INCREMENT column:

SELECT MAX(ai_col) FROM table_name FOR UPDATE;
Posted 14 days ago by Daniel van Eeden
On the MySQL Bugs website there are some tide stats available. These show the rate of bug creation. I've put them in a graph. I made these with this IPython Notebook. There are more detailed graphs per version in the notebook.
Posted 14 days ago by Satej Sahu
A question which sometimes comes to mind when starting with MySQL is whether I should use the DATETIME or TIMESTAMP data type, since both appear to store the same date and time component.

Similarities between DATETIME and TIMESTAMP:
1. Values contain both date and time parts.
2. Format of retrieval and display is "YYYY-MM-DD HH:MM:SS".
3. Both can include a trailing fractional seconds part with up to microseconds (6 digits) precision.
4. With the fractional part included, the format for these values is "YYYY-MM-DD HH:MM:SS[.fraction]".
5. Both TIMESTAMP and (as of MySQL 5.6.5) DATETIME offer automatic initialization and updating to the current date and time.

But both differ in some ways, as mentioned below.

Differences between DATETIME and TIMESTAMP data types.

DATETIME:
1. Supported range is from '1000-01-01 00:00:00' to '9999-12-31 23:59:59'.
2. Storage required before MySQL 5.6.4 was 8 bytes. *
3. Storage required as of MySQL 5.6.4 is 5 bytes + fractional seconds storage. *
4. Preserves the textual representation of the date and time.
5.
A value in the supported range is saved exactly as it is given to MySQL. If you change the timezone of MySQL, the value remains the same; that is, it stores no timezone information and is timezone independent.

Example: By default MySQL uses the SYSTEM timezone as its timezone, which is IST on my system.

mysql> CREATE TABLE `dt` (
  `dt` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

mysql> SELECT @@time_zone;
+-------------+
| @@time_zone |
+-------------+
| SYSTEM      |
+-------------+

mysql> INSERT INTO dt VALUES ('2016-09-12 12:12:00');

mysql> SELECT * FROM dt;
+---------------------+
| dt                  |
+---------------------+
| 2016-09-12 12:12:00 |
+---------------------+

Now let's change the session timezone to CDT, which has an offset of -05:00 hours with respect to UTC.

mysql> SET @@session.time_zone = '-05:00';

mysql> SELECT * FROM dt;
+---------------------+
| dt                  |
+---------------------+
| 2016-09-12 12:12:00 |
+---------------------+

The result above is the same irrespective of timezone.

TIMESTAMP:
1. Supported range is from '1970-01-01 00:00:01' UTC to '2038-01-19 03:14:07' UTC. **
2. Storage required before MySQL 5.6.4 was 4 bytes. *
3. Storage required as of MySQL 5.6.4 is 4 bytes + fractional seconds storage. *
4. Preserves values relative to the timezone in use.
5. A value in the supported range is converted to a UTC timestamp when it is supplied to MySQL, so the stored value carries a timezone reference. When fetching the value again, MySQL converts it from UTC to the current timezone. If the timezone of MySQL is changed, this has no effect on the stored UTC value, but when the value is fetched it is displayed as per the current timezone of MySQL, not in the original timezone in which it was first stored.
This occurs because the same time zone was not used for conversion in both directions. For example, if the timezone of MySQL is currently IST and I save a value of "2016-09-12 12:12:00" into a TIMESTAMP field, then when I fetch this record from MySQL I will get the same value, "2016-09-12 12:12:00". Now if I change the timezone to CDT and fetch this record, I will get "2016-09-12 01:42:00", which is the CDT equivalent of IST "2016-09-12 12:12:00".

Example: By default MySQL uses the SYSTEM timezone as its timezone, which is IST on my system.

mysql> CREATE TABLE `ts` (
  `ts` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

mysql> SELECT @@time_zone;
+-------------+
| @@time_zone |
+-------------+
| SYSTEM      |
+-------------+

mysql> INSERT INTO ts VALUES ('2016-09-12 12:12:00');

mysql> SELECT * FROM ts;
+---------------------+
| ts                  |
+---------------------+
| 2016-09-12 12:12:00 |
+---------------------+

Now let's change the session timezone to CDT, which has an offset of -05:00 hours with respect to UTC.

mysql> SET @@session.time_zone = '-05:00';

mysql> SELECT * FROM ts;
+---------------------+
| ts                  |
+---------------------+
| 2016-09-12 01:42:00 |
+---------------------+

The result above is the CDT equivalent of the IST date time "2016-09-12 12:12:00".

References:

* As of MySQL 5.6.4, storage for YEAR and DATE remains unchanged. However, TIME, DATETIME, and TIMESTAMP are represented differently.
DATETIME is packed more efficiently, requiring 5 rather than 8 bytes for the nonfractional part, and all three types have a fractional part that requires from 0 to 3 bytes, depending on the fractional seconds precision of stored values.

** Why is the TIMESTAMP datatype limited to the year 2038 and not beyond? Excerpt from Wikipedia:

"The Year 2038 problem is an issue for computing and data storage situations in which time values are stored or calculated as a signed 32-bit integer, and this number is interpreted as the number of seconds since 00:00:00 UTC on 1 January 1970 ("the epoch").[1] Such implementations cannot encode times after 03:14:07 UTC on 19 January 2038, a problem similar to but not entirely analogous to the "Y2K problem" (also known as the "Millennium Bug"), in which 2-digit values representing the number of years since 1900 could not encode the year 2000 or later. Most 32-bit Unix-like systems store and manipulate time in this "Unix time" format, so the year 2038 problem is sometimes referred to as the "Unix Millennium Bug" by association."

It is also known as the Y2K38 bug. A nice illustration from Wikipedia shows the Year 2038 bug.

Caution: While designing applications please consider the range limitation of the TIMESTAMP datatype [1970-2038] before using it, or consider using the DATETIME datatype instead.
Posted 14 days ago by Alexander Rubin
In this post, we’ll look at MySQL CDC, streaming binary logs and asynchronous triggers.

What is Change Data Capture and why do we need it?

Change Data Capture (CDC) tracks data changes (usually close to real time). In MySQL, the easiest and probably most efficient way to track data changes is to use binary logs. However, other approaches exist. For example:

* General log or Audit Log Plugin (which logs all queries, not just the changes)
* MySQL triggers (not recommended, as they can slow down the application — more below)

One of the first implementations of CDC for MySQL was the FlexCDC project by Justin Swanhart. Nowadays, there are a lot of CDC implementations (see the mysql-cdc-projects wiki for a long list). CDC can be implemented for various tasks such as auditing, copying data to another system or processing (and reacting to) events. In this blog post, I will demonstrate how to use a CDC approach to stream MySQL binary logs, process events and save (stream) them to another MySQL instance (or MongoDB). In addition, I will show how to implement asynchronous triggers by streaming binary logs.

Streaming binary logs

You can read binary logs using the mysqlbinlog utility, by adding "-vvv" (verbose option). mysqlbinlog can also show a human-readable version of ROW-based replication events. For example:

# mysqlbinlog -vvv /var/lib/mysql/master.000001
BINLOG '
JxiqVxMBAAAALAAAAI7LegAAAHQAAAAAAAEABHRlc3QAAWEAAQMAAUTAFAY=
JxiqVx4BAAAAKAAAALbLegAAAHQAAAAAAAEAAgAB//5kAAAAedRLHg==
'/*!*/;
### INSERT INTO `test`.`a`
### SET
###   @1=100 /* INT meta=0 nullable=1 is_null=0 */
# at 8047542
#160809 17:51:35 server id 1  end_log_pos 8047573 CRC32 0x56b36ca5  Xid = 24453
COMMIT/*!*/;

Starting with MySQL 5.6, mysqlbinlog can also read the binary log events from a remote master ("fake" replication slave). Reading binary logs is a great basis for CDC.
However, there are still some challenges:

* ROW-based replication is probably the easiest way to get the raw changes; otherwise we would have to parse SQL. At the same time, ROW-based replication binary logs don't contain the table metadata, i.e. they do not record the field names, only the field number (as in the example above, "@1" is the first field in table "a").
* We will need to somehow record and store the binary log positions so that the tool can be restarted at any time and proceed from the last position (like a MySQL replication slave).

Maxwell’s daemon (Maxwell = Mysql + Kafka), an application recently released by Zendesk, reads MySQL binlogs and writes row updates as JSON (it can write to Kafka, which is its primary goal, but can also write to stdout and can be extended for other purposes). Maxwell stores the metadata about MySQL tables and binary log events (and other metadata) inside MySQL, so it solves the potential issues from the above list. Here is a quick demo of Maxwell:

Session 1 (insert into MySQL):

mysql> insert into a (i) values (151);
Query OK, 1 row affected (0.00 sec)

mysql> update a set i = 300 limit 5;
Query OK, 5 rows affected (0.01 sec)
Rows matched: 5  Changed: 5  Warnings: 0

Session 2 (starting Maxwell):

$ ./bin/maxwell --user='maxwell' --password='maxwell' --host='' --producer=stdout
16:00:15,303 INFO  Maxwell - Maxwell is booting (StdoutProducer), starting at BinlogPosition[master.000001:15494460]
16:00:15,327 INFO  TransportImpl - connecting to host:, port: 3306
16:00:15,350 INFO  TransportImpl - connected to host:, port: 3306, context: AbstractTransport.Context[threadId=9,...
16:00:15,350 INFO AuthenticatorImpl - start to login, user: maxwell, host:, port: 3306
16:00:15,354 INFO AuthenticatorImpl - login successfully, user: maxwell, detail: OKPacket[packetMarker=0,affectedRows=0,insertId=0,serverStatus=2,warningCount=0,message=]
16:00:15,533 INFO MysqlSavedSchema - Restoring schema id 1 (last modified at BinlogPosition[master.000001:3921])
{"database":"test","table":"a","type":"insert","ts":1472937475,"xid":211209,"commit":true,"data":{"i":151}}
{"database":"test","table":"a","type":"insert","ts":1472937475,"xid":211209,"commit":true,"data":{"i":151}}
{"database":"test","table":"a","type":"update","ts":1472937535,"xid":211333,"data":{"i":300},"old":{"i":150}}
{"database":"test","table":"a","type":"update","ts":1472937535,"xid":211333,"data":{"i":300},"old":{"i":150}}
{"database":"test","table":"a","type":"update","ts":1472937535,"xid":211333,"data":{"i":300},"old":{"i":150}}
{"database":"test","table":"a","type":"update","ts":1472937535,"xid":211333,"data":{"i":300},"old":{"i":150}}
{"database":"test","table":"a","type":"update","ts":1472937535,"xid":211333,"commit":true,"data":{"i":300},"old":{"i":150}}

As we can see in this example, Maxwell gets the events from the MySQL replication stream and outputs them to stdout (if we change the producer, it can save them to Apache Kafka).

Saving binlog events to MySQL document store or MongoDB

If we want to save the events somewhere else, we can use MongoDB or MySQL JSON fields and the document store (as Maxwell provides us with JSON documents). For a simple proof of concept, I've created a nodeJS script to implement a CDC "pipeline":

var mysqlx = require('mysqlx');
var mySession = mysqlx.getSession({
    host: '',
    port: 33060,
    dbUser: 'root',
    dbPassword: 'xxx'
});

process.on('SIGINT', function() {
    console.log("Caught interrupt signal.
Exiting...");
    process.exit()
});

process.stdin.setEncoding('utf8');

process.stdin.on('readable', () => {
    var chunk = process.stdin.read();
    if (chunk != null) {
        process.stdout.write(`data: ${chunk}`);
        mySession.then(session => {
            session.getSchema("mysqlcdc").getCollection("mysqlcdc")
                .add(JSON.parse(chunk))
                .execute(function (row) {
                    // can log something here
                }).catch(err => {
                    console.log(err);
                })
                .then(function (notices) {
                    console.log("Wrote to MySQL: " + JSON.stringify(notices))
                });
        }).catch(function (err) {
            console.log(err);
            process.exit();
        });
    }
});

process.stdin.on('end', () => {
    process.stdout.write('end');
    process.stdin.resume();
});

And to run it we can use the pipeline:

$ ./bin/maxwell --user='maxwell' --password='maxwell' --host='' --producer=stdout --log_level=ERROR | node ./maxwell_to_mysql.js

The same approach can be used to save the CDC events to MongoDB with mongoimport:

$ ./bin/maxwell --user='maxwell' --password='maxwell' --host='' --producer=stdout --log_level=ERROR | mongoimport -d mysqlcdc -c mysqlcdc --host localhost:27017

Reacting to binary log events: asynchronous triggers

In the above example, we only recorded the binary log events. Now we can add "reactions". One practical application is re-implementing MySQL triggers as something more performant. MySQL triggers are executed for each row, and are synchronous (the query will not return until the trigger event finishes). This is known to cause poor performance, and can significantly slow down bulk operations (i.e., "load data infile" or "insert into ... values (...), (...)"). With triggers, MySQL has to process the "bulk" operations row by row, killing performance. In addition, when using statement-based replication, triggers on the slave can slow down the replication thread (this is much less relevant nowadays with ROW-based replication and potentially multithreaded slaves).
With the ability to read binary logs from MySQL (using Maxwell), we can process the events and re-implement triggers in asynchronous mode, without delaying MySQL operations. As Maxwell gives us a JSON document with the "new" and "old" values (with the default option binlog_row_image=FULL, MySQL records the previous values for updates and deletes), we can use it to create triggers.

Not all triggers can be easily re-implemented based on the binary logs. However, in my experience most of the triggers in MySQL are used for:

* auditing (if you deleted a row, what was the previous value, and/or who did it and when)
* enriching the existing table (i.e., updating a field in the same table)

Here is a quick algorithm for how to re-implement the triggers with Maxwell:

1. Find the trigger table and trigger event text (SQL)
2. Create an app or a script to parse the JSON for the trigger table
3. Create a new version of the SQL, changing NEW.field to "data.field" (from the JSON) and OLD.field to "old.field" (from the JSON)

For example, if I want to audit all deletes in the "transactions" table, I can do it with Maxwell and a simple Python script (do not use this in production, it is a very basic sample):

import json,sys

line = sys.stdin.readline()
while line:
    print line,
    obj = json.loads(line)
    if obj["type"] == "delete":
        print "INSERT INTO transactions_delete_log VALUES ('" + str(obj["data"]) + "', Now() )"
    line = sys.stdin.readline()

MySQL:

mysql> delete from transactions where user_id = 2;
Query OK, 1 row affected (0.00 sec)

Maxwell pipeline:

$ ./bin/maxwell --user='maxwell' --password='maxwell' --host='' --producer=stdout --log_level=ERROR | python
{"database":"test","table":"transactions","type":"delete","ts":1472942384,"xid":214395,"commit":true,"data":{"id":2,"user_id":2,"value":2,"last_updated":"2016-09-03 22:39:31"}}
INSERT INTO transactions_delete_log VALUES ('{u'last_updated': u'2016-09-03 22:39:31', u'user_id': 2, u'id': 2, u'value': 2}', Now() )

Maxwell limitations

Maxwell was designed for MySQL 5.6
with ROW-based replication. Although it can work with MySQL 5.7, it does not support the new MySQL 5.7 data types (i.e., JSON fields). Maxwell does not support GTID and can't fail over based on GTID (it can parse events with GTID, though).

Conclusion

Streaming MySQL binary logs (for example, with the Maxwell application) can help implement CDC for auditing and other purposes, and also implement asynchronous triggers (removing the MySQL-level triggers can increase MySQL performance).
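One last note on the audit example above: it interpolates row data directly into the SQL string, which the post itself warns is not production-ready. A sketch of a parameterized variant (the helper name is mine; it uses the %s placeholder style of Python MySQL drivers such as PyMySQL, and the transactions_delete_log table from the example):

```python
import json

def delete_audit_statement(event):
    """Build a parameterized audit INSERT for a Maxwell "delete" event.
    Returning (sql, params) and letting the driver do the quoting avoids
    the injection/escaping problems of string concatenation."""
    if event.get("type") != "delete":
        return None
    return ("INSERT INTO transactions_delete_log VALUES (%s, Now())",
            (json.dumps(event["data"]),))

# Usage with a delete event like the one Maxwell printed above.
event = json.loads('{"database":"test","table":"transactions","type":"delete",'
                   '"ts":1472942384,"xid":214395,"data":{"id":2,"user_id":2}}')
sql, params = delete_audit_statement(event)
print(sql)     # INSERT INTO transactions_delete_log VALUES (%s, Now())
print(params)
```

In a real consumer, the (sql, params) pair would be passed to cursor.execute() of the MySQL driver.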
Posted 15 days ago by Marco Tusa
This blog post discusses ProxySQL and MHA integration, and how they work together.

MHA (Master High Availability Manager and tools for MySQL) is almost fully integrated with the ProxySQL process. This means you can count on the standard MHA features to manage failover, and on ProxySQL to manage the traffic and shift it from one server to another. This is one of the main differences between MHA with a VIP and MHA with ProxySQL: with MHA/ProxySQL, there is no need to move IPs or re-define DNS.

The following is an example of an MHA configuration file for use with ProxySQL:

[server default]
user=mha
password=mha
ssh_user=root
repl_password=replica
manager_log=/tmp/mha.log
manager_workdir=/tmp
remote_workdir=/tmp
master_binlog_dir=/opt/mysql_instances/mha1/logs
client_bindir=/opt/mysql_templates/mysql-57/bin
client_libdir=/opt/mysql_templates/mysql-57/lib
master_ip_failover_script=/opt/tools/mha/mha4mysql-manager/samples/scripts/master_ip_failover
master_ip_online_change_script=/opt/tools/mha/mha4mysql-manager/samples/scripts/master_ip_online_change
log_level=debug

[server1]
hostname=mha1r
ip=
candidate_master=1

[server2]
hostname=mha2r
ip=
candidate_master=1

[server3]
hostname=mha3r
ip=
candidate_master=1

[server4]
hostname=mha4r
ip=
no_master=1

NOTE: Be sure to comment out the "FIX ME" lines in the sample scripts. After that, just install MHA as you normally would.

In ProxySQL, be sure to have all the MHA users and the servers set. When using ProxySQL with standard replication, it's important to set additional privileges for the ProxySQL monitor user: it must also have "REPLICATION CLIENT", or it will fail to check the slave lag. The servers MUST have a defined value for the attribute max_replication_lag, or the check will be ignored.
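Since the MHA configuration file is INI-style, it can also be inspected programmatically. A small sketch (a hypothetical helper, not part of MHA) that lists the hosts eligible for promotion, using a trimmed copy of the config above:

```python
import configparser

# A trimmed, inlined copy of the MHA config shown above.
MHA_CONF = """\
[server default]
user=mha
manager_log=/tmp/mha.log

[server1]
hostname=mha1r
candidate_master=1

[server4]
hostname=mha4r
no_master=1
"""

def candidate_masters(conf_text):
    """Return the hostnames flagged candidate_master=1 (promotable by MHA)."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    hosts = []
    for section in cp.sections():
        if section == "server default":
            continue
        if cp.get(section, "candidate_master", fallback="0") == "1":
            hosts.append(cp.get(section, "hostname"))
    return hosts

print(candidate_masters(MHA_CONF))  # ['mha1r']
```

This is only a convenience for reviewing a topology; MHA itself parses the file with its own Perl tooling.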
As a reminder:

INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('',600,3306,1000,0);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('',601,3306,1000,10);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('',601,3306,1000,10);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('',601,3306,1000,10);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_replication_lag) VALUES ('',601,3306,1000,10);
INSERT INTO mysql_replication_hostgroups VALUES (600,601);
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;

insert into mysql_query_rules (username,destination_hostgroup,active) values('mha_W',600,1);
insert into mysql_query_rules (username,destination_hostgroup,active) values('mha_R',601,1);
insert into mysql_query_rules (username,destination_hostgroup,active,retries,match_digest) values('mha_RW',600,1,3,'^SELECT.*FOR UPDATE');
insert into mysql_query_rules (username,destination_hostgroup,active,retries,match_digest) values('mha_RW',601,1,3,'^SELECT');
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;

insert into mysql_users (username,password,active,default_hostgroup,default_schema) values ('mha_W','test',1,600,'test_mha');
insert into mysql_users (username,password,active,default_hostgroup,default_schema) values ('mha_R','test',1,601,'test_mha');
insert into mysql_users (username,password,active,default_hostgroup,default_schema) values ('mha_RW','test',1,600,'test_mha');
LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;

OK, now that all is ready, let's rock'n'roll!

Controlled failover

First of all, masterha_manager should not be running, or you will get an error.
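Before pushing traffic, it can help to sanity-check how the query rules above will route statements. ProxySQL evaluates rules in order and the first match wins, which is why the '^SELECT.*FOR UPDATE' rule must come before the plain '^SELECT' rule. A simplified Python model of that matching (an illustration only, not ProxySQL's actual logic):

```python
import re

# The two mha_RW rules from above, in evaluation order, plus the user's
# default hostgroup for anything that matches neither rule.
RULES = [
    (r"^SELECT.*FOR UPDATE", 600),  # locking reads must go to the master
    (r"^SELECT", 601),              # plain reads go to the slaves
]
DEFAULT_HOSTGROUP = 600             # mha_RW's default_hostgroup (writes)

def route(query):
    """Return the hostgroup the first matching rule assigns to the query."""
    for pattern, hostgroup in RULES:
        if re.match(pattern, query):
            return hostgroup
    return DEFAULT_HOSTGROUP

print(route("SELECT * FROM t WHERE id = 1"))            # 601
print(route("SELECT * FROM t WHERE id = 1 FOR UPDATE")) # 600
print(route("UPDATE t SET v = 1 WHERE id = 1"))         # 600
```

If the two SELECT rules were swapped, "SELECT ... FOR UPDATE" would match '^SELECT' first and be sent to a read-only slave, where the lock would be useless.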
Now let's start some traffic.

Write:

sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-host= --mysql-port=3311 --mysql-user=mha_RW --mysql-password=test --mysql-db=mha_test --db-driver=mysql --oltp-tables-count=50 --oltp-table-size=5000 --max-requests=0 --max-time=900 --oltp-point-selects=5 --oltp-read-only=off --oltp-dist-type=uniform --oltp-reconnect-mode=transaction --oltp-skip-trx=off --num-threads=10 --report-interval=10 --mysql-ignore-errors=all run

Read only:

sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-host= --mysql-port=3311 --mysql-user=mha_RW --mysql-password=test --mysql-db=mha_test --db-driver=mysql --oltp-tables-count=50 --oltp-table-size=5000 --max-requests=0 --max-time=900 --oltp-point-selects=5 --oltp-read-only=on --num-threads=10 --oltp-reconnect-mode=query --oltp-skip-trx=on --report-interval=10 --mysql-ignore-errors=all run

Let it run for a bit, then check:

mysql> select * from stats_mysql_connection_pool where hostgroup between 600 and 601 order by hostgroup,srv_host desc;
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host      | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 600       |               | 3306     | ONLINE | 10       | 0        | 20     | 0       | 551256  | 44307633        | 0               | 285        |

mysql> select * from stats_mysql_connection_pool where hostgroup between 600 and 601 order by hostgroup,srv_host desc;
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host      | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_ms |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 600       |               | 3306     | ONLINE | 10       | 0        | 10     | 0       | 123457  | 9922280         | 0               | 658        |

> /tmp/mha.log 2>&1

Then let's start a load again. Finally, go to the MySQL node that hosts the master and find the mysqld processes:

ps aux | grep mysql
mysql 18755 0.0 0.0 113248 1608 pts/0 S Aug28 0:00 /bin/sh /opt/mysql_templates/mysql-57/bin/mysqld_safe --defaults-file=/opt/mysql_instances/mha1/my.cnf
mysql 21975 3.2 30.4 4398248 941748 pts/0 Sl Aug28 93:21 /opt/mysql_templates/mysql-57/bin/mysqld --defaults-file=/opt/mysql_instances/mha1/my.cnf --basedir=/opt/mysql_templates/mysql-57/ --datadir=/opt/mysql_instances/mha1/data --plugin-dir=/opt/mysql_templates/mysql-57//lib/plugin --log-error=/opt/mysql_instances/mha1/mysql-3306.err --open-files-limit=65536 --pid-file=/opt/mysql_instances/mha1/ --socket=/opt/mysql_instances/mha1/mysql.sock --port=3306

And kill the MySQL process:

kill -9 21975 18755

As before, check what happened on the application side:

[  80s] threads: 4, tps: 213.20, reads: 1919.10, writes: 853.20, response time: 28.74ms (95%), errors: 0.00, reconnects:  0.00
[  90s] threads: 4, tps: 211.30, reads: 1901.80, writes: 844.70, response time: 28.63ms (95%), errors: 0.00, reconnects:  0.00
[ 100s] threads: 4, tps: 211.90, reads: 1906.40, writes: 847.90, response time: 28.60ms (95%), errors: 0.00, reconnects:  0.00
[ 110s] threads: 4, tps: 211.10, reads: 1903.10, writes: 845.30, response time: 29.27ms (95%), errors: 0.30, reconnects:  0.00
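The sysbench interval lines above have a regular shape, so a failover test can also be judged programmatically. A small sketch (the helper name is mine) that flags the intervals reporting errors or reconnects, using two of the lines above:

```python
import re

# Two sysbench --report-interval lines copied from the output above.
SAMPLE = """\
[  80s] threads: 4, tps: 213.20, reads: 1919.10, writes: 853.20, response time: 28.74ms (95%), errors: 0.00, reconnects:  0.00
[ 110s] threads: 4, tps: 211.10, reads: 1903.10, writes: 845.30, response time: 29.27ms (95%), errors: 0.30, reconnects:  0.00
"""

PATTERN = re.compile(
    r"\[\s*(\d+)s\].*tps: ([\d.]+).*errors: ([\d.]+), reconnects:\s*([\d.]+)")

def intervals_with_errors(report):
    """Return the second marks of intervals with nonzero errors/reconnects."""
    hits = []
    for line in report.splitlines():
        m = PATTERN.search(line)
        if m and (float(m.group(3)) > 0 or float(m.group(4)) > 0):
            hits.append(int(m.group(1)))
    return hits

print(intervals_with_errors(SAMPLE))  # [110]
```

Here only the 110s interval shows errors (0.30), which is the moment the master was killed and ProxySQL shifted the traffic.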