
News

Posted over 4 years ago by Geir Hoydalsvik
The MySQL Development team is very happy to announce that MySQL 8.0.19 is now available for download at dev.mysql.com. In addition to bug fixes, there are a few new features added in this release. Please download 8.0.19 from dev.mysql.com or from the MySQL Yum, APT, or SUSE repositories.
Posted over 4 years ago by Oracle MySQL Group
MySQL will be in Dubai from January 13-15.

January 13, 7PM-10PM: MySQL Meetup at Dubai Internet City. Learn more: https://www.meetup.com/MySQL-User-Group-U-A-E/events/267697018/

January 14-15: Oracle Open World Dubai. Meet the MySQL team at our dedicated MySQL booth, and attend two great MySQL sessions, including a case study with Digital 14, a digital solutions company that delivers cyber defense and security intelligence solutions with MySQL 8.0.

OOW Dubai is located at the Dubai World Trade Centre, Sheikh Zayed Road, Dubai, United Arab Emirates. Learn more about the OOW Dubai location: https://www.oracle.com/middleeast/openworld/travel.html
Posted over 4 years ago by Sri Sakthivel Durai Pandian
So far, I have written two tutorial blogs about MySQL InnoDB Cluster. Those blogs describe the InnoDB Cluster configuration and how to integrate InnoDB Cluster with MySQL Router. You can get them through the links below.

MySQL InnoDB Cluster Tutorial 1 (Group Replication + MySQL Shell)
MySQL InnoDB Cluster Tutorial 2 (Integrating with MySQL Router)

In this blog I am going to explain the following two things:

1. How to switch the cluster to (single | multi) primary mode without downtime?
2. How to make a specific node the primary member without downtime?

I have already configured a three-node InnoDB Cluster with a single-primary topology.

MySQL 192.168.33.11:3306 ssl JS > cluster.getName();
first_InnoDB_cluster

MySQL 192.168.33.11:3306 ssl JS > \sql
Switching to SQL mode... Commands end with ;

MySQL 192.168.33.11:3306 ssl SQL > select channel_name,member_host,member_state,member_role,member_version from performance_schema.replication_group_members\G
*************************** 1. row ***************************
  channel_name: group_replication_applier
   member_host: sakthilabs11
  member_state: ONLINE
   member_role: PRIMARY
member_version: 8.0.18
*************************** 2. row ***************************
  channel_name: group_replication_applier
   member_host: sakthilabs12
  member_state: ONLINE
   member_role: SECONDARY
member_version: 8.0.18
*************************** 3. row ***************************
  channel_name: group_replication_applier
   member_host: sakthilabs13
  member_state: ONLINE
   member_role: SECONDARY
member_version: 8.0.18
3 rows in set (0.0070 sec)

Now it is time for the experiment.

1. How to switch the cluster to (single | multi) primary mode without downtime?

Right now my cluster topology is single-primary mode, as cluster.status() shows:

    "topologyMode": "Single-Primary",
    "groupInformationSourceMember": "sakthilabs11:3306"

To convert to multi-primary mode:
MySQL 192.168.33.11:3306 ssl sakthi JS > cluster.switchToMultiPrimaryMode();
Switching cluster 'first_InnoDB_cluster' to Multi-Primary mode...

Instance 'sakthilabs11:3306' remains PRIMARY.
Instance 'sakthilabs12:3306' was switched from SECONDARY to PRIMARY.
Instance 'sakthilabs13:3306' was switched from SECONDARY to PRIMARY.

The cluster successfully switched to Multi-Primary mode.

In plain MySQL Group Replication, you switch to a multi-primary topology by turning the variable group_replication_single_primary_mode to OFF, but that is not allowed while the group is active. So this is difficult to do without MySQL Shell.

To convert back to single-primary mode:

MySQL 192.168.33.11:3306 ssl sakthi JS > cluster.switchToSinglePrimaryMode();
Switching cluster 'first_InnoDB_cluster' to Single-Primary mode...

Instance 'sakthilabs11:3306' remains PRIMARY.
Instance 'sakthilabs12:3306' was switched from PRIMARY to SECONDARY.
Instance 'sakthilabs13:3306' was switched from PRIMARY to SECONDARY.

WARNING: Existing connections that expected a R/W connection must be disconnected, i.e. instances that became SECONDARY.

The cluster successfully switched to Single-Primary mode.

Perfect!

2. How to make a specific node the primary member without downtime?

Right now, sakthilabs12 is the primary member of my cluster:

    "primary": "sakthilabs12:3306",
    "ssl": "REQUIRED",
    "status": "OK",

Because of a maintenance task, I wanted to switch the primary member to sakthilabs11 without any downtime.

MySQL 192.168.33.11:3306 ssl sakthi JS > cluster.setPrimaryInstance('sakthilabs11:3306')
Setting instance 'sakthilabs11:3306' as the primary instance of cluster 'first_InnoDB_cluster'...

Instance 'sakthilabs11:3306' was switched from SECONDARY to PRIMARY.
Instance 'sakthilabs12:3306' was switched from PRIMARY to SECONDARY.
Instance 'sakthilabs13:3306' remains SECONDARY.

The instance 'sakthilabs11:3306' was successfully elected as primary.
    "primary": "sakthilabs11:3306",
    "ssl": "REQUIRED",
    "status": "OK",

Perfect! In plain Group Replication, you would perform the following steps to switch the primary node without any downtime:

mysql> select @@hostname;
+--------------+
| @@hostname   |
+--------------+
| sakthilabs13 |
+--------------+
1 row in set (0.00 sec)

mysql> show global variables like 'server_uuid';
+---------------+--------------------------------------+
| Variable_name | Value                                |
+---------------+--------------------------------------+
| server_uuid   | f374a06e-28a5-11ea-a6c6-080027bc6e8c |
+---------------+--------------------------------------+
1 row in set (0.00 sec)

mysql> select group_replication_set_as_primary('f374a06e-28a5-11ea-a6c6-080027bc6e8c');
+--------------------------------------------------------------------------+
| group_replication_set_as_primary('f374a06e-28a5-11ea-a6c6-080027bc6e8c') |
+--------------------------------------------------------------------------+
| Primary server switched to: f374a06e-28a5-11ea-a6c6-080027bc6e8c         |
+--------------------------------------------------------------------------+
1 row in set (0.08 sec)

mysql> select member_host,member_role from performance_schema.replication_group_members;
+--------------+-------------+
| member_host  | member_role |
+--------------+-------------+
| sakthilabs11 | SECONDARY   |
| sakthilabs12 | SECONDARY   |
| sakthilabs13 | PRIMARY     |
+--------------+-------------+
3 rows in set (0.02 sec)

I hope this blog will help someone who has started to learn MySQL InnoDB Cluster. I'll publish my next blog soon. Thanks!
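The role transitions printed by the Shell follow a simple rule. Here is a small Python sketch (purely illustrative, not MySQL Shell code; the function name and data layout are my own) of what setPrimaryInstance() does to the member roles:

```python
# Illustrative sketch: the chosen instance becomes PRIMARY and every other
# member ends up SECONDARY, matching the transition messages shown above.

def set_primary_instance(members, new_primary):
    """Return the new role mapping after promoting `new_primary`.

    members: dict of host -> role ("PRIMARY" or "SECONDARY").
    """
    if new_primary not in members:
        raise ValueError(f"{new_primary} is not a cluster member")
    return {host: ("PRIMARY" if host == new_primary else "SECONDARY")
            for host in members}

cluster = {
    "sakthilabs11:3306": "SECONDARY",
    "sakthilabs12:3306": "PRIMARY",
    "sakthilabs13:3306": "SECONDARY",
}
print(set_primary_instance(cluster, "sakthilabs11:3306"))
```

Note how the old primary (sakthilabs12) is demoted in the same step, with no intermediate state where two members hold the PRIMARY role.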
Posted over 4 years ago by Marco Tusa
In the many years we have used MySQL, we got accustomed to the fact that an upgrade from MySQL 5.7.11 to 5.7.12 was a minor event. If something went wrong, we could roll back the binaries and be happy again. From MySQL 8, this is no longer true. Any upgrade, even a minor one, is seen as irreversible. (This is valid for Percona Server for MySQL as well.)

Say we have MySQL 8.0.17 and we upgrade to 8.0.18. In our MySQL log at the start, we will have this:

[System] [MY-010116] [Server] /opt/mysql_templates/mysql-8P/bin/mysqld (mysqld 8.0.18) starting as process 13242
...
[System] [MY-013381] [Server] Server upgrade from '80017' to '80018' started.
...
[System] [MY-013381] [Server] Server upgrade from '80017' to '80018' completed.
...
[System] [MY-010931] [Server] /opt/mysql_templates/mysql-8P/bin/mysqld: ready for connections. Version: '8.0.18'

All good, we made it! But then we realize that the just-upgraded instance has something that doesn't work as expected. At this point, we may want to roll back our installation and put back the 8.0.17 binaries. But once we have done that and we start MySQL again, our instance will fail to start and our MySQL log shows:

... [Server] /opt/mysql_templates/mysql-8P/bin/mysqld (mysqld 8.0.17) starting as process 26663
... [ERROR] [MY-013171] [InnoDB] Cannot boot server version 80017 on data directory built by version 80018. Downgrade is not supported
... [System] [MY-010910] [Server] /opt/mysql_templates/mysql-8P/bin/mysqld: Shutdown complete (mysqld 8.0.17)  MySQL Community Server - GPL.

We cannot roll back. This is clearly stated in the MySQL documentation. You may not like it, but that is how it is now. Here are some things to think about.

Upgrading a minor MySQL version in the past was a trivial activity, and very often automation was doing it silently. That was not best practice, but it was the reality. At the same time, testing was very often superficial, if done at all. That attitude must change.
You CANNOT do an upgrade to a MINOR VERSION without:

Full backup/restore testing (the backup must be a logical one, taken with mysqldump, mydumper, or any other tool that allows you to export the data)
Thorough functional/application testing IN A STAGING environment

You should disable ANY automation that performs silent/untested upgrades in production. Also keep in mind that IF you take a backup, and you have terabytes or even just a few GB of data, the rollback will not be immediate at all. Given that, you should:

Plan carefully, and use a proper platform for the tests (like a fully populated staging environment).
Use as much real load as possible from your application.
Upgrade from the last server of the slave chain up.
Do not rush to catch/implement the latest shiny feature.

One last, but NOT SUPPORTED, approach (see the documentation) is to rely on replication, as described in Replicating from MySQL 8.0 to MySQL 5.7. But remember, this is done at your own risk, given it is not fully supported by MySQL/Oracle.

These are the minimal steps needed to perform minor version upgrades and to be safe. Great MySQL to everybody!

Reference: Bug #95216 - opened some time ago by our Ceri Williams.
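The rule behind that "Cannot boot server version 80017 on data directory built by version 80018" error can be sketched in a few lines. This is purely illustrative Python, not MySQL source code; the function name is my own:

```python
# Illustrative sketch of the rule the server enforces from MySQL 8.0 on:
# a binary refuses to boot a data directory built by a NEWER version,
# even for a patch-level difference. Upgrades (binary >= datadir) are fine.

def can_boot(binary_version, datadir_version):
    """True if `binary_version` may boot a datadir built by `datadir_version`."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(binary_version) >= as_tuple(datadir_version)

print(can_boot("8.0.18", "8.0.17"))  # upgrade path: allowed
print(can_boot("8.0.17", "8.0.18"))  # downgrade: refused, as in the log above
```

This is exactly why the rollback plan must be a logical backup plus restore, not a binary swap.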
Posted over 4 years ago by Dave Stokes
I am writing a tutorial on the MySQL Document Store for the sold-out (sorry) pre-FOSDEM MySQL Days. For those who do not write such exercises: they are often a difficult task, as you have limited time to convey information, need to provide vivid examples, and must create exercises that give a good idea of what the topic is all about. And my personal preference is to write once and reuse the tutorial at other events (please let me know if you have such an event).

Indexing records is a well-known performance step when creating databases, SQL or NoSQL. Back in June of 2017 I wrote a blog post on using createIndex() to index documents in the MySQL Document Store. As part of creating the tutorial I referred to that blog post as a reference, and was quite surprised that it was not working.

What happened? Well, back in 8.0.11 the function was revised: it is no longer a series of chained calls but a function that receives two parameters. Details can be found at https://dev.mysql.com/doc/x-devapi-userguide/en/collection-indexing.html

So what follows is an update to the old blog post with the new version of the function.

Indexing and the MySQL Document Store

The MySQL Document Store allows developers who do not know Structured Query Language (SQL) to use MySQL as a highly efficient NoSQL document store. It has several great features, but databases, NoSQL and SQL alike, have a problem searching through data efficiently. To help searching, you can add an index on certain fields to go directly to certain records. Traditional databases, like MySQL, allow you to add indexes, and NoSQL databases, for example MongoDB, let you add indexes too.
The MySQL Document Store also allows indexing. So let's take a quick look at some simple data and then create an index.

db.b.find()
{
    "_id": "00005e163bc70000000000000001",
    "nbr": 1
}
{
    "_id": "00005e163bc70000000000000002",
    "nbr": 3
}
{
    "_id": "00005e163bc70000000000000003",
    "nbr": 5
}
{
    "_id": "00005e163bc70000000000000004",
    "nbr": 7
}
{
    "_id": "00005e163bc70000000000000005",
    "nbr": 99
}
{
    "_id": "00005e163bc70000000000000006",
    "nbr": 2
}
6 documents in set (0.0037 sec)

To index the nbr field with the 8.0.11 syntax we need to specify the name of the index and then the parameters for the index. In the example below we name the index nbr_idx and provide a JSON object of {fields:[{"field": "$.nbr", "type":"INT", required:true}]} with the desired settings, which is called the index definition.

db.b.createIndex("nbr_idx", {fields:[{"field": "$.nbr", "type":"INT", required:true}]});

The index name is up to you, but please keep it useful and relevant.

What You Specify to Create an Index

Rule #1 -- All the values in a key/value pair to be indexed MUST be of the same type! So no '1,2,3,Ralph,3.4', please.

The JSON document used for defining an index, such as {fields: [{field: '$.username', type: 'TEXT'}]}, can contain the following:

fields: an array of at least one IndexField object, each of which describes a JSON document field to be included in the index.

A single IndexField description consists of the following fields:

field: a string with the full document path to the document member or field to be indexed.
type: a string with one of the supported column types to map the field to. For numeric types, the optional UNSIGNED keyword can follow. For the TEXT type you can define the length to consider for indexing (you do not need to index the whole TEXT column, just enough to narrow down your search).
required: an optional boolean, set to true if the field is required to exist in the document. Defaults to false for all types except GEOJSON, which defaults to true.
options: an optional integer, used as special option flags when decoding GEOJSON data.
srid: an optional integer, the srid value to use when decoding GEOJSON data.
array: (for MySQL 8.0.17 and later) an optional boolean, set to true if the field contains arrays. The default value is false.

type: an optional string which defines the type of index, one of INDEX or SPATIAL. The default is INDEX and can be omitted.

Quick Review

Old syntax (MySQL 8.0.10 and earlier):
db.foo.createIndex("nbr_idx").field("nbr","INTEGER", false).execute()

New syntax (MySQL 8.0.11 and later):
db.b.createIndex("nbr_idx", {fields:[{"field": "$.nbr", "type":"INT", required:true}]});

All opinions expressed in this blog are those of Dave Stokes, who is actually amazed to find anyone else agreeing with him.
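To pull the index-definition rules together, here is a small Python sketch of a validator. The helper is hypothetical (it is not part of the X DevAPI); it simply applies the defaults documented above to a definition dict:

```python
# Hypothetical helper (my own, not part of the X DevAPI) that normalizes an
# index definition dict, applying the documented defaults:
# - fields must contain at least one IndexField with 'field' and 'type'
# - 'required' defaults to false, except GEOJSON which defaults to true
# - the index 'type' defaults to INDEX (the other option is SPATIAL)

def validate_index_definition(defn):
    fields = defn.get("fields")
    if not fields:
        raise ValueError("an index definition needs at least one IndexField")
    for f in fields:
        if "field" not in f or "type" not in f:
            raise ValueError("each IndexField needs 'field' and 'type'")
        f.setdefault("required", f["type"] == "GEOJSON")
    defn.setdefault("type", "INDEX")
    return defn

print(validate_index_definition({"fields": [{"field": "$.nbr", "type": "INT"}]}))
```

Running the nbr_idx definition through it fills in required:false and type:"INDEX", which matches what the server would assume if you omitted them.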
Posted over 4 years ago by MySQL Performance Blog
If you are thinking of using ProxySQL in your Percona XtraDB Cluster environment, I'll explain how to use ProxySQL 2 for failover tasks.

How to Test

ProxySQL uses the "weight" column to define which node is the WRITER. For this example, I'll use the following list of hostnames and IPs for reference:

+-----------+----------------+
| node_name | ip             |
+-----------+----------------+
| pxc1      | 192.168.88.134 |
| pxc2      | 192.168.88.125 |
| pxc3      | 192.168.88.132 |
+-----------+----------------+

My current WRITER node is the "pxc1" node, but how can I see who the current WRITER is? It's easy; just run the following query:

proxysql> select hostgroup_id, comment, hostname, status, weight from runtime_mysql_servers;

This is the output:

+--------------+---------+----------------+--------+--------+
| hostgroup_id | comment | hostname       | status | weight |
+--------------+---------+----------------+--------+--------+
| 11           | pxc2    | 192.168.88.125 | ONLINE | 100    |
| 12           | pxc3    | 192.168.88.132 | ONLINE | 100    |
| 12           | pxc2    | 192.168.88.125 | ONLINE | 100    |
| 10           | pxc1    | 192.168.88.134 | ONLINE | 1000   | <--- WRITER
| 11           | pxc1    | 192.168.88.134 | ONLINE | 1000   |
| 11           | pxc3    | 192.168.88.132 | ONLINE | 100    |
+--------------+---------+----------------+--------+--------+

Now, for maintenance reasons, I need to fail over to the "pxc2" node because on "pxc1" I need to make some hardware changes (like increasing the physical memory or the disk partition). There are 3 steps.

1. Move the WRITER role to the pxc2 node. We need to decrease the "weight" value on the current WRITER and increase the "weight" value on "pxc2", the new WRITER.

proxysql> update mysql_servers set weight=100 where hostgroup_id=11 and hostname='';
proxysql> update mysql_servers set weight=1000 where hostname='';
proxysql> LOAD MYSQL SERVERS TO RUNTIME;

Note: fill in the empty hostname values with the correct IPs for your nodes.

2.
Take the "pxc1" node out of the cluster so it stops receiving SELECTs, because after the failover it will be a READER node.

mysql> set global wsrep_reject_queries=all;

3. This step is optional, in case you need to stop/start MySQL and want to keep this node out of the cluster across restarts:

$ vim /PATH/TO/my.cnf
wsrep_reject_queries=all

Finally, this should be the output after the failover:

admin ((none))>select hostgroup_id, comment, hostname, status, weight from runtime_mysql_servers;
+--------------+---------+----------------+--------+--------+
| hostgroup_id | comment | hostname       | status | weight |
+--------------+---------+----------------+--------+--------+
| 11           | pxc2    | 192.168.88.125 | ONLINE | 1000   |
| 12           | pxc3    | 192.168.88.132 | ONLINE | 100    |
| 14           | pxc1    | 192.168.88.134 | ONLINE | 100    | <--- OFFLINE GROUP
| 10           | pxc2    | 192.168.88.125 | ONLINE | 1000   | <--- NEW WRITER
| 11           | pxc3    | 192.168.88.132 | ONLINE | 100    |
+--------------+---------+----------------+--------+--------+

The "pxc2" node is the new WRITER, and the "pxc1" node was moved to hostgroup 14 (offline_hostgroup), but it is still online and in sync, without receiving any queries. In case you restarted MySQL (the optional step 3), it will remain out of ProxySQL; to let it accept SELECTs again:

mysql> set global wsrep_reject_queries=none;

And don't forget to remove the line from your my.cnf file (optional step 3), if you added it.

Observations

Why do you need to move the WRITER node? Because sometimes a node needs to be taken out for maintenance, or the server is being decommissioned, etc.

On ProxySQL 1.x you needed to configure a scheduler to run an external script that performed the backend health checks and updated the database server states. The "scheduler" table remains in ProxySQL 2 to keep consistency with previous versions, but now you don't need to define a scheduler, because there is a new feature: this is supported natively for Galera Cluster and Percona XtraDB Cluster.
Another thing to keep in mind for this new feature is that it checks MySQL status by monitoring the following statuses/variables:

read_only
wsrep_desync
wsrep_reject_queries
wsrep_sst_donor_rejects_queries
wsrep_local_state
wsrep_local_recv_queue
wsrep_cluster_status

The variable "pxc_maint_mode" is no longer used by ProxySQL 2. There was a bug reported before about this parameter for ProxySQL 2, because we used it a lot with ProxySQL 1, but now I recommend using "wsrep_reject_queries" to remove a node from rotation when you need to work on a particular server.

Also, in case the current WRITER goes down, ProxySQL will fail over to another MySQL server automatically, picking the next highest "weight" value, and when the server comes back, it will move the WRITER role back to the previous server.

Last but not least, here are more details on the new feature/table:

CREATE TABLE mysql_galera_hostgroups (
    writer_hostgroup INT CHECK (writer_hostgroup>=0) NOT NULL PRIMARY KEY,
    backup_writer_hostgroup INT CHECK (backup_writer_hostgroup>=0 AND backup_writer_hostgroup<>writer_hostgroup) NOT NULL,
    reader_hostgroup INT NOT NULL CHECK (reader_hostgroup<>writer_hostgroup AND backup_writer_hostgroup<>reader_hostgroup AND reader_hostgroup>0),
    offline_hostgroup INT NOT NULL CHECK (offline_hostgroup<>writer_hostgroup AND offline_hostgroup<>reader_hostgroup AND backup_writer_hostgroup<>offline_hostgroup AND offline_hostgroup>=0),
    active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 1,
    max_writers INT NOT NULL CHECK (max_writers >= 0) DEFAULT 1,
    writer_is_also_reader INT CHECK (writer_is_also_reader IN (0,1,2)) NOT NULL DEFAULT 0,
    max_transactions_behind INT CHECK (max_transactions_behind>=0) NOT NULL DEFAULT 0,
    comment VARCHAR,
    UNIQUE (reader_hostgroup),
    UNIQUE (offline_hostgroup),
    UNIQUE (backup_writer_hostgroup));

There are a few settings in this table; I'll explain them below.
writer_hostgroup: will contain the writers (read_only=0 in a master/slave topology) or the writer nodes in Percona XtraDB Cluster; the latter depends on the "max_writers" config.
backup_writer_hostgroup: the hostgroup that will contain the candidate writer servers.
reader_hostgroup: will contain the readers (read_only=1 in a master/slave topology) or the reader nodes in Percona XtraDB Cluster; the latter also depends on the "max_writers" config.
offline_hostgroup: will contain all nodes deemed not usable.
active: 1/0, whether this configuration should be used or not.
max_writers: how many nodes you want to write to at the same time; you can set it up to the number of nodes. The default is 1.
writer_is_also_reader: values 0/1/2; I'll explain these later, but the default value is 1.
max_transactions_behind: based on the wsrep_local_recv_queue status; if a node exceeds "max_transactions_behind", it will be marked as SHUNNED and will not receive more traffic.
comment: a short description of this config.

Summary

This new feature makes it easy to manage/configure Percona XtraDB Cluster/Galera nodes. You only need to update the "weight" column, load to the runtime table, and check the "runtime_mysql_servers" table to make sure the WRITER node was changed. I hope you find this post helpful! Check out some of my previous blogs:

ProxySQL Experimental Feature: Native ProxySQL Clustering
How to Add More Nodes to an Existing ProxySQL Cluster
How to Manage ProxySQL Cluster with Core and Satellite Nodes
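The writer-selection rule described above (highest "weight" among ONLINE servers in the writer hostgroup) can be sketched in a few lines. This is an illustrative Python model, not ProxySQL code; the row layout mimics the runtime_mysql_servers output shown earlier:

```python
# Illustrative sketch of writer selection: among ONLINE servers in the writer
# hostgroup (10 in this post's setup), the one with the highest weight wins.

def pick_writer(servers, writer_hostgroup=10):
    candidates = [s for s in servers
                  if s["hostgroup_id"] == writer_hostgroup
                  and s["status"] == "ONLINE"]
    if not candidates:
        return None  # no usable writer; ProxySQL would promote a backup writer
    return max(candidates, key=lambda s: s["weight"])["hostname"]

servers = [
    {"hostgroup_id": 10, "hostname": "192.168.88.134", "status": "ONLINE", "weight": 1000},
    {"hostgroup_id": 11, "hostname": "192.168.88.125", "status": "ONLINE", "weight": 100},
    {"hostgroup_id": 11, "hostname": "192.168.88.132", "status": "ONLINE", "weight": 100},
]
print(pick_writer(servers))
```

This is why the failover in the post is nothing more than swapping the weight values and loading them to runtime.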
Posted over 4 years ago by Severalnines
The management of database performance is an area that administrators often find themselves contributing more time to than they expected. Monitoring and reacting to production database performance issues is one of the most critical tasks in a database administrator's job. It is an ongoing process that requires constant care. Applications and their underlying databases usually evolve with time: they grow in size, number of users, and workload, and schemas change along with code changes.

Long-running queries are sometimes unavoidable in a MySQL database, and in some circumstances a long-running query may be a harmful event. If you care about your database, optimizing query performance and detecting long-running queries must be done regularly.

In this blog, we are going to take a more in-depth look at the actual database workload, especially on the running-queries side. We will check how to track queries, what kind of information we can find in MySQL metadata, and what tools to use to analyze such queries.

Handling Long-Running Queries

Let's start with checking for long-running queries. First of all, we have to know the nature of the query: whether it is expected to be long-running or short-running. Some analytic and batch operations are supposed to be long-running queries, so we can skip those for now. Also, depending on the table size, modifying a table structure with the ALTER command can be a long-running operation (especially in MySQL Galera Clusters). Common causes of unexpectedly long-running queries include:

Table lock - the table is locked by a global lock or an explicit table lock while the query is trying to access it.
Inefficient query - the query uses non-indexed columns for lookups or joins, so MySQL takes longer to match the condition.
Deadlock - a query is waiting to access the same rows that are locked by another request.
Dataset does not fit into RAM - if your working-set data fits into the buffer pool cache, SELECT queries will usually be relatively fast.
Suboptimal hardware resources - this could be slow disks, RAID rebuilding, a saturated network, etc.

If you see a query take longer than usual to execute, do investigate it.

Using the MySQL Show Process List

mysql> SHOW PROCESSLIST;

This is usually the first thing you run in the case of performance issues. SHOW PROCESSLIST is an internal MySQL command which shows you which threads are running. You can also see this information in the information_schema.PROCESSLIST table or with the mysqladmin processlist command. If you have the PROCESS privilege, you can see all threads. You can see information like the query id, execution time, who runs it, the client host, etc. The information varies slightly depending on the MySQL flavor and distribution (Oracle, MariaDB, Percona).

SHOW PROCESSLIST;
+----+-----------------+-----------+------+---------+------+------------------------+------------------+----------+
| Id | User            | Host      | db   | Command | Time | State                  | Info             | Progress |
+----+-----------------+-----------+------+---------+------+------------------------+------------------+----------+
|  2 | event_scheduler | localhost | NULL | Daemon  | 2693 | Waiting on empty queue | NULL             |    0.000 |
|  4 | root            | localhost | NULL | Query   |    0 | Table lock             | SHOW PROCESSLIST |    0.000 |
+----+-----------------+-----------+------+---------+------+------------------------+------------------+----------+

We can see the offending query right away in the output; in the above example that could be a table lock. But how often do we stare at those processes? This is only useful if you are aware of the long-running transaction. Otherwise, you wouldn't know until something happens, like connections piling up or the server getting slower than usual.

Using MySQL pt-query-digest

If you would like to see more information about a particular workload, use pt-query-digest. pt-query-digest is a Linux tool from Percona to analyze MySQL queries.
It's part of the Percona Toolkit, which you can find here. It supports the most popular 64-bit Linux distributions like Debian, Ubuntu, and Red Hat. To install it, you must configure the Percona repositories and then install the percona-toolkit package.

Install Percona Toolkit using your package manager:

Debian or Ubuntu:
sudo apt-get install percona-toolkit

RHEL or CentOS:
sudo yum install percona-toolkit

pt-query-digest accepts data from the process list, general log, binary log, slow log, or tcpdump. In addition to that, it's possible to poll the MySQL process list at a defined interval; that can be resource-intensive and far from ideal, but it can still be used as an alternative.

The most common source for pt-query-digest is the slow query log. You can control how much data goes there with the parameter log_slow_verbosity, which accepts the following values:

microtime - queries with microsecond precision.
query_plan - information about the query's execution plan.
innodb - InnoDB statistics.
minimal - equivalent to enabling just microtime.
standard - equivalent to enabling microtime,innodb.
full - equivalent to all other values OR'ed together without the profiling and profiling_use_getrusage options.
profiling - enables profiling of all queries in all connections.
profiling_use_getrusage - enables usage of the getrusage function.

(source: Percona documentation)

For completeness, use log_slow_verbosity=full, which is the common case.

Slow Query Log

The slow query log can be used to find queries that take a long time to execute and are therefore candidates for optimization. The slow query log captures slow queries (SQL statements that take more than long_query_time seconds to execute), or queries that do not use indexes for lookups (log_queries_not_using_indexes).
This feature is not enabled by default; to enable it, set the following lines and restart the MySQL server:

[mysqld]
slow_query_log=1
log_queries_not_using_indexes=1
long_query_time=0.1

However, examining a long slow query log can be a time-consuming task. There are tools to parse MySQL slow query log files and summarize their contents, like mysqldumpslow and pt-query-digest.

Performance Schema

Performance Schema is a great tool for monitoring MySQL Server internals and execution details at a lower level. It had a bad reputation in early versions (5.6) because enabling it often caused performance issues; however, recent versions do not harm performance. The following tables in Performance Schema can be used to find slow queries:

events_statements_current
events_statements_history
events_statements_history_long
events_statements_summary_by_digest
events_statements_summary_by_user_by_event_name
events_statements_summary_by_host_by_event_name

MySQL 5.7.7 and higher include the sys schema, a set of objects that helps DBAs and developers interpret data collected by the Performance Schema in a more easily understandable form. Sys schema objects can be used for typical tuning and diagnosis use cases.

Network Tracking

What if we don't have access to the query log or direct application logs? In that case, we could use a combination of tcpdump and pt-query-digest to capture queries:

$ tcpdump -s 65535 -x -nn -q -tttt -i any port 3306 > mysql.tcp.txt

Once the capture process ends, we can proceed with processing the data:

$ pt-query-digest --limit=100% --type tcpdump mysql.tcp.txt > ptqd_tcp.out

ClusterControl Query Monitor

ClusterControl Query Monitor is a module in ClusterControl that provides combined information about database activity.
It can gather information from multiple sources, like the process list or the slow query log, and present it in a pre-aggregated way. SQL monitoring is divided into three sections:

Top Queries presents information about queries that take a significant chunk of resources.
Running Queries is a process list of information combined from all database cluster nodes into one view. You can use it to kill queries that affect your database operations.
Query Outliers presents the list of queries with execution time longer than average.

Conclusion

This is all for part two. This blog is not intended to be an exhaustive guide to how to enhance database performance, but it hopefully gives a clearer picture of what things can become essential and some of the basic parameters that can be configured. Do not hesitate to let us know if we've missed any important ones in the comments below.
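The "stare at SHOW PROCESSLIST" approach above is easy to automate. Here is a minimal Python sketch (not ClusterControl code; the row layout simply mimics SHOW PROCESSLIST columns) of flagging long-running client statements:

```python
# Illustrative sketch: filter processlist-style rows down to client queries
# that have been running at least `threshold_seconds`, skipping daemon
# threads like the event scheduler.

def long_running(processlist, threshold_seconds=10):
    """Return rows for client queries running at least `threshold_seconds`."""
    return [row for row in processlist
            if row["Command"] == "Query" and row["Time"] >= threshold_seconds]

rows = [
    {"Id": 2, "User": "event_scheduler", "Command": "Daemon", "Time": 2693, "Info": None},
    {"Id": 4, "User": "root", "Command": "Query", "Time": 0, "Info": "SHOW PROCESSLIST"},
    {"Id": 7, "User": "app", "Command": "Query", "Time": 42, "Info": "SELECT ..."},
]
for row in long_running(rows):
    print(row["Id"], row["Time"], row["Info"])
```

In practice you would feed this from information_schema.PROCESSLIST on a schedule, which is essentially what monitoring tools do for you.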
Posted over 4 years ago by MySQL Performance Blog
The importance of having periodic backups is a given in database life. There are different flavors: binary ones (Percona XtraBackup), binlog backups, disk snapshots (lvm, ebs, etc.), and the classic ones: logical backups, the ones that you can take with tools like mysqldump, mydumper, or mysqlpump. Each has a specific purpose, MTTR, retention policy, etc.

Another given is the fact that taking backups can be a very slow task as soon as your datadir grows: more data stored, more data to read and back up. But another fact is that not only does data grow; the number of MySQL instances available in your environment (usually) increases as well. So, why not take advantage of more MySQL instances to take logical backups in an attempt to make this operation faster?

Distributed Backups (or Using All the Slaves Available)

The idea is simple: instead of taking the whole backup from a single server, use all the servers available. This proof of concept is focused only on using the replicas in a master/slave(s) topology. One could use the master too, but in this case I've decided to leave it alone to avoid adding the backup overhead there.

Tests!
On a master/3-slaves topology (graph from the Orchestrator GUI), with a small datadir of around 64GB of data (without the index size) and 300 tables (schema "sb"):

+--------------+--------+--------+-----------+----------+-----------+----------+
| TABLE_SCHEMA | ENGINE | TABLES | ROWS      | DATA (M) | INDEX (M) | TOTAL(M) |
+--------------+--------+--------+-----------+----------+-----------+----------+
| meta         | InnoDB |      1 |         0 |     0.01 |      0.00 |     0.01 |
| percona      | InnoDB |      1 |         2 |     0.01 |      0.01 |     0.03 |
| sb           | InnoDB |    300 | 295924962 | 63906.82 |   4654.68 | 68561.51 |
| sys          | InnoDB |      1 |         6 |     0.01 |      0.00 |     0.01 |
+--------------+--------+--------+-----------+----------+-----------+----------+

Using the 3 replicas, the distributed logical backup with mysqldump took 6 minutes, 13 seconds:

[root@mysql1 ~]# ls -lh /data/backups/20200101/
total 56G
-rw-r--r--. 1 root root 19G Jan  1 14:37 mysql2.sql
-rw-r--r--. 1 root root 19G Jan  1 14:37 mysql3.sql
-rw-r--r--. 1 root root 19G Jan  1 14:37 mysql4.sql

[root@mysql1 ~]# stat /data/backups/20200101/mysql2.sql
  File: '/data/backups/20200101/mysql2.sql'
  Size: 19989576285     Blocks: 39042144   IO Block: 4096   regular file
Device: 10300h/66304d   Inode: 54096034    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: unconfined_u:object_r:unlabeled_t:s0
Access: 2020-01-01 14:31:34.948124516 +0000
Modify: 2020-01-01 14:37:41.297640837 +0000
Change: 2020-01-01 14:37:41.297640837 +0000
 Birth: -

The same backup type on a single replica took 11 minutes, 59 seconds:

[root@mysql1 ~]# time mysqldump -hmysql2 --single-transaction --lock-for-backup sb > /data/backup.sql

real    11m58.816s
user    9m48.871s
sys     2m6.492s

[root@mysql1 ~]# ls -lh /data/backup.sql
-rw-r--r--. 1 root root 56G Jan  1 14:52 /data/backup.sql

In other words: the distributed one was 48% faster! And this is a fairly small dataset. Worth the shot. So, how does it work?
Concepts

The logic is simple and can be divided into stages.

Stage 1: Preparation

- Find out how many replicas are available.
- Find out the number of tables in the schema you want to back up.
- Divide the number of tables between all the available replicas. The resulting chunks are the tables each replica will back up.

Stage 2: Guarantee Consistency

- Prevent the Master from executing operations that change the binlog position. Typically this is done with FLUSH TABLES WITH READ LOCK, but this PoC uses LOCK BINLOG FOR BACKUP, a feature available in Percona Server for MySQL that is far less disruptive.
- Find the most up-to-date replica.
- Make all the other replicas match the most up-to-date one with START SLAVE UNTIL.
- Fire up one mysqldump per replica with the corresponding chunk of tables, using --lock-for-backup (another Percona Server feature).

The full script can be found here: https://github.com/nethalo/parallel-mysql-backup/blob/master/dist_backup.sh

Worth noting: the script has its own log that describes every step. It looks like this:

[200101-16:01:19] [OK] Found 'mysql' bin
[200101-16:01:19] [Info] SHOW SLAVE HOSTS executed
[200101-16:01:19] [Info] Count tables OK
[200101-16:01:19] [Info] table list gathered
[200101-16:01:19] [Info] CREATE DATABASE IF NOT EXISTS percona
[200101-16:01:19] [Info] CREATE TABLE IF NOT EXISTS percona.metabackups
[200101-16:01:19] [Info] TRUNCATE TABLE percona.metabackups
[200101-16:01:19] [Info] Executed INSERT INTO percona.metabackups (host,chunkstart) VALUES('mysql3',0)
[200101-16:01:19] [Info] Executed INSERT INTO percona.metabackups (host,chunkstart) VALUES('mysql4',100)
[200101-16:01:19] [Info] Executed INSERT INTO percona.metabackups (host,chunkstart) VALUES('mysql2',200)
[200101-16:01:19] [Info] lock binlog for backup set
[200101-16:01:19] [Info] slave status position on mysql3
[200101-16:01:19] [Info] slave status file on mysql3
[200101-16:01:19] [Info] slave status position on mysql4
[200101-16:01:19] [Info] slave status file on mysql4
[200101-16:01:19] [Info] slave status position on mysql2
[200101-16:01:19] [Info] slave status file on mysql2
[200101-16:01:19] [Info] set STOP SLAVE; START SLAVE UNTIL MASTER_LOG_FILE = 'mysql-bin.000358', MASTER_LOG_POS = 895419795 on mysql3
[200101-16:01:20] [Info] set STOP SLAVE; START SLAVE UNTIL MASTER_LOG_FILE = 'mysql-bin.000358', MASTER_LOG_POS = 895419795 on mysql4
[200101-16:01:20] [Info] set STOP SLAVE; START SLAVE UNTIL MASTER_LOG_FILE = 'mysql-bin.000358', MASTER_LOG_POS = 895419795 on mysql2
[200101-16:01:20] [Info] Created /data/backups/20200101/ directory
[200101-16:01:20] [Info] Limit chunk OK
[200101-16:01:20] [Info] Tables list for mysql3 OK
[200101-16:01:20] [OK] Dumping mysql3
[200101-16:01:20] [Info] Limit chunk OK
[200101-16:01:20] [Info] Tables list for mysql4 OK
[200101-16:01:20] [OK] Dumping mysql4
[200101-16:01:20] [Info] Limit chunk OK
[200101-16:01:20] [Info] Tables list for mysql2 OK
[200101-16:01:20] [OK] Dumping mysql2
[200101-16:01:20] [Info] UNLOCK BINLOG executed
[200101-16:01:20] [Info] set start slave on mysql3
[200101-16:01:20] [Info] set start slave on mysql4
[200101-16:01:20] [Info] set start slave on mysql2

Requirements

Some basic requirements:

- Since the tool uses the command SHOW SLAVE HOSTS, it is mandatory to set the report_host variable; if you are using Orchestrator, you most likely have it set already. The host set in the report_host variable should be one that is accessible, i.e. an IP or a hostname that can actually be resolved (via DNS, or by editing the /etc/hosts file).
- No replication filters on any of the replicas involved; this is to guarantee data consistency.
- The script currently should be run locally on the Master server.
- It only works on Percona Server, due to the usage of Backup Locks.
- MySQL user credentials are expected to be available in the .my.cnf file in the home directory.

We Would Like Your Feedback!

Interesting or not?
Is this something that would come in handy for your backup operations? Is there something else you would like to see from the script? Is there something missing?

With this being a Proof of Concept, it lacks features that will eventually arrive (if this becomes a more mature tool), such as:

- Adding weights to the slaves so the distribution can be adjusted
- An option to use the Master as one of the backup servers, if desired
- Using FTWRL when the server is not Percona Server
- Using MyDumper/mysqlpump with multiple threads instead of mysqldump
- Etc.

Let us know in the comments section!
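As a footnote to the Concepts section above, the Stage 1 table-splitting is easy to reason about in a few lines. The sketch below is my own illustration, not code from the actual script; the `split_tables` helper and the replica names are hypothetical, chosen to mirror the chunkstart offsets (0, 100, 200) seen in the log:

```python
# Split n_tables into contiguous chunks, one chunk per available replica,
# as described in Stage 1 of the PoC.
def split_tables(n_tables, replicas):
    chunk = -(-n_tables // len(replicas))  # ceiling division
    return {
        replica: (i * chunk, min((i + 1) * chunk, n_tables))
        for i, replica in enumerate(replicas)
    }

# 300 tables over 3 replicas: each replica dumps 100 tables,
# starting at offsets 0, 100, and 200 respectively.
print(split_tables(300, ["mysql3", "mysql4", "mysql2"]))
```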
Posted over 4 years ago by Scott Noyes
Given: a table with JSON arrays.

CREATE TABLE t (id int auto_increment primary key, d json);
INSERT INTO t VALUES (1, '["apple", "apple", "orange"]');
INSERT INTO t VALUES (2, '["apple", "banana", "orange", "orange", "orange"]');

The desired output is each row with a count of the unique objects:

+------+----------------------------------------+
| id   | fruitCount                             |
+------+----------------------------------------+
|    1 | {"apple": 2, "orange": 1}              |
|    2 | {"apple": 1, "banana": 1, "orange": 3} |
+------+----------------------------------------+

JSON_TABLE() can transform the array into rows:

SELECT id, fruit
FROM t,
     JSON_TABLE(d, "$[*]" COLUMNS (fruit VARCHAR(100) PATH "$")) AS dt;

+----+--------+
| id | fruit  |
+----+--------+
|  1 | apple  |
|  1 | apple  |
|  1 | orange |
|  2 | apple  |
|  2 | banana |
|  2 | orange |
|  2 | orange |
|  2 | orange |
+----+--------+

(If you get an error there about either permissions or an unknown table/column in the JSON_TABLE, upgrade to at least 8.0.14 to get past some bugs.)

We can then use the ordinary aggregation functions:

SELECT id, fruit, COUNT(*)
FROM t,
     JSON_TABLE(d, "$[*]" COLUMNS (fruit VARCHAR(100) PATH "$")) AS dt
GROUP BY id, fruit;

+----+--------+----------+
| id | fruit  | COUNT(*) |
+----+--------+----------+
|  1 | apple  |        2 |
|  1 | orange |        1 |
|  2 | apple  |        1 |
|  2 | banana |        1 |
|  2 | orange |        3 |
+----+--------+----------+

With a subquery or a common table expression, we can aggregate one row of t at a time (meaning you can also select other columns from t). JSON_OBJECTAGG then combines those results into a single object.
SELECT id,
       (WITH cte AS (
            SELECT fruit, COUNT(*) AS c
            FROM JSON_TABLE(d, "$[*]" COLUMNS (fruit VARCHAR(100) PATH "$")) AS dt
            GROUP BY fruit
        )
        SELECT JSON_OBJECTAGG(fruit, c) FROM cte
       ) AS fruitCount
FROM t;

+------+----------------------------------------+
| id   | fruitCount                             |
+------+----------------------------------------+
|    1 | {"apple": 2, "orange": 1}              |
|    2 | {"apple": 1, "banana": 1, "orange": 3} |
+------+----------------------------------------+

Et voilà!
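For readers who want to double-check the expected result outside MySQL, here is a small Python analogue of the same counting logic, built on the sample data from the post (this snippet is illustrative only and not part of the original article):

```python
# Mirror the JSON_TABLE + GROUP BY + JSON_OBJECTAGG pipeline in plain Python:
# parse each JSON array, count occurrences, and build a {fruit: count} object.
import json
from collections import Counter

rows = {
    1: '["apple", "apple", "orange"]',
    2: '["apple", "banana", "orange", "orange", "orange"]',
}

fruit_counts = {row_id: dict(Counter(json.loads(arr))) for row_id, arr in rows.items()}
print(fruit_counts[1])  # → {'apple': 2, 'orange': 1}
```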
Posted over 4 years ago by Oracle MySQL Group
We are happy to announce that on Monday, Jan 13, 2020, there will be a meetup in Dubai on the topic "MySQL 8 - State of the Dolphin". Please find the details below:

Name: MySQL User Group U.A.E meetup
Topic: MySQL 8 - State of the Dolphin
Date: Monday, January 13, 2020
Time: 7pm - 10pm
Place: Dubai Internet City

Agenda:
- Kenny Gryp from the MySQL Product Management Group will talk about MySQL Database Architectures (MySQL InnoDB Cluster)
- Frederic (LeFred) Descamps from the MySQL Community team will talk about MySQL Shell (including NoSQL)
- Mario Beck, Manager of the MySQL Enterprise PreSales team, and Chetan Rao from the MySQL Enterprise Middle East & Africa team will talk about gossip around MySQLaaS and MySQL 8.0.19

Please check the URL below for further changes and updates: More details and registration