
News

Posted 2 days ago by Sheeri K. Cabral
We use Percona's audit log plugin to keep a record of all our logins. Recently we did one of those tasks that everyone knows they should do, but few ever do: change the application user's password. When we change the application password, we add a new application user with the proper permissions and a new password, update the information in our repository and wait a while. Using this procedure, any failure means the old user keeps being used, and more importantly, failure does not impact the end user. We check the audit logs to see whether the old user is still being used; once it is no longer in use, we can drop it.

For reference, here are our settings:

[mysqlaudit]# grep audit /etc/my.cnf
# Percona audit plugin options
audit_log_format = JSON
audit_log_rotate_on_size = 1073741824   # 1G
audit_log_rotations = 10
audit_log_file = /var/log/mysqlaudit/audit.log
audit_log_buffer_size = 4194304
audit_log_strategy = ASYNCHRONOUS
audit_log_policy = LOGINS

This means that we automatically rotate the log when it grows past 1G, keeping 10 audit logs. The other option is manual flushing, but we do not want our log files to get very large, and we don't need to keep audit logs for a very long time. The ideal behavior: we update our app to the new user, and then flush the logs. That way we could start a new audit log and only have to search the current audit log for the old user.

I'm sure people are thinking, "well, it's set to rotate by size, not manually, so you just cannot do it." However, binary logs are set the same way, and FLUSH LOGS does indeed rotate them manually, even when automatic rotation by size is set.

The tl;dr is that there is currently no way to do this without restarting MySQL. The audit_log_rotate_on_size variable is not dynamic, so we could not switch it to manual rotation without restarting MySQL. Here are some other tests we did to see if we could force an audit log flush while using automatic rotation.

We tried moving the audit logs and flushing:

[ ~]# cd /var/log/mysqlaudit/
[mysqlaudit]# ls -l
total 1459572
-rw-rw---- 1 mysql mysql  420839439 Mar 31 11:04 audit.log
-rw-rw---- 1 mysql mysql 1073749720 Mar 25 08:50 audit.log.01
[mysqlaudit]# mv audit.log.01 audit.log.02
[mysqlaudit]# mv audit.log audit.log.01
[mysqlaudit]# ls
audit.log.01  audit.log.02

Don't worry, this doesn't affect writing the file. MySQL still holds the inode, so it keeps writing to the file, now called audit.log.01:

[mysqlaudit]# ls -l
total 1459652
-rw-rw---- 1 mysql mysql  420925253 Mar 31 11:07 audit.log.01
-rw-rw---- 1 mysql mysql 1073749720 Mar 25 08:50 audit.log.02
[mysqlaudit]# mysql -u sheeri.cabral -p -e "FLUSH LOGS;"
Enter password:
[mysqlaudit]# ls -l
total 1459688
-rw-rw---- 1 mysql mysql  420958983 Mar 31 11:07 audit.log.01
-rw-rw---- 1 mysql mysql 1073749720 Mar 25 08:50 audit.log.02

Note that this also proves that FLUSH LOGS does not close and reopen the audit log. Can we force it?
Let's try setting audit_log_policy to NONE and then back to LOGINS (what we have it as by default):

[mysqlaudit]# mysql -u sheeri.cabral -p -e "set global audit_log_policy=NONE; set global audit_log_policy=LOGINS"
Enter password:
[mysqlaudit]# ls -l
total 1459768
-rw-rw---- 1 mysql mysql  421043317 Mar 31 11:10 audit.log.01
-rw-rw---- 1 mysql mysql 1073749720 Mar 25 08:50 audit.log.02

Here's another failed test: let's see if we can disable and then re-enable the plugin:

[mysqlaudit]# mysql -u sheeri.cabral -p -e "UNINSTALL PLUGIN audit_log"
Enter password:
[mysqlaudit]# ls -rlth
total 1.4G
-rw-rw---- 1 mysql mysql 1.1G Mar 25 08:50 audit.log.01
-rw-rw---- 1 mysql mysql 403M Mar 31 11:44 audit.log
[mysqlaudit]# mysql -u sheeri.cabral -p -e "INSTALL PLUGIN audit_log SONAME 'audit_log.so';"
Enter password:
ERROR 1125 (HY000) at line 1: Function 'audit_log' already exists
[mysqlaudit]# mysql -u sheeri.cabral -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1375345
Server version: 5.5.51-38.1-log Percona Server (GPL), Release 38.1, Revision b4a63b4

Copyright (c) 2009-2016 Percona LLC and/or its affiliates
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

[(none)]> \P grep -i audit
PAGER set to 'grep -i audit'
[(none)]> SHOW PLUGINS;
| audit_log | DELETED | AUDIT | audit_log.so | GPL |
41 rows in set (0.00 sec)

[(none)]> INSTALL PLUGIN audit_log SONAME 'audit_log.so';
ERROR 1125 (HY000): Function 'audit_log' already exists

So, I ended up needing to restart MySQL to re-enable the plugin. Sometimes the failures are the most illuminating! [I realize that the plugin probably could not hack FLUSH LOGS, but it would be great to get FLUSH AUDIT or something similar...]
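Since audit_log_policy is LOGINS and audit_log_format is JSON, checking whether the old application user is still connecting is just a text search over the log files configured above. A minimal sketch; the user name app_legacy is a placeholder for whatever your old application user is called:

# Count audit records that mention the old application user across the
# current and rotated logs (app_legacy is a placeholder user name).
grep -c "app_legacy" /var/log/mysqlaudit/audit.log*

Once the count stays at zero for all files covering the switchover window, the old user can be dropped.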
Posted 2 days ago by Øystein Grøvlen
Percona Live is next week, and on Monday morning I will give a tutorial on "How to Analyze and Tune MySQL Queries for Better Performance". I hope to see some of you there! For those of you who are not able to come, I recommend my on-demand webinar on the same topic. I will also give talks about query tuning at the upcoming Oracle MySQL Innovation Days, to be held in the Bay Area (April 28) and in the Boston area (May 2).

I would also like to recommend the following MySQL optimizer related talks at Percona Live:

Monday 1:30pm: MySQL 8.0 Optimizer Guide (Morgan Tocker)
Tuesday 1:20pm: MySQL 8.0: What is New in the Optimizer? (Manyi Lu)
Wednesday 11:10pm: Cookbook for Creating INDEXes - All about Indexing (Rick James)
Thursday 12:50pm: Recursive Query Throwdown in MySQL 8 (Bill Karwin)
Posted 3 days ago by Jean-François Gagné
Yes, another post about my talks at Percona Live Santa Clara: I obviously still have things to share. This time, I will focus on my parallel replication talks by giving a short preview. I have two parallel replication talks at Percona Live:

MySQL/MariaDB Parallel Replication: inventory, use cases and limitations (Wednesday talk)
MySQL Parallel Replication: all the 5.7 (and some of the 8.0)
Posted 3 days ago by MySQL Performance Blog
In this blog post, we'll look at how Percona XtraDB Cluster uses IST.

Introduction

Percona XtraDB Cluster uses the concept of an Incremental State Transfer (IST). When a node of the cluster leaves the cluster for a short period of time, it can rejoin the cluster by getting the delta set of missing changes from any active node in the cluster. This process of getting the delta set of changes is called IST in Percona XtraDB Cluster.

Tracking IST Progress

The number of write-sets/changes that the joining node needs to catch up on when rejoining the cluster is dictated by:

The duration the node was not present in the cluster
The workload of the cluster during that time frame

This catch-up process can be time-consuming. Until it is complete, the rejoining node is not ready to process any active workloads. We believe that any process that is time-consuming should have a progress monitor attached to it, and this is exactly what we have done. In the latest release of Percona XtraDB Cluster 5.7.17-29.20, we added an IST progress monitor that is exposed through SHOW STATUS. This helps you monitor the percentage of write-sets that have been applied by the rejoining node.

Let's see this in a working example:

Start a two-node cluster
Process some basic workloads, allow cluster replication
Shut down node-2
Node-1 then continues to process more workloads (the workload fits the allocated gcache)
Restart node-2, causing it to trigger an IST

mysql> show status like 'wsrep_ist_receive_status';
+--------------------------+--------------------------------------------------------+
| Variable_name            | Value                                                  |
+--------------------------+--------------------------------------------------------+
| wsrep_ist_receive_status | 3% complete, received seqno 1421771 of 1415410-1589676 |
+--------------------------+--------------------------------------------------------+
1 row in set (0.00 sec)
....
mysql> show status like 'wsrep_ist_receive_status';
+--------------------------+---------------------------------------------------------+
| Variable_name            | Value                                                   |
+--------------------------+---------------------------------------------------------+
| wsrep_ist_receive_status | 52% complete, received seqno 1506799 of 1415410-1589676 |
+--------------------------+---------------------------------------------------------+
1 row in set (0.00 sec)
....
mysql> show status like 'wsrep_ist_receive_status';
+--------------------------+---------------------------------------------------------+
| Variable_name            | Value                                                   |
+--------------------------+---------------------------------------------------------+
| wsrep_ist_receive_status | 97% complete, received seqno 1585923 of 1415410-1589676 |
+--------------------------+---------------------------------------------------------+
1 row in set (0.00 sec)

mysql> show status like 'wsrep_ist_receive_status';
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| wsrep_ist_receive_status |       |
+--------------------------+-------+
1 row in set (0.00 sec)

As you can see, the wsrep_ist_receive_status string indicates the percentage completed, the most recently received write-set, and the range of write-sets applicable to the IST. Once the IST activity is complete, the variable shows an empty string.

Closing Comments

I hope you enjoy this newly added feature. Percona Engineering would be happy to hear from you about more such features that can help you make effective use of Percona XtraDB Cluster.
We will try our best to include them in our future plans (based on feasibility).

Note: Special thanks to Kenn Takara and Roel Van de Paar for helping me edit this post.
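If you would rather watch an IST from the shell than from an interactive client, a simple polling loop over the same status variable works. A minimal sketch, assuming the mysql client on the joining node can log in non-interactively (for example via a login path) and that the IST has already started:

# Poll IST progress every 10 seconds; an empty value means the IST
# has finished (or has not started yet).
while true; do
  progress=$(mysql -N -B -e "SHOW STATUS LIKE 'wsrep_ist_receive_status'" | cut -f2)
  [ -z "$progress" ] && break
  echo "$(date '+%H:%M:%S')  $progress"
  sleep 10
done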
Posted 3 days ago by MySQL Performance Blog
In this blog post, we'll discuss trackable flow control in Percona XtraDB Cluster.

Introduction

Percona XtraDB Cluster has a self-regulating mechanism called Flow Control. This mechanism helps to avoid a situation wherein the weakest/slowest member of the cluster falls significantly behind the other members. When a member of the cluster is slow at applying write-sets (while simultaneously continuing to receive write-sets from the cluster group channel), the incoming/receive queue grows in size. If this queue crosses a set threshold (gcs.fc_limit), the node emits a FLOW_CONTROL message asking the other members to slow down or halt processing. While this happens transparently for the end user, it's important for the cluster administrator to know whether a node is using flow control, because it can affect overall cluster productivity.

Finding out if a node is in flow control

FLOW_CONTROL is not a persistent state. A node enters FLOW_CONTROL once the queue size exceeds the set threshold, and is released again once the queue size falls back below the low-end watermark. So how can one view the higher and lower watermarks? Up until now, this was exposed only in the log files. Starting with Percona XtraDB Cluster 5.7.17-29.20, it is exposed and available through SHOW STATUS:

mysql> show status like 'wsrep_flow_control_interval';
+-----------------------------+------------+
| Variable_name               | Value      |
+-----------------------------+------------+
| wsrep_flow_control_interval | [ 12, 23 ] |
+-----------------------------+------------+
1 row in set (0.00 sec)

mysql> set global wsrep_provider_options="gcs.fc_limit=100";
Query OK, 0 rows affected (0.00 sec)

mysql> show status like 'wsrep_flow_control_interval';
+-----------------------------+-------------+
| Variable_name               | Value       |
+-----------------------------+-------------+
| wsrep_flow_control_interval | [ 71, 141 ] |
+-----------------------------+-------------+
1 row in set (0.00 sec)

mysql> set global wsrep_provider_options="gcs.fc_factor=0.75";
Query OK, 0 rows affected (0.00 sec)

mysql> show status like 'wsrep_flow_control_interval';
+-----------------------------+--------------+
| Variable_name               | Value        |
+-----------------------------+--------------+
| wsrep_flow_control_interval | [ 106, 141 ] |
+-----------------------------+--------------+
1 row in set (0.01 sec)

As you can see, the wsrep_flow_control_interval status variable emits a range representing the lower and higher watermarks. A value of [ 12, 23 ] means that if the incoming queue size grows beyond 23, FLOW_CONTROL is enabled; once the size falls below 12, FLOW_CONTROL is released.

Still, this does not show us whether the node is using FLOW_CONTROL at any given moment. To address this, we introduced a wsrep_flow_control_status status variable in the same release. This boolean status variable tells the user whether the node is in FLOW_CONTROL or not: it switches to ON when the node is employing flow control and back to OFF once the node is out of flow control:

show status like '%flow%';
.....
| wsrep_flow_control_sent     | 28156        |
| wsrep_flow_control_recv     | 28156        |
| wsrep_flow_control_interval | [ 106, 141 ] |
| wsrep_flow_control_status   | ON           |

Finally, the wsrep_flow_control_sent/recv counters can be used to track FLOW_CONTROL activity. They show an aggregated count of how many times flow control kicked in. You can clear them using FLUSH STATUS.

mysql> show status like '%flow%';
.....
| wsrep_flow_control_sent     | 31491        |
| wsrep_flow_control_recv     | 31491        |
| wsrep_flow_control_interval | [ 106, 141 ] |
| wsrep_flow_control_status   | OFF          |

Takeaway thoughts

We hope you enjoy the new features added to Percona XtraDB Cluster, including flow control tracking! We like feedback from customers (via the usual Customer Support website) and, where possible, from the community (via Percona JIRA and Launchpad) on which features they most want to see. We recently re-engineered Percona XtraDB Cluster for much-increased performance (a request originating from the community), and the new IST Progress Meter and Flow Control Monitoring were introduced to aid in solving customer-facing challenges. If you have a feature you would like to see, let us know!

Note: Special thanks to Kenn Takara and Roel Van de Paar for helping me edit this post.
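As a side note on where those watermark numbers come from: they appear to follow from gcs.fc_limit, gcs.fc_factor and the cluster size. A rough sketch, assuming the upper limit scales with the square root of the number of nodes and a two-node cluster (as the example output suggests); treat the exact scaling rule as an assumption to verify against the Galera documentation:

# Assumed relationship (verify against Galera docs):
#   upper ~= gcs.fc_limit * sqrt(number_of_nodes)
#   lower ~= upper * gcs.fc_factor
# With gcs.fc_limit=100, 2 nodes and gcs.fc_factor=0.75:
awk 'BEGIN { upper = 100 * sqrt(2); lower = upper * 0.75;
             printf "[ %d, %d ]\n", lower, upper }'

This reproduces the [ 106, 141 ] interval shown above.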
Posted 3 days ago by MySQL Performance Blog
Welcome to another post in the series of Percona Live featured session blogs! In these blogs, we'll highlight some of the session speakers that will be at this year's Percona Live conference. We'll also discuss how these sessions can help you improve your database environment. Make sure to read to the end to get a special Percona Live 2017 registration bonus!

In this Percona Live featured session, we'll meet Casper Kejlberg-Rasmussen, Software Developer at Uber. His session is Placing Databases @ Uber. Uber has many thousands of MySQL databases running inside of Docker containers on thousands of hosts. When deciding exactly which host a database should run on, it is important that you avoid hosts running databases of the same cluster as the one you are placing, and that you avoid placing all databases of a cluster on the same rack or in the same data center.

I had a chance to talk to Casper about Uber and database placement:

Percona: How did you get into database technology? What do you love about it?

Casper: When I took my Ph.D., my thesis area was about dynamic data structures. During my bachelor, master and Ph.D., I took all the algorithms and data structure classes I could. So it was natural for me to also work with databases in my professional career. Databases are a prime example of a very useful dynamic data structure.

Percona: Your talk is called Placing Databases @ Uber. What do you mean by placing databases, and why is it important?

Casper: At Uber, the storage team manages all of Uber's storage offerings. Our main database technology is an in-house NoSQL database called Schemaless. Schemaless builds on top of MySQL (and we specifically use our own fork of Percona's MySQL variant, found here in GitHub). We have many thousands of databases that run inside of Docker containers. Whenever we need to create a new Schemaless instance for our internal customers, or we need to add capacity to an existing Schemaless instance, we need to place new Docker containers with Percona Server for MySQL running inside. For our Schemaless instances to be reliable, durable and highly available, we need to place databases in at least two different data centers. We want to avoid placing two databases of the same instance on the same rack or the same host. So consideration needs to be taken when deciding where to place the databases.

Percona: What are some of the conditions that affect where you place databases?

Casper: When we place a database, we have to take into account the labels on the hosts we consider. These labels can be which data center or rack the host is part of, or which Clusto pools the host belongs to (Clusto is an internal hardware management tool we have); this can mean a testing, staging or production host, etc. A host also has "relations." A relation is also a label, but instead of stating facts about the host, a relation states what other databases are running on the host. An example of a relation label is schemaless.instance.mezzanine, which indicates that the host is running a Schemaless database from the Mezzanine instance. Another example is schemaless.cluster.percona-cluster-mezzanine-us1-db01, which indicates that the database is a Schemaless database belonging to the cluster percona-cluster-mezzanine-us1-db01.

Percona: What do you want attendees to take away from your session? Why should they attend?
Casper: I want the attendees to remember that there are three steps when placing a database or any container:

Filter out any host that fails the hard requirements (like not having enough resources) or fails the label or relation requirements (like having databases of the same instance as the one we want to place)
Rank the hosts to select the best one, which can mean picking the host with the most free space left or with a low number of other databases on it
As time passes and databases consume more resources, relocate databases to other hosts where it makes more sense to place them

People should attend my session to see how to get good database placements in a simple way.

Percona: What are you most looking forward to at Percona Live 2017?

Casper: I look forward to hearing about other NoSQL solutions, and hearing about different storage engines for MySQL systems. And of course meeting other exciting people from the database community!

Register for Percona Live Data Performance Conference 2017, and see Casper present Placing Databases @ Uber. Use the code FeaturedTalk and receive $100 off the current registration price!

Percona Live Data Performance Conference 2017 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community, as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world. The Percona Live Data Performance Conference will be April 24-27, 2017 at the Hyatt Regency Santa Clara and the Santa Clara Convention Center.
Posted 3 days ago by Federico Razzoli
Most InnoDB primary keys are built on integer columns with the AUTO_INCREMENT option (which is a very good practice for reasons that are outside of the purpose of this article). But we have to monitor that we are not going to run out of AUTO_INCREMENT values. If this happens, we will get errors like this:

ERROR 167 (22003): Out of range value for column 'a' at row 1

Obviously, when creating tables, we should use a type that is sufficiently big, and make it UNSIGNED to avoid wasting half of its space. But there are also some details about AUTO_INCREMENT that we should remember.

First, the value to monitor is not MAX(id): it is the AUTO_INCREMENT column in information_schema.TABLES. This column shows the next autoincr value.

A used value is lost. It doesn't matter if you delete a row: its autoincr value will not be reused. This is why we should monitor the next values from information_schema: it is possible that we have used up all autoincr values even though MAX(id) is much lower.

Values are lost even for transactions that fail or are rolled back. This is the reason why the next AUTO_INCREMENT value is generally higher, sometimes much higher, than MAX(id). This can happen especially if we have periodic bulk inserts that insert many rows in one transaction. If such an operation fails, it wastes autoincr values. If it fails often, we could run out of values.

Easy trick

An easy trick could be to use a command like this:

ALTER TABLE table_name AUTO_INCREMENT = 1000;

But this only works if all values in the table are lower than the specified AUTO_INCREMENT. This can be useful if a bulk insert failed and no new rows were added afterwards, but typically it only postpones the problem a bit.

Or again, we could change all ids to fill the holes, and then set AUTO_INCREMENT. This is possible if, for example, the server is not used at night or during the weekend. But in other cases, it is not a viable solution. Modifying a primary key is not a cheap operation for InnoDB, and most likely we will have to do this with many rows. Also, we would need to lock the table to prevent new rows (with high values) from being inserted in the process.

So, what can we do? If we have a write-heavy workload, lowering the autoincr values is not trivial, because new rows are being inserted all the time. And changing the id of many rows can be a long operation; if we do it in one transaction to make things faster, it's locking. But we can do something similar to what pt-online-schema-change does to alter a table without locks. Let's see the procedure step by step.

For our example, we will assume the following trivial table:

CREATE TABLE employee (
  id INT UNSIGNED AUTO_INCREMENT,
  name VARCHAR(100),
  PRIMARY KEY (id)
);

1) First, let's create a table with the same structure. The ids in the new table will start from 1. And let's add a column that is a reference to the original table's id.

CREATE TABLE tmp_employee LIKE employee;
ALTER TABLE tmp_employee ADD COLUMN orig_id INT UNSIGNED NULL;
ALTER TABLE tmp_employee ADD UNIQUE INDEX idx_orig_id (orig_id);

2) Now, we need to add a trigger to the original table, to add the new rows to the new table:

DELIMITER ||
CREATE TRIGGER employee_ai
AFTER INSERT
ON employee
FOR EACH ROW
BEGIN
  INSERT INTO tmp_employee (orig_id, name) VALUES (NEW.id, NEW.name);
END ||
DELIMITER ;

Note that the original table's id and the new table's orig_id columns are kept in sync. This is just an example. In the real world, I would also create triggers for DELETE and UPDATE.
These triggers need a reference to the original id in the new table; that's why orig_id exists and is indexed. What if the table already has triggers? Fortunately, MySQL 5.7 and MariaDB 10.2 support multiple triggers per timing/event. With older versions, just modify the existing triggers and add the queries you need.

3) Copy rows from the original table to the new table. If the new table already has some rows (added by our INSERT trigger), we won't try to copy them.

INSERT INTO tmp_employee (orig_id, name)
  SELECT id, name
    FROM employee
    WHERE (SELECT COUNT(*) FROM tmp_employee) = 0
       OR id < (SELECT MIN(orig_id) FROM tmp_employee);

We are still populating the orig_id column for the DELETE and UPDATE triggers. We didn't mention the new table's primary key, so the values are automatically generated.

4) Until now, I assume we didn't cause any damage. But the next step is more dangerous. Before proceeding, verify that we didn't do anything wrong and that our transactions didn't fail.

5) If everything's ok, let's switch the tables:

RENAME TABLE employee TO old_employee, tmp_employee TO employee;

This operation is atomic, so no query is expected to fail.

6) Drop the old table and the orig_id column. Enjoy!
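To put the monitoring advice from the beginning of the post into practice, the AUTO_INCREMENT counters can be checked periodically from information_schema. A minimal sketch, run from the shell; the 90% threshold and the comparison against the INT UNSIGNED maximum (4294967295) are illustrative assumptions, so adjust them to the column types you actually use:

# Report tables whose next AUTO_INCREMENT value is past 90% of the
# INT UNSIGNED range (adjust the limit for other column types).
mysql -N -B -e "
  SELECT table_schema, table_name, auto_increment
  FROM information_schema.TABLES
  WHERE auto_increment > 4294967295 * 0.9
  ORDER BY auto_increment DESC;"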
Posted 3 days ago by Aurimas Mikalauskas
This is just a note to myself. I don't do this often enough to remember the command, but whenever I'm searching for this, it takes half a minute to find it in the MySQL manual, so hopefully this gets indexed better (in my memory as well as in Google). Here's the simple command to create a login path:

mysql_config_editor set --login-path=mysql1 --host=localhost \
  --port=3306 --socket=/path/to/socket --user=root --password

Obviously you can remove just about anything and only leave the essentials. Once that's done, accessing different MySQL instances is as simple as mysql --login-path=mysql1, which is especially useful if you're accessing different servers from one machine, or if you're running several MySQL instances on the same machine. More information on login paths here.

The post How to create mysql login-path appeared first on Speedemy.
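For completeness, the same tool can also list and delete stored login paths. These are standard mysql_config_editor subcommands, though it is worth double-checking against your version's --help output; stored passwords are printed obfuscated:

# List all stored login paths (passwords are masked)
mysql_config_editor print --all

# Remove a login path you no longer need
mysql_config_editor remove --login-path=mysql1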
Posted 3 days ago by Severalnines
Our journey in adopting MySQL and MariaDB in containerized environments continues, with ClusterControl coming into the picture to facilitate deployment and management. We already have our ClusterControl image hosted on Docker Hub, where it can deploy different replication/cluster topologies on multiple containers. With the introduction of Docker Swarm, a native orchestration tool embedded inside Docker Engine, scaling and provisioning containers has become much easier. It also has high availability covered by running services on multiple Docker hosts.

In this blog post, we'll be experimenting with automatic provisioning of Galera Cluster on Docker Swarm with ClusterControl. ClusterControl usually deploys database clusters on bare metal, virtual machines and cloud instances. ClusterControl relies on SSH (through libssh) as its core communication module to connect to the managed hosts, so no agents are required. The same rule can be applied to containers, and that's what we are going to show in this blog post.

ClusterControl as Docker Swarm Service

We have built a Docker image with extended logic to handle deployment in container environments in a semi-automatic way. The image is now available on Docker Hub and the code is hosted in our Github repository. Please note that only this image is capable of deploying on containers; the logic is not available in the standard ClusterControl installation packages. The extended logic is inside deploy-container.sh, a script that monitors a custom table inside the CMON database called "cmon.containers". Newly created database containers report and register themselves in this table, and the script looks for new entries and performs the necessary actions using the ClusterControl CLI. The deployment is automatic, and you can monitor the progress directly from the ClusterControl UI or using the "docker logs" command.

Before we go further, take note of some prerequisites for running ClusterControl and Galera Cluster on Docker Swarm:

Docker Engine version 1.12 and later.
Docker Swarm Mode is initialized.
ClusterControl must be connected to the same overlay network as the database containers.

To run ClusterControl as a service using "docker stack", the following definition should be enough:

clustercontrol:
  deploy:
    replicas: 1
  image: severalnines/clustercontrol
  ports:
    - 5000:80
  networks:
    - galera_cc

Or, you can use the "docker service" command as per below:

$ docker service create --name cc_clustercontrol -p 5000:80 --replicas 1 severalnines/clustercontrol

Or, you can combine the ClusterControl service together with the database container service and form a "stack" in a compose file, as shown in the next section.

Base Containers as Docker Swarm Service

The base container's image, called "centos-ssh", is based on the CentOS 6 image. It comes with a couple of basic packages like the SSH server and clients, curl and the mysql client. The entrypoint script downloads ClusterControl's public key for passwordless SSH during startup. It also registers itself in ClusterControl's CMON database for automatic deployment. Running this container requires a couple of environment variables to be set:

CC_HOST - Mandatory. By default it will try to connect to the "cc_clustercontrol" service name. Otherwise, define its value as an IP address, hostname or service name. This container will download the SSH public key from the ClusterControl node automatically for passwordless SSH.
CLUSTER_TYPE - Mandatory. Defaults to "galera".
CLUSTER_NAME - Mandatory. This name distinguishes the cluster from others from the ClusterControl perspective. No spaces allowed, and it must be unique.
VENDOR - Default is "percona". Other supported values are "mariadb" and "codership".
DB_ROOT_PASSWORD - Mandatory. The database root password for the database server. In this case, it should be the MySQL root password.
PROVIDER_VERSION - Default is 5.6. The database version by the chosen vendor.
INITIAL_CLUSTER_SIZE - Default is 3. This indicates how ClusterControl should treat newly registered containers, whether they are for new deployments or for scaling out. For example, if the value is 3, ClusterControl will wait for 3 containers to be running and registered in the CMON database before starting the cluster deployment job. Otherwise, it waits 30 seconds for the next cycle and retries. The subsequent containers (4th, 5th and Nth) will fall under the "Add Node" job instead.

To run the container, simply use the following stack definition in a compose file:

galera:
  deploy:
    replicas: 3
  image: severalnines/centos-ssh
  ports:
    - 3306:3306
  environment:
    CLUSTER_TYPE: "galera"
    CLUSTER_NAME: "PXC_Docker"
    INITIAL_CLUSTER_SIZE: 3
    DB_ROOT_PASSWORD: "mypassword123"
  networks:
    - galera_cc

By combining them both (ClusterControl and database base containers), we can deploy them as a single stack as per below:

version: '3'
services:
  galera:
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure
        delay: 10s
    image: severalnines/centos-ssh
    ports:
      - 3306:3306
    environment:
      CLUSTER_TYPE: "galera"
      CLUSTER_NAME: "Galera_Docker"
      INITIAL_CLUSTER_SIZE: 3
      DB_ROOT_PASSWORD: "mypassword123"
    networks:
      - galera_cc
  clustercontrol:
    deploy:
      replicas: 1
    image: severalnines/clustercontrol
    ports:
      - 5000:80
    networks:
      - galera_cc
networks:
  galera_cc:
    driver: overlay

Save the above lines into a file, for example docker-compose.yml, in the current directory. Then, start the deployment:

$ docker stack deploy --compose-file=docker-compose.yml cc
Creating network cc_galera_cc
Creating service cc_clustercontrol
Creating service cc_galera

Docker Swarm will deploy one container for ClusterControl (replicas: 1) and another 3 containers for the database cluster (replicas: 3). Each database container will then register itself in the CMON database for deployment.

Wait for a Galera Cluster to be ready

The deployment will be automatically picked up by the ClusterControl CLI, so you basically don't have to do anything but wait. The deployment usually takes around 10 to 20 minutes, depending on the network connection. Open the ClusterControl UI at http://{any_Docker_host}:5000/clustercontrol, fill in the default administrator user details and log in. Monitor the deployment progress under Activity -> Jobs, as shown in the following screenshot:

Or, you can look at the progress directly from the docker logs command of the ClusterControl container:

$ docker logs -f $(docker ps | grep clustercontrol | awk {'print $1'})
>> Found the following cluster(s) is yet to deploy: Galera_Docker
>> Number of containers for Galera_Docker is lower than its initial size (3).
>> Nothing to do. Will check again on the next loop.
>> Found the following cluster(s) is yet to deploy: Galera_Docker
>> Found a new set of containers awaiting for deployment. Sending deployment command to CMON.
>> Cluster name         : Galera_Docker
>> Cluster type         : galera
>> Vendor               : percona
>> Provider version     : 5.7
>> Nodes discovered     : 10.0.0.6 10.0.0.7 10.0.0.5
>> Initial cluster size : 3
>> Nodes to deploy      : 10.0.0.6;10.0.0.7;10.0.0.5
>> Deploying Galera_Docker.. It's gonna take some time..
>> You shall see a progress bar in a moment. You can also monitor
>> the progress under Activity (top menu) on ClusterControl UI.
Create Galera Cluster - Job 1 RUNNING [██▊       ] 26% Installing MySQL on 10.0.0.6

That's it. Wait until the deployment completes and you will be all set with a three-node Galera Cluster running on Docker Swarm, as shown in the following screenshot. In ClusterControl, it has the same look and feel as Galera running in a standard (non-container) host environment.

Management

Managing database containers is a bit different with Docker Swarm. This section provides an overview of how the database containers should be managed through ClusterControl.

Connecting to the Cluster

To verify the status of the replicas and the service names, run the following command:

$ docker service ls
ID            NAME               MODE        REPLICAS  IMAGE
eb1izph5stt5  cc_clustercontrol  replicated  1/1       severalnines/clustercontrol:latest
ref1gbgne6my  cc_galera          replicated  3/3       severalnines/centos-ssh:latest

If the application/client is running in the same Swarm network space, you can connect to it directly via the service name endpoint. If not, use the routing mesh by connecting to the published port (3306) on any of the Docker Swarm nodes. Connections to these endpoints are load balanced automatically by Docker Swarm in a round-robin fashion (a quick connection check is sketched at the end of this post).

Scale up/down

Typically, when adding a new database node, we need to prepare a new host with a base operating system and passwordless SSH. In Docker Swarm, you just scale out the service to the number of replicas you desire:

$ docker service scale cc_galera=5
cc_galera scaled to 5

ClusterControl will then pick up the new containers registered inside the cmon.containers table and trigger add node jobs, one container at a time. You can look at the progress under Activity -> Jobs.

Scaling down is similar, using the "service scale" command. However, ClusterControl doesn't know whether the containers that have been removed by Docker Swarm were part of automatic rescheduling or a deliberate scale down (which indicates that we wanted the containers to be removed). Thus, to scale down from 5 nodes to 3 nodes, one would:

$ docker service scale cc_galera=3
cc_galera scaled to 3

Then, remove the stopped hosts from the ClusterControl UI by going to Nodes -> roll over the removed container -> click on the 'X' icon on the top right -> Confirm & Remove Node. ClusterControl will then execute a remove node job and bring the cluster back to the expected size.

Failover

In case of container failure, Docker Swarm's automatic rescheduling kicks in and there will be a new replacement container with the same IP address as the old one (but a different container ID). ClusterControl will then start to provision this node from scratch, performing the installation and configuration and getting it to rejoin the cluster. The old container will be removed automatically from ClusterControl before the deployment starts. Go ahead and try to kill one of the database containers:

$ docker kill [container ID]

You'll see that the new containers Swarm creates are provisioned automatically by ClusterControl.

Creating a new cluster

To create a new cluster, just create another service or stack with a different CLUSTER_NAME and service name. The following is an example where we create another Galera Cluster running on MariaDB 10.1 (some extra environment variables are required for MariaDB 10.1):

version: '3'
services:
  galera2:
    deploy:
      replicas: 3
    image: severalnines/centos-ssh
    ports:
      - 3306
    environment:
      CLUSTER_TYPE: "galera"
      CLUSTER_NAME: "MariaDB_Galera"
      VENDOR: "mariadb"
      PROVIDER_VERSION: "10.1"
      INITIAL_CLUSTER_SIZE: 3
      DB_ROOT_PASSWORD: "mypassword123"
    networks:
      - cc_galera_cc
networks:
  cc_galera_cc:
    external: true

Then, create the service:

$ docker stack deploy --compose-file=docker-compose.yml db2

Go back to the ClusterControl UI -> Activity -> Jobs and you should see that a new deployment has started. After a couple of minutes, the new cluster will be listed in the ClusterControl dashboard.

Destroying everything

To remove everything (including the ClusterControl container), you just need to remove the stack created by Docker Swarm:

$ docker stack rm cc
Removing service cc_clustercontrol
Removing service cc_galera
Removing network cc_galera_cc

That's it, the whole stack has been removed. Pretty neat, huh? You can start all over again by running the "docker stack deploy" command, and everything will be ready after a couple of minutes.

Summary

The flexibility you get by running a single command to deploy or destroy a whole environment can be useful for different types of use cases, such as backup verification, DDL procedure testing, query performance tweaking, experimenting with proofs of concept and also staging temporary data. These use cases are closer to a developer environment. With this approach, you can now treat a stateful service "statelessly".

Would you like to see ClusterControl manage the whole database container stack through the UI, via point and click? Let us know your thoughts in the comments section below. In the next blog post, we are going to look at how to perform automatic backup verification on Galera Cluster using containers.

Related resources:
MySQL on Docker: Composing the Stack
MySQL on Docker: Deploy a Homogeneous Galera Cluster with etcd
MySQL on Docker: Introduction to Docker Swarm Mode and Multi-Host Networking
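As a follow-up to the "Connecting to the Cluster" section above, here is a quick connectivity check through the Swarm routing mesh. This is only a sketch: the host name is a placeholder for any of your Docker Swarm nodes, and the port and root password come from the example stack definition, so adjust them to your deployment:

# Any Swarm node forwards the published port 3306 to one of the
# Galera containers (load balanced round-robin by the routing mesh).
mysql -h swarm-node.example.com -P 3306 -u root -pmypassword123 \
      -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

A wsrep_cluster_size of 3 confirms that all three Galera containers have joined the cluster.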