
News

Posted 1 day ago by MySQL Performance Blog
At AWS re:Invent 2018 there were many great announcements of new AWS services and features, but one basic feature that I've been waiting years to see released is still nowhere to be found. AWS Elastic Block Storage (EBS) is great, and it has got better through the years, adding different storage types and features like Provisioned IOPS. However, it still has one basic, inconvenient requirement: I have to decide in advance how much space I need to allocate, and pay for all of that allocated space whether I use it or not. It would be so much better if AWS allowed true consumption-model pricing with EBS, where you pay for the storage used, not the storage allocated. This is already the case for S3, RDS, and even EC2 instances (with the Unlimited option on T2/T3 instances), not to mention serverless-focused services. For example, I would love to be able to create a 1TB EBS volume but pay for only 10GB of storage if that is all the space I actually use. Modern storage subsystems do a good job of differentiating between the space available on the block device and what's being used by user files and filesystem metadata. Space that is no longer allocated can be TRIMmed. This is a basic requirement for working well on flash storage, and as modern EC2 instances already provision EBS storage as emulated NVMe devices, I would imagine Amazon could hook into such functionality to track the space actually used. For us at Percona this would make shipping applications on AWS Marketplace much more convenient. Right now, for Percona Monitoring and Management (PMM), we have to choose how much space to allocate to the EBS volume by default, picking between making it expensive to run because we pay for a large, unused EBS volume, or setting a very limited default capacity that requires user action to resize the volume. Consumption-based EBS pricing would solve this dilemma. This problem seems to be well recognized and understood. For example, Pure Storage Cloud Block Storage (currently in beta) is expected to have such a feature. I hope that, with its insane customer focus, AWS will add this feature in the future, but for now we have to get by without it. —Image: Arnold Reinhold [CC BY-SA 2.5], via Wikimedia Commons
Posted 2 days ago by MySQL Performance Blog
Please join Percona's CEO Peter Zaitsev as he presents MySQL 8 for Developers on Wednesday, December 12th, 2018 at 11:00 AM PST (UTC-8) / 2:00 PM EST (UTC-5). Register Now. There are many great new features in MySQL 8, but how exactly can they help your application? This session takes a practical look at MySQL 8 features, details which limitations of previous MySQL versions are overcome by MySQL 8, and discusses what you can do with MySQL 8 that you could not have done before. Register for MySQL 8 for Developers to learn how MySQL's new features can help your application, and more.
Posted 2 days ago by Percona Community
Ever wondered how hosting companies manage their MySQL database architecture? At Hostinger, we have various MySQL setups, ranging from standalone replica-less instances to Percona XtraDB Cluster (below, just PXC), ProxySQL-based routing, and even absolutely custom and unique solutions which I'm going to describe in this blog post. We do not have elephant-sized databases for internal services like API, billing, and clients. Thus almost every decision ends up with high availability as a top priority instead of scalability. Still, scaling vertically is good enough for our case, as the database size does not exceed 500GB. One of the top requirements is the ability to access the master node, as we have a fairly even split between read and write workloads. Our current setup for storing all the data about clients, servers, and so forth uses a PXC cluster formed of three nodes without any geo-replication. All nodes run in the same datacenter. We have plans to migrate this cluster to a geo-replicated cluster across three locations: the United States, the Netherlands, and Singapore. This would allow us to guarantee high availability if one of the locations became unreachable. Since PXC uses fully synchronous replication, there will be higher latencies for writes, but reads will be much quicker because of the local replica in every location. We did some research on MySQL Group Replication, but it requires instances to be closer to each other and is more sensitive to latencies: Group Replication is designed to be deployed in a cluster environment where server instances are very close to each other, and is impacted by both network latency as well as network bandwidth. PXC was used previously, thus we know how to deal with it in critical circumstances and make it more highly available. In the 000webhost.com project and hAPI (Hostinger API) we use the aforementioned unique solution, which selects the master node using a Layer 3 protocol. One of our best friends is the BGP protocol, which is old enough to buy its own beer, hence we use it a lot. This implementation also uses BGP as the underlying protocol to point to the real master node. To run the BGP protocol we use the ExaBGP service and announce the VIP address as anycast from both master nodes. You may be asking: how are you sure MySQL queries go to one and the same instance instead of hitting both? We use Zookeeper's ephemeral nodes to acquire a mutually exclusive lock. Zookeeper acts like a circuit breaker between the BGP speakers and the MySQL clients. If the lock is acquired, we announce the VIP from the master node and applications send their queries along this path. If the lock is released, another node can take it over and announce the VIP, so the application will keep sending queries without any extra effort. (Diagram: MySQL setup at Hostinger.) The second question is: what conditions should be met to stop announcing the VIP? This can be implemented differently depending on the use case, but we release the lock if the MySQL process is down, using systemd's Requires in the unit file of ExaBGP. As the systemd documentation puts it: with or without specifying After=, this unit will be stopped if one of the other units is explicitly stopped. With systemd we can create a nice dependency tree which ensures all of these conditions are met. Stopping, killing, or even rebooting MySQL will make systemd stop the ExaBGP process and withdraw the VIP announcement. The final result is that a new master is selected.
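The post does not reproduce the unit file itself, so the following is only a minimal sketch of how such a dependency could be expressed; the service names, paths, and options are assumptions, not taken from the original article:

# /etc/systemd/system/exabgp.service -- hypothetical sketch, not Hostinger's actual unit file
[Unit]
Description=ExaBGP anycast VIP announcer
Requires=mysql.service
After=mysql.service

[Service]
ExecStart=/usr/sbin/exabgp /etc/exabgp/exabgp.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target

With Requires=mysql.service in place, stopping or killing the MySQL unit makes systemd stop ExaBGP as well, which withdraws the VIP announcement exactly as described above.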
We battle-tested these master failovers during our game days, and nothing critical has been noticed yet. If you think good architecture is expensive, try bad architecture. —This post was originally published at https://www.hostinger.com/blog/mysql-setup-at-hostinger-explained/ in June 2018. The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors' own. When using advice from this or any other online resource, test ideas before applying them to your production systems, and always secure a working backup. The post MySQL Setup at Hostinger Explained appeared first on Percona Community Blog.
Posted 3 days ago by MySQL Performance Blog
Percona is glad to announce the release of Percona XtraBackup 8.0.4 on December 10, 2018. You can download it from our download site and from the apt and yum repositories. Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, it drives down backup costs while providing unique features for MySQL backups. This release of Percona XtraBackup is a General Availability release ready for use in a production environment. Please note the following about this release: The deprecated innobackupex has been removed. Use the xtrabackup command to back up your instances: $ xtrabackup --backup --target-dir=/data/backup When migrating from earlier database server versions, back up and restore using XtraBackup 2.4, and then use mysql_upgrade from MySQL 8.0.x. If you use yum or apt repositories to install Percona XtraBackup 8.0.4, ensure that you have enabled the new tools repository. You can do this with the percona-release enable tools release command and then install the percona-xtrabackup-80 package (a short example appears after the bug list below). All Percona software is open source and free. We are grateful to the community for the invaluable contributions to Percona XtraBackup. We would especially like to highlight the input of Alexey Kopytov, who has been actively offering improvements and submitting bug reports for Percona XtraBackup. New Features: Percona XtraBackup 8.0.4 is based on MySQL 8.0.13 and fully supports the Percona Server for MySQL 8.0 series and the MySQL 8.0 series. Bugs Fixed:
PXB-1699: xtrabackup --prepare could fail on backups of MySQL 8.0.13 databases
PXB-1704: xtrabackup --prepare could hang while performing an insert buffer merge
PXB-1668: When the --throttle option was used, the applied value was different from the one specified by the user (off-by-one error)
PXB-1679: PXB could crash when an ALTER TABLE … TRUNCATE PARTITION command was run during a backup without locking DDL
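On an apt-based system, the repository and package steps mentioned above look roughly like this (a sketch; on yum-based systems substitute yum for the apt-get commands):

$ sudo percona-release enable tools release
$ sudo apt-get update
$ sudo apt-get install percona-xtrabackup-80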
Posted 3 days ago by Alena Subotina
Introduction: In this article, we will show how to perform a routine data import from multiple files matching a certain mask with the help of the Data Import functionality of dbForge Studio for MySQL, and how to schedule the recurring execution of the import with Microsoft Task Scheduler. Scenario: Suppose we need to simultaneously import multiple daily […]
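As a rough illustration of the Task Scheduler half of this setup (the task name, schedule, and the batch file that launches the import are hypothetical and not taken from the article), a recurring task can be registered from the Windows command line like this:

> schtasks /Create /TN "DailyDataImport" /TR "C:\scripts\run_import.bat" /SC DAILY /ST 02:00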
Posted 5 days ago by Valeriy Kravchuk
Probably all of us MySQL users, DBAs, and developers have seen error 1213 more than once, in one context or another:
mysql> select * from t1;ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
The first thing that comes to mind in this case is: "OK, we have an InnoDB deadlock, let's check the details", followed by the SHOW ENGINE INNODB STATUS check, like this:
mysql> show engine innodb status\G*************************** 1. row ***************************  Type: InnoDB  Name:Status:=====================================2018-12-08 17:41:11 0x7f2f8b8db700 INNODB MONITOR OUTPUT=====================================Per second averages calculated from the last 12 seconds-----------------BACKGROUND THREAD-----------------srv_master_thread loops: 59 srv_active, 0 srv_shutdown, 14824 srv_idlesrv_master_thread log flush and writes: 14882----------SEMAPHORES----------OS WAIT ARRAY INFO: reservation count 326OS WAIT ARRAY INFO: signal count 200RW-shared spins 0, rounds 396, OS waits 195RW-excl spins 0, rounds 120, OS waits 4RW-sx spins 0, rounds 0, OS waits 0Spin rounds per wait: 396.00 RW-shared, 120.00 RW-excl, 0.00 RW-sx------------TRANSACTIONS------------Trx id counter 14960Purge done for trx's n:o < 14954 undo n:o < 0 state: running but idleHistory list length 28LIST OF TRANSACTIONS FOR EACH SESSION:---TRANSACTION 421316960193880, not started0 lock struct(s), heap size 1136, 0 row lock(s)---TRANSACTION 421316960192752, not started0 lock struct(s), heap size 1136, 0 row lock(s)--------FILE I/O--------...
Now, what if you get output like the one above, without any LATEST DETECTED DEADLOCK section? I've seen people wondering how that is even possible and trying to find some suspicious bug somewhere... Do not be in a hurry - it is time to recall that there are actually at least 4 quite common reasons to get error 1213 in modern (5.5+) versions of MySQL, MariaDB, and friends:
InnoDB deadlock happened
Metadata deadlock happened
If you are lucky enough to use Galera cluster, a Galera conflict happened
A deadlock happened in some other storage engine (for example, MyRocks)
I am not lucky enough to use MySQL's group replication yet, but I know that conflicts there are also possible. I am just not sure if error 1213 is also reported in that case. Feel free to check with a test case similar to the one I've used for Galera below. I also suspect deadlocks with other engines are possible. As a bonus point, I'll demonstrate the deadlock with MyRocks as well. Let's reproduce these 3 cases one by one and check how to get more information on them. In all cases it's enough to have at most 2 InnoDB tables with just two rows:
mysql> show create table t1\G*************************** 1. row ***************************       Table: t1Create Table: CREATE TABLE `t1` (  `id` int(11) NOT NULL,  `c1` int(11) DEFAULT NULL,  PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=latin11 row in set (0.00 sec)mysql> show create table t2\G*************************** 1.
row ***************************       Table: t2Create Table: CREATE TABLE `t2` (  `id` int(11) NOT NULL,  `c1` int(11) DEFAULT NULL,  PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=latin11 row in set (0.00 sec)mysql> select * from t1;+----+------+| id | c1   |+----+------+|  1 |    1 ||  2 |    2 |+----+------+2 rows in set (0.00 sec)mysql> select * from t2;+----+------+| id | c1   |+----+------+|  1 |    1 ||  2 |    2 |+----+------+2 rows in set (0.00 sec)We'll need two sessions, surely.InnoDB DeadlockWith InnoDB and tables above it's really easy to end up with a deadlock. In the first session execute the following:mysql> start transaction;Query OK, 0 rows affected (0.00 sec)mysql> select * from t1 where id = 1 for update;+----+------+| id | c1   |+----+------+|  1 |    1 |+----+------+1 row in set (0.00 sec)In the second session execute:mysql> start transaction;Query OK, 0 rows affected (0.00 sec)mysql> select * from t1 where id = 2 for update;+----+------+| id | c1   |+----+------+|  2 |    2 |+----+------+1 row in set (0.02 sec)Now in the first session try to access the row with id=2 asking for incompatible lock:mysql> select * from t1 where id = 2 for update;This statement hangs waiting for a lock (up to innodb_lock_wait_timeout seconds). Try to access the row with id=1 asking for incompatible lock in the second session, and you'll get the deadlock error:mysql> select * from t1 where id = 1 for update;ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transactionat this moment SELECT in the first transaction returns data:+----+------+| id | c1   |+----+------+|  2 |    2 |+----+------+1 row in set (5.84 sec)It's that simple, one table and two rows is enough. We can get the details in the output of SHOW ENGINE INNODB STATUS:...------------------------LATEST DETECTED DEADLOCK------------------------2018-12-08 18:32:59 0x7f2f8b8db700*** (1) TRANSACTION:TRANSACTION 15002, ACTIVE 202 sec starting index readmysql tables in use 1, locked 1LOCK WAIT 3 lock struct(s), heap size 1136, 2 row lock(s)MySQL thread id 8, OS thread handle 139842181244672, query id 8545 localhost root statisticsselect * from t1 where id = 2 for update*** (1) WAITING FOR THIS LOCK TO BE GRANTED:RECORD LOCKS space id 94 page no 3 n bits 72 index PRIMARY of table `test`.`t1` trx id 15002 lock_mode X locks rec but not gap waiting*** (2) TRANSACTION:TRANSACTION 15003, ACTIVE 143 sec starting index readmysql tables in use 1, locked 13 lock struct(s), heap size 1136, 2 row lock(s)MySQL thread id 9, OS thread handle 139842181510912, query id 8546 localhost root statisticsselect * from t1 where id = 1 for update*** (2) HOLDS THE LOCK(S):RECORD LOCKS space id 94 page no 3 n bits 72 index PRIMARY of table `test`.`t1` trx id 15003 lock_mode X locks rec but not gap*** (2) WAITING FOR THIS LOCK TO BE GRANTED:RECORD LOCKS space id 94 page no 3 n bits 72 index PRIMARY of table `test`.`t1` trx id 15003 lock_mode X locks rec but not gap waiting*** WE ROLL BACK TRANSACTION (2)------------TRANSACTIONS------------...In the case above I've used Percona Server  5.7.24-26 (why not). Details of output may vary depending on version (and bugs it has :).  If you use MariaDB 5.5+, in case of InnoDB deadlock special innodb_deadlocks status variable is also incremented.Metadata DeadlockUnlike with InnoDB deadlocks, chances that you've seen deadlocks with metadata locks involved are low. 
One may spend notable time trying to reproduce such a deadlock, but (as usual) quck check of MySQL bugs database may help to find an easy to reproduce case. I mean Bug #65890 - "Deadlock that is not a deadlock with transaction and lock tables".So, let's try the following scenario with two sessions and out InnoDB tables, t1 and t2. In one session:mysql> start transaction;Query OK, 0 rows affected (0.00 sec)mysql> select * from t2 for update;+----+------+| id | c1   |+----+------+|  1 |    1 ||  2 |    2 |+----+------+2 rows in set (0.00 sec)In another session:mysql> lock tables t1 write, t2 write;It hangs, waiting as long as lock_wait_timeout. We can check what happens with metadata locks using performance_schema.metadata_locks table (as we use MySQL or Percona Server 5.7+, more on setup, alternatives for MariaDB etc here and there). In the first session:mysql> select * from performance_schema.metadata_locks;+-------------+--------------------+----------------+-----------------------+----------------------+---------------+-------------+--------+-----------------+----------------+| OBJECT_TYPE | OBJECT_SCHEMA      | OBJECT_NAME    | OBJECT_INSTANCE_BEGIN | LOCK_TYPE            | LOCK_DURATION | LOCK_STATUS | SOURCE | OWNER_THREAD_ID | OWNER_EVENT_ID |+-------------+--------------------+----------------+-----------------------+----------------------+---------------+-------------+--------+-----------------+----------------+| TABLE       | test               | t2             |       139841686765904 | SHARED_WRITE         | TRANSACTION   | GRANTED     |        |              45 |           2850 || GLOBAL      | NULL               | NULL           |       139841688088672 | INTENTION_EXCLUSIVE  | STATEMENT     | GRANTED     |        |              46 |            205 || SCHEMA      | test               | NULL           |       139841688088912 | INTENTION_EXCLUSIVE  | TRANSACTION   | GRANTED     |        |              46 |            205 || TABLE       | test               | t1             |       139841688088992 | SHARED_NO_READ_WRITE | TRANSACTION   | GRANTED     |        |              46 |            207 || TABLE       | test               | t2             |       139841688089072 | SHARED_NO_READ_WRITE | TRANSACTION   | PENDING     |        |              46 |            208 || TABLE       | performance_schema | metadata_locks |       139841686219040 | SHARED_READ          | TRANSACTION   | GRANTED     |        |              45 |           3003 |+-------------+--------------------+----------------+-----------------------+----------------------+---------------+-------------+--------+-----------------+----------------+6 rows in set (0.00 sec)As soon as we try this in the first session:mysql> select * from t1;ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transactionwe get the same deadlock error 1213 and LOCK TABLES in the second session completes. We can find nothing about this deadlock in the output of SHOW ENGINE INNODB STATUS (as shared at the beginning of this post). I am also not aware about any status variables to count metadata deadlocks.You can find some useful information about metadata deadlocks in the manual.Galera ConflictFor simplicity I'll use MariaDB 10.1.x and simple 2 nodes setup on the same box as I described here. 
I'll start first node as a new cluster and create tables for this test:openxs@ao756:~/dbs/maria10.1$ bin/mysqld_safe --defaults-file=/home/openxs/galera/mynode1.cnf --wsrep-new-cluster &[1] 13022openxs@ao756:~/dbs/maria10.1$ 181208 20:40:52 mysqld_safe Logging to '/tmp/mysql-node1.err'.181208 20:40:52 mysqld_safe Starting mysqld daemon with databases from /home/openxs/galera/node1openxs@ao756:~/dbs/maria10.1$ bin/mysql  --socket=/tmp/mysql-node1.sock testReading table information for completion of table and column namesYou can turn off this feature to get a quicker startup with -AWelcome to the MariaDB monitor.  Commands end with ; or \g.Your MariaDB connection id is 4Server version: 10.1.34-MariaDB Source distributionCopyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.MariaDB [test]> drop table t1, t2;ERROR 1051 (42S02): Unknown table 'test.t2'MariaDB [test]> create table t1(id int, c1 int, primary key(id));Query OK, 0 rows affected (0.29 sec)MariaDB [test]> create table t2(id int, c1 int, primary key(id));Query OK, 0 rows affected (0.22 sec)MariaDB [test]> insert into t1 values (1,1), (2,2);Query OK, 2 rows affected (0.07 sec)Records: 2  Duplicates: 0  Warnings: 0MariaDB [test]> insert into t2 values (1,1), (2,2);Query OK, 2 rows affected (0.18 sec)Records: 2  Duplicates: 0  Warnings: 0Then I'll start second node, make sure it joined the cluster and has the same data:openxs@ao756:~/dbs/maria10.1$ bin/mysqld_safe --defaults-file=/home/openxs/galera/mynode2.cnf &[2] 15110openxs@ao756:~/dbs/maria10.1$ 181208 20:46:11 mysqld_safe Logging to '/tmp/mysql-node2.err'.181208 20:46:11 mysqld_safe Starting mysqld daemon with databases from /home/openxs/galera/node2openxs@ao756:~/dbs/maria10.1$ bin/mysql --socket=/tmp/mysql-node2.sock test     Reading table information for completion of table and column namesYou can turn off this feature to get a quicker startup with -AWelcome to the MariaDB monitor.  Commands end with ; or \g.Your MariaDB connection id is 4Server version: 10.1.34-MariaDB Source distributionCopyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.  MariaDB [test]> show status like 'wsrep_cluster%';+--------------------------+--------------------------------------+| Variable_name            | Value                                |+--------------------------+--------------------------------------+| wsrep_cluster_conf_id    | 4                                    || wsrep_cluster_size       | 2                                    || wsrep_cluster_state_uuid | b1d227b1-0211-11e6-8ce0-3644ad2b03dc || wsrep_cluster_status     | Primary                              |+--------------------------+--------------------------------------+4 rows in set (0.04 sec)MariaDB [test]> select * from t2;+----+------+| id | c1   |+----+------+|  1 |    1 ||  2 |    2 |+----+------+2 rows in set (0.02 sec)Now we are ready to try to provoke Galera conflict. For this we have to try to update the same data in transactions on two different nodes. 
In one session connected to node1: MariaDB [test]> select @@wsrep_node_name;+-------------------+| @@wsrep_node_name |+-------------------+| node1             |+-------------------+1 row in set (0.00 sec)MariaDB [test]> start transaction;Query OK, 0 rows affected (0.00 sec)MariaDB [test]> update test.t1 set c1 = 5 where id=1;Query OK, 1 row affected (0.00 sec)Rows matched: 1  Changed: 1  Warnings: 0In another session connected to other node:MariaDB [test]> select @@wsrep_node_name;+-------------------+| @@wsrep_node_name |+-------------------+| node2             |+-------------------+1 row in set (0.00 sec)MariaDB [test]> start transaction;Query OK, 0 rows affected (0.00 sec)MariaDB [test]> update test.t1 set c1 = 6 where id=1;Query OK, 1 row affected (0.00 sec)Rows matched: 1  Changed: 1  Warnings: 0 Now in the first we can COMMIT successfully:MariaDB [test]> commit;Query OK, 0 rows affected (0.12 sec)But if we try to COMMIT in the second:MariaDB [test]> commit;ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transactionWe get that same error 1213 about the deadlock. Surely you'll see nothing about this deadlock in INNODB STATUS output, as it was NOT an InnoDB deadlock, but Galera conflict. Check these status variables on the node2:MariaDB [test]> show status like 'wsrep_local%';+----------------------------+--------------------------------------+| Variable_name              | Value                                |+----------------------------+--------------------------------------+| wsrep_local_bf_aborts      | 1                                    || wsrep_local_cached_downto  | 75                                   || wsrep_local_cert_failures  | 0                                    || wsrep_local_commits        | 0                                    || wsrep_local_index          | 0                                    || wsrep_local_recv_queue     | 0                                    || wsrep_local_recv_queue_avg | 0.000000                             || wsrep_local_recv_queue_max | 1                                    || wsrep_local_recv_queue_min | 0                                    || wsrep_local_replays        | 0                                    || wsrep_local_send_queue     | 0                                    || wsrep_local_send_queue_avg | 0.000000                             || wsrep_local_send_queue_max | 1                                    || wsrep_local_send_queue_min | 0                                    || wsrep_local_state          | 4                                    || wsrep_local_state_comment  | Synced                               || wsrep_local_state_uuid     | b1d227b1-0211-11e6-8ce0-3644ad2b03dc |+----------------------------+--------------------------------------+17 rows in set (0.01 sec)If wsrep_local_bf_aborts > 0, you had conflicts and local transaction was rolled back to prevent them. We can see that remote one wins, on node2:MariaDB [test]> select * from t1;+----+------+| id | c1   |+----+------+|  1 |    5 ||  2 |    2 |+----+------+2 rows in set (0.00 sec)To summarize, in Galera "first commit wins" and local transaction involved in conflict is always a looser. You can get a lot of information about conflicts in the error log if you enable conflict logging features through wsrep_log_conflicts and cert.log_conflicts. 
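As a minimal sketch of enabling the conflict logging just mentioned (both settings are dynamic here; they can also be placed in the configuration file instead):
MariaDB [test]> set global wsrep_log_conflicts=ON;
MariaDB [test]> set global wsrep_provider_options="cert.log_conflicts=ON";
With these enabled, details of each aborted local transaction should be written to the error log.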
See this fine manual for details.MyRocks DeadlockWe can easily check how deadlocks are processed by MyRocks by just loading the plugin for the engine, converting tables to MyRocks and trying the same InnoDB scenario with the same Percona Server we used initially. But first, if you use Percona binaries you have to install a separate package:openxs@ao756:~$ dpkg -l | grep rocksdbopenxs@ao756:~$ sudo apt-get install percona-server-rocksdb-5.7[sudo] password for openxs:Reading package lists... DoneBuilding dependency tree...Unpacking percona-server-rocksdb-5.7 (5.7.24-26-1.trusty) ...Setting up percona-server-rocksdb-5.7 (5.7.24-26-1.trusty) ... * This release of Percona Server is distributed with RocksDB storage engine. * Run the following script to enable the RocksDB storage engine in Percona Server:        ps-admin --enable-rocksdb -u -p[mysql_admin_pass] [-S ] [-h -P ]Percona's manual has a lot more details and relies on separate ps-admin script, but basically you have to INSTALL PLUGINs like this (check script's code):mysql> INSTALL PLUGIN ROCKSDB SONAME 'ha_rocksdb.so';Query OK, 0 rows affected (0.86 sec)mysql> INSTALL PLUGIN ROCKSDB_CFSTATS SONAME 'ha_rocksdb.so';Query OK, 0 rows affected (0.06 sec)mysql> INSTALL PLUGIN ROCKSDB_DBSTATS SONAME 'ha_rocksdb.so';Query OK, 0 rows affected (0.08 sec)mysql> INSTALL PLUGIN ROCKSDB_PERF_CONTEXT SONAME 'ha_rocksdb.so';Query OK, 0 rows affected (0.05 sec)mysql> INSTALL PLUGIN ROCKSDB_PERF_CONTEXT_GLOBAL SONAME 'ha_rocksdb.so';Query OK, 0 rows affected (0.06 sec)mysql> INSTALL PLUGIN ROCKSDB_CF_OPTIONS SONAME 'ha_rocksdb.so';Query OK, 0 rows affected (0.05 sec)mysql> INSTALL PLUGIN ROCKSDB_GLOBAL_INFO SONAME 'ha_rocksdb.so';Query OK, 0 rows affected (0.05 sec)mysql> INSTALL PLUGIN ROCKSDB_COMPACTION_STATS SONAME 'ha_rocksdb.so';Query OK, 0 rows affected (0.05 sec)mysql> INSTALL PLUGIN ROCKSDB_DDL SONAME 'ha_rocksdb.so';Query OK, 0 rows affected (0.06 sec)mysql> INSTALL PLUGIN ROCKSDB_INDEX_FILE_MAP SONAME 'ha_rocksdb.so';Query OK, 0 rows affected (0.05 sec)mysql> INSTALL PLUGIN ROCKSDB_LOCKS SONAME 'ha_rocksdb.so';Query OK, 0 rows affected (0.05 sec)mysql> INSTALL PLUGIN ROCKSDB_TRX SONAME 'ha_rocksdb.so';Query OK, 0 rows affected (0.05 sec)mysql> INSTALL PLUGIN ROCKSDB_DEADLOCK SONAME 'ha_rocksdb.so';Query OK, 0 rows affected (0.06 sec)Then check that the engine is there and convert tables:mysql> show engines;+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+| Engine             | Support | Comment                                                                    | Transactions | XA   | Savepoints |+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+| ROCKSDB            | YES     | RocksDB storage engine                                                     | YES          | YES  | YES        |...mysql> alter table t1 engine=rocksdb;Query OK, 2 rows affected (0.64 sec)Records: 2  Duplicates: 0  Warnings: 0mysql> alter table t2 engine=rocksdb;Query OK, 2 rows affected (0.58 sec)Records: 2  Duplicates: 0  Warnings: 0Now we are ready to try the same InnoDB scenario. Just note that lock wait timeout for MyRocks is defined by the rocksdb_lock_wait_timeout that is small by default, 1 second, do you have have to increase it first. 
You also have to set rocksdb_deadlock_detect to ON (as it's OFF by default): mysql> set global rocksdb_lock_wait_timeout=50;Query OK, 0 rows affected (0.00 sec)mysql> set global rocksdb_deadlock_detect=ON;Query OK, 0 rows affected (0.00 sec)mysql> \rConnection id:    14Current database: testmysql> start transaction;Query OK, 0 rows affected (0.02 sec)mysql> select * from t1 where id = 1 for update;+----+------+| id | c1   |+----+------+|  1 |    1 |+----+------+1 row in set (0.00 sec) Then in the second session:mysql> start transaction;Query OK, 0 rows affected (0.00 sec)mysql> select * from t1 where id = 2 for update;+----+------+| id | c1   |+----+------+|  2 |    2 |+----+------+1 row in set (0.00 sec)In the first:mysql> select * from t1 where id = 2 for update;and in the second we get the deadlock error:mysql> select * from t1 where id = 1 for update;ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transactionmysql> show global status like '%deadlock%';+----------------------------+-------+| Variable_name              | Value |+----------------------------+-------+| rocksdb_row_lock_deadlocks | 1     |+----------------------------+-------+1 row in set (0.00 sec)Note that MyRocks has a status variable to count deadlocks. Note also that Percona Server still does NOT seem to support the SHOW ENGINE ROCKSDB TRANSACTION STATUS statement available upstream:mysql> show engine rocksdb transaction status\GERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'transaction status' at line 1I was not able to find a bug about this (sorry if I missed it), and just reported a new task to Percona's JIRA: PS-5114 - "Add support for SHOW ENGINE ROCKSDB TRANSACTION STATUS". That's probably more than enough for a single blog post (one that is mostly NOT about bugs). One day I'll refresh my knowledge of MyRocks and maybe write more about deadlock troubleshooting there. Do not be surprised if you cannot find anything in INNODB STATUS when you get error 1213; just proceed with further steps. There are other reasons to explore. (Photo caption: Venice hides a lot early in the morning...) To summarize, do not be surprised that after you get MySQL error 1213 you see no information about a recent InnoDB deadlock - there are at least 3 more reasons for this error to be reported, as explained above. You should know your configuration and use several other commands and sources of information to pinpoint what exactly happened and why.
Posted 6 days ago by MC Brown
When Tungsten Replicator extracts data, the information that has been extracted is written into the Tungsten History Log, or THL. These files are in a specific format, and they are used to store all of the extracted information in a format that can easily be used to recreate and generate data in a target. Each transaction from the source is written into the THL as an event, so within a single THL file there will be one or more events stored. For each event, we record information about the overall transaction, as well as the details of the transaction itself. That event can contain one or more statements, or rows, or both. Because we don't want an ever-increasing single file, the replicator will also divide up the THL into multiple files to make the data easier to manage. We'll get down into the details soon; until then, let's start by looking at the basics of the THL: files, sequence numbers, and how to select events. The simplest way to look at the files is first of all to start with the thl command. This provides a simple interface for looking at the THL data and understanding its contents. Let's start by getting a list of the THL files and the events, which we can do with the index command:
$ thl index
LogIndexEntry thl.data.0000000001(0:295)
LogIndexEntry thl.data.0000000002(296:591)
This shows us that there are two files, containing 295 and 296 events. We can look inside using the list command to thl. This supports a number of different selection mechanisms; first, you can select a single item:
$ thl list -seqno 1
SEQ# = 1 / FRAG# = 0 (last frag) - TIME = 2018-12-06 12:44:10.0 - EPOCH# = 0 - EVENTID = mysql-bin.000108:0000000000001574;-1 - SOURCEID = ubuntu - METADATA = [mysql_server_id=1;dbms_type=mysql;tz_aware=true;strings=utf8;service=alpha;shard=msg;tungsten_filter_columnname=true;tungsten_filter_primarykey=true;tungsten_filter_enumtostring=true] - TYPE = com.continuent.tungsten.replicator.event.ReplDBMSEvent - OPTIONS = [foreign_key_checks = 1, unique_checks = 1, time_zone = '+00:00'] - SQL(0) = - ACTION = INSERT - SCHEMA = msg - TABLE = msg
We can also specify a range using either -high or -to and -low and -from. If you specify only one of these, then it assumes you mean from the start or end. For example:
$ thl list -to 100
will list all events from the start up to and including event 100, while:
$ thl list -low 45
lists all events from 45 until the end. You can also be explicit:
$ thl list -from 45 -to 60
will list events from 45 to 60 inclusive. Finally, you can use two special options, -first and -last, which will show you the first (surprise!) and last events. You can also supply an optional number. So:
$ thl list -first
shows the first event, while:
$ thl list -first 10
shows the first 10 events. This last option can be really useful when diagnosing an issue, because it means we can look at the last events captured by the replicator without having to find the event IDs and work it out. Up to now I've focused on the events, but there's a critical element to THL that also needs to be considered when thinking about replication of data, and that is the files themselves. This is important to consider because of the file and THL load generated – if you have a busy system then you will generate a lot of events and that, in turn, will generate a lot of THL that needs to be stored. On a very busy system, it's possible to fill up a disk very quickly with THL.
You can see the files by looking into the directory where they are stored within your installation, by default the thl/SERVICE directory. For example:
$ ll /opt/continuent/thl/alpha/
total 896
drwxrwxr-x 2 mc mc   4096 Dec  6 12:44 ./
drwxrwxr-x 3 mc mc   4096 Dec  6 12:43 ../
-rw-rw-r-- 1 mc mc      0 Dec  6 12:43 disklog.lck
-rw-rw-r-- 1 mc mc 669063 Dec  6 12:44 thl.data.0000000001
-rw-rw-r-- 1 mc mc 236390 Dec  6 12:45 thl.data.0000000002
You can see that these files correspond to the output of thl index (fortunately), and you can also see that the two files are different in size. THL is automatically managed by the replicator – it creates new files on a configurable boundary, but also automatically clears those files away. We'll cover that in a future session, but for the moment I'm going to admit that I skipped something important earlier. These two THL files actually contain two copies of the exact same set of data. The difference is that the second file uses a new feature that will come in a forthcoming release: compression of the THL. In this case, we are talking about 295 row-based events, but the second THL file is almost a third of the size of the first. That's quite a saving for something that normally uses up a lot of space. If we look at the true output of thl index, we can see that the file is indeed compressed:
$ thl index
LogIndexEntry thl.data.0000000001(0:295)
LogIndexEntry thl.data.0000000002(296:591) - COMPRESSED
For installations where you are paying for your disk storage in a cloud environment, reducing your THL overhead to less than half is fairly significant. We've done a lot of testing and consistently got compression ratios of 2:1 or higher. We've also implemented encryption at this level, which means you get on-disk encryption without the need for an encrypted filesystem, and with compression as well you get secure and efficient storage of that THL, which should lower your storage and I/O overhead as well as the storage costs.
Posted 6 days ago by Marco Tusa
… or what I should keep in mind in case of disaster. Retrieving and maintaining, in SQL format, the definition of all tables in a database is a best practice that we all should adopt. Keeping that under version control is another best practice to keep in mind. While doing so may seem redundant, it can become a lifesaver in several situations: from the need to review what has historically changed in a table, to knowing who changed what and why… to when you need to recover your data and your beloved MySQL instance will not start… But let's be honest, only a few do the right thing, and even fewer keep that information up to date. Given that's the case, what can we do when we need to discover/recover the table structure? From the beginning, MySQL has used some external files to describe its internal structure. For instance, if I have a schema named windmills and a table named wmillAUTOINC1, on the file system I will see this:
-rw-r-----. 1 mysql mysql     8838 Mar 14 2018 wmillAUTOINC1.frm
-rw-r-----. 1 mysql mysql   131072 Mar 14 2018 wmillAUTOINC1.ibd
The ibd file contains the data, while the frm file contains the structure information. Putting aside ANY discussion about whether this is safe, whether it's transactional, and more… when we've experienced some major crash and data corruption, this approach has been helpful. Being able to read the frm file was the easiest way to get the information we needed. Simple tools like DBSake made the task quite trivial, and allowed us to script table definitions when needed to run long, complex, tedious data recoveries:
[root@master1 windmills]# /opt/tools/dbsake frmdump wmillAUTOINC1.frm
--
-- Table structure for table `wmillAUTOINC1`
-- Created with MySQL Version 5.7.20
--
CREATE TABLE `wmillAUTOINC1` ( `id` bigint(11) NOT NULL AUTO_INCREMENT, `uuid` char(36) COLLATE utf8_bin NOT NULL, `millid` smallint(6) NOT NULL, `kwatts_s` int(11) NOT NULL, `date` date NOT NULL, `location` varchar(50) COLLATE utf8_bin NOT NULL, `active` tinyint(2) NOT NULL DEFAULT '1', `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, `strrecordtype` char(3) COLLATE utf8_bin NOT NULL, PRIMARY KEY (`id`), KEY `IDX_millid` (`millid`,`active`), KEY `IDX_active` (`id`,`active`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin ROW_FORMAT=DYNAMIC;
Of course, if the frm file was also corrupt, then we could try to get the information from the ibdata dictionary. If that is corrupted too (trust me, I've seen all of these situations)… well, a last resort was hoping the customer had a recent table definition stored somewhere, but as mentioned before, we are not so diligent, are we? Now, though, in MySQL 8 we do not have FRM files: they were dropped. Even more interesting is that we do not have the same dictionary; most of the things that we knew have changed, including the dictionary location. So what can be done? Well, Oracle has moved the FRM information—and more—to what is called Serialized Dictionary Information (SDI). The SDI is written INSIDE the ibd file and represents a redundant copy of the information contained in the data dictionary. The SDI is updated/modified by DDL operations on tables that reside in that tablespace. This is it: if you normally have one file per table, then you will have in that file ONLY the SDI for that table, but if you have multiple tables in a tablespace, the SDI information will refer to ALL of the tables. To extract this information from the IBD files, Oracle provides a utility called ibd2sdi.
This application parses the SDI information and reports a JSON file that can be easily manipulated to extract and build the table definition. One exception is represented by Partitioned tables. The SDI information is contained ONLY in the first partition, and if you drop it, it is moved to the next one. I will show that later. But let’s see how it works. In the next examples I will look for the table’s name, attributes, and datatype starting from the dictionary tables. To obtain the info I will do this: /opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/mysql.ibd |jq '.[]?|.[]?|.dd_object?|("------------------------------------"?,"TABLE NAME = ",.name?,"****",(.columns?|.[]?|(.name?,.column_type_utf8?)))' The result will be something like: "------------------------------------" "TABLE NAME = " "tables" "****" "id" "bigint(20) unsigned" "schema_id" "bigint(20) unsigned" "name" "varchar(64)" "type" "enum('BASE TABLE','VIEW','SYSTEM VIEW')" "engine" "varchar(64)" "mysql_version_id" "int(10) unsigned" "row_format" "enum('Fixed','Dynamic','Compressed','Redundant','Compact','Paged')" "collation_id" "bigint(20) unsigned" "comment" "varchar(2048)" "------------------------------------" "TABLE NAME = " "tablespaces" "****" "id" "bigint(20) unsigned" "name" "varchar(259)" "options" "mediumtext" "se_private_data" "mediumtext" "comment" "varchar(2048)" "engine" "varchar(64)" "DB_TRX_ID" "" "DB_ROLL_PTR" "" I cut the output for brevity, but if you run the above command yourself you’ll be able to see that this retrieves the information for ALL the tables residing in the IBD. The other thing I hope you noticed is that I am NOT parsing ibdata, but mysql.ibd. Why? Because the dictionary was moved out from ibdata and is now in mysql.ibd. Look what happens if I try to parse ibdata: [root@master1 ~]# /opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/ibdata1 |jq '.' [INFO] ibd2sdi: SDI is empty. Be very careful here to not mess up your mysql.ibd file. Now what can I do to get information about my wmillAUTOINC1 table in MySQL8? That is quite simple: /opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/windmills/wmillAUTOINC.ibd |jq '.' 
[ "ibd2sdi", { "type": 1, "id": 1068, "object": { "mysqld_version_id": 80013, "dd_version": 80013, "sdi_version": 1, "dd_object_type": "Table", "dd_object": { "name": "wmillAUTOINC", "mysql_version_id": 80011, "created": 20180925095853, "last_altered": 20180925095853, "hidden": 1, "options": "avg_row_length=0;key_block_size=0;keys_disabled=0;pack_record=1;row_type=2;stats_auto_recalc=0;stats_sample_pages=0;", "columns": [ { "name": "id", "type": 9, "is_nullable": false, "is_zerofill": false, "is_unsigned": false, "is_auto_increment": true, "is_virtual": false, "hidden": 1, "ordinal_position": 1, "char_length": 11, "numeric_precision": 19, "numeric_scale": 0, "numeric_scale_null": false, "datetime_precision": 0, "datetime_precision_null": 1, "has_no_default": false, "default_value_null": false, "srs_id_null": true, "srs_id": 0, "default_value": "AAAAAAAAAAA=", "default_value_utf8_null": true, "default_value_utf8": "", "default_option": "", "update_option": "", "comment": "", "generation_expression": "", "generation_expression_utf8": "", "options": "interval_count=0;", "se_private_data": "table_id=1838;", "column_key": 2, "column_type_utf8": "bigint(11)", "elements": [], "collation_id": 83, "is_explicit_collation": false }, "indexes": [ { "name": "PRIMARY", "hidden": false, "is_generated": false, "ordinal_position": 1, "comment": "", "options": "flags=0;", "se_private_data": "id=2261;root=4;space_id=775;table_id=1838;trx_id=6585972;", "type": 1, "algorithm": 2, "is_algorithm_explicit": false, "is_visible": true, "engine": "InnoDB", ], "foreign_keys": [], "partitions": [], "collation_id": 83 } } }, { "type": 2, "id": 780, "object": { "mysqld_version_id": 80011, "dd_version": 80011, "sdi_version": 1, "dd_object_type": "Tablespace", "dd_object": { "name": "windmills/wmillAUTOINC", "comment": "", "options": "", "se_private_data": "flags=16417;id=775;server_version=80011;space_version=1;", "engine": "InnoDB", "files": [ { "ordinal_position": 1, "filename": "./windmills/wmillAUTOINC.ibd", "se_private_data": "id=775;" } ] } } } ] The JSON will contains: A section describing the DB object at high level Array of columns and related information Array of indexes Partition information (not here but in the next example) Table space information That is a lot more detail compared to what we had in the FRM, and it is quite relevant and interesting information as well. Once you have extracted the SDI, any JSON parser tool script can generate the information for the SQL DDL. I mention partitions, so let’s look at this a bit more, given they can be tricky. As mentioned, the SDI information is present ONLY in the first partition. All other partitions hold ONLY the tablespace information. Given that, then the first thing to do is to identify which partition is the first… OR simply try to access all partitions, and when you are able to get the details, extract them. The process is the same: [root@master1 ~]# /opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/windmills/wmillAUTOINCPART#P#PT20170301.ibd |jq '.' 
[ "ibd2sdi", { "type": 1, "id": 1460, "object": { "mysqld_version_id": 80013, "dd_version": 80013, "sdi_version": 1, "dd_object_type": "Table", "dd_object": { "name": "wmillAUTOINCPART", "mysql_version_id": 80013, "created": 20181125110300, "last_altered": 20181125110300, "hidden": 1, "options": "avg_row_length=0;key_block_size=0;keys_disabled=0;pack_record=1;row_type=2;stats_auto_recalc=0;stats_sample_pages=0;", "columns": [ "schema_ref": "windmills", "se_private_id": 18446744073709552000, "engine": "InnoDB", "last_checked_for_upgrade_version_id": 80013, "comment": "", "se_private_data": "autoinc=31080;version=2;", "row_format": 2, "partition_type": 7, "partition_expression": "to_days(`date`)", "partition_expression_utf8": "to_days(`date`)", "default_partitioning": 1, "subpartition_type": 0, "subpartition_expression": "", "subpartition_expression_utf8": "", "default_subpartitioning": 0, ], "foreign_keys": [], "partitions": [ { "name": "PT20170301", "parent_partition_id": 18446744073709552000, "number": 0, "se_private_id": 1847, "description_utf8": "736754", "engine": "InnoDB", "comment": "", "options": "", "se_private_data": "autoinc=0;version=0;", "values": [ { "max_value": false, "null_value": false, "list_num": 0, "column_num": 0, "value_utf8": "736754" } ], The difference, as you can see, is that the section related to partitions and sub partitions will be filled with all the details you might need to recreate the partitions. We will have: Partition type Partition expression Partition values …more Same for sub partitions. Now again see what happens if I parse the second partition: [root@master1 ~]# /opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/windmills/wmillAUTOINCPART#P#PT20170401.ibd |jq '.' [ "ibd2sdi", { "type": 2, "id": 790, "object": { "mysqld_version_id": 80011, "dd_version": 80011, "sdi_version": 1, "dd_object_type": "Tablespace", "dd_object": { "name": "windmills/wmillAUTOINCPART#P#PT20170401", "comment": "", "options": "", "se_private_data": "flags=16417;id=785;server_version=80011;space_version=1;", "engine": "InnoDB", "files": [ { "ordinal_position": 1, "filename": "./windmills/wmillAUTOINCPART#P#PT20170401.ibd", "se_private_data": "id=785;" } ] } } } ] I will get only the information about the tablespace, not the table. As promised let me show you now what happens if I delete the first partition, and the second partition becomes the first: (root@localhost) [windmills]>alter table wmillAUTOINCPART drop partition PT20170301; Query OK, 0 rows affected (1.84 sec) Records: 0 Duplicates: 0 Warnings: 0 [root@master1 ~]# /opt/mysql_templates/mysql-8P/bin/./ibd2sdi /opt/mysql_instances/master8/data/windmills/wmillAUTOINCPART#P#PT20170401.ibd |jq '.'|more [ "ibd2sdi", { "type": 1, "id": 1461, "object": { "mysqld_version_id": 80013, "dd_version": 80013, "sdi_version": 1, "dd_object_type": "Table", "dd_object": { "name": "wmillAUTOINCPART", "mysql_version_id": 80013, "created": 20181129130834, "last_altered": 20181129130834, "hidden": 1, "options": "avg_row_length=0;key_block_size=0;keys_disabled=0;pack_record=1;row_type=2;stats_auto_recalc=0;stats_sample_pages=0;", "columns": [ { "name": "id", "type": 9, "is_nullable": false, "is_zerofill": false, "is_unsigned": false, "is_auto_increment": true, "is_virtual": false, "hidden": 1, "ordinal_position": 1, As I mentioned before, each DDL updates the SDI, and here we go: I will have all the information on what’s NOW the FIRST partition. 
Please note the value of the "created" attribute between the first time I queried the other partition and the value I have now:
/opt/mysql_instances/master8/data/windmills/wmillAUTOINCPART#P#PT20170301.ibd        "created": 20181125110300,
/opt/mysql_instances/master8/data/windmills/wmillAUTOINCPART#P#PT20170401.ibd        "created": 20181129130834,
To be clear, the second "created" value is from NOW (PT20170401), from when I dropped the other partition (PT20170301). Conclusions: In the end, this solution is definitely more powerful than the FRM files. It allows us to parse the file and identify the table definition more easily, providing us with much more detail and information. The problems will arise if and when the IBD file becomes corrupt. As per the manual: for InnoDB, an SDI record requires a single index page, which is 16KB in size by default. However, SDI data is compressed to reduce the storage footprint. This means that for each table I have one page, if I equate one record to one table. Which means that in case of IBD corruption I should (likely) be able to read those pages. Unless I have bad (very bad) luck. I still wonder how the size of an IBD affects SDI retrieval, but given I have not tried it yet, I will have to let you know. As an aside, I am working on a script to facilitate the generation of the SQL; it's not yet ready, but you can find it here (a rough jq-only sketch of the idea appears at the end of this post). Last note, but keep this in mind! It is stated in the manual, but in a hidden place and in small letters: DDL operations take longer due to writing to storage, undo logs, and redo logs instead of .frm files. References: https://stedolan.github.io/jq/ https://dev.mysql.com/doc/refman/8.0/en/ibd2sdi.html https://dev.mysql.com/doc/refman/8.0/en/serialized-dictionary-information.html https://dev.mysql.com/doc/refman/8.0/en/data-dictionary-limitations.html —Photo by chuttersnap on Unsplash
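As a rough illustration of that idea (not the author's script, just a sketch reusing the same jq approach as the examples above, with the binary path shortened), the column names and types for a table can be pulled out of the SDI like this:

$ ibd2sdi /opt/mysql_instances/master8/data/windmills/wmillAUTOINC.ibd | jq -r '.[]? | objects | select(.object.dd_object_type=="Table") | .object.dd_object | "-- " + .name, (.columns[]? | .name + " " + .column_type_utf8)'

The output is not valid DDL by itself, but it gives a quick, scriptable listing of each table with its columns and data types, which can then be massaged into a CREATE TABLE statement.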
Posted 6 days ago by parvez alam
This tutorial helps you upload files using Laravel 5.7. We will create an HTML form view that uploads a file to the server and saves the path information into a MySQL table. We will use MySQL and PHP 7 to create the file upload functionality. File Upload Using Laravel 5.7 and MySQL: Let's create a new Laravel application using the Laravel CLI. The Artisan […] The post How To Upload File in Laravel 5.7 Using MySQL appeared first on Phpflow.com.
Posted 7 days ago by MyDBOPS
In this blog, I will provide a step-by-step procedure to migrate from on-premise MySQL to Amazon RDS/Aurora using Percona XtraBackup. Both RDS and Aurora are DBaaS offerings from Amazon. To know more about DBaaS you can view our presentation here. When you have a database of a few GB in size, it is very convenient to take a logical backup using a tool such as mysqldump or mydumper and restore it to Amazon RDS/Aurora easily. But this is not the case when your data size is a few hundred GB or a TB, where a logical backup and restore is very painful and time-consuming. To overcome this we can use XtraBackup (an open source hot backup tool), along with S3, to get the required instance in RDS/Aurora. AWS started supporting XtraBackup restores for Aurora MySQL a long time ago and has been supporting RDS for the last few months. Both follow the same procedure, hence I am using RDS for this restore, since Aurora is on the costlier side. Production use case: Recently I came across a situation of restoring an xtrabackup of 300GB data size to Aurora; it worked like a charm, the data was absolutely fine, and the restore completed in 2 hours. Before going into the demo, first let us see the limitations of this method. Limitations:
Only 5.6 is supported; source and destination should both be on 5.6
Only the whole instance is supported (partial backups at the database and table level are not supported)
With RDS the size is limited to 6TB
Database objects and user accounts are not migrated properly (we will see the workaround here)
The command below streams the backup directly to an S3 bucket; if you are low on local backup space, streaming directly avoids the use of local storage, but make sure you have a strong network pipe. # xtrabackup --user=root --password='S3cret!123' --backup --stream=tar | gzip -c | s3cmd put - s3://kabileshs3/xtrabackup56.tar.gz RDS/Aurora also supports the creation of an instance from a tar-compressed backup, and compressing the backup reduces the S3 footprint. Ensure the compressed backup size does not exceed 5TB, the maximum object size supported by S3. The backup is now uploaded to S3. Creating the RDS instance: Go to the RDS console and click "Restore a database instance from Amazon S3". Select the engine to be used for the restored instance; I am using MySQL, but you can select Aurora based on your use case. Next, specify the source backup details, i.e. the S3 bucket. Create a new IAM role for RDS to access the S3 bucket and restore the backup. Now for the last and final step, proceed to specify the DB details such as the instance size and engine version (shown as screenshots in the original post). Next choose the backup retention, VPC, and security group for accessing the DB instance. Now click next and your instance will be created; it took around 20 minutes to get the instance ready. In the above restore we have just used data from sysbench, so it shouldn't be an issue, but let's check the data consistency anyway by comparing the row counts between the source instance and the restored RDS instance (the original post shows the two counts as screenshots: the data restored on RDS versus the data on the on-premise instance; a command-line sketch of the same check follows below). Hope you have enjoyed and learned something; please also share your much-valued feedback. Image courtesy: Photo by Gareth Davies on Unsplash
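The count comparison itself is just the same query run against both endpoints; a minimal sketch (the sysbench table name and the hostnames below are hypothetical placeholders, not values from the post):

$ mysql -h onprem-db-host -u root -p -e "SELECT COUNT(*) FROM sbtest.sbtest1;"
$ mysql -h mydbinstance.xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p -e "SELECT COUNT(*) FROM sbtest.sbtest1;"

If both commands return the same count, the restore can be considered consistent for that table.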