
News

Posted over 4 years ago by Continuent
Overview: The Skinny

Clustering provides high availability and disaster recovery, along with the ability to read-scale both locally and globally. Some clusters even provide active/active capabilities, while others have a single master. Real-time database replication is a must for clustering and for other key business purposes, like reporting. There are a number of replication technologies available for MySQL, and some are even bundled into various solutions. When choosing a replication methodology, it is paramount to understand just how the data moves from source to target. In this blog post, we will examine how asynchronous, synchronous, and "semi-synchronous" replication behave when used for clustering. We will also explore how replication affects database performance and data availability.

Asynchronous Replication: Fast and Flexible

Asynchronous replication (async for short) is used by MySQL natively as well as by our own Tungsten Replicator. With async, all replication is decoupled from the database. Specifically, replication is a background process that reads events from the binary log on disk and makes those events available to the slaves upon request. There is no change to how the application works, so async is quick and easy to implement, and it does not slow down the response from MySQL to the application. With asynchronous replication, you have:

- The least impact on application performance of any replication type, because replication handles database events separately in the background by reading the binary logs.
- The best (and really the only) choice for WAN replication, because the application does not need to wait for confirmation from each slave before proceeding. That delay is prohibitive over long distances due to the simple physics of the speed of light creating delays in transit (round-trip time, or RTT).
- The ability to deploy databases at geo-scale, serving applications that need high performance globally.
- Replication that is decoupled from your application, so slow WAN links do not impact application performance.
- A chance of data lag on the replication slaves, which means that the slaves are not completely up to date with the master. In the event of a failure, there is a chance the newly promoted slave would not have the same data as the master.
- The risk of data loss when promoting a slave to master, per the above point. There are a number of techniques to mitigate data loss, which will be discussed in another blog post.

Synchronous Replication: Slow and Steady

Used by Galera and its variants, synchronous replication (sync for short) addresses the above data loss issues by guaranteeing that all transactions are committed on all nodes at database commit. Synchronous replication waits until all nodes have committed the transaction before providing a response to the application, as opposed to asynchronous replication, which occurs in the background after a commit to the master. With the sync method, you can be sure that if you commit a transaction, that transaction will be committed on every node. With synchronous replication, you have:

- The most significant application lag of any type of replication, because your application must wait for all nodes in the cluster to commit the transaction too.
- Per the above, a method that is complicated to implement over wide-area or slow networks, due to the almost prohibitive application lag while waiting for transactions to be committed on remote databases.
- Transaction commit on all nodes, guaranteed.
- No slave data lag, as slaves are always up to date by definition.
- No chance of a data loss window in the event of a failover.
- The possibility of multiple masters in a local cluster using a process called "certification"; certification processes transactions in order or, in the event of a conflict, raises an error.

Semi-Synchronous Replication: The New Guy

One example of semi-synchronous replication is MySQL group replication.
MySQL group replication introduces "semi-synchronous" replication in an attempt to merge the advantages of both asynchronous and synchronous replication. With semi-synchronous replication (semi-sync for short), transactions are committed on the master and transferred to at least one slave node, but NOT NECESSARILY committed there. At this point, control is handed back to the application, and commits to the slaves are handled in the background. Compared to synchronous replication, applications can potentially be more responsive since they receive control back sooner (though not as fast as with async). Also, compared to async, there is less chance of data loss and potentially less replication lag. With semi-synchronous replication, you have:

- Application response that is faster than synchronous replication, but slower than asynchronous replication.
- The potential for a data loss window that is smaller than with asynchronous replication, but still larger than with synchronous replication.
- Configuration that is more complex than with asynchronous replication, since it is not decoupled from the master database.
- A relatively new technology as compared to other replication methodologies. It remains to be seen if semi-sync will be a viable solution for production workloads.

Why does Tungsten Clustering choose Asynchronous Replication? The Wrap-Up

Tungsten Clustering uses the Tungsten Replicator, leveraging asynchronous replication, so that complex clusters can be deployed at geo-scale without modifying or impacting applications or database servers. When deploying over wide-area networks, asynchronous replication is the best and usually the only option to protect application performance. Even over fast LANs, asynchronous replication is the best choice for write-intensive workloads, because the bottom-line impact to the application is minimized.
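The latency trade-offs described above can be sketched with a toy model (illustrative, assumed numbers only; this is not a real replication protocol): sync waits for the slowest node's commit acknowledgment, semi-sync waits only for the first slave's receipt acknowledgment, and async waits for neither.

```python
# Toy model of per-commit latency for the three replication modes.
# All latencies in milliseconds; the values are hypothetical.

def sync_commit_latency(local_ms, slave_acks_ms):
    """Sync: the application waits for ALL nodes, so the slowest ack wins."""
    return local_ms + max(slave_acks_ms)

def semisync_commit_latency(local_ms, slave_acks_ms):
    """Semi-sync: control returns after the FIRST slave acks receipt."""
    return local_ms + min(slave_acks_ms)

def async_commit_latency(local_ms, slave_acks_ms):
    """Async: control returns right after the local commit."""
    return local_ms

# One LAN slave (2 ms) and two WAN slaves (45 ms, 120 ms RTT) - assumed.
acks = [2.0, 45.0, 120.0]
print(sync_commit_latency(1.0, acks))      # 121.0 - dominated by WAN RTT
print(semisync_commit_latency(1.0, acks))  # 3.0   - the nearby slave answers first
print(async_commit_latency(1.0, acks))     # 1.0   - local commit only
```

The model makes the WAN argument concrete: a single 120 ms round trip caps a synchronous cluster at roughly 8 serial commits per second per connection, while async stays at local-commit speed.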
Look out for Part 2 of this blog, which will dive into how different replication technologies impact cluster behavior in day-to-day operations (such as failover, local and global replication breaks, zero-downtime maintenance updates, etc.). For more information about Tungsten clusters, please visit https://docs.continuent.com. If you would like to speak with one of our team, please feel free to reach out here: https://www.continuent.com/contact
Posted over 4 years ago by MyDBOPS
We are well aware that MySQL Group Replication is one of the faster-evolving clustering technologies for MySQL. Flow control plays a key role in Group Replication performance and data integrity. In this blog I am going to explain the flow control mechanism and how it has evolved in MySQL 8.

What is Flow Control?

Both MySQL Group Replication and native async replication need binary logs to move data across the servers. What makes the difference? In MySQL Group Replication we try to achieve synchronous behavior with the help of a flow control mechanism and transaction acknowledgments (certification). Without flow control, is MySQL Group Replication just asynchronous replication? Yes, and consistency is lost.

Let us consider three nodes (GR1, GR2, GR3). GR1 is the master and the other two servers (GR2, GR3) are the appliers. Suddenly, your master node (GR1) gets a heavier write load from the application, but your applier servers are not able to catch up with the concurrent writes (perhaps the applier servers have smaller configurations, or network bandwidth between the nodes is low). So our applier servers will keep lagging. In this case, the master server slows down its writes so that the applier nodes (GR2/GR3) can catch up with the master node (GR1). This mechanism is called flow control.

From the MySQL dev official documentation: Group Replication ensures that a transaction only commits after a majority of the members in a group have received it and agreed on the relative order between all transactions that were sent concurrently. This approach works well if the total number of writes to the group does not exceed the write capacity of any member in the group. If it does, and some of the members have less write throughput than others, particularly less than the writer members, those members can start lagging behind the writers.

How does Flow Control work?
MySQL Group Replication flow control mainly depends on two factors. The controlling variables are:

- GR certifier queue size (group_replication_flow_control_certifier_threshold)
- GR applier queue size (group_replication_flow_control_applier_threshold)

Group Replication has built-in health checks that collect stats from the group members, and the collected stats are shared with the other group members periodically. The monitoring mechanism calculates the following stats for each group member:

- certifier queue size
- replication applier queue size
- total number of transactions certified
- total number of remote transactions applied in the member
- total number of local transactions

Based on the metrics collected from all the servers in the group, a throttling mechanism kicks in and decides whether to limit the rate at which a member is able to execute/commit new transactions. The group capacity is calculated based on the lowest capacity of all the members in the group.

(Figure: Flow Control Representation)

Group Replication: MySQL 5.7 vs MySQL 8

We all know that Group Replication was introduced in MySQL 5.7, but there are a lot of improvements in MySQL 8 (particularly MySQL 8.0.2). Below I have mentioned the important key features introduced in MySQL 8 (MySQL 8.0.2). I am also adding the complete list of variables (both MySQL 5.7 and MySQL 8) involved in the MySQL flow control mechanism.
- Can allow the cluster to catch up the backlog under flow control, based on the hold percentage (group_replication_flow_control_hold_percent)
- Can define when the group quota should be released, once flow control is no longer needed to throttle the writer members (group_replication_flow_control_release_percent)
- Can control the lowest/highest flow control quota that can be assigned to a member (group_replication_flow_control_max_commit_quota, group_replication_flow_control_min_quota)

MySQL 5.7:

- group_replication_flow_control_applier_threshold
- group_replication_flow_control_certifier_threshold
- group_replication_flow_control_mode

MySQL 8:

- group_replication_flow_control_hold_percent
- group_replication_flow_control_max_commit_quota
- group_replication_flow_control_member_quota_percent
- group_replication_flow_control_min_quota
- group_replication_flow_control_min_recovery_quota
- group_replication_flow_control_period
- group_replication_flow_control_release_percent

Understanding the flow control mechanism and its related variables is helpful for effective tuning of a Group Replication cluster. I hope this blog helps someone who is learning about MySQL flow control in GR. At Mydbops, we keep testing new things on MySQL and related tools, and will be coming back with a new blog soon. Photo by Jani Brumat on Unsplash
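The throttling idea described above can be sketched as a simplified model. This is an illustration under stated assumptions, not the actual Group Replication algorithm: the function, the capacity figures, and the queue sizes are all hypothetical; only the two ideas it encodes come from the post (flow control kicks in when a queue crosses its threshold, and the group quota is derived from the lowest-capacity member).

```python
# Simplified sketch of Group Replication flow control (hypothetical model).

def flow_control_quota(member_capacities, queue_sizes, threshold):
    """Return the per-period write quota for the writer, or None when
    no throttling is needed (all member queues are under the threshold)."""
    if max(queue_sizes) <= threshold:
        return None  # flow control stays inactive
    # Group capacity is bounded by the slowest member in the group.
    return min(member_capacities)

# Three members: GR1 (writer) is fast, GR3 is the weakest applier.
caps = [5000, 3000, 1200]  # transactions/sec each member can apply (assumed)

print(flow_control_quota(caps, [10, 80, 30], threshold=25000))     # None - queues healthy
print(flow_control_quota(caps, [10, 80, 30000], threshold=25000))  # 1200 - throttled to GR3's pace
```

The real implementation spreads the quota across writers and releases it gradually (the hold/release percent variables above), but the core invariant is the same: the group never sustains more writes than its slowest member can apply.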
Posted over 4 years ago by FromDual
Contents

- Introduction
- Character Sets
- Steps to convert Character Set to utf8mb4
  - Analyzing the Server
  - Analyzing the Application and the Clients
  - Preparation of the Server Settings and the Application
  - Convert Tables to utf8mb4
  - Testing of new Character Set
- MySQL Pump
- MySQL Master/Slave Replication for Character Set conversion
- MySQL Shell, mysqlsh Upgrade Checker Utility

Introduction

Recently we had a consulting engagement where we had to help the customer migrate from the latin1 character set to the utf8mb4 character set. In the same MySQL consulting engagement we also considered upgrading from MySQL 5.6 to MySQL 5.7 [ Lit. ]. We decided to split the change into 2 parts: upgrading to 5.7 in the first step and converting to utf8mb4 in the second step. There were various reasons for this decision:

- 2 smaller changes are easier to control than one big shot.
- We assume that in 5.7 we will experience fewer problems with utf8mb4, because the trend given by MySQL was more towards utf8mb4 in 5.7 than in MySQL 5.6. So we hope to hit fewer problems and bugs.
- For upgrading see also MariaDB and MySQL Upgrade Problems.

Remark: It possibly also makes sense to think about collations before starting with the conversion!

Character Sets

Historically, MariaDB and MySQL had the default character set latin1 (Latin-1 or ISO-8859-1), which was sufficient for most of the western hemisphere. But as technology spreads and demands increase, other cultures want to have their characters represented understandably as well. So the Unicode standard was invented, and MariaDB and MySQL applied this standard as well. The original MariaDB/MySQL utf8(mb3) implementation was not perfect or complete, so they implemented utf8mb4 as a superset of utf8(mb3). So at least since MariaDB/MySQL version 5.5, latin1, utf8 and utf8mb4 are all available.
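The superset relationship has a concrete byte-level meaning: utf8(mb3) stores at most 3 bytes per character, while utf8mb4 stores up to 4. A quick check with Python's built-in codecs (Python here is just for illustration, not part of the migration) shows which characters fit where:

```python
# Bytes per character under different encodings - the same numbers as the
# MAXLEN column MySQL reports for latin1 (1), utf8 (3), and utf8mb4 (4).

print(len('A'.encode('latin-1')))  # 1 byte:  fits latin1
print(len('é'.encode('utf-8')))    # 2 bytes: fits utf8(mb3)
print(len('€'.encode('utf-8')))    # 3 bytes: at the utf8(mb3) limit
print(len('😀'.encode('utf-8')))   # 4 bytes: needs utf8mb4
```

Characters outside the Basic Multilingual Plane (emoji, some CJK ideographs) need the fourth byte, which is exactly what the old utf8(mb3) implementation cannot store and why the conversion to utf8mb4 becomes unavoidable.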
The current MySQL 5.7 utf8mb4 implementation should cover Unicode 9.0.0:

SQL> SELECT * FROM information_schema.character_sets
     WHERE character_set_name LIKE 'utf8%' OR character_set_name = 'latin1';
+--------------------+----------------------+----------------------+--------+
| CHARACTER_SET_NAME | DEFAULT_COLLATE_NAME | DESCRIPTION          | MAXLEN |
+--------------------+----------------------+----------------------+--------+
| latin1             | latin1_swedish_ci    | cp1252 West European |      1 |
| utf8               | utf8_general_ci      | UTF-8 Unicode        |      3 |
| utf8mb4            | utf8mb4_general_ci   | UTF-8 Unicode        |      4 |
+--------------------+----------------------+----------------------+--------+

The default character set up to MariaDB 10.4 and MySQL 5.7 was latin1. In MySQL 8.0 the default character set has changed to utf8mb4. There are no signs so far that MariaDB will take the same step:

SQL> status
--------------
mysql  Ver 8.0.16 for linux-glibc2.12 on x86_64 (MySQL Community Server - GPL)

Connection id:          84
Current database:
Current user:           root@localhost
SSL:                    Not in use
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server version:         8.0.16 MySQL Community Server - GPL
Protocol version:       10
Connection:             Localhost via UNIX socket
Server characterset:    utf8mb4
Db     characterset:    utf8mb4
Client characterset:    utf8mb4
Conn.  characterset:    utf8mb4
UNIX socket:            /var/run/mysqld/mysql-3332.sock
Uptime:                 3 days 47 min 13 sec

So we see a general trend from latin1 to utf8(mb3) to utf8mb4, technically and business-wise (aka globalization). For the DBA this means that sooner or later we have to think about a conversion of all tables of the whole database instance (all tables of all schemata) to utf8mb4!

Steps to convert Character Set to utf8mb4

Analyzing the Server

First of all one should analyze the system (O/S, database instance and client/application).
On the server we can run the following command to verify the actually used and supported character set:

# locale
LANG=en_GB.UTF-8
LANGUAGE=
LC_CTYPE="en_GB.UTF-8"
LC_NUMERIC="en_GB.UTF-8"
LC_TIME="en_GB.UTF-8"
LC_COLLATE="en_GB.UTF-8"
LC_MONETARY="en_GB.UTF-8"
LC_MESSAGES="en_GB.UTF-8"
LC_PAPER="en_GB.UTF-8"
LC_NAME="en_GB.UTF-8"
LC_ADDRESS="en_GB.UTF-8"
LC_TELEPHONE="en_GB.UTF-8"
LC_MEASUREMENT="en_GB.UTF-8"
LC_IDENTIFICATION="en_GB.UTF-8"
LC_ALL=

On the MariaDB/MySQL database instance we check the current server configuration and the session configuration with the following commands:

SQL> SHOW SESSION VARIABLES
     WHERE Variable_name LIKE 'character_set\_%' OR Variable_name LIKE 'collation%';
+--------------------------+-------------------+
| Variable_name            | Value             |
+--------------------------+-------------------+
| character_set_client     | utf8              |
| character_set_connection | utf8              |
| character_set_database   | latin1            |
| character_set_filesystem | binary            |
| character_set_results    | utf8              |
| character_set_server     | latin1            |
| character_set_system     | utf8              |
| collation_connection     | utf8_general_ci   |
| collation_database       | latin1_swedish_ci |
| collation_server         | latin1_swedish_ci |
+--------------------------+-------------------+

SQL> SHOW GLOBAL VARIABLES
     WHERE Variable_name LIKE 'character_set\_%' OR Variable_name LIKE 'collation%';
+--------------------------+-------------------+
| Variable_name            | Value             |
+--------------------------+-------------------+
| character_set_client     | latin1            |
| character_set_connection | latin1            |
| character_set_database   | latin1            |
| character_set_filesystem | binary            |
| character_set_results    | latin1            |
| character_set_server     | latin1            |
| character_set_system     | utf8              |
| collation_connection     | latin1_swedish_ci |
| collation_database       | latin1_swedish_ci |
| collation_server         | latin1_swedish_ci |
+--------------------------+-------------------+

These configuration variables are for client/server communication: character_set_client,
character_set_connection and character_set_results. These are for server configuration: character_set_server and character_set_database (deprecated in MySQL 5.7). And these are for system internals and file system access: character_set_system and character_set_filesystem. Sometimes we see customers using the logon trigger init_connect to force clients to a specific character set:

SQL> SHOW GLOBAL VARIABLES LIKE 'init_connect';
+---------------+------------------+
| Variable_name | Value            |
+---------------+------------------+
| init_connect  | SET NAMES latin1 |
+---------------+------------------+

The SET NAMES command sets the character_set_client, character_set_connection and character_set_results session variables. [ Lit. ]

Analyzing the Application and the Clients

Similar steps should be taken to analyze the application and the clients. We want to answer the following questions:

- Support of utf8 in the application/client O/S (Windows)?
- Support of utf8 in the web server (Apache (AddDefaultCharset utf-8), Nginx, IIS, ...)?
- Version of the programming language (Java, PHP (5.4 and newer?), ...)?
- Version of the MariaDB and MySQL Connectors (JDBC (5.1.47 and newer?), ODBC (5.3.11 and newer?), mysqli/mysqlnd (≥ 7.0.19?, ≥ 7.1.5?), ...)?
- Application code (header('Content-Type: text/html; charset=utf-8');, ...)?
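As noted earlier in this post, SET NAMES updates three session variables at once. A toy Python model of just that mapping (the set_names helper is hypothetical, for illustration only; it is not a MySQL client API):

```python
# Toy model of what SET NAMES <charset> does to a session's variables,
# using the three variable names from the MySQL documentation.

def set_names(session_vars, charset):
    """Apply the SET NAMES effect to a dict of session variables."""
    for var in ('character_set_client',
                'character_set_connection',
                'character_set_results'):
        session_vars[var] = charset
    return session_vars

s = set_names({}, 'utf8mb4')
print(s['character_set_client'])  # utf8mb4
```

Note that character_set_server and character_set_database are untouched: SET NAMES only changes how the current connection talks to the server, not how the server stores data.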
Posted over 4 years ago by Michael McLaughlin
There was an option during the Fedora 30 Workstation installation to add the Apache Web Server, but you need to set it to start automatically. Unfortunately, there was no option to install PHP, which I thought odd because of how many web developers learn the trade first on PHP with a LAMP (Linux, Apache, MySQL, Perl/PHP/Python) stack. You will see how to fix that shortcoming in this post and how to install and test PHP, mysqli, and pdo to support MySQL 8. Before you do that, make sure you install MySQL 8. You can find my prior blog post on that here.

You set Apache to start automatically, on the next boot of the operating system, with the following command:

chkconfig httpd on

It creates a symbolic link:

Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.

However, that command only starts the Apache server the next time you boot the server. You use the following command as the root user to start the Apache server immediately:

apachectl start

You can verify the installation with the following command as the root user:

ps -ef | grep httpd | grep -v grep

It should return:

root    5433     1  0 17:03 ?  00:00:00 /usr/sbin/httpd -DFOREGROUND
apache  5434  5433  0 17:03 ?  00:00:00 /usr/sbin/httpd -DFOREGROUND
apache  5435  5433  0 17:03 ?  00:00:00 /usr/sbin/httpd -DFOREGROUND
apache  5436  5433  0 17:03 ?  00:00:00 /usr/sbin/httpd -DFOREGROUND
apache  5437  5433  0 17:03 ?  00:00:00 /usr/sbin/httpd -DFOREGROUND
apache  5438  5433  0 17:03 ?  00:00:00 /usr/sbin/httpd -DFOREGROUND
apache  5442  5433  0 17:03 ?  00:00:00 /usr/sbin/httpd -DFOREGROUND

Then, verify the listening port with the following command as the root user:

netstat -tulpn | grep :80

It should return the following when both the Apache server is listening on port 80 and the Oracle multi-protocol server is listening on port 8080:

tcp6  0  0 :::80    :::*  LISTEN  119810/httpd
tcp6  0  0 :::8080  :::*  LISTEN  1403/tnslsnr

You can also enter the following URL in the browser to see the Apache Test Page:

http://localhost

It should display the test page. You can also create a hello.htm file in the /var/www/html directory to test the ability to read an HTML file. I would suggest the traditional hello.htm file:

Hello World!

You can call it by using this URL in the browser:

http://localhost/hello.htm

It should display the page. Now, let's install PHP. You use the following command as a privileged user, which is one found in the sudoer's list:

yum install -y php

Last metadata expiration check: 0:37:02 ago on Fri 16 Aug 2019 11:03:54 AM MDT.
Dependencies resolved.
=============================================================================
 Package           Arch    Version          Repository  Size
=============================================================================
Installing:
 php               x86_64  7.3.8-1.fc30     updates     2.8 M
Installing dependencies:
 nginx-filesystem  noarch  1:1.16.0-3.fc30  updates      11 k
 php-cli           x86_64  7.3.8-1.fc30     updates     4.3 M
 php-common        x86_64  7.3.8-1.fc30     updates     1.1 M
Installing weak dependencies:
 php-fpm           x86_64  7.3.8-1.fc30     updates     1.5 M

Transaction Summary
=============================================================================
Install  5 Packages

Total download size: 9.6 M
Installed size: 43 M
Downloading Packages:
(1/5): nginx-filesystem-1.16.0-3.fc30.noarch   34 kB/s |  11 kB  00:00
(2/5): php-common-7.3.8-1.fc30.x86_64.rpm     1.1 MB/s | 1.1 MB  00:00
(3/5): php-7.3.8-1.fc30.x86_64.rpm            2.0 MB/s | 2.8 MB  00:01
(4/5): php-fpm-7.3.8-1.fc30.x86_64.rpm        2.2 MB/s | 1.5 MB  00:00
(5/5): php-cli-7.3.8-1.fc30.x86_64.rpm        1.7 MB/s | 4.3 MB  00:02
-----------------------------------------------------------------------------
Total                                         3.0 MB/s | 9.6 MB  00:03
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                          1/1
  Installing       : php-common-7.3.8-1.fc30.x86_64           1/5
  Installing       : php-cli-7.3.8-1.fc30.x86_64              2/5
  Running scriptlet: nginx-filesystem-1:1.16.0-3.fc30.noarch  3/5
  Installing       : nginx-filesystem-1:1.16.0-3.fc30.noarch  3/5
  Installing       : php-fpm-7.3.8-1.fc30.x86_64              4/5
  Running scriptlet: php-fpm-7.3.8-1.fc30.x86_64              4/5
  Installing       : php-7.3.8-1.fc30.x86_64                  5/5
  Running scriptlet: php-7.3.8-1.fc30.x86_64                  5/5
  Running scriptlet: php-fpm-7.3.8-1.fc30.x86_64              5/5
  Verifying        : nginx-filesystem-1:1.16.0-3.fc30.noarch  1/5
  Verifying        : php-7.3.8-1.fc30.x86_64                  2/5
  Verifying        : php-cli-7.3.8-1.fc30.x86_64              3/5
  Verifying        : php-common-7.3.8-1.fc30.x86_64           4/5
  Verifying        : php-fpm-7.3.8-1.fc30.x86_64              5/5

Installed:
  php-7.3.8-1.fc30.x86_64
  php-fpm-7.3.8-1.fc30.x86_64
  nginx-filesystem-1:1.16.0-3.fc30.noarch
  php-cli-7.3.8-1.fc30.x86_64
  php-common-7.3.8-1.fc30.x86_64

Complete!

Before you test the installation of PHP in a browser, you must restart the Apache HTTP Server. You can do that with the following command as a privileged user:

sudo apachectl restart

After verifying the connection, you can test it by creating the traditional info.php program file in the /var/www/html directory. The file should contain the following:

<?php phpinfo(); ?>

It should display the PHP Version 7.3.8 web page, which ships with Fedora 30.

The next step shows you how to install mysqli and pdo with the yum utility. While it's unnecessary to check for the older mysql library (truly deprecated), it's good practice to know how to check for a conflicting library before installing a new one. Also, I'd prefer newbies get exposed to using the yum utility's shell environment. You start the yum shell, as follows:

yum shell

Within the yum shell, you would remove a mysql package with the following command:

> remove php-mysql

The command will remove the package or tell you that there is no package to remove.
Next, you install the php-mysqlnd package (which provides the mysqli driver) with this command:

> install php-mysqlnd

You will then be prompted to confirm the installation of the library. Finally, you exit the yum shell with this command:

> quit

The whole interactive shell session looks like this:

Last metadata expiration check: 0:53:05 ago on Fri 16 Aug 2019 11:03:54 AM MDT.
> remove php-mysql
No match for argument: php-mysql
No packages marked for removal.
> install php-mysqlnd
> run
=============================================================================
 Package      Architecture  Version       Repository  Size
=============================================================================
Installing:
 php-mysqlnd  x86_64        7.3.8-1.fc30  updates     195 k
Installing dependencies:
 php-pdo      x86_64        7.3.8-1.fc30  updates      91 k

Transaction Summary
=============================================================================
Install  2 Packages

Total download size: 286 k
Installed size: 1.4 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): php-pdo-7.3.8-1.fc30.x86_64.rpm      136 kB/s |  91 kB  00:00
(2/2): php-mysqlnd-7.3.8-1.fc30.x86_64.rpm  183 kB/s | 195 kB  00:01
-----------------------------------------------------------------------------
Total                                        24 kB/s | 286 kB  00:11
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                  1/1
  Installing       : php-pdo-7.3.8-1.fc30.x86_64      1/2
  Installing       : php-mysqlnd-7.3.8-1.fc30.x86_64  2/2
  Running scriptlet: php-mysqlnd-7.3.8-1.fc30.x86_64  2/2
  Verifying        : php-mysqlnd-7.3.8-1.fc30.x86_64  1/2
  Verifying        : php-pdo-7.3.8-1.fc30.x86_64      2/2

Installed:
  php-mysqlnd-7.3.8-1.fc30.x86_64
  php-pdo-7.3.8-1.fc30.x86_64

Last metadata expiration check: 0:53:54 ago on Fri 16 Aug 2019 11:03:54 AM MDT.
> quit
Leaving Shell
The downloaded packages were saved in cache until the next successful transaction. You can remove cached packages by executing 'dnf clean packages'.
You need to restart the Apache HTTP listener for these changes to take effect, which you do with the same command as shown earlier:

sudo apachectl restart

I wrote the mysqli_check.php script to verify installation of both the mysqli and pdo libraries. The full code should be put in a mysqli_check.php file in the /var/www/html directory for testing.
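The script itself isn't reproduced in this excerpt; a minimal sketch of such a check might look like the following (the filename matches the post, but the logic is my own assumption, not the author's original code):

```php
<?php
// mysqli_check.php -- hypothetical sketch, not the author's original script.
// Reports whether the mysqli and pdo_mysql PHP extensions are loaded.
foreach (array('mysqli', 'pdo_mysql') as $extension) {
  if (extension_loaded($extension)) {
    print $extension . " is installed.<br />\n";
  } else {
    print $extension . " is NOT installed.<br />\n";
  }
}
?>
```

Browse to the file (e.g., http://localhost/mysqli_check.php) after restarting Apache; both extensions should report as installed if the php-mysqlnd package went in cleanly.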
Posted over 4 years ago by Michael McLaughlin
While updating my class image to Fedora 30, I noticed that it installed the Akonadi Server. The Akonadi server lacked straightforward documentation, and it bundled a set of software that limited how to approach MySQL development. So, I removed all those packages with the following syntax:

dnf remove `rpm -qa | grep akonadi`

Display detailed console log → Dependencies resolved. ============================================================================= Package Arch Version Repo Size ============================================================================= Removing: akonadi-import-wizard x86_64 19.04.2-1.fc30 @updates 2.8 M kf5-akonadi-calendar x86_64 19.04.2-1.fc30 @updates 2.6 M kf5-akonadi-contacts x86_64 19.04.2-1.fc30 @updates 3.3 M kf5-akonadi-mime x86_64 19.04.2-1.fc30 @updates 1.1 M kf5-akonadi-notes x86_64 19.04.2-1.fc30 @updates 170 k kf5-akonadi-search x86_64 19.04.2-1.fc30 @updates 1.6 M kf5-akonadi-server x86_64 19.04.2-2.fc30 @updates 14 M kf5-akonadi-server-mysql x86_64 19.04.2-2.fc30 @updates 3.4 k kf5-kmailtransport-akonadi x86_64 19.04.2-1.fc30 @updates 204 k kf5-libkdepim-akonadi x86_64 19.04.2-1.fc30 @updates 973 k kf5-mailimporter-akonadi x86_64 19.04.2-1.fc30 @updates 106 k kf5-pimcommon-akonadi x86_64 19.04.2-1.fc30 @updates 542 k Removing dependent packages: akregator x86_64 19.04.2-1.fc30 @updates 3.9 M akregator-libs x86_64 19.04.2-1.fc30 @updates 3.3 M digikam x86_64 6.1.0-7.fc30 @updates 149 M digikam-libs x86_64 6.1.0-7.fc30 @updates 47 M kgpg x86_64 18.12.2-1.fc30 @fedora 8.0 M kontact x86_64 19.04.2-1.fc30 @updates 1.6 M Removing unused dependencies: CharLS x86_64 1.0-18.fc30 @fedora 341 k coin-or-Clp x86_64 1.16.10-8.fc30 @fedora 2.8 M coin-or-CoinUtils x86_64 2.10.14-3.fc30 @fedora 1.5 M coin-or-Osi x86_64 0.107.8-9.fc30 @fedora 1.1 M digikam-doc noarch 6.1.0-7.fc30 @updates 0 enblend x86_64 4.2-10.fc29 @fedora 4.9 M gdcm x86_64 2.8.8-4.fc30 @fedora 11 M grantlee-editor x86_64 
19.04.2-1.fc30 @updates 1.4 M grantlee-editor-libs x86_64 19.04.2-1.fc30 @updates 208 k hugin-base x86_64 2019.0.0-1.fc30 @updates 28 M kaddressbook x86_64 19.04.2-1.fc30 @updates 758 k kaddressbook-libs x86_64 19.04.2-1.fc30 @updates 847 k kdepim-addons x86_64 19.04.2-1.fc30 @updates 11 M kdepim-apps-libs x86_64 19.04.2-1.fc30 @updates 1.1 M kdepim-runtime x86_64 1:19.04.2-1.fc30 @updates 20 M kdepim-runtime-libs x86_64 1:19.04.2-1.fc30 @updates 2.6 M kf5-calendarsupport x86_64 19.04.2-1.fc30 @updates 3.4 M kf5-eventviews x86_64 19.04.2-1.fc30 @updates 3.7 M kf5-grantleetheme x86_64 19.04.2-1.fc30 @updates 283 k kf5-incidenceeditor x86_64 19.04.2-1.fc30 @updates 3.4 M kf5-kalarmcal x86_64 19.04.2-1.fc30 @updates 1.1 M kf5-kcalendarcore x86_64 19.04.2-1.fc30 @updates 1.4 M kf5-kcalendarutils x86_64 19.04.2-1.fc30 @updates 1.9 M kf5-kcontacts x86_64 19.04.2-1.fc30 @updates 2.1 M kf5-kdav x86_64 19.04.2-1.fc30 @updates 591 k kf5-kidentitymanagement x86_64 19.04.2-1.fc30 @updates 511 k kf5-kimap x86_64 19.04.2-1.fc30 @updates 1.3 M kf5-kitinerary x86_64 19.04.2-1.fc30 @updates 1.8 M kf5-kldap x86_64 19.04.2-1.fc30 @updates 885 k kf5-kmailtransport x86_64 19.04.2-1.fc30 @updates 1.2 M kf5-kmbox x86_64 19.04.2-1.fc30 @updates 116 k kf5-kmime x86_64 19.04.2-1.fc30 @updates 798 k kf5-kontactinterface x86_64 19.04.2-1.fc30 @updates 242 k kf5-kpimtextedit x86_64 19.04.2-2.fc30 @updates 1.3 M kf5-kpkpass x86_64 19.04.2-1.fc30 @updates 172 k kf5-ksmtp x86_64 19.04.2-1.fc30 @updates 258 k kf5-ktnef x86_64 19.04.2-1.fc30 @updates 650 k kf5-libgravatar x86_64 19.04.2-1.fc30 @updates 247 k kf5-libkdepim x86_64 19.04.2-1.fc30 @updates 1.6 M kf5-libkleo x86_64 19.04.2-1.fc30 @updates 2.7 M kf5-libksieve x86_64 19.04.2-1.fc30 @updates 5.2 M kf5-mailcommon x86_64 19.04.2-1.fc30 @updates 4.6 M kf5-mailimporter x86_64 19.04.2-1.fc30 @updates 1.5 M kf5-messagelib x86_64 19.04.2-1.fc30 @updates 18 M kf5-pimcommon x86_64 19.04.2-1.fc30 @updates 1.9 M kmail x86_64 19.04.2-2.fc30 @updates 
14 M kmail-account-wizard x86_64 19.04.2-1.fc30 @updates 3.3 M kmail-libs x86_64 19.04.2-2.fc30 @updates 5.5 M kontact-libs x86_64 19.04.2-1.fc30 @updates 433 k korganizer x86_64 19.04.2-1.fc30 @updates 7.3 M korganizer-libs x86_64 19.04.2-1.fc30 @updates 3.9 M lensfun x86_64 0.3.2-19.fc30 @fedora 2.0 M libdc1394 x86_64 2.2.2-12.fc30 @fedora 379 k libical x86_64 3.0.4-3.fc30 @fedora 1.8 M libkgapi x86_64 19.04.2-1.fc30 @updates 3.6 M libkolabxml x86_64 1.1.6-10.fc30 @fedora 3.9 M liblqr-1 x86_64 0.4.2-12.fc30 @fedora 120 k libpano13 x86_64 2.9.19-9.fc30 @fedora 672 k libucil x86_64 0.9.10-18.fc30 @fedora 217 k libunicap x86_64 0.9.12-23.fc30 @fedora 485 k libva x86_64 2.4.1-1.fc30 @fedora 284 k mariadb x86_64 3:10.3.16-1.fc30 @updates 39 M mariadb-backup x86_64 3:10.3.16-1.fc30 @updates 27 M mariadb-common x86_64 3:10.3.16-1.fc30 @updates 179 k mariadb-cracklib-password-check x86_64 3:10.3.16-1.fc30 @updates 21 k mariadb-errmsg x86_64 3:10.3.16-1.fc30 @updates 2.3 M mariadb-gssapi-server x86_64 3:10.3.16-1.fc30 @updates 28 k mariadb-server x86_64 3:10.3.16-1.fc30 @updates 96 M mariadb-server-utils x86_64 3:10.3.16-1.fc30 @updates 7.4 M mesa-libOSMesa x86_64 19.1.3-1.fc30 @updates 9.6 M netcdf-cxx x86_64 4.2-21.fc30 @fedora 153 k opencv-contrib x86_64 3.4.4-10.fc30 @updates 19 M opencv-core x86_64 3.4.4-10.fc30 @updates 20 M openni x86_64 1.5.7.10-15.fc30 @updates 2.7 M perl-DBD-MySQL x86_64 4.050-2.fc30 @fedora 367 k perl-Image-ExifTool noarch 11.50-1.fc30 @updates 14 M pim-data-exporter x86_64 19.04.2-1.fc30 @updates 1.2 M pim-data-exporter-libs x86_64 19.04.2-1.fc30 @updates 738 k pim-sieve-editor x86_64 19.04.2-1.fc30 @updates 1.7 M protobuf x86_64 3.6.1-3.fc30 @fedora 3.8 M qt5-qtbase-mysql x86_64 5.12.4-4.fc30 @updates 96 k tinyxml x86_64 2.6.2-18.fc30 @fedora 156 k vigra x86_64 1.11.1-13.fc30 @fedora 714 k vtk x86_64 8.1.1-5.fc30 @updates 100 M Transaction Summary ============================================================================= Remove 102 
Packages Freed space: 783 M Is this ok [y/N]: y Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Running scriptlet: opencv-contrib-3.4.4-10.fc30.x86_64 1/1 Erasing : opencv-contrib-3.4.4-10.fc30.x86_64 1/102 Erasing : kontact-19.04.2-1.fc30.x86_64 2/102 Erasing : kmail-19.04.2-2.fc30.x86_64 3/102 Erasing : kmail-libs-19.04.2-2.fc30.x86_64 4/102 Erasing : korganizer-19.04.2-1.fc30.x86_64 5/102 Erasing : korganizer-libs-19.04.2-1.fc30.x86_64 6/102 Erasing : kmail-account-wizard-19.04.2-1.fc30.x86_64 7/102 Erasing : grantlee-editor-19.04.2-1.fc30.x86_64 8/102 Erasing : pim-data-exporter-19.04.2-1.fc30.x86_64 9/102 Erasing : pim-data-exporter-libs-19.04.2-1.fc30.x86_64 10/102 Erasing : digikam-6.1.0-7.fc30.x86_64 11/102 Erasing : digikam-libs-6.1.0-7.fc30.x86_64 12/102 Erasing : opencv-core-3.4.4-10.fc30.x86_64 13/102 Erasing : kaddressbook-19.04.2-1.fc30.x86_64 14/102 Erasing : kdepim-addons-19.04.2-1.fc30.x86_64 15/102 Erasing : kdepim-runtime-1:19.04.2-1.fc30.x86_64 16/102 Erasing : kf5-incidenceeditor-19.04.2-1.fc30.x86_64 17/102 Erasing : kaddressbook-libs-19.04.2-1.fc30.x86_64 18/102 Erasing : kdepim-runtime-libs-1:19.04.2-1.fc30.x86_64 19/102 Erasing : akonadi-import-wizard-19.04.2-1.fc30.x86_64 20/102 Erasing : kf5-mailcommon-19.04.2-1.fc30.x86_64 21/102 Erasing : kf5-eventviews-19.04.2-1.fc30.x86_64 22/102 Erasing : kf5-calendarsupport-19.04.2-1.fc30.x86_64 23/102 Erasing : kf5-akonadi-calendar-19.04.2-1.fc30.x86_64 24/102 Erasing : akregator-19.04.2-1.fc30.x86_64 25/102 Erasing : akregator-libs-19.04.2-1.fc30.x86_64 26/102 Erasing : kf5-messagelib-19.04.2-1.fc30.x86_64 27/102 Erasing : kf5-pimcommon-akonadi-19.04.2-1.fc30.x86_64 28/102 Erasing : kf5-libkdepim-akonadi-19.04.2-1.fc30.x86_64 29/102 Erasing : kdepim-apps-libs-19.04.2-1.fc30.x86_64 30/102 Erasing : kf5-akonadi-search-19.04.2-1.fc30.x86_64 31/102 Erasing : kf5-mailimporter-akonadi-19.04.2-1.fc30.x86_6 
32/102 Erasing : kf5-kalarmcal-19.04.2-1.fc30.x86_64 33/102 Erasing : kf5-kitinerary-19.04.2-1.fc30.x86_64 34/102 Erasing : hugin-base-2019.0.0-1.fc30.x86_64 35/102 Erasing : pim-sieve-editor-19.04.2-1.fc30.x86_64 36/102 Erasing : kf5-kmailtransport-19.04.2-1.fc30.x86_64 37/102 Erasing : kf5-libksieve-19.04.2-1.fc30.x86_64 38/102 Erasing : kf5-ktnef-19.04.2-1.fc30.x86_64 39/102 Erasing : kontact-libs-19.04.2-1.fc30.x86_64 40/102 Erasing : kgpg-18.12.2-1.fc30.x86_64 41/102 Erasing : kf5-akonadi-contacts-19.04.2-1.fc30.x86_64 42/102 Erasing : kf5-kcalendarutils-19.04.2-1.fc30.x86_64 43/102 Erasing : kf5-kmailtransport-akonadi-19.04.2-1.fc30.x86 44/102 Erasing : kf5-akonadi-mime-19.04.2-1.fc30.x86_64 45/102 Erasing : libkgapi-19.04.2-1.fc30.x86_64 46/102 Erasing : kf5-kcalendarcore-19.04.2-1.fc30.x86_64 47/102 Erasing : coin-or-Clp-1.16.10-8.fc30.x86_64 48/102 Erasing : vtk-8.1.1-5.fc30.x86_64 49/102 Erasing : coin-or-Osi-0.107.8-9.fc30.x86_64 50/102 Erasing : kf5-akonadi-server-19.04.2-2.fc30.x86_64 51/102 Running scriptlet: kf5-akonadi-server-19.04.2-2.fc30.x86_64 51/102 Erasing : kf5-akonadi-server-mysql-19.04.2-2.fc30.x86_6 52/102 Running scriptlet: kf5-akonadi-server-mysql-19.04.2-2.fc30.x86_6 52/102 Erasing : kf5-kidentitymanagement-19.04.2-1.fc30.x86_64 53/102 Erasing : enblend-4.2-10.fc29.x86_64 54/102 Erasing : kf5-mailimporter-19.04.2-1.fc30.x86_64 55/102 Erasing : kf5-libkleo-19.04.2-1.fc30.x86_64 56/102 Erasing : kf5-kimap-19.04.2-1.fc30.x86_64 57/102 Erasing : kf5-libgravatar-19.04.2-1.fc30.x86_64 58/102 Erasing : kf5-pimcommon-19.04.2-1.fc30.x86_64 59/102 Erasing : kf5-libkdepim-19.04.2-1.fc30.x86_64 60/102 Erasing : kf5-kmbox-19.04.2-1.fc30.x86_64 61/102 Erasing : kf5-akonadi-notes-19.04.2-1.fc30.x86_64 62/102 Running scriptlet: openni-1.5.7.10-15.fc30.x86_64 63/102 Erasing : openni-1.5.7.10-15.fc30.x86_64 63/102 Erasing : gdcm-2.8.8-4.fc30.x86_64 64/102 Erasing : libucil-0.9.10-18.fc30.x86_64 65/102 Erasing : grantlee-editor-libs-19.04.2-1.fc30.x86_64 
66/102 Erasing : mariadb-gssapi-server-3:10.3.16-1.fc30.x86_64 67/102 Erasing : libunicap-0.9.12-23.fc30.x86_64 68/102 Erasing : perl-Image-ExifTool-11.50-1.fc30.noarch 69/102 Erasing : libkolabxml-1.1.6-10.fc30.x86_64 70/102 Erasing : digikam-doc-6.1.0-7.fc30.noarch 71/102 Erasing : mariadb-3:10.3.16-1.fc30.x86_64 72/102 Erasing : mariadb-backup-3:10.3.16-1.fc30.x86_64 73/102 Erasing : mariadb-cracklib-password-check-3:10.3.16-1.f 74/102 Running scriptlet: mariadb-server-3:10.3.16-1.fc30.x86_64 75/102 Erasing : mariadb-server-3:10.3.16-1.fc30.x86_64 75/102 Running scriptlet: mariadb-server-3:10.3.16-1.fc30.x86_64 75/102 Erasing : mariadb-errmsg-3:10.3.16-1.fc30.x86_64 76/102 Erasing : mariadb-server-utils-3:10.3.16-1.fc30.x86_64 77/102 Erasing : mariadb-common-3:10.3.16-1.fc30.x86_64 78/102 Erasing : perl-DBD-MySQL-4.050-2.fc30.x86_64 79/102 Erasing : kf5-kpimtextedit-19.04.2-2.fc30.x86_64 80/102 Erasing : CharLS-1.0-18.fc30.x86_64 81/102 Erasing : tinyxml-2.6.2-18.fc30.x86_64 82/102 Erasing : kf5-kmime-19.04.2-1.fc30.x86_64 83/102 Erasing : kf5-kcontacts-19.04.2-1.fc30.x86_64 84/102 Erasing : kf5-kldap-19.04.2-1.fc30.x86_64 85/102 Erasing : vigra-1.11.1-13.fc30.x86_64 86/102 Erasing : qt5-qtbase-mysql-5.12.4-4.fc30.x86_64 87/102 Erasing : coin-or-CoinUtils-2.10.14-3.fc30.x86_64 88/102 Erasing : mesa-libOSMesa-19.1.3-1.fc30.x86_64 89/102 Erasing : netcdf-cxx-4.2-21.fc30.x86_64 90/102 Running scriptlet: netcdf-cxx-4.2-21.fc30.x86_64 90/102 Erasing : libical-3.0.4-3.fc30.x86_64 91/102 Erasing : kf5-grantleetheme-19.04.2-1.fc30.x86_64 92/102 Erasing : kf5-kontactinterface-19.04.2-1.fc30.x86_64 93/102 Erasing : kf5-ksmtp-19.04.2-1.fc30.x86_64 94/102 Erasing : libpano13-2.9.19-9.fc30.x86_64 95/102 Erasing : kf5-kpkpass-19.04.2-1.fc30.x86_64 96/102 Erasing : kf5-kdav-19.04.2-1.fc30.x86_64 97/102 Erasing : libdc1394-2.2.2-12.fc30.x86_64 98/102 Erasing : libva-2.4.1-1.fc30.x86_64 99/102 Erasing : lensfun-0.3.2-19.fc30.x86_64 100/102 Erasing : liblqr-1-0.4.2-12.fc30.x86_64 
101/102 Erasing : protobuf-3.6.1-3.fc30.x86_64 102/102 Running scriptlet: protobuf-3.6.1-3.fc30.x86_64 102/102 Verifying : CharLS-1.0-18.fc30.x86_64 1/102 Verifying : akonadi-import-wizard-19.04.2-1.fc30.x86_64 2/102 Verifying : akregator-19.04.2-1.fc30.x86_64 3/102 Verifying : akregator-libs-19.04.2-1.fc30.x86_64 4/102 Verifying : coin-or-Clp-1.16.10-8.fc30.x86_64 5/102 Verifying : coin-or-CoinUtils-2.10.14-3.fc30.x86_64 6/102 Verifying : coin-or-Osi-0.107.8-9.fc30.x86_64 7/102 Verifying : digikam-6.1.0-7.fc30.x86_64 8/102 Verifying : digikam-doc-6.1.0-7.fc30.noarch 9/102 Verifying : digikam-libs-6.1.0-7.fc30.x86_64 10/102 Verifying : enblend-4.2-10.fc29.x86_64 11/102 Verifying : gdcm-2.8.8-4.fc30.x86_64 12/102 Verifying : grantlee-editor-19.04.2-1.fc30.x86_64 13/102 Verifying : grantlee-editor-libs-19.04.2-1.fc30.x86_64 14/102 Verifying : hugin-base-2019.0.0-1.fc30.x86_64 15/102 Verifying : kaddressbook-19.04.2-1.fc30.x86_64 16/102 Verifying : kaddressbook-libs-19.04.2-1.fc30.x86_64 17/102 Verifying : kdepim-addons-19.04.2-1.fc30.x86_64 18/102 Verifying : kdepim-apps-libs-19.04.2-1.fc30.x86_64 19/102 Verifying : kdepim-runtime-1:19.04.2-1.fc30.x86_64 20/102 Verifying : kdepim-runtime-libs-1:19.04.2-1.fc30.x86_64 21/102 Verifying : kf5-akonadi-calendar-19.04.2-1.fc30.x86_64 22/102 Verifying : kf5-akonadi-contacts-19.04.2-1.fc30.x86_64 23/102 Verifying : kf5-akonadi-mime-19.04.2-1.fc30.x86_64 24/102 Verifying : kf5-akonadi-notes-19.04.2-1.fc30.x86_64 25/102 Verifying : kf5-akonadi-search-19.04.2-1.fc30.x86_64 26/102 Verifying : kf5-akonadi-server-19.04.2-2.fc30.x86_64 27/102 Verifying : kf5-akonadi-server-mysql-19.04.2-2.fc30.x86_6 28/102 Verifying : kf5-calendarsupport-19.04.2-1.fc30.x86_64 29/102 Verifying : kf5-eventviews-19.04.2-1.fc30.x86_64 30/102 Verifying : kf5-grantleetheme-19.04.2-1.fc30.x86_64 31/102 Verifying : kf5-incidenceeditor-19.04.2-1.fc30.x86_64 32/102 Verifying : kf5-kalarmcal-19.04.2-1.fc30.x86_64 33/102 Verifying : 
kf5-kcalendarcore-19.04.2-1.fc30.x86_64 34/102 Verifying : kf5-kcalendarutils-19.04.2-1.fc30.x86_64 35/102 Verifying : kf5-kcontacts-19.04.2-1.fc30.x86_64 36/102 Verifying : kf5-kdav-19.04.2-1.fc30.x86_64 37/102 Verifying : kf5-kidentitymanagement-19.04.2-1.fc30.x86_64 38/102 Verifying : kf5-kimap-19.04.2-1.fc30.x86_64 39/102 Verifying : kf5-kitinerary-19.04.2-1.fc30.x86_64 40/102 Verifying : kf5-kldap-19.04.2-1.fc30.x86_64 41/102 Verifying : kf5-kmailtransport-19.04.2-1.fc30.x86_64 42/102 Verifying : kf5-kmailtransport-akonadi-19.04.2-1.fc30.x86 43/102 Verifying : kf5-kmbox-19.04.2-1.fc30.x86_64 44/102 Verifying : kf5-kmime-19.04.2-1.fc30.x86_64 45/102 Verifying : kf5-kontactinterface-19.04.2-1.fc30.x86_64 46/102 Verifying : kf5-kpimtextedit-19.04.2-2.fc30.x86_64 47/102 Verifying : kf5-kpkpass-19.04.2-1.fc30.x86_64 48/102 Verifying : kf5-ksmtp-19.04.2-1.fc30.x86_64 49/102 Verifying : kf5-ktnef-19.04.2-1.fc30.x86_64 50/102 Verifying : kf5-libgravatar-19.04.2-1.fc30.x86_64 51/102 Verifying : kf5-libkdepim-19.04.2-1.fc30.x86_64 52/102 Verifying : kf5-libkdepim-akonadi-19.04.2-1.fc30.x86_64 53/102 Verifying : kf5-libkleo-19.04.2-1.fc30.x86_64 54/102 Verifying : kf5-libksieve-19.04.2-1.fc30.x86_64 55/102 Verifying : kf5-mailcommon-19.04.2-1.fc30.x86_64 56/102 Verifying : kf5-mailimporter-19.04.2-1.fc30.x86_64 57/102 Verifying : kf5-mailimporter-akonadi-19.04.2-1.fc30.x86_6 58/102 Verifying : kf5-messagelib-19.04.2-1.fc30.x86_64 59/102 Verifying : kf5-pimcommon-19.04.2-1.fc30.x86_64 60/102 Verifying : kf5-pimcommon-akonadi-19.04.2-1.fc30.x86_64 61/102 Verifying : kgpg-18.12.2-1.fc30.x86_64 62/102 Verifying : kmail-19.04.2-2.fc30.x86_64 63/102 Verifying : kmail-account-wizard-19.04.2-1.fc30.x86_64 64/102 Verifying : kmail-libs-19.04.2-2.fc30.x86_64 65/102 Verifying : kontact-19.04.2-1.fc30.x86_64 66/102 Verifying : kontact-libs-19.04.2-1.fc30.x86_64 67/102 Verifying : korganizer-19.04.2-1.fc30.x86_64 68/102 Verifying : korganizer-libs-19.04.2-1.fc30.x86_64 69/102 
Verifying : lensfun-0.3.2-19.fc30.x86_64 70/102 Verifying : libdc1394-2.2.2-12.fc30.x86_64 71/102 Verifying : libical-3.0.4-3.fc30.x86_64 72/102 Verifying : libkgapi-19.04.2-1.fc30.x86_64 73/102 Verifying : libkolabxml-1.1.6-10.fc30.x86_64 74/102 Verifying : liblqr-1-0.4.2-12.fc30.x86_64 75/102 Verifying : libpano13-2.9.19-9.fc30.x86_64 76/102 Verifying : libucil-0.9.10-18.fc30.x86_64 77/102 Verifying : libunicap-0.9.12-23.fc30.x86_64 78/102 Verifying : libva-2.4.1-1.fc30.x86_64 79/102 Verifying : mariadb-3:10.3.16-1.fc30.x86_64 80/102 Verifying : mariadb-backup-3:10.3.16-1.fc30.x86_64 81/102 Verifying : mariadb-common-3:10.3.16-1.fc30.x86_64 82/102 Verifying : mariadb-cracklib-password-check-3:10.3.16-1.f 83/102 Verifying : mariadb-errmsg-3:10.3.16-1.fc30.x86_64 84/102 Verifying : mariadb-gssapi-server-3:10.3.16-1.fc30.x86_64 85/102 Verifying : mariadb-server-3:10.3.16-1.fc30.x86_64 86/102 Verifying : mariadb-server-utils-3:10.3.16-1.fc30.x86_64 87/102 Verifying : mesa-libOSMesa-19.1.3-1.fc30.x86_64 88/102 Verifying : netcdf-cxx-4.2-21.fc30.x86_64 89/102 Verifying : opencv-contrib-3.4.4-10.fc30.x86_64 90/102 Verifying : opencv-core-3.4.4-10.fc30.x86_64 91/102 Verifying : openni-1.5.7.10-15.fc30.x86_64 92/102 Verifying : perl-DBD-MySQL-4.050-2.fc30.x86_64 93/102 Verifying : perl-Image-ExifTool-11.50-1.fc30.noarch 94/102 Verifying : pim-data-exporter-19.04.2-1.fc30.x86_64 95/102 Verifying : pim-data-exporter-libs-19.04.2-1.fc30.x86_64 96/102 Verifying : pim-sieve-editor-19.04.2-1.fc30.x86_64 97/102 Verifying : protobuf-3.6.1-3.fc30.x86_64 98/102 Verifying : qt5-qtbase-mysql-5.12.4-4.fc30.x86_64 99/102 Verifying : tinyxml-2.6.2-18.fc30.x86_64 100/102 Verifying : vigra-1.11.1-13.fc30.x86_64 101/102 Verifying : vtk-8.1.1-5.fc30.x86_64 102/102 Removed: akonadi-import-wizard-19.04.2-1.fc30.x86_64 kf5-akonadi-calendar-19.04.2-1.fc30.x86_64 kf5-akonadi-contacts-19.04.2-1.fc30.x86_64 kf5-akonadi-mime-19.04.2-1.fc30.x86_64 kf5-akonadi-notes-19.04.2-1.fc30.x86_64 
kf5-akonadi-search-19.04.2-1.fc30.x86_64 kf5-akonadi-server-19.04.2-2.fc30.x86_64 kf5-akonadi-server-mysql-19.04.2-2.fc30.x86_64 kf5-kmailtransport-akonadi-19.04.2-1.fc30.x86_64 kf5-libkdepim-akonadi-19.04.2-1.fc30.x86_64 kf5-mailimporter-akonadi-19.04.2-1.fc30.x86_64 kf5-pimcommon-akonadi-19.04.2-1.fc30.x86_64 akregator-19.04.2-1.fc30.x86_64 akregator-libs-19.04.2-1.fc30.x86_64 digikam-6.1.0-7.fc30.x86_64 digikam-libs-6.1.0-7.fc30.x86_64 kgpg-18.12.2-1.fc30.x86_64 kontact-19.04.2-1.fc30.x86_64 CharLS-1.0-18.fc30.x86_64 coin-or-Clp-1.16.10-8.fc30.x86_64 coin-or-CoinUtils-2.10.14-3.fc30.x86_64 coin-or-Osi-0.107.8-9.fc30.x86_64 digikam-doc-6.1.0-7.fc30.noarch enblend-4.2-10.fc29.x86_64 gdcm-2.8.8-4.fc30.x86_64 grantlee-editor-19.04.2-1.fc30.x86_64 grantlee-editor-libs-19.04.2-1.fc30.x86_64 hugin-base-2019.0.0-1.fc30.x86_64 kaddressbook-19.04.2-1.fc30.x86_64 kaddressbook-libs-19.04.2-1.fc30.x86_64 kdepim-addons-19.04.2-1.fc30.x86_64 kdepim-apps-libs-19.04.2-1.fc30.x86_64 kdepim-runtime-1:19.04.2-1.fc30.x86_64 kdepim-runtime-libs-1:19.04.2-1.fc30.x86_64 kf5-calendarsupport-19.04.2-1.fc30.x86_64 kf5-eventviews-19.04.2-1.fc30.x86_64 kf5-grantleetheme-19.04.2-1.fc30.x86_64 kf5-incidenceeditor-19.04.2-1.fc30.x86_64 kf5-kalarmcal-19.04.2-1.fc30.x86_64 kf5-kcalendarcore-19.04.2-1.fc30.x86_64 kf5-kcalendarutils-19.04.2-1.fc30.x86_64 kf5-kcontacts-19.04.2-1.fc30.x86_64 kf5-kdav-19.04.2-1.fc30.x86_64 kf5-kidentitymanagement-19.04.2-1.fc30.x86_64 kf5-kimap-19.04.2-1.fc30.x86_64 kf5-kitinerary-19.04.2-1.fc30.x86_64 kf5-kldap-19.04.2-1.fc30.x86_64 kf5-kmailtransport-19.04.2-1.fc30.x86_64 kf5-kmbox-19.04.2-1.fc30.x86_64 kf5-kmime-19.04.2-1.fc30.x86_64 kf5-kontactinterface-19.04.2-1.fc30.x86_64 kf5-kpimtextedit-19.04.2-2.fc30.x86_64 kf5-kpkpass-19.04.2-1.fc30.x86_64 kf5-ksmtp-19.04.2-1.fc30.x86_64 kf5-ktnef-19.04.2-1.fc30.x86_64 kf5-libgravatar-19.04.2-1.fc30.x86_64 kf5-libkdepim-19.04.2-1.fc30.x86_64 kf5-libkleo-19.04.2-1.fc30.x86_64 kf5-libksieve-19.04.2-1.fc30.x86_64 
kf5-mailcommon-19.04.2-1.fc30.x86_64 kf5-mailimporter-19.04.2-1.fc30.x86_64 kf5-messagelib-19.04.2-1.fc30.x86_64 kf5-pimcommon-19.04.2-1.fc30.x86_64 kmail-19.04.2-2.fc30.x86_64 kmail-account-wizard-19.04.2-1.fc30.x86_64 kmail-libs-19.04.2-2.fc30.x86_64 kontact-libs-19.04.2-1.fc30.x86_64 korganizer-19.04.2-1.fc30.x86_64 korganizer-libs-19.04.2-1.fc30.x86_64 lensfun-0.3.2-19.fc30.x86_64 libdc1394-2.2.2-12.fc30.x86_64 libical-3.0.4-3.fc30.x86_64 libkgapi-19.04.2-1.fc30.x86_64 libkolabxml-1.1.6-10.fc30.x86_64 liblqr-1-0.4.2-12.fc30.x86_64 libpano13-2.9.19-9.fc30.x86_64 libucil-0.9.10-18.fc30.x86_64 libunicap-0.9.12-23.fc30.x86_64 libva-2.4.1-1.fc30.x86_64 mariadb-3:10.3.16-1.fc30.x86_64 mariadb-backup-3:10.3.16-1.fc30.x86_64 mariadb-common-3:10.3.16-1.fc30.x86_64 mariadb-cracklib-password-check-3:10.3.16-1.fc30.x86_64 mariadb-errmsg-3:10.3.16-1.fc30.x86_64 mariadb-gssapi-server-3:10.3.16-1.fc30.x86_64 mariadb-server-3:10.3.16-1.fc30.x86_64 mariadb-server-utils-3:10.3.16-1.fc30.x86_64 mesa-libOSMesa-19.1.3-1.fc30.x86_64 netcdf-cxx-4.2-21.fc30.x86_64 opencv-contrib-3.4.4-10.fc30.x86_64 opencv-core-3.4.4-10.fc30.x86_64 openni-1.5.7.10-15.fc30.x86_64 perl-DBD-MySQL-4.050-2.fc30.x86_64 perl-Image-ExifTool-11.50-1.fc30.noarch pim-data-exporter-19.04.2-1.fc30.x86_64 pim-data-exporter-libs-19.04.2-1.fc30.x86_64 pim-sieve-editor-19.04.2-1.fc30.x86_64 protobuf-3.6.1-3.fc30.x86_64 qt5-qtbase-mysql-5.12.4-4.fc30.x86_64 tinyxml-2.6.2-18.fc30.x86_64 vigra-1.11.1-13.fc30.x86_64 vtk-8.1.1-5.fc30.x86_64 Complete! After removing those Akonadi packages, I installed the MySQL Community Edition from the Fedora repo with this syntax: yum install -y community-mysql* Display detailed console log → Last metadata expiration check: 1:03:17 ago on Thu 15 Aug 2019 11:01:30 PM MDT. Dependencies resolved. 
============================================================================= Package Arch Version Repository Size ============================================================================= Installing: community-mysql x86_64 8.0.16-1.fc30 updates 10 M community-mysql-devel x86_64 8.0.16-1.fc30 updates 89 k community-mysql-errmsg x86_64 8.0.16-1.fc30 updates 487 k community-mysql-test x86_64 8.0.16-1.fc30 updates 92 M Installing dependencies: community-mysql-common x86_64 8.0.16-1.fc30 updates 86 k community-mysql-libs x86_64 8.0.16-1.fc30 updates 1.1 M community-mysql-server x86_64 8.0.16-1.fc30 updates 21 M openssl-devel x86_64 1:1.1.1c-2.fc30 updates 2.2 M perl-Memoize noarch 1.03-438.fc30 updates 66 k perl-Importer noarch 0.025-4.fc30 fedora 40 k perl-JSON noarch 4.02-1.fc30 fedora 98 k perl-MIME-Charset noarch 1.012.2-7.fc30 fedora 49 k perl-Term-Size-Perl x86_64 0.031-4.fc30 fedora 21 k perl-Term-Table noarch 0.013-2.fc30 fedora 41 k perl-Test-Simple noarch 2:1.302162-1.fc30 fedora 513 k protobuf x86_64 3.6.1-3.fc30 fedora 907 k protobuf-lite x86_64 3.6.1-3.fc30 fedora 149 k sombok x86_64 2.4.0-9.fc30 fedora 45 k Installing weak dependencies: perl-Term-Size-Any noarch 0.002-27.fc30 updates 13 k perl-Unicode-LineBreak x86_64 2019.001-2.fc30 fedora 120 k Transaction Summary ============================================================================= Install 20 Packages Total download size: 129 M Installed size: 597 M Downloading Packages: (1/20): community-mysql-devel-8.0.16-1.fc30. 
96 kB/s | 89 kB 00:00 (2/20): community-mysql-common-8.0.16-1.fc30 90 kB/s | 86 kB 00:00 (3/20): community-mysql-errmsg-8.0.16-1.fc30 391 kB/s | 487 kB 00:01 (4/20): community-mysql-8.0.16-1.fc30.x86_64 4.0 MB/s | 10 MB 00:02 (5/20): community-mysql-libs-8.0.16-1.fc30.x 397 kB/s | 1.1 MB 00:02 (6/20): community-mysql-server-8.0.16-1.fc30 7.1 MB/s | 21 MB 00:02 (7/20): openssl-devel-1.1.1c-2.fc30.x86_64.r 1.6 MB/s | 2.2 MB 00:01 (8/20): perl-Memoize-1.03-438.fc30.noarch.rp 109 kB/s | 66 kB 00:00 (9/20): perl-Term-Size-Any-0.002-27.fc30.noa 34 kB/s | 13 kB 00:00 (10/20): perl-Importer-0.025-4.fc30.noarch.r 75 kB/s | 40 kB 00:00 (11/20): perl-MIME-Charset-1.012.2-7.fc30.no 170 kB/s | 49 kB 00:00 (12/20): perl-JSON-4.02-1.fc30.noarch.rpm 120 kB/s | 98 kB 00:00 (13/20): perl-Term-Size-Perl-0.031-4.fc30.x8 128 kB/s | 21 kB 00:00 (14/20): perl-Term-Table-0.013-2.fc30.noarch 223 kB/s | 41 kB 00:00 (15/20): perl-Unicode-LineBreak-2019.001-2.f 303 kB/s | 120 kB 00:00 (16/20): perl-Test-Simple-1.302162-1.fc30.no 583 kB/s | 513 kB 00:00 (17/20): protobuf-lite-3.6.1-3.fc30.x86_64.r 795 kB/s | 149 kB 00:00 (18/20): sombok-2.4.0-9.fc30.x86_64.rpm 172 kB/s | 45 kB 00:00 (19/20): protobuf-3.6.1-3.fc30.x86_64.rpm 837 kB/s | 907 kB 00:01 (20/20): community-mysql-test-8.0.16-1.fc30. 7.4 MB/s | 92 MB 00:12 ----------------------------------------------------------------------------- Total 8.0 MB/s | 129 MB 00:16 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. 
Running transaction Preparing : 1/1 Installing : community-mysql-common-8.0.16-1.fc30.x86_64 1/20 Installing : community-mysql-8.0.16-1.fc30.x86_64 2/20 Installing : community-mysql-errmsg-8.0.16-1.fc30.x86_64 3/20 Installing : community-mysql-libs-8.0.16-1.fc30.x86_64 4/20 Installing : sombok-2.4.0-9.fc30.x86_64 5/20 Installing : protobuf-lite-3.6.1-3.fc30.x86_64 6/20 Running scriptlet: community-mysql-server-8.0.16-1.fc30.x86_64 7/20 Installing : community-mysql-server-8.0.16-1.fc30.x86_64 7/20 Running scriptlet: community-mysql-server-8.0.16-1.fc30.x86_64 7/20 Installing : protobuf-3.6.1-3.fc30.x86_64 8/20 Installing : perl-Term-Size-Perl-0.031-4.fc30.x86_64 9/20 Installing : perl-Term-Size-Any-0.002-27.fc30.noarch 10/20 Installing : perl-MIME-Charset-1.012.2-7.fc30.noarch 11/20 Installing : perl-Unicode-LineBreak-2019.001-2.fc30.x86_64 12/20 Installing : perl-JSON-4.02-1.fc30.noarch 13/20 Installing : perl-Importer-0.025-4.fc30.noarch 14/20 Installing : perl-Term-Table-0.013-2.fc30.noarch 15/20 Installing : perl-Test-Simple-2:1.302162-1.fc30.noarch 16/20 Installing : perl-Memoize-1.03-438.fc30.noarch 17/20 Installing : openssl-devel-1:1.1.1c-2.fc30.x86_64 18/20 Installing : community-mysql-devel-8.0.16-1.fc30.x86_64 19/20 Installing : community-mysql-test-8.0.16-1.fc30.x86_64 20/20 Running scriptlet: community-mysql-test-8.0.16-1.fc30.x86_64 20/20 Verifying : community-mysql-8.0.16-1.fc30.x86_64 1/20 Verifying : community-mysql-common-8.0.16-1.fc30.x86_64 2/20 Verifying : community-mysql-devel-8.0.16-1.fc30.x86_64 3/20 Verifying : community-mysql-errmsg-8.0.16-1.fc30.x86_64 4/20 Verifying : community-mysql-libs-8.0.16-1.fc30.x86_64 5/20 Verifying : community-mysql-server-8.0.16-1.fc30.x86_64 6/20 Verifying : community-mysql-test-8.0.16-1.fc30.x86_64 7/20 Verifying : openssl-devel-1:1.1.1c-2.fc30.x86_64 8/20 Verifying : perl-Memoize-1.03-438.fc30.noarch 9/20 Verifying : perl-Term-Size-Any-0.002-27.fc30.noarch 10/20 Verifying : perl-Importer-0.025-4.fc30.noarch 
11/20 Verifying : perl-JSON-4.02-1.fc30.noarch 12/20 Verifying : perl-MIME-Charset-1.012.2-7.fc30.noarch 13/20 Verifying : perl-Term-Size-Perl-0.031-4.fc30.x86_64 14/20 Verifying : perl-Term-Table-0.013-2.fc30.noarch 15/20 Verifying : perl-Test-Simple-2:1.302162-1.fc30.noarch 16/20 Verifying : perl-Unicode-LineBreak-2019.001-2.fc30.x86_64 17/20 Verifying : protobuf-3.6.1-3.fc30.x86_64 18/20 Verifying : protobuf-lite-3.6.1-3.fc30.x86_64 19/20 Verifying : sombok-2.4.0-9.fc30.x86_64 20/20 Installed: community-mysql-8.0.16-1.fc30.x86_64 community-mysql-devel-8.0.16-1.fc30.x86_64 community-mysql-errmsg-8.0.16-1.fc30.x86_64 community-mysql-test-8.0.16-1.fc30.x86_64 perl-Term-Size-Any-0.002-27.fc30.noarch perl-Unicode-LineBreak-2019.001-2.fc30.x86_64 community-mysql-common-8.0.16-1.fc30.x86_64 community-mysql-libs-8.0.16-1.fc30.x86_64 community-mysql-server-8.0.16-1.fc30.x86_64 openssl-devel-1:1.1.1c-2.fc30.x86_64 perl-Memoize-1.03-438.fc30.noarch perl-Importer-0.025-4.fc30.noarch perl-JSON-4.02-1.fc30.noarch perl-MIME-Charset-1.012.2-7.fc30.noarch perl-Term-Size-Perl-0.031-4.fc30.x86_64 perl-Term-Table-0.013-2.fc30.noarch perl-Test-Simple-2:1.302162-1.fc30.noarch protobuf-3.6.1-3.fc30.x86_64 protobuf-lite-3.6.1-3.fc30.x86_64 sombok-2.4.0-9.fc30.x86_64 Complete!

Having installed MySQL Community Edition, I wanted to start the mysql service with this command:

sudo service mysqld start

Unfortunately, the service utility wasn't installed. That surprised me. While I could have run this command:

systemctl start mysqld.service

A better solution was to install any missing components.
I determined that the service utility is part of the initscripts package, so I installed it with the following command:

sudo yum install -y initscripts

Display detailed console log → Fedora Modular 30 - x86_64 30 kB/s | 18 kB 00:00 Fedora Modular 30 - x86_64 - Updates 40 kB/s | 17 kB 00:00 Fedora 30 - x86_64 - Updates 43 kB/s | 17 kB 00:00 Fedora 30 - x86_64 58 kB/s | 19 kB 00:00 google-chrome-unstable 12 kB/s | 1.3 kB 00:00 google-chrome 16 kB/s | 1.3 kB 00:00 Dependencies resolved. ============================================================================= Package Architecture Version Repository Size ============================================================================= Installing: initscripts x86_64 10.02-1.fc30 updates 202 k Transaction Summary ============================================================================= Install 1 Package Total download size: 202 k Installed size: 1.1 M Downloading Packages: initscripts-10.02-1.fc30.x86_64.rpm 296 kB/s | 202 kB 00:00 ----------------------------------------------------------------------------- Total 162 kB/s | 202 kB 00:01 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Installing : initscripts-10.02-1.fc30.x86_64 1/1 Running scriptlet: initscripts-10.02-1.fc30.x86_64 1/1 Verifying : initscripts-10.02-1.fc30.x86_64 1/1 Installed: initscripts-10.02-1.fc30.x86_64 Complete!

Then, I ran the mysql_secure_installation script to secure the installation:

mysql_secure_installation

The script sets the root user's password, removes the anonymous user, disallows remote root login, and removes the test databases. Then, I verified connecting to the MySQL database with the following syntax:

mysql -uroot -ppassword

I enabled the MySQL service to start with each reboot of the Fedora instance.
I used the following command:

systemctl enable mysqld.service

It creates the following symbolic link:

ln -s '/usr/lib/systemd/system/mysqld.service' '/etc/systemd/system/multi-user.target.wants/mysqld.service'

The next step requires setting up a sample studentdb database. The syntax has changed from prior releases. Here are the three steps:

1. Create the studentdb database with the following command as the MySQL root user:

mysql> CREATE DATABASE studentdb;

2. Grant the root user the privilege to grant to others, which root does not have by default. You use the following syntax as the MySQL root user:

mysql> GRANT ALL ON *.* TO 'root'@'localhost' WITH GRANT OPTION;

3. Create the user with a clear-text password and grant the user student full privileges on the studentdb database:

mysql> CREATE USER 'student'@'localhost' IDENTIFIED WITH mysql_native_password BY 'student';
mysql> GRANT ALL ON studentdb.* TO 'student'@'localhost';

If you fail to specify mysql_native_password when creating the user and use the older syntax, like the following example:

mysql> CREATE USER 'student'@'localhost' IDENTIFIED BY 'student';
mysql> GRANT ALL ON studentdb.* TO 'student'@'localhost';

the GRANT command will raise the following error:

ERROR 1410 (42000): You are not allowed to create a user with GRANT
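As a quick sanity check (my own addition, not part of the original three steps), you can inspect exactly what the new user received:

```sql
-- Run as the MySQL root user after the steps above.
SHOW GRANTS FOR 'student'@'localhost';
-- Expect one row granting USAGE ON *.* and another row granting
-- ALL PRIVILEGES ON `studentdb`.* to 'student'@'localhost'.
```

You can then connect as the student user (mysql -ustudent -pstudent studentdb) to confirm the account works end to end.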
Posted over 4 years ago by Frederic Descamps
Answering this question is not easy. As always, the best answer is "it depends"! But let's try to give you all the necessary info to provide the most accurate answer. Also, maybe fixing one single query is not enough, and looking for that specific statement will lead to finding multiple problematic statements.

The most consuming one

The first candidate to be fixed is the query that consumes most of the execution time (latency). To identify it, we will use the sys schema and join it with events_statements_summary_by_digest from performance_schema to retrieve a real example of the query (see this post for more details).

Let's take a look at what the sys schema has to offer us related to our mission:

> show tables like 'statements_with%';
+---------------------------------------------+
| Tables_in_sys (statements_with%)            |
+---------------------------------------------+
| statements_with_errors_or_warnings          |
| statements_with_full_table_scans            |
| statements_with_runtimes_in_95th_percentile |
| statements_with_sorting                     |
| statements_with_temp_tables                 |
+---------------------------------------------+

We will then use statements_with_runtimes_in_95th_percentile to achieve our first task. However, we will use the version of the view with raw data (not human-readable formatted), to be able to sort the results as we want. The raw-data versions of the sys schema views start with x$:

SELECT schema_name,
       format_time(total_latency) tot_lat,
       exec_count,
       format_time(total_latency/exec_count) latency_per_call,
       query_sample_text
  FROM sys.x$statements_with_runtimes_in_95th_percentile AS t1
  JOIN performance_schema.events_statements_summary_by_digest AS t2
    ON t2.digest = t1.digest
 WHERE schema_name NOT IN ('performance_schema', 'sys')
 ORDER BY (total_latency/exec_count) DESC
 LIMIT 1\G
*************************** 1.
row *************************** schema_name: library tot_lat: 857.29 ms exec_count: 1 latency_per_call: 857.29 ms query_sample_text: INSERT INTO `books` (`doc`) VALUES ('{\"_id\": \"00005d44289d000000000000007d\", \"title\": \"lucky luke, tome 27 : l alibi\", \"isbn10\": \"2884710086\", \"isbn13\": \"978-2884710084\", \"langue\": \"français\", \"relié\": \"48 pages\", \"authors\": [\"Guylouis (Auteur)\", \"Morris (Illustrations)\"], \"editeur\": \"lucky comics (21 décembre 1999)\", \"collection\": \"lucky luke\", \"couverture\": \" data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDABQODxIPDRQSEBIXFRQYHjIhHhwcHj0sLiQySUBMS0dARkVQWnNiUFVtVkVGZIhlbXd7gYKBTmCNl4x9lnN+gXz/2wBDARUXFx4aHjshITt8U0ZTfHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHz/wAARCAEfANwDASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBA... 1 row in set (0.2838 sec) This statement is complicated to optimize as it’s a simple insert, and it was run only once. Insert can be slower because of disk response time (I run in full durability of course). Having too many indexes may also increase the response time, this is why I invite you to have a look at these two sysschema tables: schema_redundant_indexes schema_unused_indexes You will have to play with the limit of the query to find some valid candidates and then, thanks to the query_sample_text we have the possibility to run an EXPLAIN on the query without having to rewrite it ! 
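To follow up on those two views, here is a minimal sketch of how you might query them, along with feeding a captured statement straight to EXPLAIN (the wp_posts query is reused from this post; your output will obviously depend on your own schema):

```sql
-- Redundant indexes, together with the ready-made DROP statement
-- that the sys schema generates for each one:
SELECT table_schema, table_name,
       redundant_index_name, dominant_index_name,
       sql_drop_index
  FROM sys.schema_redundant_indexes;

-- Indexes that have not been used since the server started.
-- Check over a long enough uptime before dropping anything:
SELECT object_schema, object_name, index_name
  FROM sys.schema_unused_indexes
 WHERE object_schema NOT IN ('performance_schema', 'sys');

-- And since query_sample_text gives you a runnable statement,
-- you can EXPLAIN it as-is (FORMAT=TREE needs MySQL 8.0.16+;
-- plain EXPLAIN works on any version):
EXPLAIN FORMAT=TREE
SELECT count(*) AS mytotal
  FROM wp_posts
 WHERE (post_content LIKE '%youtube.com/%' OR post_content LIKE '%youtu.be/%')
   AND post_status = 'publish';
```

Remember that an unused index may still serve a purpose (e.g. uniqueness enforcement or a rarely run report), so treat these views as candidates to investigate, not an automatic drop list.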
Full table scans

Another query I would try to optimize is the one doing full table scans:

SELECT schema_name, sum_rows_examined,
       (sum_rows_examined/exec_count) avg_rows_call,
       format_time(total_latency) tot_lat, exec_count,
       format_time(total_latency/exec_count) AS latency_per_call,
       query_sample_text
  FROM sys.x$statements_with_full_table_scans AS t1
  JOIN performance_schema.events_statements_summary_by_digest AS t2
    ON t2.digest = t1.digest
 WHERE schema_name NOT IN ('performance_schema', 'sys')
 ORDER BY (total_latency/exec_count) DESC LIMIT 1\G
*************************** 1. row ***************************
      schema_name: wp_lefred
sum_rows_examined: 268075
    avg_rows_call: 3277.0419
          tot_lat: 31.31 s
       exec_count: 124
 latency_per_call: 252.47 ms
query_sample_text: SELECT count(*) as mytotal FROM wp_posts WHERE (post_content LIKE '%youtube.com/%' OR post_content LIKE '%youtu.be/%') AND post_status = 'publish'
1 row in set (0.0264 sec)

We can see that this query was executed 124 times for a total execution time of 31.31 seconds, which makes 252.47 milliseconds per call. We can also see that this query examined more than 268k rows, which means that on average those full table scans examine 3277 records per query. This is a very good candidate for optimization.

Temp tables

Creating temporary tables is also suboptimal for your workload. If you have some slow ones, you should have identified them already with the previous queries, but if you want to hunt those specifically, once again the sys schema helps you catch them:

SELECT schema_name, format_time(total_latency) tot_lat,
       exec_count, format_time(total_latency/exec_count) latency_per_call,
       query_sample_text
  FROM sys.x$statements_with_temp_tables AS t1
  JOIN performance_schema.events_statements_summary_by_digest AS t2
    ON t2.digest = t1.digest
 WHERE schema_name NOT IN ('performance_schema', 'sys')
   AND disk_tmp_tables = 1
 ORDER BY 2 DESC, (total_latency/exec_count) DESC LIMIT 1\G

Fortunately, I had none on my system.
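As a side note on the full-table-scan example above: a leading-wildcard LIKE ('%youtube.com/%') cannot use a B-tree index, so adding an ordinary index on post_content would not help. One possible approach, sketched below and not part of the original post, is a FULLTEXT index; be aware the matching semantics differ from LIKE (word-based matching, and URLs get tokenized on punctuation), so the results must be validated against the original query:

```sql
-- InnoDB supports FULLTEXT indexes since MySQL 5.6:
ALTER TABLE wp_posts ADD FULLTEXT INDEX ft_post_content (post_content);

-- Approximate rewrite of the counting query using the new index.
-- Verify the counts match the LIKE version before adopting it:
SELECT COUNT(*) AS mytotal
  FROM wp_posts
 WHERE MATCH(post_content) AGAINST ('+youtube +youtu' IN BOOLEAN MODE)
   AND post_status = 'publish';
```

If the semantics cannot be made to match, another option is simply to cache the result of this counting query in the application, since it was being executed 124 times for a slowly changing answer.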
Query optimization is not the most exciting part of the DBA job… but it has to be done ;-). You now have an easy method to find where to start, good luck! And don’t forget that if you need any help, you can always join the MySQL Community Slack channel!
Posted over 4 years ago by MySQL Performance Blog
Percona announces the release of Percona Server for MySQL 8.0.16-7 on August 15, 2019 (downloads are available here and from the Percona Software Repositories). This release is based on MySQL 8.0.16 and includes all the bug fixes in it. Percona Server for MySQL 8.0.16-7 is now the current GA release in the 8.0 series. All of Percona’s software is open source and free.

Percona Server for MySQL 8.0.16 includes all the features available in MySQL 8.0.16 Community Edition in addition to enterprise-grade features developed by Percona. For a list of highlighted features from both MySQL 8.0 and Percona Server for MySQL 8.0, please see the GA release announcement.

Encryption Features General Availability (GA)

Temporary File Encryption (Temporary File Encryption)
InnoDB Undo Tablespace Encryption
InnoDB System Tablespace Encryption (InnoDB System Tablespace Encryption)
default_table_encryption = OFF/ON (General Tablespace Encryption)
table_encryption_privilege_check = OFF/ON (Verifying the Encryption Settings)
InnoDB redo log encryption (for master key encryption only) (Redo Log Encryption)
InnoDB merge file encryption (Verifying the Encryption Setting)
Percona parallel doublewrite buffer encryption (InnoDB Tablespace Encryption)

Known Issues

5865: Percona Server 8.0.16 does not support encryption for the MyRocks storage engine. An attempt to move any table from InnoDB to MyRocks fails, as MyRocks currently sees all InnoDB tables as encrypted.

Bugs Fixed

Parallel doublewrite buffer writes now crash the server when an I/O error occurs. Bug fixed #5678.
After resetting innodb_temp_tablespace_encrypt to OFF during runtime, subsequent file-per-table temporary tables continued to be encrypted. Bug fixed #5734.
Setting the encryption to ON for the system tablespace generates an encryption key and encrypts system temporary tablespace pages.
Resetting the encryption to OFF, all subsequent pages are written to the temporary tablespace without encryption. To allow any encrypted tables to be decrypted, the generated keys are not erased. Modifying innodb_temp_tablespace_encrypt does not affect file-per-table temporary tables; this type of table is encrypted only if ENCRYPTION='Y' is set during table creation. Bug fixed #5736.
An instance started with the default values but setting the redo log to encrypt without specifying the keyring plugin parameters does not fail or throw an error. Bug fixed #5476.
rocksdb_large_prefix allows index key prefixes up to 3072 bytes. The default value is changed to TRUE to match the behavior of innodb_large_prefix. Bug fixed #5655.
On a server with two million or more tables, a shutdown may take a measurable length of time. Bug fixed #5639.
The changed page tracking uses the LOG flag during read operations. The redo log encryption may attempt to decrypt pages with a specific bit set and fail; this failure generates error messages. A NO_ENCRYPTION flag lets the read process safely disable decryption errors in this case. Bug fixed #5541.
If large pages are enabled on the MySQL side, the maximum size for innodb_buffer_pool_chunk_size is effectively limited to 4GB. Bug fixed #5517 (upstream #94747).
The TokuDB hot backup library continually dumps TRACE information to the server error log, and the user cannot enable or disable the dump of this information. Bug fixed #4850.
Other bugs fixed: #5688, #5723, #5695, #5749, #5752, #5610, #5689, #5645, #5734, #5772, #5753, #5129, #5102, #5681, #5686, #5310, #5713, #5007, #5130, #5149, #5696, #3845, #5581, #5652, #5662, #5697, #5775, #5668, #5782, #5767, #5669, #5733, #5803, #5804, #5820, #5827, #5835, #5724, #5794, #5796, #5746, and #5748.
Note: If you are upgrading from 5.7 to 8.0, please ensure that you read the upgrade guide and the document Changed in Percona Server for MySQL 8.0. Find the release notes for Percona Server for MySQL 8.0.16-7 in our online documentation. Report bugs in the Jira bug tracker. [Less]
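As an illustration (not from the release notes themselves) of the default_table_encryption and table_encryption_privilege_check features listed above, a minimal sketch, assuming a keyring component or plugin is already loaded and you have the required admin privileges:

```sql
-- Make new tablespaces encrypted by default, and require
-- TABLE_ENCRYPTION_ADMIN to override that default per table:
SET GLOBAL default_table_encryption = ON;
SET GLOBAL table_encryption_privilege_check = ON;

-- New schemas and tables now inherit encryption
-- (app_db and t1 are hypothetical names for this sketch):
CREATE DATABASE app_db;
CREATE TABLE app_db.t1 (id INT PRIMARY KEY);

-- Verify which tablespaces ended up encrypted:
SELECT name, encryption
  FROM information_schema.innodb_tablespaces
 WHERE name LIKE 'app_db/%';
```

Note that these SET GLOBAL changes do not survive a restart; for a permanent setup, place the variables in my.cnf alongside the keyring configuration.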
Posted over 4 years ago by Severalnines
Deploying and managing your database environment can be a tedious task. It's very common nowadays to use tools for automating your deployment to make these tasks easier. Automation solutions such as Chef, Puppet, Ansible, or SaltStack are just some of the ways to achieve these goals.

This blog will show you how to use Puppet to deploy a Galera Cluster (specifically Percona XtraDB Cluster, or PXC) utilizing the ClusterControl Puppet Modules. This module makes deployment, setup, and configuration easier than coding it yourself from scratch. You may also want to check out one of our previous blogs about deploying a Galera Cluster using Chef, “How to Automate Deployment of MySQL Galera Cluster Using S9S CLI and Chef.” Our s9s CLI tools are designed to be used in the terminal (or console) and can be utilized to automatically deploy databases. In this blog, we'll show you how to deploy a Percona XtraDB Cluster on AWS using Puppet, using ClusterControl and its s9s CLI tools to help automate the job.

Installation and Setup for the Puppet Master and Agent Nodes

For this blog, I used Ubuntu 16.04 Xenial as the target Linux OS for this setup. It might be an old OS version for you, but we know it works with recent RHEL/CentOS and Debian/Ubuntu versions. I have two nodes that I used in this setup, locally, with the following hosts/IPs:

Master host:
     IP = 192.168.40.200
     Hostname = master.puppet.local
Agent host:
     IP = 192.168.40.20
     Hostname = clustercontrol.puppet.local

Let's go through the steps.
1) Setup the Master

## Install the packages required
wget https://apt.puppetlabs.com/puppet6-release-xenial.deb
sudo dpkg -i puppet6-release-xenial.deb
sudo apt update
sudo apt install -y puppetserver

## Now, let's do some minor configuration for Puppet
sudo vi /etc/default/puppetserver
## edit from
JAVA_ARGS="-Xms2g -Xmx2g -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger"
## to
JAVA_ARGS="-Xms512m -Xmx512m -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger"

## add alias hostnames in /etc/hosts
sudo vi /etc/hosts
## and add
192.168.40.20 clustercontrol.puppet.local
192.168.40.200 master.puppet.local

## edit the config for server settings.
sudo vi /etc/puppetlabs/puppet/puppet.conf
## This can depend on your setup, so you might approach it differently than below.
[master]
vardir = /opt/puppetlabs/server/data/puppetserver
logdir = /var/log/puppetlabs/puppetserver
rundir = /var/run/puppetlabs/puppetserver
pidfile = /var/run/puppetlabs/puppetserver/puppetserver.pid
codedir = /etc/puppetlabs/code
dns_alt_names = master.puppet.local,master

[main]
certname = master.puppet.local
server = master.puppet.local
environment = production
runinterval = 15m

## Generate a root and intermediate signing CA for Puppet Server
sudo /opt/puppetlabs/bin/puppetserver ca setup

## start puppet server
sudo systemctl start puppetserver
sudo systemctl enable puppetserver

2) Setup the Agent/Client Node

## Install the packages required
wget https://apt.puppetlabs.com/puppet6-release-xenial.deb
sudo dpkg -i puppet6-release-xenial.deb
sudo apt update
sudo apt install -y puppet-agent

## Edit the config settings for the puppet client
sudo vi /etc/puppetlabs/puppet/puppet.conf

And add the example configuration below:

[main]
certname = clustercontrol.puppet.local
server = master.puppet.local
environment = production
runinterval = 15m

3) Authenticating (or Signing the Certificate Request) for Master/Client Communication

## Go back to the master node and run the following to
view the outstanding requests:

sudo /opt/puppetlabs/bin/puppetserver ca list

## The Result
Requested Certificates:
    clustercontrol.puppet.local (SHA256) 0C:BA:9D:A8:55:75:30:27:31:05:6D:F1:8C:CD:EE:D7:1F:3C:0D:D8:BD:D3:68:F3:DA:84:F1:DE:FC:CD:9A:E1

## sign a request from the agent/client
sudo /opt/puppetlabs/bin/puppetserver ca sign --certname clustercontrol.puppet.local

## The Result
Successfully signed certificate request for clustercontrol.puppet.local

## or you can also sign all requests
sudo /opt/puppetlabs/bin/puppetserver ca sign --all

## in case you want to revoke, just do
sudo /opt/puppetlabs/bin/puppetserver ca revoke --certname

## to list all unsigned,
sudo /opt/puppetlabs/bin/puppetserver ca list --all

## Then verify or test in the client node:
## verify/test puppet agent
sudo /opt/puppetlabs/bin/puppet agent --test

Scripting Your Puppet Manifests and Setting Up the ClusterControl Puppet Module

Our ClusterControl Puppet module can be downloaded here: https://github.com/severalnines/puppet. Otherwise, you can also easily grab the Puppet module from Puppet Forge. We're regularly updating and modifying the Puppet module, so we suggest you grab the GitHub copy to ensure you have the most up-to-date version of the script. You should also take into account that our Puppet module is tested on CentOS/Ubuntu running the most updated version of Puppet (6.7.x). For this blog, the Puppet module is tailored to work with the most recent release of ClusterControl (which as of this writing is 1.7.3). In case you missed it, you can check out our releases and patch releases over here.
1) Setup the ClusterControl Module on the Master Node

# Download from github and move the files to the module location of Puppet:
wget https://github.com/severalnines/puppet/archive/master.zip -O clustercontrol.zip
unzip -x clustercontrol.zip
mv puppet-master /etc/puppetlabs/code/environments/production/modules/clustercontrol

2) Create Your Manifest File and Add the Contents as Shown Below

vi /etc/puppetlabs/code/environments/production/manifests/site.pp

Now, before we proceed, we need to discuss the manifest script and the command to be executed. First, we'll have to define the type of ClusterControl and the variables we need to provide. ClusterControl requires every setup to have a token and SSH keys specified and provided accordingly. Hence, this can be done by running the following commands:

## Generate the key
bash /etc/puppetlabs/code/environments/production/modules/clustercontrol/files/s9s_helper.sh --generate-key

## Then, generate the token
bash /etc/puppetlabs/code/environments/production/modules/clustercontrol/files/s9s_helper.sh --generate-token

Now, let's discuss what we'll have to input within the manifest file, one by one.

node 'clustercontrol.puppet.local' { # Applies only to mentioned node. If nothing mentioned, applies to all.
    class { 'clustercontrol':
        is_controller => true,
        ip_address => '',
        mysql_cmon_password => '',
        api_token => ''
    }

Now, we'll have to define the IP address of your ClusterControl node (which in this example is clustercontrol.puppet.local). Specify also the cmon password, and then place the API token as generated by the command mentioned earlier.
Afterwards, we'll use the ClusterControl RPC to send a POST request to create an AWS entry:

exec { 'add-aws-credentials':
    path => ['/usr/bin', '/usr/sbin', '/bin'],
    command => "echo '{\"operation\" : \"add_credentials\", \"provider\" : aws, \"name\" : \"\", \"comment\" : \"\", \"credentials\":{\"access_key_id\":\"\",\"access_key_secret\" : \"\",\"access_key_region\" : \"\"}}' | curl -sX POST -H\"Content-Type: application/json\" -d @- http://localhost:9500/0/cloud"
}

The placeholder variables I set are self-explanatory. You need to provide the desired credential name for your AWS account, a comment if you want one, the AWS access key id, your AWS key secret, and the AWS region where you'll be deploying the Galera nodes.

Lastly, we'll have to run the command using the s9s CLI tools:

exec { 's9s':
    path => ['/usr/bin', '/usr/sbin', '/bin'],
    onlyif => "test -f $(/usr/bin/s9s cluster --list --cluster-format='%I' --cluster-name '' 2> /dev/null) > 0 ",
    command => "/usr/bin/s9s cluster --create --cloud=aws --vendor percona --provider-version 5.7 --containers=,, --nodes=,, --cluster-name= --cluster-type= --image --template --subnet-id --region --image-os-user= --os-user= --os-key-file --vpc-id --firewalls --db-admin --db-admin-passwd --wait --log",
    timeout => 3600,
    logoutput => true
}

Let’s look at the key points of this command. First, "onlyif" is defined by a conditional check to determine whether such a cluster name already exists; if it does, the command is not run, since the cluster is already added. We then proceed to the command itself, which utilizes the s9s CLI tools. You'll need to specify the AWS IDs in the placeholder variables being set. Since the placeholder names are self-explanatory, their values are to be taken from your AWS Console or by using the AWS CLI tools. Now, let's check the remaining steps.

3) Prepare the Script for Your Manifest File

# Copy the example contents below (edit according to your desired values) and paste it into the manifest file, which is site.pp.
node 'clustercontrol.puppet.local' { # Applies only to mentioned node. If nothing mentioned, applies to all.
    class { 'clustercontrol':
        is_controller => true,
        ip_address => '192.168.40.20',
        mysql_cmon_password => 'R00tP@55',
        mysql_server_addresses => '192.168.40.30,192.168.40.40',
        api_token => '0997472ab7de9bbf89c1183f960ba141b3deb37c'
    }

    exec { 'add-aws-credentials':
        path => ['/usr/bin', '/usr/sbin', '/bin'],
        command => "echo '{\"operation\" : \"add_credentials\", \"provider\" : aws, \"name\" : \"paul-aws-sg\", \"comment\" : \"my SG AWS Connection\", \"credentials\":{\"access_key_id\":\"XXXXXXXXXXX\",\"access_key_secret\" : \"XXXXXXXXXXXXXXX\",\"access_key_region\" : \"ap-southeast-1\"}}' | curl -sX POST -H\"Content-Type: application/json\" -d @- http://localhost:9500/0/cloud"
    }

    exec { 's9s':
        path => ['/usr/bin', '/usr/sbin', '/bin'],
        onlyif => "test -f $(/usr/bin/s9s cluster --list --cluster-format='%I' --cluster-name 'cli-aws-repl' 2> /dev/null) > 0 ",
        command => "/usr/bin/s9s cluster --create --cloud=aws --vendor percona --provider-version 5.7 --containers=db1,db2,db3 --nodes=db1,db2,db3 --cluster-name=cli-aws-repl --cluster-type=galera --image ubuntu18.04 --template t2.small --subnet-id subnet-xxxxxxxxx --region ap-southeast-1 --image-os-user=s9s --os-user=s9s --os-key-file /home/vagrant/.ssh/id_rsa --vpc-id vpc-xxxxxxx --firewalls sg-xxxxxxxxx --db-admin root --db-admin-passwd R00tP@55 --wait --log",
        timeout => 3600,
        logoutput => true
    }
}

Let's Do the Test and Run Within the Agent Node

/opt/puppetlabs/bin/puppet agent --test

The End Product

Now, let's have a look once the agent has run. Once you have this running, visit the URL http:///clustercontrol; you'll be asked by ClusterControl to register first. If you wonder where the result of the RPC request with resource name 'add-aws-credentials' in our manifest file ended up, it can be found in the Integrations section within ClusterControl.
Let's see how it looks after Puppet performs the run. You can modify this to your liking through the UI, but you can also modify it by using our RPC API. Now, let's check the cluster. From the UI view, it shows that it has been able to create the cluster, display the cluster in the dashboard, and also show the job activities that were performed in the background. Lastly, our AWS nodes are now present in our AWS Console. Let's check that out: all nodes are running healthy, with their designated names and region as expected.

Conclusion

In this blog, we were able to deploy a Galera/Percona XtraDB Cluster using automation with Puppet. We did not create the code from scratch, nor did we use any external tools that would have complicated the task. Instead, we used the ClusterControl Module and the s9s CLI tool to build and deploy a highly available Galera Cluster.

Tags: MySQL galera cluster percona xtradb cluster percona amazon AWS cloud database automation Puppet