Posted 4 days ago by Keith Larson
When it comes to routing your MySQL traffic, several options exist: HAproxy, MariaDB MaxScale, NGINX, MySQL Router and ProxySQL. You can even still get MySQL Proxy if you want it, but it is EOL.

I have seen HAproxy used more often with clients; it is pretty straightforward to set up. Percona has an example for those interested: https://www.percona.com/doc/percona-xtradb-cluster/LATEST/howtos/haproxy.html

Personally I like ProxySQL. Percona also has a few blogs on this as well:
https://github.com/sysown/proxysql/wiki/ProxySQL-Configuration
https://www.percona.com/blog/2017/01/19/setup-proxysql-for-high-availability-not-single-point-failure/
https://www.percona.com/blog/2017/01/25/proxysql-admin-configuration/
https://www.percona.com/blog/2016/09/15/proxysql-percona-cluster-galera-integration/

Percona also has a ProxySQL version available: https://www.percona.com/downloads/proxysql/

I was thinking I would write up some examples, but overall Percona has explained it all very well. I do not want to take anything away from those posts; instead I want to point out that a lot of good information is available via those URLs. So instead of rewriting what has already been written, I will create a collection of information for those interested.

First, compare and decide for yourself what you need and want. The following link is of course going to be biased towards ProxySQL, but it gives you an overall scope to consider: http://www.proxysql.com/compare

If you have a cluster or master-to-master replication and you do not care which server the writes and reads go to, as long as you have a connection, then HAproxy is likely a simple, fast setup for you. The bonus with ProxySQL is the ability to sort traffic in a weighted fashion, easily: you can have writes go to node 1 and have selects pull from node 2 and node 3.
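To make the weighted read/write split concrete: ProxySQL decides a query's destination hostgroup by matching it against regex-based query rules. The following is only a toy sketch of that matching logic in Python; the hostgroup numbers and rule patterns are my own illustration, not ProxySQL code.

```python
import re

# Illustrative layout: hostgroup 0 = writer (node 1),
# hostgroup 1 = readers (nodes 2 and 3).
RULES = [
    (re.compile(r"^\s*SELECT\b.*\bFOR UPDATE\b", re.IGNORECASE), 0),
    (re.compile(r"^\s*SELECT\b", re.IGNORECASE), 1),
]

def route(query, default_hostgroup=0):
    """Return the hostgroup this query would be routed to.

    Rules are evaluated in order; anything matching no rule
    (INSERT, UPDATE, DDL, ...) falls through to the writer."""
    for pattern, hostgroup in RULES:
        if pattern.search(query):
            return hostgroup
    return default_hostgroup
```

In ProxySQL itself the equivalent rules live in the mysql_query_rules table documented in the links in this post.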
Documentation on this can be found here: https://github.com/sysown/proxysql/wiki/ProxySQL-Read-Write-Split-(HOWTO)

Yes, it can be done with HAproxy, but you have to instruct the application accordingly: https://severalnines.com/resources/tutorials/mysql-load-balancing-haproxy-tutorial
In ProxySQL this is handled by your query rules:
https://github.com/sysown/proxysql/wiki/Main-(runtime)#mysql_query_rules
https://github.com/sysown/proxysql/wiki/ProxySQL-Configuration#mysql-query-rules

Now the obvious question: how do you keep ProxySQL from becoming the single point of failure? You can invest in a robust load balancer and toss hardware at it, or you can make it easy on yourself, support open source and use Keepalived. This is very easy to set up, and all of it is documented well here:
https://www.percona.com/blog/2017/01/19/setup-proxysql-for-high-availability-not-single-point-failure/
http://www.keepalived.org/doc/

To be fair, here is an example for Keepalived and HAproxy as well: https://andyleonard.com/2011/02/01/haproxy-and-keepalived-example-configuration/

If you ever dealt with Lua and mysql-proxy, ProxySQL and Keepalived should be very simple for you. If you still want mysql-proxy for some reason: https://launchpad.net/mysql-proxy

Regardless of whether you choose HAproxy, ProxySQL or another solution, you need to ensure you do not replace one single point of failure with another, and Keepalived is great for that. So there is little reason not to do this if you are using a proxy.

A few more things on ProxySQL. If you track hosts that connect to your database via your reporting or monitoring, realize those IPs or hostnames are now going to be the proxy server's. What about all the users you already have in MySQL, then? Can you migrate them to ProxySQL? Yes you can. It takes a few steps, but it is doable.
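For reference, the heart of a Keepalived setup for a ProxySQL pair is a single VRRP instance; a minimal sketch might look like the following. The interface name, router id, password and floating IP are placeholders to adapt — the Percona HA post walks through a complete configuration.

```
vrrp_instance proxysql_vip {
    state BACKUP            # both nodes BACKUP; priority decides the master
    interface eth0          # placeholder interface name
    virtual_router_id 51
    priority 101            # set lower (e.g. 100) on the standby node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme  # placeholder secret
    }
    virtual_ipaddress {
        192.168.1.100/24    # floating IP the application connects to
    }
}
```

The application then connects to the floating IP, which moves to the surviving node if the active ProxySQL host fails.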
Here is an example of this: https://dba.stackexchange.com/questions/164705/how-to-easily-bring-80-mysql-users-into-proxysql

Make sure you understand the multi-layer configuration system, and save your info to disk!
https://github.com/sysown/proxysql/wiki/Main-(runtime)#runtime-tables
https://github.com/sysown/proxysql/wiki/Main-(runtime)#disk-database

Can ProxySQL run on the MySQL default port 3306? Yes: edit the mysql-interfaces variable.

Keep in mind your max_connections. If you have max_connections in MySQL set to 500, then that is of course your limit for standard users. With ProxySQL you can now spread users across the system and set a max per node. To help ensure you do not hit 500 connections, set the ProxySQL mysql-max_connections a little bit lower than the MySQL value.

Take advantage of the Monitor Module and stats: know what is going on with your proxy and traffic. Take advantage of query caching if applicable for your application.
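A quick sketch of that connection-headroom arithmetic — the 10% margin here is just an illustrative choice, not an official recommendation:

```python
def proxysql_max_connections(backend_max_connections, headroom=0.10):
    """Return a mysql-max_connections value for ProxySQL that sits
    below the backend's max_connections, leaving room for DBA and
    monitoring logins that connect to MySQL directly."""
    return int(backend_max_connections * (1 - headroom))

# e.g. with max_connections = 500 in MySQL, a 10% margin
# suggests setting ProxySQL's mysql-max_connections to 450.
```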
Posted 4 days ago by MySQL Performance Blog
Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL® and MongoDB® servers to ensure that your data works as efficiently as possible.

In PMM Release 1.11.0, we deliver the following changes:

- Configurable MySQL Slow Log Rotation – enable or disable rotation, and specify how many files to keep on disk
- Predictable Graphs – we've updated our formulas to use aggregation functions over time for more reliable graphs
- MySQL Exporter Parsing of my.cnf – we've improved how we read my.cnf
- Annotation improvements – passing multiple strings now results in a single annotation being written

This release includes 1 new feature/improvement and 9 bug fixes.

MySQL Slow Log Rotation Improvements

We spent some time this release going over how we handle MySQL's Slow Log rotation logic. Query Analytics requires that slow logging be enabled (either to file or to PERFORMANCE_SCHEMA), and we found that users of Percona Server for MySQL overwhelmingly choose logging to a file in order to take advantage of log_slow_verbosity, which provides enhanced InnoDB usage information. However, the challenge with MySQL's Slow Log is that it is very verbose, so the number one concern is disk space. PMM strives to do no harm, and MySQL Slow Log rotation was a natural fit, but until this release we were very strict and hadn't enabled any configuration of these parameters.

Percona Server for MySQL users have long known about Slow Query Log Rotation and Expiration, but until now had no way of using the built-in Percona Server for MySQL feature while ensuring that PMM wasn't missing any queries from the Slow Log during file rotation. Or perhaps your use case is that you want to do Slow Log rotation using logrotate or some other facility.
Today with Release 1.11 this is now possible! We've made two significant changes:

- You can now specify the number of Slow Log files to remain on disk, and let PMM handle deleting the oldest files first. The default remains unchanged – 1 Slow Log remains on disk.
- Slow Log rotation can now be disabled, for example if you want to manage rotation using logrotate or Percona Server for MySQL Slow Query Log Rotation and Expiration. The default remains unchanged – Slow Log rotation is ON.

Number of Slow Logs Retained on Disk

Slow Logs Rotation – On or Off

You specify each of these two new controls when setting up the MySQL service. The following example specifies that 5 Slow Log files should remain on disk:

pmm-admin add mysql ... --retain-slow-logs=5

While the following example specifies that Slow Log rotation is to be disabled (flag value of false), with the assumption that you will perform your own Slow Log rotation:

pmm-admin add mysql ... --slow-log-rotation=false

We don't currently support modifying option parameters for an existing service definition. This means you must remove, then re-add the service and include the new options.

We're including a logrotate script in this post to get you started; it is designed to keep 30 copies of Slow Logs at 1GB each. Note that you'll need to update the Slow Log location, and ensure a MySQL user account with the SUPER and RELOAD privileges is used for this script to execute successfully.

Example logrotate configuration:

/var/mysql/mysql-slow.log {
    nocompress
    create 660 mysql mysql
    size 1G
    dateext
    missingok
    notifempty
    sharedscripts
    postrotate
       /bin/mysql -e 'SELECT @@global.long_query_time INTO @LQT_SAVE; SET GLOBAL long_query_time=2000; SELECT SLEEP(2); FLUSH SLOW LOGS; SELECT SLEEP(2); SET GLOBAL long_query_time=@LQT_SAVE;'
    endscript
    rotate 30
}
Predictable Graphs

We've updated the logic on four dashboards to better handle predictability and also to allow zooming in on shorter time ranges. For example, refreshing PXC/Galera graphs prior to 1.11 led to graphs spiking at different points during the metric series. We've reviewed each of these graphs and their corresponding queries and added _over_time() functions so that graphs display a consistent view of the metric series. This improves your ability to drill in on the dashboards, so that no matter how short your time range, you will still observe the same spikes and troughs in your metric series. The four dashboards affected by this improvement are:

- Home Dashboard
- PXC/Galera Graphs Dashboard
- MySQL Overview Dashboard
- MySQL InnoDB Metrics Dashboard

MySQL Exporter Parsing of my.cnf

In earlier releases, the MySQL Exporter expected only key=value type flags. It would ignore options without values (e.g. disable-auto-rehash), and could sometimes read the wrong section of the my.cnf file. We've updated the parsing engine to be more MySQL compatible.

Annotation Improvements

Annotations permit the display of an event on all dashboards in PMM. Users reported that passing more than one string to pmm-admin annotate would generate an error, so we updated the parsing logic to treat all strings passed during annotation creation as a single annotation event. Previously you needed to enclose your strings in quotes so that they would be parsed as a single string.
Issues in this release

New Features & Improvements:

- PMM-2432 – Configurable MySQL Slow Log File Rotation

Bug fixes:

- PMM-1187 – Graphs breaks at tight resolution
- PMM-2362 – Explain is a part of query
- PMM-2399 – RPM for pmm-server is missing some files
- PMM-2407 – Menu items are not visible on PMM QAN dashboard
- PMM-2469 – Parsing of a valid my.cnf can break the mysqld_exporter
- PMM-2479 – PXC/Galera Cluster Overview dashboard: typo in metric names
- PMM-2484 – PXC/Galera Graphs display unpredictable results each time they are refreshed
- PMM-2503 – Wrong InnoDB Adaptive Hash Index Statistics
- PMM-2513 – QAN-agent always changes max_slowlog_size to 0
- PMM-2514 – pmm-admin annotate help – fix typos
- PMM-2515 – pmm-admin annotate – more than 1 annotation

How to get PMM

PMM is available for installation using three methods:

- On Docker Hub – docker pull percona/pmm-server – Documentation
- AWS Marketplace – Documentation
- Open Virtualization Format (OVF) – Documentation

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

The post Percona Monitoring and Management 1.11.0 Is Now Available appeared first on Percona Database Performance Blog.
Posted 4 days ago by MariaDB
A Look into MariaDB Auditing for GDPR Compliance

maria-luisaraviol – Wed, 05/23/2018 - 18:27

When we talk about a database auditing concept, what we are focused on is tracking the use of database records and monitoring each operation on the data. The goal of auditing activities is to provide a clear and reliable answer to the typical four W questions: Who accessed the database? When did this happen? What was touched? Where did this access come from? Auditing should also help the security team answer the fifth W: Why did this happen?

Auditing is also a very important task when we want to monitor database activity to collect information that can help increase database performance or debug the application. When we talk about security, accountability and regulatory compliance, database auditing plays an even more critical role. An auditing activity is key in achieving accountability, as it allows us to investigate malicious or suspicious database activities. It's used to help DBAs detect excessive user privileges or suspicious activities coming from specific connections.

In particular, the new European Union General Data Protection Regulation (GDPR) says that it will be important to be able to provide detail of changes to personal data, to demonstrate that data protection and security procedures are effective and are being followed. Furthermore, we must ensure that data is only accessed by appropriate parties. This means that we need to be able to say who changed an item of data and when they changed it.

It's broader than GDPR. HIPAA (the Health Insurance Portability and Accountability Act) requires healthcare providers to deliver audit trails about anyone and everyone who touches any data in their records, down to the row and record level.
Furthermore, if a data breach occurs, organizations must disclose full information on these events to their local data protection authority (DPA) and all customers concerned with the data breach within 72 hours, so they can respond accordingly.

MariaDB Audit Plugin

For all these reasons, MariaDB has included the Audit Plugin since version 10.0.10 of MariaDB Server. The purpose of the MariaDB Audit Plugin is to log the server's activity: for each client session, it records who connected to the server (i.e., user name and host), what queries were executed, which tables were accessed and which server variables were changed. Events that are logged by the MariaDB Audit Plugin are grouped into three main types: CONNECT, QUERY and TABLE events. There are actually more types of events to allow fine-tuning of the audit and focus on just the events and statements relevant for a specific organisation. These are detailed on the Log Settings page. There also exist several system variables to configure the MariaDB Audit Plugin; the Server Audit Status Variables page includes all variables relevant to reviewing the status of the auditing. The overall monitoring should include an alert to verify that auditing is active.

This information is stored in a rotating log file, or it may be sent to the local syslog. For security reasons, it's sometimes recommended to use the system logs instead of a local file: in this case the value of server_audit_output_type needs to be set to syslog. It is also possible to set up even more advanced and secure solutions, such as using a remote syslog service (read more about the MariaDB Audit Plugin and setting up rsyslog).

What does the MariaDB audit log file look like?

The audit log file is a set of rows in plain text format, written as a list of comma-separated fields to a file.
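For reference, enabling the plugin and basic logging can be sketched as follows; this is a minimal sketch from memory of the MariaDB settings, so check the Log Settings page for your exact version before relying on it.

```sql
-- load the audit plugin (on Linux the library is server_audit.so)
INSTALL SONAME 'server_audit';

-- choose which event classes to record
SET GLOBAL server_audit_events = 'CONNECT,QUERY,TABLE';

-- switch auditing on
SET GLOBAL server_audit_logging = ON;

-- optionally send events to syslog instead of the local file
-- SET GLOBAL server_audit_output_type = 'syslog';
```

To survive restarts, the same settings belong in the server's configuration file rather than only being set at runtime.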
The general format for logging to the plugin's own file is defined like the following:

[timestamp],[serverhost],[username],[host],[connectionid],[queryid],[operation],[database],[object],[retcode]

If the log file is sent to syslog the format is slightly different, as syslog has its own standard format (refer to the MariaDB Audit Plugin Log Format page for the details). A typical MariaDB Audit Plugin log file example is:

# tail mlr_Test_audit.log
20180421 09:22:38,mlr_Test,root,localhost,22,0,CONNECT,,,0
20180421 09:22:42,mlr_Test,root,localhost,22,35,QUERY,,'CREATE USER IF NOT EXISTS \'mlr\'@\'%\' IDENTIFIED WITH \'mysql_native_password\' AS \'*F44445443BB93ED07F5FAB7744B2FCE47021238F\'',0
20180421 09:22:42,mlr_Test,root,localhost,22,36,QUERY,,'drop user if exists mlr',0
20180421 09:22:45,mlr_Test,root,localhost,22,0,DISCONNECT,,,0
20180421 09:25:29,mlr_Test,root,localhost,20,0,FAILED_CONNECT,,,1045
20180421 09:25:44,mlr_Test,root,localhost,43,133,WRITE,employees,salaries,
20180421 09:25:44,mlr_Test,root,localhost,43,133,QUERY,employees,'DELETE FROM salaries LIMIT 100',0

Audit File Analysis

Log files are a great source of information, but only if you have a system in place to consistently review the data. The way you shape your application and database environment is also important. In order to get useful auditing, for example, it's recommended that every human user has his or her own account. Furthermore, from the application's standpoint, if applications are not using native DB accounts but application-based accounts, each application accessing the same server should have its own "application user". As we said before, you have to use the information collected and analyse it on a regular basis, and when needed, take immediate action based on those logged events. However, even small environments can generate a lot of information to be analysed manually.
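Since the file format is just comma-separated text, a small script can pull the fields apart. Here is a rough sketch in Python; the one assumption I am making (it is not stated by the plugin docs quoted here) is that the return code is always the final comma-separated field, which lets us recover the object field even when QUERY text contains commas.

```python
FIELDS = ["timestamp", "serverhost", "username", "host", "connectionid",
          "queryid", "operation", "database", "object", "retcode"]

def parse_audit_line(line):
    """Parse one audit-plugin file line into a dict.

    The first eight fields never contain commas; the ninth (the object,
    which for QUERY events is the quoted SQL text) may, so the return
    code is split off the right-hand end instead."""
    head = line.rstrip("\n").split(",", 8)
    first8, rest = head[:8], head[8]
    obj, _, retcode = rest.rpartition(",")
    return dict(zip(FIELDS, first8 + [obj, retcode]))
```

For example, feeding it the FAILED_CONNECT line above yields operation "FAILED_CONNECT" with retcode "1045", which is exactly the kind of event you would alert on.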
Starting with the most recent release, Monyog 8.5, the monitoring tool included with the MariaDB TX and MariaDB AX subscriptions added a very interesting feature for MariaDB: the Audit Log. This feature parses the audit log maintained by MariaDB Server and displays the content in a clean tabular format. Monyog accesses the audit log file the same way it does other MariaDB log files, including the Slow Query, General Query and Error logs.

Through the Monyog interface you can select the server and the time frame for which you want to view the audit log. Then, clicking on "SHOW AUDIT LOG" fetches the contents of the log. The limit on the number of rows that can be fetched in one time frame is 10,000.

The snapshot above gives you a quick summary of the audit log as percentages, such as Failed Logins, Failed Events, Schema Changes, Data Changes and Stored Procedures. All these legends are clickable and show the corresponding audit log entries on clicking. Furthermore, you can use the filter option to fetch the audit log based on Username, Host, Operation, Database and Table/Query.
Posted 4 days ago by The Pythian Group
In recent weeks I've been focusing on Docker in order to get a much better understanding of the containerized world that is materializing in front of us. Containers aren't just for stateless applications anymore, and we're seeing more cases where MySQL and other databases are being launched in a containerized fashion, so it's important to know how to configure your MySQL container!

On Docker Hub, you will see an option for this by doing a volume mount from the docker host to the container on /etc/mysql/conf.d. But the problem is that the container image you're using may not have an !includedir directive referencing the conf.d directory, much like the latest version of MySQL Community Server, as you will see below.

[root@centos7-1 ~]# docker run --memory-swappiness=1 --memory=2G -p 3306:3306 --name=mysql1 -e MYSQL_ROOT_PASSWORD=password -d mysql/mysql-server:5.7.22
[root@centos7-1 ~]# docker exec -it mysql1 cat /etc/my.cnf | grep -i include
[root@centos7-1 ~]#

This means that if you use the prescribed method of placing a config file in /etc/mysql/conf.d in the container, it's not going to be read and will have no impact on the configuration of the underlying MySQL instance. You might think that the next step would be to attach to the container, modify the my.cnf file (after installing a text editor) and add the !includedir directive to your my.cnf file, but this goes against the docker / containerization philosophy. You should be able to just launch a container with the appropriate arguments and be off to fight the universe's data battles.

So in this case, I would propose the following workaround: instead of using /etc/mysql/conf.d, we can look at the mysql option file reference and realize there is more than one place we can put a config file. In fact, the next place mysql is going to look for configuration is /etc/mysql/my.cnf, and if we check our recently deployed container, we'll see that /etc/mysql isn't used.
[root@centos7-1 ~]# docker exec -it mysql1 ls /etc/mysql
ls: cannot access /etc/mysql: No such file or directory

We can mount a volume with a my.cnf file to this directory on the container and it should pick up whatever configuration we supply, as demonstrated below.

[root@centos7-1 ~]# docker stop mysql1
mysql1
[root@centos7-1 ~]# docker rm mysql1
mysql1
[root@centos7-1 ~]# cat /mysqlcnf/mysql1/my.cnf
[mysqld]
server-id=123
[root@centos7-1 ~]# docker run --memory-swappiness=1 --memory=2G -p 3306:3306 -v /mysqlcnf/mysql1:/etc/mysql --name=mysql1 -e MYSQL_ROOT_PASSWORD=password -d mysql/mysql-server:5.7.22
d5d980ee01d5b4707f3a7ef5dd30df1d780cdfa35b14ad22ff436fb02560be1b
[root@centos7-1 ~]# docker exec -it mysql1 cat /etc/mysql/my.cnf
[mysqld]
server-id=123
[root@centos7-1 ~]# docker exec -it mysql1 mysql -u root -ppassword -e "show global variables like 'server_id'"
mysql: [Warning] Using a password on the command line interface can be insecure.
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 123   |
+---------------+-------+
[root@centos7-1 ~]#

Another option is overriding the my.cnf file in /etc/ with our own version. You can do this with a mount as noted in the mysql reference for Persisting Data and Configuration Changes, but in that case you will be overwriting other items that might be included in the my.cnf as part of the docker build. This may or may not be your intention, depending on how you want to deploy your containers.

Conclusion

Be aware of the container image you're using and what configuration options are available to you. Some forks will include an !includedir reference to /etc/mysql/conf.d, some won't. You may want to overwrite the entire my.cnf file by volume mounting to a copy of the my.cnf on the docker host. Or you may just want to supplement the configuration with a second configuration file in /etc/mysql.
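If you prefer a declarative form, the same volume mount can be expressed with docker-compose; a sketch, where the host path and root password are placeholders matching the example above:

```yaml
version: "2"
services:
  mysql1:
    image: mysql/mysql-server:5.7.22
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: password   # placeholder
    volumes:
      - /mysqlcnf/mysql1:/etc/mysql   # host directory containing my.cnf
```

The mount target is /etc/mysql rather than /etc/mysql/conf.d, for the !includedir reason discussed above.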
The important things are to test, to make sure your configuration is properly read by the mysql container, and to establish confidence in the configuration method used before deploying in your environment.
Posted 4 days ago by Keith Larson
Happy Birthday, MySQL! It turned 23 today!
Posted 5 days ago by Dave Stokes
There was an interesting question on Stackoverflow.com on extracting values from a JSON data type column in a MySQL database. What caught my eye was that the keys for the key/value pairs were numeric. In particular, the author of the question only wanted values for the key named 74. The sample data was fairly simple:

{ "70" : "Apple", "71" : "Peach", "74" : "Kiwi" }

I thought SELECT JSON_EXTRACT(column, '$.74') FROM table; would work, but it did not: there was a complaint about an invalid path expression. It turns out that you need to quote the numeric key in the second argument, making it '$."74"':

SELECT JSON_EXTRACT(column, '$."74"') FROM table;

File this under something to remember for later. :-)
Posted 5 days ago by MySQL Performance Blog
Percona announces the release of Percona Toolkit 3.0.10 on May 22, 2018. Percona Toolkit is a collection of advanced open source command-line tools, developed and used by the Percona technical staff, that are engineered to perform a variety of MySQL®, MongoDB® and system tasks that are too difficult or complex to perform manually. With over 1,000,000 downloads, Percona Toolkit supports Percona Server for MySQL, MySQL®, MariaDB®, Percona Server for MongoDB and MongoDB. Percona Toolkit, like all Percona software, is free and open source. You can download packages from the website or install from official repositories.

This release includes the following changes:

New Features:

- PT-131: pt-table-checksum disables the QRT plugin. The Query Response Time Plugin provides a tool for analyzing information by counting and displaying the number of queries according to the length of time they took to execute. This feature adds a new flag, --disable-qrt-plugin, that leverages Percona Server for MySQL's new ability to disable the QRT plugin at the session level. The advantage of enabling this Toolkit feature is that the QRT metrics are not impacted by the work that pt-table-checksum performs. This means that QRT metrics report only the work your application is generating on MySQL, not clouded by the activities of pt-table-checksum.
- PT-118: pt-table-checksum reports the number of rows of difference between master and slave. We're adding support for pt-table-checksum to identify the number of row differences between master and slave. Previously you were able to see only the count of chunks that differed between hosts. This is helpful for situations where you believe you can tolerate some measure of row count drift between hosts, but want to be precise in understanding what that row count difference actually is.
Improvements:

- PT-1546: Improved support for MySQL 8 roles.
- PT-1543: The encrypted table status query causes high load over multiple minutes. Users reported that listing encrypted table status can be very slow. We've made this functionality optional via --list-encrypted-tables and set it to disabled by default.
- PT-1536: Added info about encrypted tablespaces in pt-mysql-summary. We've improved pt-mysql-summary to include information about encrypted tablespaces. This information is available by using --list-encrypted-tables.

Bug Fixes:

- PT-1556: pt-table-checksum 3.0.9 does not change binlog_format to statement any more.

pt-show-grants has several known issues when working with MySQL 8 and roles, which Percona aims to address in subsequent Percona Toolkit releases: PT-1560, PT-1559 and PT-1558.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

The post Percona Toolkit 3.0.10 Is Now Available appeared first on Percona Database Performance Blog.
Posted 5 days ago by VividCortex
If you're someone who keeps up with the Go development cycle, then you'll know that a couple of weeks ago Go entered its feature freeze for the Go 1.11 release. One of the changes for this upcoming release that caught my eye was to the database/sql package. Daniel Theophanes contributed a change that introduces several new counters available via the DB.Stats() method.

If you're not familiar with it, DB.Stats() returns a DBStats structure containing information about the underlying sql.DB that the method is called on. Up to this point, the struct has had a single field, tracking the current number of open connections to the database. Daniel's patch introduces a number of additional fields, though:

- MaxOpenConnections: the max allowed open connections to the DB, as set by DB.SetMaxOpenConns.
- InUse: the number of connections actively in use.
- Idle: the number of open connections that are currently idle.
- WaitCount: the total number of times that a goroutine has had to wait for a connection.
- WaitDuration: the cumulative amount of time that goroutines have spent waiting for a connection.
- MaxIdleClosed: the number of connections closed according to the limit specified by DB.SetMaxIdleConns.
- MaxLifetimeClosed: the number of connections closed because they exceeded the duration specified by DB.SetConnMaxLifetime.

Note that of the above fields, WaitCount, WaitDuration, MaxIdleClosed and MaxLifetimeClosed are all counters; that is to say, their values never decrease over the lifetime of the DB object, they only increase over time.

The new stats will be available when Go 1.11 is released, which is projected to be in August. In the meantime, if you aren't publishing DBStats metrics in your applications today, you can work on adding it and integrate it into a metrics collector such as Graphite, Prometheus, or even VividCortex.
The call to DB.Stats() is cheap and thread-safe, so it's fairly easy to spawn another goroutine to call it periodically and forward the data to a metrics collector of your choice. The new information here makes DB.Stats() much more useful for monitoring the behavior of database connections. In particular, as noted by Daniel in the commit message, if you see a high amount of waiting or closed connections, it may indicate that you need to tune the settings for your DB object. I'll be adding the new metrics to our applications once we upgrade to Go 1.11, and you should add them to yours as well!

Can't get enough Go? Check out our free eBook The Ultimate Guide to Building Database-Driven Apps with Go, our free webinar Developing MySQL Applications with Go, or sharpen your skills with the Go database/sql package tutorial. Ready, set, Go!
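The periodic-polling pattern itself is language-agnostic; here is a sketch of its shape in Python purely for illustration — in a real Go service this would be a goroutine calling db.Stats() on a ticker, and get_stats/publish below are hypothetical stand-ins for that call and for your metrics sink.

```python
import time

def poll_stats(get_stats, publish, interval=10.0, iterations=None):
    """Fetch a stats snapshot and forward it to a metrics sink,
    sleeping `interval` seconds between samples.

    iterations=None loops forever, mirroring a long-lived goroutine;
    a finite count is handy for testing the plumbing."""
    sent = 0
    while iterations is None or sent < iterations:
        publish(get_stats())
        sent += 1
        if iterations is None or sent < iterations:
            time.sleep(interval)
```

Since the counters (WaitCount and friends) only ever increase, the sink can derive rates from successive snapshots.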
Posted 5 days ago by MySQL Server Dev Team
In this article, I’ll explain about the multi version concurrency control (MVCC) of large objects (LOBs) design in the MySQL InnoDB storage engine.  MySQL 8.0 has a new feature that allows users to partially update large objects, including the JSON documents.  …
Posted 5 days ago by Shlomi Noach
This is the sixth in a series of posts reviewing methods for MySQL master discovery: the means by which an application connects to the master of a replication tree, and moreover, the means by which, upon master failover, it identifies and connects to the newly promoted master. These posts are not concerned with the manner in which replication failure detection and recovery take place. I will share orchestrator-specific configuration/advice, and point out where a cross-DC orchestrator/raft setup plays a part in discovery itself, but for the most part any recovery tool such as MHA, replication-manager, severalnines or other is applicable.

Hard coded configuration deployment

You may use your source/config repo as a master service discovery method of sorts. The master's identity would be hard coded into your, say, git repo, to be updated and deployed to production upon failover. This method is simple, and I've seen it being used by companies in production. Noteworthy:

- This requires a dependency of production on source availability. The failover tool would need to have access to your source environment.
- This requires a dependency of production on build/deploy flow. The failover tool would need to kick off the build, test, deploy process.
- Code deployment time can be long.
- Deployment must take place on all relevant hosts, and cause a mass refresh/reload. It would interrupt processes that cannot reload themselves, such as various commonly used scripts.

Synchronous replication

This series of posts is focused on asynchronous replication, but we will do well to point out a few relevant notes on synchronous replication (Galera, XtraDB Cluster, InnoDB Cluster).

- Synchronous replication can act in single-writer mode or in multi-writer mode.
- In single-writer mode, apps should connect to a particular master. The identity of such a master can be achieved by querying the MySQL members of the cluster.
- In multi-writer mode, apps can connect to any healthy member of the cluster.
This still calls for a check: is the member healthy? Synchronous replication is not intended to work well cross-DC.

The last point should perhaps be highlighted. In a cross-DC setup, and for cross-DC failovers, we are back to the same requirements as with asynchronous replication, and the methods illustrated in this series of posts may apply. VIPs make less sense; proxy-based solutions make a lot of sense.

All posts in this series:

- MySQL master discovery methods, part 1: DNS
- MySQL master discovery methods, part 2: VIP & DNS
- MySQL master discovery methods, part 3: app & service discovery
- MySQL master discovery methods, part 4: Proxy heuristics
- MySQL master discovery methods, part 5: Service discovery & Proxy
- MySQL master discovery methods, part 6: other methods