
News

Posted over 4 years ago by Facebook Code Engineering
Operating systems that serve millions (or even billions) of people can present unprecedented, complex engineering challenges. Last year, we launched the Systems @Scale conference to bring together engineers from various companies to discuss those challenges. At this year’s event, attendees gathered to hear speakers from Facebook, LinkedIn, Uber, and other companies discuss innovative solutions for large-scale information systems. If you missed the event, you can view recordings of the presentations below. If you are interested in future events, visit the @Scale website or follow the @Scale Facebook page.

Keynote
Benjamin Reed, Assistant Professor, San Jose State University
Ben discusses the genesis of Apache ZooKeeper and the lessons learned along the way. What began as a simple idea to create a coordination core that could be used by many different applications at Yahoo! has grown into an Apache open source project used by many companies and projects. Ben talks about how the ideas behind ZooKeeper were conceived, its initial reception, and their initial experiences rolling out to the world.

Apache Hive: From MapReduce to enterprise-grade big data warehousing
Jesus Camacho Rodriguez, Principal Software Engineer, Cloudera
In this talk, Jesus describes the innovations on the journey from batch tool to full-fledged SQL enterprise data warehousing system. In particular, he shows how the community expanded the utility of the system by adding row-level transactional capabilities required for data modifications in star schema databases, introducing optimization techniques that are useful to handle today’s view hierarchies and big data operations, implementing the runtime improvements necessary to bring query latency and concurrency into the realm of interactive operation, and laying the groundwork for using Apache Hive as a relational front end to multiple storage and data systems. All these enhancements were introduced without ever compromising on the original characteristics that made the system popular.

Delos: Storage for the Facebook control plane
Mahesh Balakrishnan, Software Engineer, Facebook; Jason Flinn, Visiting Professor, Facebook, and Professor, University of Michigan
Delos is a new low-dependency storage system for control plane applications at the bottom of the Facebook stack. It supports a rich API (transactions, secondary indices, and range queries) and quorum-style guarantees on availability and durability. Delos is not just a storage system; it’s also an extensible platform for building replicated systems. Delos can be easily extended to support new APIs (e.g., queues or namespaces). Delos can also support entirely different ordering subsystems and switch between them on the fly to obtain performance and reliability trade-offs. As a result, new use cases with various APIs or performance requirements can be satisfied via new implementations of specific layers rather than by a wholesale rewrite of the system.
Related post: Delos: Simple, flexible storage for the Facebook control plane

Accordion: Better memory organization for LSM key-value stores
Eshcar Hillel, Senior Research Scientist, Verizon Media
Log-structured merge (LSM) stores have emerged as the technology of choice for building scalable write-intensive key-value storage systems. Though inherent to the LSM design, frequent compactions are a major pain point because they slow down data store operations, primarily writes, and increase disk wear. Another performance bottleneck in today’s state-of-the-art LSM stores, in particular ones that use managed languages like Java, is the fragmented memory layout of their dynamic memory store. In this talk, Eshcar shows that these pain points may be mitigated via better organization of the memory store. She also presents Accordion — an algorithm that addresses these problems by reapplying the LSM design principles to memory management. Accordion is implemented in the production code of Apache HBase, where it was extensively evaluated. Eshcar demonstrates Accordion’s double-digit performance gains versus the baseline HBase implementation and discusses some unexpected lessons learned in the process.

Observability infra, Uber and Facebook
Yuri Shkuro, Software Engineer, Uber; Michael Bevilacqua-Linn, Software Engineer, Facebook
Distributed tracing systems are a tried-and-true tool for understanding systems at scale, ranging back over a decade to early research systems like X-Trace and Magpie, and popularized in industry with Google’s Dapper. Both Uber and Facebook operate large-scale distributed tracing systems, but each has a different focus. Uber’s Jaeger is used primarily as an observability tool, which gives engineers insight into failures in their microservices architecture, while Facebook has largely used its tracing system, Canopy, to get a detailed view of its web and mobile apps, including the creation of aggregate data sets with a built-in trace-processing system. In this talk, Yuri and Michael walk through Canopy’s built-in trace processing, as well as Uber’s use of traces for more automated root cause analyses of distributed failures.

Continuous deployment at Facebook scale
Boris Grubic, Software Engineer, Facebook; Fangfei Zhou, Software Engineer, Facebook
Continuous deployment is an important requirement for moving fast, given the scale at which Facebook operates. This presentation describes how Facebook solves different aspects of the problem and how all the components are connected to provide an efficient developer experience. Once a developer commits a change, our infrastructure automatically builds the binary, tests it in various ways, and safely deploys it across the fleet. Enabling this workflow for thousands of microservices involves a delicate balance of trade-offs, and so the presentation also calls out the design considerations that guided the evolution of the system over the years and what continuous deployment means for the future.

Observability infra
Elaine Arbaugh, Senior Software Engineer, Affirm
As infrastructure grows, it’s critical to have observability into system performance and reliability in order to identify any current issues or potential future bottlenecks. In this talk, Elaine discusses how Affirm’s custom metrics, monitoring, and alerting systems work; how we’ve scaled them as our traffic and engineering teams have grown rapidly; and examples of scaling-related issues we’ve identified with them. She also discusses the instrumentation we’ve added around SQL queries, which has helped identify several issues that were causing excessive load on our servers and MySQL databases, as well as the tooling we’ve added to help devs optimize their queries. Elaine goes into detail about specific database and machine-level issues Affirm has faced, and how detection, diagnosis, escalation, and resolution were handled.

Enabling next-generation models for PYMK @Scale
Peter Chng, Senior Software Engineer, LinkedIn; Gaojie Liu, Staff Software Engineer, LinkedIn
The People You May Know (PYMK) recommendation service helps LinkedIn’s members identify other members that they might want to connect to and is the major driver for growing LinkedIn’s social network. The principal challenge in developing a service like PYMK is dealing with the sheer scale of computation needed to make precise recommendations with a high recall. This talk presents the challenges LinkedIn faced when bringing its next generation of models for PYMK to production. PYMK relies on Venice, a key-value store, for accessing derived data online in order to generate recommendations. However, the increasing amount of data that had to be processed in real time with our next-generation models required us to collaborate and codesign our systems with the Venice team to generate recommendations in a timely and agile manner while still being resource efficient. Peter and Gaojie describe this journey to LinkedIn’s current solution with an emphasis on how the Venice architecture evolved to support computation at scale, the lessons learned, and the plan to tackle the scalability challenges for the next phase of growth.

Scaling cluster management at Facebook with Tupperware
Kenny Yu, Software Engineer, Facebook
Tupperware is Facebook’s cluster management system and container platform, and it has been running in production since 2011. Today, Tupperware manages millions of containers, and almost all backend services at Facebook are deployed through Tupperware. In this talk, Kenny explores the challenges we encountered over the past eight years as we evolved our system to scale to our global fleet. He discusses scalability challenges we faced, stateful services and how we support them, our approach for opportunistic compute, and upcoming challenges we are tackling next.
Related post: Efficient, reliable cluster management at scale with Tupperware

Preemption in Nomad — a greedy algorithm that scales
Nick Ethier, Software Engineer, Hashicorp; Michael Lange, Software Engineer, Hashicorp
Cluster orchestrators manage and monitor workloads that run in large fleets (tens of thousands) of shared compute resources. This talk is a technical deep dive into the challenge of keeping business-critical applications running by implementing preemption. The talk covers the challenges of implementing preemption for heterogeneous workloads, the algorithm Hashicorp designed, and how it is used. Nick and Michael conclude with remaining challenges and future work.

Disaster recovery at Facebook scale
Shruti Padmanabha, Research Scientist, Facebook; Justin Meza, Research Scientist, Facebook
Facebook operates dozens of data centers globally, each of which serves thousands of interdependent microservices to provide seamless experiences to billions of users across the family of Facebook products. At this scale, seemingly rare occurrences, from hurricanes looming over a data center to lightning striking a switchboard, have threatened the site’s health. These events cause large-scale machine failures at the scope of a data center or significant portions of it, which cannot be addressed by traditional fault-tolerance mechanisms designed for individual machine failures. Handling these failures requires us to develop solutions across the stack, from placing hardware and spare capacity across fault domains to being able to shift traffic smoothly away from affected fault domains to rearchitecting large-scale distributed systems in a fault domain-aware manner. In this talk, Shruti and Justin will describe principles Facebook follows for designing reliable software, tools we built to mitigate and respond to failures, and our continuous testing and validation process.
Posted over 4 years ago by Severalnines
ProxySQL has supported native clustering since v1.4.2. This means multiple ProxySQL instances are cluster-aware; they are aware of each other's state and able to handle configuration changes automatically by syncing up to the most up-to-date configuration based on configuration version, timestamp and checksum value. Check out this blog post which demonstrates how to configure clustering support for ProxySQL and how you could expect it to behave. ProxySQL is a decentralized proxy, recommended to be deployed closer to the application. This approach scales pretty well even up to hundreds of nodes, as it was designed to be easily reconfigurable at runtime. To efficiently manage multiple ProxySQL nodes, one has to make sure that whatever changes are performed on one of the nodes are applied across all nodes in the farm. Without native clustering, one has to manually export the configurations and import them to the other nodes (albeit, you could automate this by yourself). In the previous blog post, we covered ProxySQL clustering via Kubernetes ConfigMap. This approach is reasonably efficient thanks to the centralized configuration in ConfigMap. Whatever is loaded into ConfigMap will be mounted into the pods. Updating the configuration can be done via versioning (modify the proxysql.cnf content and load it into ConfigMap with another name) and then pushing it to the pods depending on the Deployment method scheduling and update strategy. However, in a rapidly changing environment, this ConfigMap approach is probably not the best method because in order to load the new configuration, pod rescheduling is required to remount the ConfigMap volume and this might jeopardize the ProxySQL service as a whole. For example, let's say in our environment, our strict password policy requires forcing MySQL user password expiration every 7 days, which means we would have to keep updating the ProxySQL ConfigMap with the new password on a weekly basis. As a side note, a MySQL user defined inside ProxySQL requires its username and password to match the ones on the backend MySQL servers. That's where we should start making use of ProxySQL native clustering support in Kubernetes, to automatically apply the configuration changes without the hassle of ConfigMap versioning and pod rescheduling. In this blog post, we'll show you how to run ProxySQL native clustering with a headless service on Kubernetes. Our high-level architecture can be illustrated as below: We have 3 Galera nodes running on bare-metal infrastructure deployed and managed by ClusterControl: 192.168.0.21 192.168.0.22 192.168.0.23 Our applications are all running as pods within Kubernetes. The idea is to introduce two ProxySQL instances in between the application and our database cluster to serve as a reverse proxy. Applications will then connect to the ProxySQL pods via a Kubernetes service, which will load balance and fail over across a number of ProxySQL replicas. The following is a summary of our Kubernetes setup: root@kube1:~# kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME kube1 Ready master 5m v1.15.1 192.168.100.201 Ubuntu 18.04.1 LTS 4.15.0-39-generic docker://18.9.7 kube2 Ready 4m1s v1.15.1 192.168.100.202 Ubuntu 18.04.1 LTS 4.15.0-39-generic docker://18.9.7 kube3 Ready 3m42s v1.15.1 192.168.100.203 Ubuntu 18.04.1 LTS 4.15.0-39-generic docker://18.9.7
ProxySQL Configuration via ConfigMap
Let's first prepare our base configuration which will be loaded into ConfigMap.
Create a file called proxysql.cnf and add the following lines: datadir="/var/lib/proxysql" admin_variables= { admin_credentials="proxysql-admin:adminpassw0rd;cluster1:secret1pass" mysql_ifaces="0.0.0.0:6032" refresh_interval=2000 cluster_username="cluster1" cluster_password="secret1pass" cluster_check_interval_ms=200 cluster_check_status_frequency=100 cluster_mysql_query_rules_save_to_disk=true cluster_mysql_servers_save_to_disk=true cluster_mysql_users_save_to_disk=true cluster_proxysql_servers_save_to_disk=true cluster_mysql_query_rules_diffs_before_sync=3 cluster_mysql_servers_diffs_before_sync=3 cluster_mysql_users_diffs_before_sync=3 cluster_proxysql_servers_diffs_before_sync=3 } mysql_variables= { threads=4 max_connections=2048 default_query_delay=0 default_query_timeout=36000000 have_compress=true poll_timeout=2000 interfaces="0.0.0.0:6033;/tmp/proxysql.sock" default_schema="information_schema" stacksize=1048576 server_version="5.1.30" connect_timeout_server=10000 monitor_history=60000 monitor_connect_interval=200000 monitor_ping_interval=200000 ping_interval_server_msec=10000 ping_timeout_server=200 commands_stats=true sessions_sort=true monitor_username="proxysql" monitor_password="proxysqlpassw0rd" monitor_galera_healthcheck_interval=2000 monitor_galera_healthcheck_timeout=800 } mysql_galera_hostgroups = ( { writer_hostgroup=10 backup_writer_hostgroup=20 reader_hostgroup=30 offline_hostgroup=9999 max_writers=1 writer_is_also_reader=1 max_transactions_behind=30 active=1 } ) mysql_servers = ( { address="192.168.0.21" , port=3306 , hostgroup=10, max_connections=100 }, { address="192.168.0.22" , port=3306 , hostgroup=10, max_connections=100 }, { address="192.168.0.23" , port=3306 , hostgroup=10, max_connections=100 } ) mysql_query_rules = ( { rule_id=100 active=1 match_pattern="^SELECT .* FOR UPDATE" destination_hostgroup=10 apply=1 }, { rule_id=200 active=1 match_pattern="^SELECT .*" destination_hostgroup=20 apply=1 }, { rule_id=300 active=1 match_pattern=".*" destination_hostgroup=10 apply=1 } ) mysql_users = ( { username = "wordpress", password = "passw0rd", default_hostgroup = 10, transaction_persistent = 0, active = 1 }, { username = "sbtest", password = "passw0rd", default_hostgroup = 10, transaction_persistent = 0, active = 1 } ) proxysql_servers = ( { hostname = "proxysql-0.proxysqlcluster", port = 6032, weight = 1 }, { hostname = "proxysql-1.proxysqlcluster", port = 6032, weight = 1 } ) Some of the above configuration lines are explained per section below: admin_variables Pay attention on the admin_credentials variable where we used non-default user which is "proxysql-admin". ProxySQL reserves the default "admin" user for local connection via localhost only. Therefore, we have to use other users to access the ProxySQL instance remotely. Otherwise, you would get the following error: ERROR 1040 (42000): User 'admin' can only connect locally We also appended the cluster_username and cluster_password value in the admin_credentials line, separated by semicolon to allow automatic syncing to happen. All variables prefixed with cluster_* are related to ProxySQL native clustering and are self-explanatory. mysql_galera_hostgroups This is a new directive introduced for ProxySQL 2.x (our ProxySQL image is running on 2.0.5). If you would like to run on ProxySQL 1.x, do remove this part and use scheduler table instead. 
We already explained the configuration details in this blog post, How to Run and Configure ProxySQL 2.0 for MySQL Galera Cluster on Docker under "ProxySQL 2.x Support for Galera Cluster". mysql_servers All lines are self-explanatory; they are based on three database servers running in a MySQL Galera Cluster as summarized in the following Topology screenshot taken from ClusterControl: proxysql_servers Here we define a list of ProxySQL peers: hostname - Peer's hostname/IP address port - Peer's admin port weight - Currently unused, but in the roadmap for future enhancements comment - Free form comment field In a Docker/Kubernetes environment, there are multiple ways to discover and link up container hostnames or IP addresses and insert them into this table, either by using ConfigMap, manual insert, via entrypoint.sh scripting, environment variables or some other means. In Kubernetes, depending on the ReplicationController or Deployment method used, guessing the pod's resolvable hostname in advance is somewhat tricky unless you are running on StatefulSet. Check out this tutorial on the StatefulSet pod ordinal index which provides a stable resolvable hostname for the created pods. Combined with a headless service (explained further down), the resolvable hostname format would be: {app_name}-{index_number}.{service} Where {service} is a headless service, which explains where "proxysql-0.proxysqlcluster" and "proxysql-1.proxysqlcluster" come from. If you want to have more than 2 replicas, add more entries accordingly by appending an ascending index number relative to the StatefulSet application name. Now we are ready to push the configuration file into ConfigMap, which will be mounted into every ProxySQL pod during deployment: $ kubectl create configmap proxysql-configmap --from-file=proxysql.cnf Verify if our ConfigMap is loaded correctly: $ kubectl get configmap NAME DATA AGE proxysql-configmap 1 7h57m
Creating ProxySQL Monitoring User
The next step before we start the deployment is to create the ProxySQL monitoring user in our database cluster. Since we are running on a Galera cluster, run the following statements on one of the Galera nodes: mysql> CREATE USER 'proxysql'@'%' IDENTIFIED BY 'proxysqlpassw0rd'; mysql> GRANT USAGE ON *.* TO 'proxysql'@'%'; If you haven't created the MySQL users (as specified under the mysql_users section above), we have to create them as well: mysql> CREATE USER 'wordpress'@'%' IDENTIFIED BY 'passw0rd'; mysql> GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'%'; mysql> CREATE USER 'sbtest'@'%' IDENTIFIED BY 'passw0rd'; mysql> GRANT ALL PRIVILEGES ON sbtest.* TO 'sbtest'@'%'; That's it. We are now ready to start the deployment.
Deploying a StatefulSet
We will start by creating two ProxySQL instances, or replicas for redundancy purposes, using StatefulSet.
Let's start by creating a text file called proxysql-ss-svc.yml and add the following lines: apiVersion: apps/v1 kind: StatefulSet metadata: name: proxysql labels: app: proxysql spec: replicas: 2 serviceName: proxysqlcluster selector: matchLabels: app: proxysql tier: frontend updateStrategy: type: RollingUpdate template: metadata: labels: app: proxysql tier: frontend spec: restartPolicy: Always containers: - image: severalnines/proxysql:2.0.4 name: proxysql volumeMounts: - name: proxysql-config mountPath: /etc/proxysql.cnf subPath: proxysql.cnf ports: - containerPort: 6033 name: proxysql-mysql - containerPort: 6032 name: proxysql-admin volumes: - name: proxysql-config configMap: name: proxysql-configmap --- apiVersion: v1 kind: Service metadata: annotations: labels: app: proxysql tier: frontend name: proxysql spec: ports: - name: proxysql-mysql port: 6033 protocol: TCP targetPort: 6033 - name: proxysql-admin nodePort: 30032 port: 6032 protocol: TCP targetPort: 6032 selector: app: proxysql tier: frontend type: NodePort There are two sections of the above definition - StatefulSet and Service. The StatefulSet is the definition of our pods, or replicas and the mount point for our ConfigMap volume, loaded from proxysql-configmap. The next section is the service definition, where we define how the pods should be exposed and routed for internal or external network. Verify the pod and service states: $ kubectl get pods,svc NAME READY STATUS RESTARTS AGE pod/proxysql-0 1/1 Running 0 4m46s pod/proxysql-1 1/1 Running 0 2m59s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.96.0.1 443/TCP 10h service/proxysql NodePort 10.111.240.193 6033:30314/TCP,6032:30032/TCP 5m28s If you look at the pod's log, you would notice we got flooded with this warning: $ kubectl logs -f proxysql-0 ... 2019-08-01 19:06:18 ProxySQL_Cluster.cpp:215:ProxySQL_Cluster_Monitor_thread(): [WARNING] Cluster: unable to connect to peer proxysql-1.proxysqlcluster:6032 . Error: Unknown MySQL server host 'proxysql-1.proxysqlcluster' (0) The above simply means proxysql-0 was unable to resolve "proxysql-1.proxysqlcluster" and connect to it, which is expected since we haven't created our headless service for DNS records that is going to be needed for inter-ProxySQL communication. Kubernetes Headless Service In order for ProxySQL pods to be able to resolve the anticipated FQDN and connect to it directly, the resolving process must be able to lookup the assigned target pod IP address and not the virtual IP address. This is where headless service comes into the picture. When creating a headless service by setting "clusterIP=None", no load-balancing is configured and no cluster IP (virtual IP) is allocated for this service. Only DNS is automatically configured. When you run a DNS query for headless service, you will get the list of the pods IP addresses. 
Here is what it looks like if we look up the headless service DNS records for "proxysqlcluster" (in this example we had 3 ProxySQL instances): $ host proxysqlcluster proxysqlcluster.default.svc.cluster.local has address 10.40.0.2 proxysqlcluster.default.svc.cluster.local has address 10.40.0.3 proxysqlcluster.default.svc.cluster.local has address 10.32.0.2 While, the following output shows the DNS record for the standard service called "proxysql" which resolves to the clusterIP: $ host proxysql proxysql.default.svc.cluster.local has address 10.110.38.154 To create a headless service and attach it to the pods, one has to define the ServiceName inside the StatefulSet declaration, and the Service definition must have "clusterIP=None" as shown below. Create a text file called proxysql-headless-svc.yml and add the following lines: apiVersion: v1 kind: Service metadata: name: proxysqlcluster labels: app: proxysql spec: clusterIP: None ports: - port: 6032 name: proxysql-admin selector: app: proxysql Create the headless service: $ kubectl create -f proxysql-headless-svc.yml Just for verification, at this point, we have the following services running: $ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 443/TCP 8h proxysql NodePort 10.110.38.154 6033:30200/TCP,6032:30032/TCP 23m proxysqlcluster ClusterIP None 6032/TCP 4s Now, check out one of our pod's log: $ kubectl logs -f proxysql-0 ... 2019-08-01 19:06:19 ProxySQL_Cluster.cpp:215:ProxySQL_Cluster_Monitor_thread(): [WARNING] Cluster: unable to connect to peer proxysql-1.proxysqlcluster:6032 . Error: Unknown MySQL server host 'proxysql-1.proxysqlcluster' (0) 2019-08-01 19:06:19 [INFO] Cluster: detected a new checksum for mysql_query_rules from peer proxysql-1.proxysqlcluster:6032, version 1, epoch 1564686376, checksum 0x3FEC69A5C9D96848 . Not syncing yet ... 2019-08-01 19:06:19 [INFO] Cluster: checksum for mysql_query_rules from peer proxysql-1.proxysqlcluster:6032 matches with local checksum 0x3FEC69A5C9D96848 , we won't sync. You would notice the Cluster component is able to resolve, connect and detect a new checksum from the other peer, proxysql-1.proxysqlcluster on port 6032 via the headless service called "proxysqlcluster". Note that this service exposes port 6032 within Kubernetes network only, hence it is unreachable externally. At this point, our deployment is now complete. Connecting to ProxySQL There are several ways to connect to ProxySQL services. The load-balanced MySQL connections should be sent to port 6033 from within Kubernetes network and use port 30033 if the client is connecting from an external network. To connect to the ProxySQL admin interface from external network, we can connect to the port defined under NodePort section, 30032 (192.168.100.203 is the primary IP address of host kube3.local): $ mysql -uproxysql-admin -padminpassw0rd -h192.168.100.203 -P30032 Use the clusterIP 10.110.38.154 (defined under "proxysql" service) on port 6032 if you want to access it from other pods in Kubernetes network. Then perform the ProxySQL configuration changes as you wish and load them to runtime: mysql> INSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('newuser','passw0rd',10); mysql> LOAD MYSQL USERS TO RUNTIME; You will notice the following lines in one of the pods indicating the configuration syncing completes: $ kubectl logs -f proxysql-0 ... 
2019-08-02 03:53:48 [INFO] Cluster: detected a peer proxysql-1.proxysqlcluster:6032 with mysql_users version 2, epoch 1564718027, diff_check 4. Own version: 1, epoch: 1564714803. Proceeding with remote sync 2019-08-02 03:53:48 [INFO] Cluster: detected peer proxysql-1.proxysqlcluster:6032 with mysql_users version 2, epoch 1564718027 2019-08-02 03:53:48 [INFO] Cluster: Fetching MySQL Users from peer proxysql-1.proxysqlcluster:6032 started 2019-08-02 03:53:48 [INFO] Cluster: Fetching MySQL Users from peer proxysql-1.proxysqlcluster:6032 completed Keep in mind that the automatic syncing only happens if there is a configuration change in the ProxySQL runtime. Therefore, it's vital to run the "LOAD ... TO RUNTIME" statement before you can see the action. Don't forget to save the ProxySQL changes to disk for persistence: mysql> SAVE MYSQL USERS TO DISK;
Limitation
Note that there is a limitation to this setup because ProxySQL does not support saving/exporting the active configuration into a text configuration file that we could use later on to load into ConfigMap for persistence. There is a feature request for this. Meanwhile, you could push the modifications to ConfigMap manually, as sketched below. Otherwise, if the pods were accidentally deleted, you would lose your current configuration because the new pods would be bootstrapped with whatever is defined in the ConfigMap. Special thanks to Sampath Kamineni, who sparked the idea of this blog post and provided insights about the use cases and implementation. Tags:  proxysql kubernetes galera cluster MySQL MariaDB docker containerization
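For the manual ConfigMap update mentioned in the Limitation section above, a minimal approach could look like the sketch below. This is only an illustration and assumes the proxysql.cnf file and ConfigMap name used earlier in this post; depending on your kubectl version, the dry-run flag may need to be written as --dry-run=client.
# Reflect the runtime changes in the local copy of proxysql.cnf, then
# replace the existing ConfigMap in place so future pods bootstrap with it.
$ vi proxysql.cnf
$ kubectl create configmap proxysql-configmap --from-file=proxysql.cnf --dry-run -o yaml | kubectl apply -f -
# Existing pods keep their runtime state; only newly (re)scheduled pods read the updated file.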
Posted over 4 years ago by Abdel-Mawla Gharieb
Some time ago, I was building a new MySQL DB server (5.7.25) and like all DBAs, I have a template of my.cnf that I use for the new instances after changing a few variables based on the instance resources, replication … etc. I had MySQL installed but I struggled to get the service started! MySQL failed to start, no errors were printed at all in the MySQL error log – or rather, the log was not created in the first place – and there were no errors in the system log either, so I had no clue what was going on! After some digging in, I found the bad guy! The variable secure_file_priv referred to a directory that didn't exist. Once I created the directory, everything was fine and the service started. I tried to repeat the same scenario in MySQL 8 and it was much better. The error log indicated the root cause of the issue as below: 2019-03-25T23:39:59.810992Z 0 [ERROR] [MY-010095] [Server] Failed to access directory for --secure-file-priv. Please make sure that directory exists and is accessible by MySQL Server. Supplied value : /tmp/mysql 2019-03-25T23:39:59.811178Z 0 [ERROR] [MY-010119] [Server] Aborting I've created a bug report (Bug #96406) and will update this post when it gets fixed. Conclusion When you use my.cnf templates, make sure that all paths refer to directories that do exist on the system!
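A cheap safeguard when reusing a my.cnf template is to create or verify every directory the template references before starting the service. The snippet below is only a sketch; the secure_file_priv value matches the /tmp/mysql example from the error above, and the ownership line assumes the server runs as the mysql user.
# Pre-create the directory referenced by secure_file_priv in the template
sudo mkdir -p /tmp/mysql
sudo chown mysql:mysql /tmp/mysql
# Quick sanity check of the paths the template points to
grep -E 'secure_file_priv|datadir|log' /etc/my.cnf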
Posted over 4 years ago by EverSQL
From time to time, you might notice that your MySQL database suddenly under-performs. To quickly locate the root cause, one might start digging into MySQL slow query logs, trying to locate the query(ies) taking up the server's resources. Those slow log files contain lots of metrics for each query execution, which can be overwhelming, especially if you have a busy application with lots of queries constantly being executed. There are several command line tools out there that do a fantastic job of analyzing and summarizing those slow log files (examples: pt-query-digest, mysqldumpslow), so if you have never used them, you should definitely check them out. If you're looking for a more visual way to look at those slow logs, another option can be to try out EverSQL's Slow Log Analyzer. This free online tool will analyze, summarize and visualize the slow queries for you. By default, queries are grouped by their fingerprint, and presented in the query container. Each point in the timeline represents the total duration of queries which ended at that point in time. The tool is rather lean at the moment, and we're planning to add more options in the near future. Feel free to send your feedback to [email protected] to help us improve this product. Summary If you're looking for a visual way to explore your MySQL slow query logs, take a look at EverSQL's Slow Log Analyzer and feel free to send your feedback to [email protected].
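If you have not collected a slow log yet, the rough sequence below enables it at runtime and then summarizes it with pt-query-digest, one of the command line tools mentioned above. Treat it as a sketch: the one-second threshold and the file path are arbitrary choices that should be adapted to your environment.
mysql> SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
mysql> SET GLOBAL long_query_time = 1;
mysql> SET GLOBAL slow_query_log = 1;
# ...let the workload run for a while, then summarize from the shell:
$ pt-query-digest /var/log/mysql/slow.log > slow-report.txt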
Posted over 4 years ago by MySQL Performance Blog
In this post, we will discuss a new feature – the MySQL 8.0.17 clone plugin. Here I will demonstrate how easy it is to use to create "classic" replication, building the standby replica from scratch. The clone plugin permits cloning data locally or from a remote MySQL server instance. The cloned data is a physical snapshot of data stored in InnoDB, and this means, for example, that the data can be used to create a standby replica. Let's get hands-on and see how it works.
Installation & validation process of the MySQL 8.0.17 clone plugin
Installation is very easy and it works in the same way as installing other plugins. Below is the command line to install the clone plugin: master [localhost:45008] {msandbox} ((none)) > INSTALL PLUGIN clone SONAME 'mysql_clone.so'; Query OK, 0 rows affected (0.00 sec) And how to check if the clone plugin is active: master [localhost:45008] {msandbox} ((none)) > SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME LIKE 'clone'; +-------------+---------------+ | PLUGIN_NAME | PLUGIN_STATUS | +-------------+---------------+ | clone | ACTIVE | +-------------+---------------+ 1 row in set (0.00 sec) Note that these steps need to be executed on the Donor (aka master) and on the Recipient (aka slave if the clone is being used to create a replica). After executing the installation, the plugin will be loaded automatically across restarts, so you don't need to worry about this anymore. Next, we will create the user with the necessary privilege on the Donor, so we can connect to the instance remotely to clone it. master [localhost:45008] {msandbox} ((none)) > create user clone_user@'%' identified by 'sekret'; Query OK, 0 rows affected (0.01 sec) master [localhost:45008] {msandbox} ((none)) > GRANT BACKUP_ADMIN ON *.* TO 'clone_user'@'%'; Query OK, 0 rows affected (0.00 sec) As a security measure, I recommend replacing the % with the IP/hostname or network mask of the Recipient so the connections will be accepted only by the future replica server. Now, on the Recipient server, the clone user requires the CLONE_ADMIN privilege for replacing recipient data, blocking DDL during the cloning operation and automatically restarting the server. slave1 [localhost:45009] {msandbox} ((none)) > create user clone_user@'localhost' identified by 'sekret'; Query OK, 0 rows affected (0.01 sec) slave1 [localhost:45009] {msandbox} ((none)) > GRANT CLONE_ADMIN ON *.* TO 'clone_user'@'localhost'; Query OK, 0 rows affected (0.00 sec) Next, with the plugin installed and validated, and users created on both Donor and Recipient servers, let's proceed to the cloning process.
Cloning process
As mentioned, the cloning process can be executed locally or remotely. Also, it supports replication, which means that the cloning operation extracts and transfers replication coordinates from the donor and applies them on the recipient. It can be used for GTID or non-GTID replication. So, to begin the cloning process, first, let's make sure that there's a valid donor. This is controlled by the clone_valid_donor_list parameter. As it is a dynamic parameter, you can change it while the server is running.
Using the show variables command will show if the parameter has a valid donor: slave1 [localhost:45009] {msandbox} ((none)) > SHOW VARIABLES LIKE 'clone_valid_donor_list'; +------------------------+-------+ | Variable_name | Value | +------------------------+-------+ | clone_valid_donor_list | | +------------------------+-------+ 1 row in set (0.01 sec) In our case, we need to set it. So let’s change it: slave1 [localhost:45009] {msandbox} ((none)) > set global clone_valid_donor_list = '127.0.0.1:45008'; Query OK, 0 rows affected (0.00 sec) The next step is not mandatory, but using the default log_error_verbosity the error log does not display much information about the cloning progress. So, for this example, I will adjust the verbosity to a higher level (on the Donor and the Recipient): mysql > set global log_error_verbosity=3; Query OK, 0 rows affected (0.00 sec) Now, let’s start the cloning process on the Recipient: slave1 [localhost:45009] {msandbox} ((none)) > CLONE INSTANCE FROM [email protected]:45008 identified by 'sekret'; Query OK, 0 rows affected (38.58 sec) It is possible to observe the cloning progress in the error log of both servers. Below is the output of the Donor: 2019-07-31T12:48:48.558231Z 47 [Note] [MY-013273] [Clone] Plugin Clone reported: 'Server: Acquired backup lock.' 2019-07-31T12:48:48.558307Z 47 [Note] [MY-013457] [InnoDB] Clone Begin Master Task by clone_user@localhost 2019-07-31T12:48:48.876138Z 47 [Note] [MY-013273] [Clone] Plugin Clone reported: 'Server: COM_INIT: Storage Initialize.' 2019-07-31T12:48:48.876184Z 47 [Note] [MY-013273] [Clone] Plugin Clone reported: 'Server: COM_RES_COMPLETE.' 2019-07-31T12:48:53.996976Z 48 [Note] [MY-013458] [InnoDB] Clone set state change ACK: 1 2019-07-31T12:48:53.997046Z 48 [Note] [MY-013273] [Clone] Plugin Clone reported: 'Server: COM_ACK: Storage Ack.' 2019-07-31T12:48:53.997148Z 48 [Note] [MY-013273] [Clone] Plugin Clone reported: 'Server: COM_RES_COMPLETE.' 2019-07-31T12:48:54.096766Z 47 [Note] [MY-013458] [InnoDB] Clone Master received state change ACK 2019-07-31T12:48:54.096847Z 47 [Note] [MY-013458] [InnoDB] Clone State Change : Number of tasks = 1 2019-07-31T12:48:54.096873Z 47 [Note] [MY-013458] [InnoDB] Clone State BEGIN FILE COPY ... 2019-07-31T12:49:33.939968Z 47 [Note] [MY-013457] [InnoDB] Clone End Master Task ID: 0 Passed, code: 0: 2019-07-31T12:49:33.940016Z 47 [Note] [MY-013273] [Clone] Plugin Clone reported: 'Server: COM_EXIT: Storage End.' 2019-07-31T12:49:33.940115Z 47 [Note] [MY-013273] [Clone] Plugin Clone reported: 'Server: COM_RES_COMPLETE.' 2019-07-31T12:49:33.940150Z 47 [Note] [MY-013273] [Clone] Plugin Clone reported: 'Server: Exiting clone protocol.' And the Recipient: 2019-07-31T12:48:48.521515Z 8 [Note] [MY-013272] [Clone] Plugin Clone reported: 'Client: Task Connect.' 2019-07-31T12:48:48.557855Z 8 [Note] [MY-013272] [Clone] Plugin Clone reported: 'Client: Master ACK Connect.' 
2019-07-31T12:48:48.557923Z 8 [Note] [MY-013457] [InnoDB] Clone Apply Begin Master Version Check 2019-07-31T12:48:48.558474Z 8 [Note] [MY-013457] [InnoDB] Clone Apply Version End Master Task ID: 0 Passed, code: 0: 2019-07-31T12:48:48.558507Z 8 [Note] [MY-013457] [InnoDB] Clone Apply Begin Master Task 2019-07-31T12:48:48.558749Z 8 [Warning] [MY-013460] [InnoDB] Clone removing all user data for provisioning: Started 2019-07-31T12:48:48.558769Z 8 [Note] [MY-011977] [InnoDB] Clone Drop all user data 2019-07-31T12:48:48.863134Z 8 [Note] [MY-011977] [InnoDB] Clone: Fix Object count: 371 task: 0 2019-07-31T12:48:53.829493Z 8 [Note] [MY-011977] [InnoDB] Clone Drop User schemas 2019-07-31T12:48:53.829948Z 8 [Note] [MY-011977] [InnoDB] Clone: Fix Object count: 5 task: 0 2019-07-31T12:48:53.838939Z 8 [Note] [MY-011977] [InnoDB] Clone Drop User tablespaces 2019-07-31T12:48:53.839800Z 8 [Note] [MY-011977] [InnoDB] Clone: Fix Object count: 6 task: 0 2019-07-31T12:48:53.910728Z 8 [Note] [MY-011977] [InnoDB] Clone Drop: finished successfully ... 2019-07-31T12:49:33.836509Z 8 [Note] [MY-013272] [Clone] Plugin Clone reported: 'Client: Command COM_EXECUTE.' 2019-07-31T12:49:33.836998Z 8 [Note] [MY-013272] [Clone] Plugin Clone reported: 'Client: Master ACK COM_EXIT.' 2019-07-31T12:49:33.839498Z 8 [Note] [MY-013272] [Clone] Plugin Clone reported: 'Client: Master ACK Disconnect : abort: false.' 2019-07-31T12:49:33.851403Z 0 [Note] [MY-013272] [Clone] Plugin Clone reported: 'Client: Command COM_EXECUTE.' 2019-07-31T12:49:33.851796Z 0 [Note] [MY-013272] [Clone] Plugin Clone reported: 'Client: Task COM_EXIT.' 2019-07-31T12:49:33.852398Z 0 [Note] [MY-013272] [Clone] Plugin Clone reported: 'Client: Task Disconnect : abort: false.' 2019-07-31T12:49:33.852472Z 0 [Note] [MY-013457] [InnoDB] Clone Apply End Task ID: 1 Passed, code: 0: 2019-07-31T12:49:33.940156Z 8 [Note] [MY-013272] [Clone] Plugin Clone reported: 'Client: Task COM_EXIT.' 2019-07-31T12:49:33.940810Z 8 [Note] [MY-013272] [Clone] Plugin Clone reported: 'Client: Task Disconnect : abort: false.' 2019-07-31T12:49:33.944244Z 8 [Note] [MY-013457] [InnoDB] Clone Apply End Master Task ID: 0 Passed, code: 0: Note that the MySQL server on the Recipient will be restarted after the cloning process finishes. After this, the database is ready to be accessed and the final step is setting up the replica. The replica process Both binary log position (filename, offset) and GTID coordinates are extracted and transferred from the donor MySQL server instance. 
The queries below can be executed on the cloned MySQL server instance to view the binary log position or the GTID of the last transaction that was applied: # Binary log position slave1 [localhost:45009] {msandbox} ((none)) > SELECT BINLOG_FILE, BINLOG_POSITION FROM performance_schema.clone_status; +------------------+-----------------+ | BINLOG_FILE | BINLOG_POSITION | +------------------+-----------------+ | mysql-bin.000001 | 437242601 | +------------------+-----------------+ 1 row in set (0.01 sec) # GTID slave1 [localhost:45009] {msandbox} ((none)) > SELECT @@GLOBAL.GTID_EXECUTED; +----------------------------------------------+ | @@GLOBAL.GTID_EXECUTED | +----------------------------------------------+ | 00045008-1111-1111-1111-111111111111:1-32968 | +----------------------------------------------+ 1 row in set (0.00 sec) With the information in hand, we need to execute the CHANGE MASTER command accordingly: slave1 [localhost:45009] {msandbox} ((none)) > CHANGE MASTER TO MASTER_HOST = '127.0.0.1', MASTER_PORT = 45008,MASTER_USER='root',MASTER_PASSWORD='msandbox',MASTER_AUTO_POSITION = 1; Query OK, 0 rows affected, 2 warnings (0.02 sec) slave1 [localhost:45009] {msandbox} ((none)) > start slave; Query OK, 0 rows affected (0.00 sec) Limitations The clone plugin has some limitations and they are described here. In my opinion, two major limitations will be faced by the community. First, it is the ability to clone only InnoDB tables. This means that  MyISAM and CSV tables stored in any schema including the sys schema will be cloned as empty tables. The other limitation is regarding DDL, including TRUNCATE TABLE, which is not permitted during a cloning operation. Concurrent DML, however, is permitted. If a DDL is running, then the clone operation will wait for the lock: +----+-----------------+-----------------+------+------------------+-------+---------------------------------------------------------------+----------------------------------+ | Id | User | Host | db | Command | Time | State | Info | +----+-----------------+-----------------+------+------------------+-------+---------------------------------------------------------------+----------------------------------+ | 63 | clone_user | localhost:34402 | NULL | clone | 3 | Waiting for backup lock | NULL | +----+-----------------+-----------------+------+------------------+-------+---------------------------------------------------------------+----------------------------------+ Otherwise, if the clone operation is running, the DDL will wait for the lock: +----+-----------------+-----------------+------+------------------+-------+---------------------------------------------------------------+----------------------------------+ | Id | User | Host | db | Command | Time | State | Info | +----+-----------------+-----------------+------+------------------+-------+---------------------------------------------------------------+----------------------------------+ | 52 | msandbox | localhost | test | Query | 5 | Waiting for backup lock | alter table joinit engine=innodb | | 60 | clone_user | localhost:34280 | NULL | clone | 15 | Receiving from client | NULL | | 61 | clone_user | localhost:34282 | NULL | clone | 15 | Receiving from client | NULL | | 62 | clone_user | localhost:34284 | NULL | clone | 6 | Receiving from client | NULL | +----+-----------------+-----------------+------+------------------+-------+---------------------------------------------------------------+----------------------------------+ Conclusion Creating replicas has 
become much easier with the help of the MySQL 8.0.17 clone plugin. This feature can be used over SSL connections and with encrypted data as well. At the moment of publication of this blog post, the clone plugin can be used not only to set up asynchronous replicas but also to provision Group Replication members. Personally, I believe that in the near future this feature will also be used for Galera clusters. This is my two cents on what the future holds.
Useful Resources
Finally, you can reach us through social networks or our forum, or access our material using the links presented below: Blog: https://www.percona.com/blog/ Solution Briefs: https://www.percona.com/resources/solution-brief White Papers: https://www.percona.com/resources/white-papers Ebooks: https://www.percona.com/resources/ebooks Technical Presentations archive: https://www.percona.com/resources/technical-presentations Videos/Recorded Webinars: https://www.percona.com/resources/videos Forum: https://www.percona.com/forums/ Knowledge Base (Percona Subscriber exclusive content): https://customers.percona.com
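One more observability note on the cloning step described above: besides the error log, MySQL 8.0.17 exposes clone metadata in performance_schema, which can be queried on the recipient from another session while CLONE INSTANCE is running. A quick sketch (column lists trimmed for readability, assuming the clone plugin is installed as shown earlier):
slave1 [localhost:45009] {msandbox} ((none)) > SELECT STAGE, STATE, ESTIMATE, DATA FROM performance_schema.clone_progress;
slave1 [localhost:45009] {msandbox} ((none)) > SELECT STATE, ERROR_NO, ERROR_MESSAGE FROM performance_schema.clone_status;
The clone_status table is the same one queried above for the binary log coordinates, so it also tells you whether the last clone operation completed successfully.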
Posted over 4 years ago by Percona Community
The blog post How to Compile Percona Server for MySQL 5.7 in Raspberry Pi 3 by Walter Garcia inspired me to create an updated install of Percona Server for the Raspberry Pi 3+. This how-to post covers installing from source and being able to use Percona Server for MySQL in any of your maker projects. I have included everything you need to have a complete Percona Server, ready to store data collection for your weather station, your GPS data, or any other project you can think of that would require data collection in a database. My years of hands-on support of Percona Server enable me to customize the install a bit. I wanted to build a full Percona "Stack" including XtraBackup, and Percona Toolkit.
Hardware and Software
Tested on a Raspberry Pi 3B and 3B+. The OS is Raspbian Buster. You can download it here: https://www.raspberrypi.org/downloads/raspbian/ I chose the option: Raspbian Buster with Desktop. 64GB SD Card, not required, but I would not suggest less than 32GB. For best performance, use an SD card that is rated between 90 and 100 MB per sec.
The Step-by-Step Guide
Let's get on and build!
1. Prep Your Raspberry Pi You will notice I use sudo rather often, even during the make and cmake. I found that running as the default pi user for the install gave me issues. Using sudo for root-based commands is the best practice that I always try to follow. sudo apt-get update sudo apt-get upgrade sudo apt-get install screen cmake debhelper autotools-dev libaio-dev \ automake libtool bison bzr libgcrypt20-dev flex autoconf libtool libncurses5-dev \ mariadb-client-10.0 libboost-dev libreadline-dev libcurl4-openssl-dev libtirpc-dev Create a swapfile. Very much needed for these two compiles. sudo dd if=/dev/zero of=/swapfile2GB bs=1M count=2048 sudo mkswap /swapfile2GB sudo swapon /swapfile2GB sudo chmod 0600 /swapfile2GB
2. Build Percona Server for MySQL This will take about 3.5 to 4 hours to run. Download the percona-server 5.7.26 source tarball wget https://www.percona.com/downloads/Percona-Server-5.7/Percona-Server-5.7.26-29/source/tarball/percona-server-5.7.26-29.tar.gz Extract to /home/pi cd percona-server-5.7.26-29 sudo cmake -DDOWNLOAD_BOOST=ON -DWITH_BOOST=$HOME/boost . sudo make -j3 sudo make install
3. Build Percona XtraBackup This will take about 3 hours. sudo apt-get install libcurl4-gnutls-dev libev-dev libev4 Note: installing the package libcurl4-gnutls-dev will remove the package libcurl4-openssl-dev. I had compile failures for XtraBackup when libcurl4-openssl-dev was installed. Download XtraBackup 2.4.14 wget https://www.percona.com/downloads/Percona-XtraBackup-2.4/Percona-XtraBackup-2.4.14/source/tarball/percona-xtrabackup-2.4.14.tar.gz Extract to /home/pi cd percona-xtrabackup-2.4.14 sudo cmake -DWITH_BOOST=$HOME/boost -DBUILD_CONFIG=xtrabackup_release -DWITH_MAN_PAGES=OFF sudo make -j3 sudo make install
4. Build Percona Toolkit Done in a few minutes. wget https://www.percona.com/downloads/percona-toolkit/3.0.13/source/tarball/percona-toolkit-3.0.13.tar.gz Extract to /home/pi cd percona-toolkit-3.0.13 perl Makefile.PL make make test sudo make install
5. Create the mysql user sudo useradd mysql -d /var/lib/mysql Create directories for mysql to use. sudo mkdir -p /var/lib/mysql/data sudo mkdir /var/lib/mysql/binlog sudo mkdir /var/lib/mysql/tmp sudo mkdir /var/log/mysql Change ownership of the directories to the mysql user. sudo chown -R mysql:mysql /var/lib/mysql sudo chown mysql:mysql /var/log/mysql sudo chown -R mysql:mysql /usr/local/mysql
6. Prep my.cnf sudo rm -fR /etc/mysql I like to remove any leftover mysql directories or files in /etc before I create my file in the next step. sudo vi /etc/my.cnf Add these lines, below, to your new my.cnf file. [mysqld] port = 3306 socket = /var/lib/mysql/mysql.sock pid-file = /var/lib/mysql/mysqld.pid basedir = /usr/local/mysql datadir = /var/lib/mysql/data general_log_file = /var/log/mysql/mysql-general.log log-error = /var/log/mysql/mysqld.log slow_query_log_file = /var/log/mysql/slow_query.log slow_query_log = 0 # Slow query log off lc-messages-dir = /usr/local/mysql/share plugin_dir = /usr/local/mysql/lib/mysql/plugin skip-external-locking log-bin = /var/lib/mysql/binlog/mysql-bin sync_binlog = 1 expire_logs_days = 5 server-id = 1 binlog_format = mixed innodb_data_home_dir = /var/lib/mysql/data innodb_log_group_home_dir = /var/lib/mysql/data innodb_log_files_in_group = 2 innodb_buffer_pool_size = 128M innodb_log_file_size = 16M innodb_log_buffer_size = 8M innodb_flush_log_at_trx_commit = 1 innodb_lock_wait_timeout = 50 innodb_flush_method = O_DIRECT innodb_file_per_table = 1 innodb_buffer_pool_instances = 1 Save the my.cnf file. sudo chown mysql:mysql /etc/my.cnf
7. Initialize the database files At this point, you can initialize the database files: sudo /usr/local/mysql/bin/mysqld --initialize-insecure --user=mysql --basedir=/usr/local/mysql --datadir=/var/lib/mysql/data
8. Start Percona Server This is the exciting part coming up. We are going to start Percona Server: sudo /usr/local/mysql/bin/mysqld_safe --defaults-file=/etc/my.cnf --user=mysql & If everything went well, you should see the following lines in your /var/log/mysql/mysqld.log: 2019-06-24T19:56:52.071765Z 0 [Note] Server hostname (bind-address): '*'; port: 3306 2019-06-24T19:56:52.072251Z 0 [Note] IPv6 is available. 2019-06-24T19:56:52.072385Z 0 [Note]   - '::' resolves to '::'; 2019-06-24T19:56:52.072770Z 0 [Note] Server socket created on IP: '::'. 2019-06-24T19:56:52.132587Z 0 [Note] InnoDB: Buffer pool(s) load completed at 190624 15:56:52 2019-06-24T19:56:52.136886Z 0 [Note] Failed to start slave threads for channel '' 2019-06-24T19:56:52.178087Z 0 [Note] Event Scheduler: Loaded 0 events 2019-06-24T19:56:52.179153Z 0 [Note] /usr/local/mysql/bin/mysqld: ready for connections. Version: '5.7.26-29-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306 Source distribution
9. Test login to Percona Server mysql -u root --socket=/var/lib/mysql/mysql.sock If you plan on keeping this as an active Percona Server, I strongly advise you to remove the root user and create your own privileged user. First, stop Percona Server /usr/local/mysql/bin/mysqladmin -u root --socket=/var/lib/mysql/mysql.sock shutdown Create the mysqld.service unit and enable it. sudo vi /etc/systemd/system/mysqld.service [Unit] Description=Percona Server Version 5.7.x After=syslog.target After=network.target [Install] WantedBy=multi-user.target [Service] User=mysql Group=mysql ExecStart=/usr/local/mysql/bin/mysqld --defaults-file=/etc/my.cnf TimeoutSec=300 WorkingDirectory=/usr/local/mysql/bin #Restart=on-failure #RestartPreventExitStatus=1 PrivateTmp=true sudo systemctl enable mysqld.service Now if everything was done correctly, you should be able to reboot your Pi and Percona Server will auto-start on OS boot. This is it: you now have an entire Percona Server for MySQL up and running, with XtraBackup for your daily backups and Percona Toolkit to assist you with daily and complicated tasks.
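Since XtraBackup 2.4 was built as part of this stack, a first backup could look like the sketch below. The target directory is a placeholder and the socket path matches the /etc/my.cnf created above; make sure the destination has enough free space on the SD card.
# Take a full physical backup through the local socket
sudo xtrabackup --backup --user=root --socket=/var/lib/mysql/mysql.sock --target-dir=/home/pi/backups/full
# Prepare the backup so it is consistent and ready to restore
sudo xtrabackup --prepare --target-dir=/home/pi/backups/full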
If you try this out, I’d love to hear about the uses you make of your Percona Server on a Raspberry Pi. —Image based on Photo by Hector Bermudez on Unsplash The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors’ own. When using the advice from this or any other online resource test ideas before applying them to your production systems, and always secure a working backup.
Posted over 4 years ago by Dave Stokes
Last time we used the new MySQL Shell to set up a sandbox instance of InnoDB Cluster in Six Steps. Now, to make that cluster function, we need to start up MySQL Router to proxy connections. And since we do not often show how to do things on Windows, we will configure Router on that operating system.
Cleaning Up From Last Time
If you followed the instructions from last time you may find your InnoDB Cluster non-functional. If you have rebooted, then you will find that the component pieces of the cluster have not been started. To clean things up you will have to do the following:
dba.killSandboxInstance(3310)
dba.deleteSandboxInstance(3310)
dba.deleteSandboxInstance(3320)
dba.deleteSandboxInstance(3330)
Then recreate the sandbox instances using the instructions from the last post.
Lets Get Ready To Route
With Windows, we will use the MySQL Installer to configure Router. If Router is not installed, you can also use the MySQL Installer to add the program. Click on 'Reconfigure' to set up MySQL Router.
MySQL Installer - Note that Router is installed on this system. Click on 'Reconfigure' to set up MySQL Router
We are now ready to explore the configuration options for MySQL Router. We need to specify the host which, in the case of our sandbox, is 'localhost', and provide the account and password to test the connection. The host needs to be specified and in our case it is the localhost. You will want to supply the account, password, and port, and then test the connection by selecting the 'Test' button. And remember we set the port when we established the sandbox instance with the dba.deploySandboxInstance(3310) command.
We need to supply the host, port, account & password before testing the connection
If you want to change the proxy ports at this time you certainly can.
We can also change the ports for the proxy at this time too!
Born To Run
So we have MySQL Router configured and it is time to get it running. Time to select the 'Execute' button in the lower right corner.
When you are ready to start MySQL Router select 'Execute'
The MySQL Router program will start. MySQL Router starts up and, since we are on Windows, it registers as a Windows Service. Log files are available for those who want the details.
The first part of the log details the settings of MySQL Router
And the second half of the log shows us the proxy ports and that MySQL Router is a Windows Service
Special Account
Did you see it? The special account that appears like a hidden Easter Egg? It was in the first part of the log. Router set up an account mysql_router1_5vbo3umtvi6y@'%' for cluster management. It also created a configuration file for later use. More on this in a future post.
But Is It Working?!?
Start another MySQL shell and connect to the read-only proxy port.
 \c root@localhost:6447
Creating a session to 'root@localhost:6447'
Please provide the password for 'root@localhost:6447': ******
Save password for 'root@localhost:6447'? [Y]es/[N]o/Ne[v]er (default No): y
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 24
Server version: 8.0.17 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
JS > \sql
Switching to SQL mode... Commands end with ;
SQL > select @@port;
+--------+
| @@port |
+--------+
|   3330 |
+--------+
1 row in set (0.0002 sec)
So we are talking to the instance at port 3330! Woo-hoo. Without a load, connecting again will probably not switch us to the other server at 3320, but we will try. And so we start another shell.
Back to the read-only experiment: I will confess that I kept seeing good ol' 3330 after several attempts and only got lucky when I tried the read-only port for the X Protocol.

SQL > select @@port;
+--------+
| @@port |
+--------+
|   3330 |
+--------+
1 row in set (0.0003 sec)
SQL > \c root@localhost:6449
Creating a session to 'root@localhost:6449'
Please provide the password for 'root@localhost:6449': ******
Save password for 'root@localhost:6449'? [Y]es/[N]o/Ne[v]er (default No): y
Fetching schema names for autocompletion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 39 (X protocol)
Server version: 8.0.17 MySQL Community Server - GPL
No default schema selected; type \use to set one.
SQL > select @@port;
+--------+
| @@port |
+--------+
|   3320 |
+--------+
1 row in set (0.0004 sec)

Conclusion

From the last blog we know we can set up a sandbox InnoDB Cluster with six commands. And this time we were able to set up MySQL Router in about three minutes. So, ten minutes to a highly available MySQL InnoDB Cluster. This may not impress you, but for someone like me who started with basic asynchronous replication in early versions of MySQL, this is amazing.
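For reference, those six commands from the previous post look roughly like this (a sketch rather than a verbatim copy of that post; the cluster name 'demo' is illustrative, and you also connect the shell to the first instance before creating the cluster):

// Deploy three sandbox instances (each prompts for a root password)
dba.deploySandboxInstance(3310)
dba.deploySandboxInstance(3320)
dba.deploySandboxInstance(3330)
// Connect the shell to the instance that will become the primary
\c root@localhost:3310
// Create the cluster and add the other two members
var cluster = dba.createCluster('demo')
cluster.addInstance('root@localhost:3320')
cluster.addInstance('root@localhost:3330')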
Posted over 4 years ago by Continuent
Continuent is pleased to announce that the following new software releases are now available:

Tungsten Clustering version 5.4.0
Tungsten Replicator version 5.4.0

Release 5.4.0 is significant in that it introduces MySQL 8 support, along with many new features, stability improvements, and bug fixes.

Highlights common to both products

Improvements, new features and functionality

Two new utility scripts have been added to the release to help with setting the Replicator position: tungsten_find_position, which locates information in the THL based on the provided MySQL binary log event position and outputs a dsctl set command; and tungsten_find_seqno, which locates information in the THL based on the provided sequence number and outputs a dsctl set command.

A new, beta-quality command called prov-sl.sh has been included, which is intended to eventually replace the current tungsten_provision_slave script. Currently, prov-sl.sh supports provisioning slaves using the mysqldump and xtrabackup tools, and is MySQL 8-compatible.
The prov-sl.sh command is written in Bash, has fewer dependencies than the current script, and is meant to fix a number of issues with the current version.
Backups are streamed from source to target so that an intermediate write to disk is not performed, resulting in faster provisioning times.

Upgraded the Drizzle driver to support the MySQL 8 authentication protocols (SHA256, caching_sha2).

The Replicator has been updated to support the new character sets introduced in MySQL 5.7 and MySQL 8.0, including the utf8mb4 series.

Bug Fixes and Behavior Changes

The tpm command now properly handles network interface names containing colons and/or dots.

The tpm diag --hosts={hostlist} command has been fixed. In a Staging-method install, using this option limits the tpm diag command to the specified hosts. In an INI-based install, tpm diag by itself gets diagnostics for the local node only, and specifying --hosts={hostlist} will attempt to obtain diagnostics from the named hosts only.

When using tpm with the INI method, the command would search multiple locations for suitable INI files. This could lead to multiple definitions of the same service, which could in turn lead to duplication of the installation process and occasional failures. If multiple INI files are found, a warning is now produced to highlight the potential for failures.

When executing mysqldump, all Tungsten tools no longer use the --add-drop-database flag, as it prevents MySQL 8+ from restoring the dump.

The Geometry datatype is now properly extracted.

Added support for the missing charset GB18030.

Highlights in the Clustering product

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environments which make use of these tools should be checked and updated for the new configuration:

The Connector passThroughMode configuration option is now deprecated and will be removed from tungsten-connector/conf/connector.properties. There is currently no tpm option for this, and it is undocumented. The default remains passThroughMode=true.

Improvements, new features and functionality

A new utility script, tungsten_post_process, has been added to the release; it assists with the graceful maintenance of the static cross-site replicator configuration files on disk.

A new utility script, tungsten_reset_manager, has been added to the release; it assists with the graceful reset of the manager's dynamic state files on disk.

The Tungsten stack now supports the new MySQL 8.0 authentication plugins. Both sha256_password and caching_sha2_password (the new default) are supported by the Replicator, Manager, and Connector. The Drizzle driver has been updated to support these new authentication methods, and MySQL Connector/J 8 is also supported. In order to be fully transparent with the new defaults, when connected to a MySQL 8+ data source the Connector will advertise caching_sha2_password as the default plugin. With earlier versions of MySQL (pre-8.0), the previous default, mysql_native_password, is used and advertised to client applications. (See the short SQL note at the end of this post.)

There is a new Connector property option for tpm, property=statement.display.size.in.kb=NNN, which allows for increased verbosity when Connector logging is set to debug or trace.

Removed spurious warnings during composite switch or failover.

Bug Fixes

The Connector will now wait indefinitely for a Master to become available before finishing startup.

Fixed the RPM-based post-install chown command so that symlinked directories get correct ownership.

The Tungsten Clustering RPM now preserves the original OS group memberships for the tungsten user.
Long service names within cctrl could cause output to fail when displaying information. The underlying issue has been fixed. Because long service names can cause formatting issues, a new option, --cctrl-column-width, has been added, which can be used to configure the minimum column width used to display information.

MySQL ping commands are now reconnected/retried upon a "server gone away" error (Proxy mode ONLY).

Fixed an edge case where the master node and the coordinator node were the same and that node was rebooted; the failover would not complete and would throw an error.

Highlights for the Replicator product only

The trepctl command now properly handles the -all-services option for the reset sub-command.

The Redshift applier now allows AWS authentication using IAM Roles. Previously, authentication was possible only via Access and Secret Key pairs.

Loading data into Redshift would fail with an error if a row of data contained a specific control character (0x00, NULL).

The ddl_map.json file used by the apply_schema_changes filter was missing a rule to handle ALTER TABLE statements when replicating between MySQL and Redshift.

The extract_schema_change filter wasn't escaping " (double quotes), and the generated JSON would then cause the applier to error out.

Release notes:
https://docs.continuent.com/tungsten-clustering-5.4/release-notes-5-4-0.html
https://docs.continuent.com/tungsten-replicator-5.4/release-notes-5-4-0.html
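A short note on the MySQL 8 authentication change mentioned above: the statements below are standard MySQL 8 SQL for checking which plugin an account uses and for creating accounts with either plugin. They are illustrative only; the account names and passwords are hypothetical, and nothing here is specific to the Tungsten tooling.

-- Inspect which authentication plugin each existing account uses
SELECT user, host, plugin FROM mysql.user;

-- Create an account with the MySQL 8 default plugin
CREATE USER 'app_user'@'%' IDENTIFIED WITH caching_sha2_password BY 'ChangeMe!123';

-- Keep the pre-8.0 plugin for clients that cannot negotiate caching_sha2_password yet
CREATE USER 'legacy_user'@'%' IDENTIFIED WITH mysql_native_password BY 'ChangeMe!123';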