
News

Posted almost 4 years ago by InsideMySQL.com
Dear MySQL users, MySQL Connector/Python 8.0.21 is the latest GA release version of the MySQL Connector/Python 8.0 series. The X DevAPI enables application developers to write code that combines the strengths of the relational and document models using a modern, NoSQL-like syntax that does not assume previous experience writing traditional SQL.

To learn more about how to write applications using the X DevAPI, see http://dev.mysql.com/doc/x-devapi-userguide/en/ For more information about how the X DevAPI is implemented in MySQL Connector/Python, and its usage, see http://dev.mysql.com/doc/dev/connector-python

Please note that the X DevAPI requires MySQL Server version 8.0 or higher with the X Plugin enabled. For general documentation about how to get started using MySQL as a document store, see http://dev.mysql.com/doc/refman/8.0/en/document-store.html

To download MySQL Connector/Python 8.0.21, see the “General Availability (GA) Releases” tab at http://dev.mysql.com/downloads/connector/python/ Enjoy!

Changes in MySQL Connector/Python 8.0.21 (2020-07-13, General Availability)

Functionality Added or Changed

* Implemented context managers to define the runtime context to establish when executing a with statement. It was added to the Connection and Cursor objects in the classic protocol, and to Session in the X DevAPI. Thanks to WEN-FENG SHIH for the patch. (Bug #19586444, Bug #71663, Bug #28779784, Bug #89113)

* Added compression support to X Protocol connections. Supported compression algorithms are zlib/deflate and lz4. Specifically, the supported algorithms (in order of preference) are lz4_message and deflate_stream. The compression threshold is set at 1000 bytes. A new compress X DevAPI connection option accepts either required, disabled, or preferred (default).
  + preferred: If Connector/Python and the server cannot reach consensus on algorithm or styles, then no compression is used; and this is logged.
  + required: Like preferred, except the connection is terminated with an error if the connector and server are unable to reach agreement.
  + disabled: Compression is not used.

Bugs Fixed

* The connect_timeout option applied to all blocking socket operations but now properly only applies to the timeout when establishing the connection. (Bug #30996790)

* In the X DevAPI implementation, an error was raised when using the fractional part in DATETIME types. The error: “ValueError: Datetime mapping scenario unhandled” (Bug #30950184)

* Because MySQL stores TEXT types as BLOB and JSON as LONGBLOB, the TEXT and JSON types are now converted to str and the rest of the BLOB types as bytes. Previously, as an example, a column of type TEXT that only contained digits was read as type=integer by Connector/Python. (Bug #29808262, Bug #95437)

* Connector/Python assumed that MySQL libraries were under lib/ when compiling the C extension, but now uses the mysql_config flags, which adds the correct include and library paths. Thanks to Daniël van Eeden for the patch. (Bug #29181907, Bug #93846)

On Behalf of Oracle/MySQL Engineering Team, Balasubramanian Kandasamy
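The context-manager support described above means connections and cursors can be closed automatically by a with statement. As a minimal sketch of the protocol this release implements, using a hypothetical stand-in class rather than a live server (no MySQL instance is assumed here):

```python
# Sketch of the context-manager protocol that Connector/Python 8.0.21
# adds to its Connection and Cursor objects. FakeConnection is a
# hypothetical stand-in; a real program would call
# mysql.connector.connect(...) against a running server instead.
class FakeConnection:
    def __init__(self):
        self.closed = False

    def __enter__(self):
        # "with conn as c" binds c to the connection itself
        return self

    def __exit__(self, exc_type, exc, tb):
        # Runs on normal exit *and* on exceptions: always clean up
        self.close()
        return False  # do not swallow exceptions

    def close(self):
        self.closed = True

conn = FakeConnection()
with conn as c:
    assert c is conn
    assert not c.closed
# Leaving the with-block closed the connection automatically
print(conn.closed)  # True
```

With the real connector the pattern would be `with mysql.connector.connect(...) as cnx:` and `with cnx.cursor() as cur:`, ensuring resources are released even if a query raises.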
Posted almost 4 years ago by Marcelo Altmann
TL;DR

How to add a foreign key to a new table:

CREATE TABLE child (ID INT PRIMARY KEY, parent_ID INT, FOREIGN KEY my_fk (parent_ID) REFERENCES parent(ID));

How to add a foreign key to an existing table:

ALTER TABLE child ADD FOREIGN KEY my_fk (parent_ID) REFERENCES parent(ID);

MySQL has the ability to enforce that a referenced record exists on a parent table when you are adding or modifying data on a child table, and to validate that a record is no longer referenced when you are deleting it, so that your database is not left inconsistent. This is called a Foreign Key. As seen above, you can either create your table with an FK from the beginning or alter your table to add a new constraint after table creation time. Apart from the syntax used to refer to a field on the parent table, you can also control the behavior when you UPDATE or DELETE a record on the parent table that is referenced in the child table. This is controlled by the optional clauses ON UPDATE and ON DELETE; the options are as follows. We will use the tables below as an example:

mysql> SELECT * FROM parent;
+----+
| ID |
+----+
| 1 |
+----+
1 row in set (0.00 sec)

mysql> SELECT * FROM child;
+----+-----------+
| ID | parent_ID |
+----+-----------+
| 1 | 1 |
+----+-----------+
1 row in set (0.00 sec)

RESTRICT or NO ACTION – The default behavior when you omit ON UPDATE or ON DELETE. If you try to update a field on the parent table that is referenced in the child table, the update/delete will be blocked:

mysql> UPDATE parent SET ID = 2 WHERE ID = 1;
ERROR 1451 (23000): Cannot delete or update a parent row: a foreign key constraint fails (`test`.`child`, CONSTRAINT `child_ibfk_1` FOREIGN KEY (`parent_ID`) REFERENCES `parent` (`ID`))

Although not recommended, in extreme cases you can force MySQL to bypass the above error by disabling the FK check:

SET foreign_key_checks=0;

Keep in mind that this defeats the whole reason for having an FK in the first place!
SET DEFAULT – Recognized by the parser (it won’t give any error); however, it is interpreted as RESTRICT.

CASCADE – Whatever action you perform on the parent table will replicate to the child table:

mysql> UPDATE parent SET ID = 2 WHERE ID = 1;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> SELECT * FROM parent;
+----+
| ID |
+----+
| 2 |
+----+
1 row in set (0.00 sec)

mysql> SELECT * FROM child;
+----+-----------+
| ID | parent_ID |
+----+-----------+
| 1 | 2 |
+----+-----------+
1 row in set (0.00 sec)

SET NULL – Whatever action you perform on the parent table, the child column is reset to NULL (make sure the child field is not declared NOT NULL):

mysql> UPDATE parent SET ID = 2 WHERE ID = 1;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> SELECT * FROM parent;
+----+
| ID |
+----+
| 2 |
+----+
1 row in set (0.01 sec)

mysql> SELECT * FROM child;
+----+-----------+
| ID | parent_ID |
+----+-----------+
| 1 | NULL |
+----+-----------+
1 row in set (0.00 sec)
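The referential actions above are not unique to MySQL. The same CASCADE semantics can be sketched with Python's built-in sqlite3 module; note this uses SQLite rather than MySQL (an assumption for the sake of a self-contained example), where foreign keys must be enabled explicitly per connection:

```python
import sqlite3

# Illustration only: SQLite, not MySQL. FK enforcement is off by
# default in SQLite and must be switched on with a PRAGMA.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE parent (ID INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE child (
                 ID INTEGER PRIMARY KEY,
                 parent_ID INTEGER
                   REFERENCES parent(ID) ON UPDATE CASCADE)""")
con.execute("INSERT INTO parent VALUES (1)")
con.execute("INSERT INTO child VALUES (1, 1)")

# ON UPDATE CASCADE: changing the parent key follows into the child
con.execute("UPDATE parent SET ID = 2 WHERE ID = 1")
print(con.execute("SELECT parent_ID FROM child").fetchone())  # (2,)
```

Replacing ON UPDATE CASCADE with the default (RESTRICT-like) behavior would make the same UPDATE raise an IntegrityError instead, mirroring MySQL's ERROR 1451.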
Posted almost 4 years ago by René Cannaò
We’re very excited to open up our ProxySQL Total Training online course to the public! Get yourself, or your team, trained by our professional ProxySQL trainers, who have built and actively maintain ProxySQL, in our upcoming Total Training course. Designed for DBAs, application developers and IT professionals in the industry, the course will focus on real world implementation and hands-on labs that will ensure you acquire the needed skills to deploy efficient, scalable and highly available solutions with ProxySQL! Immerse yourself in the world of ProxySQL with our two-day ProxySQL Total Training.

ProxySQL Total Training topics include:

ProxySQL use cases and real world examples
How to install, upgrade and configure ProxySQL with zero-downtime
Query routing, query caching and query firewalling
Advanced performance monitoring and tuning
Deployment and scaling best practices
High Availability and advanced ProxySQL topology support
ProxySQL Cluster and configuration management
Failover in synchronous & asynchronous replication environments without losing transactions
Global Transaction ID (GTID) tracking with ProxySQL and the binary log (binlog) reader
Securing ProxySQL and implementing SSL
Application development considerations when using ProxySQL
Diagnosing issues and troubleshooting ProxySQL

Public 2-Day Training Details:
Dates: 21st-22nd July 2020
Time: 8am-4pm EST (UTC-5)
Where: Online

Class size is limited and filling up. If you are interested in online public training, please contact us here to book your spot in the upcoming session scheduled for 21st-22nd July 2020, 8am-4pm EST (UTC-5). The post ProxySQL Public Training, 21st-22nd July appeared first on ProxySQL.
Posted almost 4 years ago by MyDBOPS
In my day-to-day tasks as a DB Engineer at Mydbops, we have been using multiple MySQL tools for multiple use cases to ensure optimal performance and availability for servers managed by our Remote DBA Team. A tool like pt-online-schema-change can be used for any DDL changes (see our overview of DDL algorithms), and if a tool needs to be scheduled for a longer period we tend to use screen or cron. Some of the problems we face when we daemonize the process or use screen for running processes:

The daemon process gets killed when the server reboots.
The screen might accidentally terminate while closing it.
No flexibility to start or stop the process when required.

These common problems can be overcome by using a systemd service.

What is Systemd?

Systemd is a system and service manager for Linux operating systems. When running as the first process on boot (as PID 1), it acts as an init system that brings up and maintains userspace services. Here are a few use cases that can be made simple with a systemd service:

Integrating pt-heartbeat with a systemd service
Integrating auto kill using pt-kill with a systemd service
Integrating a multi query killer with a systemd service

Integrating pt-heartbeat with a Systemd Service

We had the requirement to schedule pt-heartbeat to monitor replication lag for one of our clients under our database managed services. Here is the problem statement: the pt-heartbeat process was running as a daemon, and the usual problem we faced was that when the system was rebooted for any maintenance, the pt-heartbeat process got killed, we started receiving replication lag alerts, and it then needed a manual fix.
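For context, pt-heartbeat works by periodically updating a timestamp row on the source; a monitor then derives replication lag as the difference between the current time and the replicated timestamp. A rough sketch of that calculation (the timestamp format and function name here are illustrative, not pt-heartbeat's actual code, which reads the row over SQL):

```python
from datetime import datetime

# Hypothetical sketch: lag = now - last replicated heartbeat
# timestamp. In real use the timestamp is read from the heartbeat
# table on the replica, not passed in as a string.
def replication_lag_seconds(heartbeat_ts: str, now: datetime) -> float:
    ts = datetime.strptime(heartbeat_ts, "%Y-%m-%d %H:%M:%S")
    return (now - ts).total_seconds()

now = datetime(2020, 6, 20, 13, 20, 24)
print(replication_lag_seconds("2020-06-20 13:20:21", now))  # 3.0
```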
Script for pt-heartbeat:

/usr/bin/pt-heartbeat --daemonize -D mydbops --host=192.168.33.11 --master-server-id 1810 --user=pt-hbt --password=vagrant --table heartbeat --insert-heartbeat-row --update

Now let us integrate it with systemd:

$ cd /etc/systemd/system/
$ vi pt-heartbeat.service

##pt-heartbeat systemd service file
[Unit]
Description="pt-heartbeat"
After=network-online.target syslog.target
Wants=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=forking
ExecStart=/usr/bin/pt-heartbeat --daemonize -D mydbops --host=192.168.33.11 --master-server-id 1810 --user=pt-hbt --password=vagrant --table heartbeat --insert-heartbeat-row --update
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=pt-heartbeat
Restart=always

ExecStart – the command to be executed when the service starts.
Restart=always – the process is restarted automatically, so it comes back up once the OS is booted.

Once the new systemd script is pushed, reload the systemctl daemon and start the service:

$ systemctl daemon-reload
$ systemctl start pt-heartbeat
$ systemctl status pt-heartbeat -l
● pt-heartbeat.service - "pt-heartbeat"
   Loaded: loaded (/etc/systemd/system/pt-heartbeat.service; disabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-06-20 13:20:24 IST; 10 days ago
 Main PID: 25840 (perl)
    Tasks: 1
   Memory: 19.8M
      CPU: 1h 1min 47.610s
   CGroup: /system.slice/pt-heartbeat.service
           └─25840 perl /usr/bin/pt-heartbeat --daemonize -D mydbops --host=192.168.33.11 --master-server-id 1810 --user=pt-hbt --password=vagrant --table heartbeat --insert-heartbeat-row --update

The service can be stopped like any other systemctl-managed process:

$ systemctl stop pt-heartbeat
● pt-heartbeat.service - "pt-heartbeat"
   Loaded: loaded (/etc/systemd/system/pt-heartbeat.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
Jun 20 15:46:07 ip-192-168-33-11 systemd[1]: Stopping "pt-heartbeat"…
Jun 20 15:46:07 ip-192-168-33-11 systemd[1]: Stopped "pt-heartbeat".

Integrating Auto Kill Using pt-kill with a Systemd Service

Usually in production servers, long running queries spike the system resource usage and degrade MySQL performance drastically, or might even get your MySQL process killed by the OOM killer. To avoid these hiccups, we can schedule the Percona pt-kill process based on the defined use case.

Scheduling the pt-kill service:

$ cd /etc/systemd/system/
$ vi pt-kill.service

#pt-kill systemd service file
[Unit]
Description="pt-kill"
After=network-online.target syslog.target
Wants=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=forking
ExecStart=/usr/bin/pt-kill --user=mon_ro --host=192.168.33.11 --password=pt@123 --busy-time 200 --kill --match-command Query --match-info (select|SELECT|Select) --match-user cron_ae --interval 10 --print --daemonize
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=pt-kill
Restart=always

$ systemctl daemon-reload
$ systemctl start pt-kill
$ systemctl status pt-kill
pt-kill.service - "pt-kill"
   Loaded: loaded (/etc/systemd/system/pt-kill.service; enabled)
   Active: active (running) since Wed 2020-06-24 11:00:17 IST; 5 days ago
   CGroup: name=dsystemd:/system/pt-kill.service
           ├─20974 perl /usr/bin/pt-kill --user=mon_ro --host=192.168.33.11 --password=pt@123 --busy-time 200 --kill --match-command Query --match-info (select|SELECT|Select) --match-user cron_ae --interval 10 --print --daemonize

Now we have configured a fail-safe pt-kill process.

Integrating a Multi Query Killer with a Systemd Service

Question: Is it possible to integrate multiple kill statements for different hosts as a single process?

Answer: Yes! It is possible and quite simple too. Just add the needed commands to a shell script file and make it executable. In the below example I have chosen three different servers: an RDS instance (more on AWS RDS and its myths) and a couple of virtual machines.
$ vi pt_kill.sh

/usr/bin/pt-kill --user=pt_kill --host=test.ap-northeast-1.rds.amazonaws.com --password=awkS --busy-time 1000 --rds --kill --match-command Query --match-info "(select|SELECT|Select)" --match-user "(mkt_ro|dash)" --interval 10 --print --daemonize >> /home/vagrant/pt_kill_slave1.log
/usr/bin/pt-kill --user=mon_ro --host=192.168.33.11 --password=pt@123 --busy-time 20 --kill --match-command Query --match-info "(select|SELECT|Select)" --match-user "(user_qa|cron_ae)" --interval 10 --print --daemonize >> /home/vagrant/pt_kill_slave2.log
/usr/bin/pt-kill --user=db_ro --host=192.168.33.12 --password=l9a40E --busy-time 200 --kill --match-command Query --match-info "(select|SELECT|Select)" --match-user sbtest_ro --interval 10 --print --daemonize >> /home/vagrant/pt_kill_slave3.log

Scheduling pt-kill.service for multiple hosts:

#pt-kill systemd service file
[Unit]
Description="pt-kill"
After=network-online.target syslog.target
Wants=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=forking
ExecStart=/bin/bash /home/vagrant/pt_kill.sh
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=pt-kill
Restart=always

Reload the daemon and start the service:

$ systemctl daemon-reload
$ systemctl start pt-kill
$ systemctl status pt-kill
pt-kill.service - "pt-kill"
   Loaded: loaded (/etc/systemd/system/pt-kill.service; enabled)
   Active: active (running) since Wed 2020-06-24 11:00:17 IST; 5 days ago
   CGroup: name=dsystemd:/system/pt-kill.service
           ├─20974 perl /usr/bin/pt-kill --user=pt_kill --host=test.ap-northeast-1.rds.amazonaws.com --password=awkS --busy-time 1000 --rds --kill --match-command Query --match-info "(select...
           ├─21082 perl /usr/bin/pt-kill --user=mon_ro --host=192.168.33.11 --password=pt@123 --busy-time 20 --kill --match-command Query --match-info "(select...
           ├─21130 perl /usr/bin/pt-kill --user=db_ro --host=192.168.33.12 --password=l9a40E --busy-time 200 --kill --match-command Query --match-info "(select...
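Since the wrapper script is just one pt-kill invocation per host, it can be generated from a host list instead of being hand-written. A hypothetical sketch (the hosts, users, passwords, and log paths below are placeholders, not the post's real credentials):

```python
# Hypothetical generator for a per-host pt-kill wrapper script.
# All host entries and credentials here are made-up placeholders.
def pt_kill_line(host, user, password, busy_time, log):
    return (f"/usr/bin/pt-kill --user={user} --host={host} "
            f"--password={password} --busy-time {busy_time} --kill "
            f"--match-command Query --interval 10 --print --daemonize "
            f">> {log}")

hosts = [
    ("192.168.33.11", "mon_ro", "secret1", 20, "/tmp/pt_kill_slave1.log"),
    ("192.168.33.12", "db_ro", "secret2", 200, "/tmp/pt_kill_slave2.log"),
]
script = "\n".join(pt_kill_line(*h) for h in hosts)
print(script)
```

Writing the result to pt_kill.sh keeps the systemd unit unchanged while the host list evolves.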
This makes systemd a useful and easy tool for scheduling MySQL tools in a database environment. There are many more features in systemd that can be used for scheduling scripts, bypassing the use of crontab. Note: these are sample scripts; ensure you test them well before using them in production.
Posted almost 4 years ago by MySQL Performance Blog
You got yourself a Kubernetes cluster and are now testing our Percona Kubernetes Operator for Percona XtraDB Cluster. Everything is working great and you decide that you want to increase the number of Percona XtraDB Cluster (PXC) pods from the default 3 to, let’s say, 5 pods. It’s just a matter of running the following command:

kubectl patch pxc cluster1 --type='json' -p='[{"op": "replace", "path": "/spec/pxc/size", "value": 5 }]'

Good, you run the command without issues and now you will have 5 pxc pods! Right? Let’s check how the pods are being replicated:

kubectl get pods | grep pxc
cluster1-pxc-0   1/1   Running   0   25m
cluster1-pxc-1   1/1   Running   0   23m
cluster1-pxc-2   1/1   Running   0   22m
cluster1-pxc-3   0/1   Pending   0   13m

You not only see 4 pods instead of 5, but the one new pod is also stuck in the “Pending” state. Further info shows that the kube-scheduler wasn’t able to find a node to deploy the pod:

kubectl describe pod cluster1-pxc-3
Name: cluster1-pxc-3
Namespace: pxc
Priority: 0
…
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  69s (x5 over 5m28s)  default-scheduler  0/3 nodes are available: 3 node(s) didn't match pod affinity/anti-affinity, 3 node(s) didn't satisfy existing pods anti-affinity rules.

From that output, we can see what the issue is: affinity. Or more specifically: anti-affinity. Affinity defines eligible pods that can be scheduled (can run) on the node which already has pods with specific labels. Anti-affinity defines pods that are not eligible. The operator provides an option called “antiAffinityTopologyKey” which can have several values:

kubernetes.io/hostname – Pods will avoid residing within the same host.
failure-domain.beta.kubernetes.io/zone – Pods will avoid residing within the same zone.
failure-domain.beta.kubernetes.io/region – Pods will avoid residing within the same region.
none – No constraints are applied.
With “none”, all PXC pods can be scheduled on one node, and you can lose your whole cluster because of a single node failure. The default value is kubernetes.io/hostname, which pretty much means “only one pod per node”. In this case, the Kubernetes cluster is running on top of 3 AWS instances, hence when one tries to increase the number of pods, the scheduler has trouble finding where to put the new pod.

Alternatives? There are several options. One plain and simple (and obvious) one is to add new nodes to the k8s cluster. Another option is to set the anti-affinity to “none”. Now, why would one want to remove the guarantee of having pods distributed among the available nodes? Well, think about lower environments like QA or staging, where the HA requirements are not hard and you just need to deploy the operator on a couple of nodes (control plane/worker). Here’s how the affinity setting can be changed:

Edit the cluster configuration. My cluster is called “cluster1”, so the command is:

kubectl edit pxc/cluster1

Find the line where “antiAffinityTopologyKey” is defined, change “kubernetes.io/hostname” to “none”, and save the changes. This modification is applied immediately.

Delete the old pods ONE BY ONE. Kubernetes will spawn new ones, so don’t worry about it. For example, to delete the pod named “cluster1-pxc-0”, run:

kubectl delete pod cluster1-pxc-0

You will see how the pods are recreated and the one that was “Pending” moves on:

kubectl get pod
NAME             READY   STATUS              RESTARTS   AGE
cluster1-pxc-0   0/1     ContainerCreating   0          8s
cluster1-pxc-1   1/1     Running             0          4h55m
cluster1-pxc-2   1/1     Running             0          4h53m
cluster1-pxc-3   0/1     ContainerCreating   0          8m25s

Finally, the goal of having 5 pods is achieved:

kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
cluster1-pxc-0   1/1     Running   0          39m
cluster1-pxc-1   1/1     Running   0          36m
cluster1-pxc-2   1/1     Running   0          37m
cluster1-pxc-3   1/1     Running   0          47m
cluster1-pxc-4   1/1     Running   0          38m

But what if one needs a more sophisticated option?
One with some degree of guarantee that HA will be met? For those cases, the operator can use an advanced approach: NodeAffinity with “preferredDuringSchedulingIgnoredDuringExecution”. The whole description and configuration is available in the Operator documentation, “Binding Percona XtraDB Cluster components to Specific Kubernetes/OpenShift Nodes”. Also, in the future, the operator will make use of the “topologySpreadConstraints” spec to control the degree to which pods may be unevenly distributed. Thanks to Ivan Pylypenko and Mykola Marzhan from the Percona Engineering Team for the guidance.

We understand that choosing open source software for your business can be a potential minefield. You need to select the best available options, which fully support and adapt to your changing needs. In this white paper, we discuss the key features that make open source software attractive, and why Percona’s software might be the best option for your business. Download “When is Percona Software the Right Choice?”
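The scheduling failure described in this post follows directly from a counting argument: with kubernetes.io/hostname anti-affinity, each node can hold at most one PXC pod, so 5 pods cannot fit on 3 nodes. A toy model of that constraint (not the real scheduler, just the arithmetic it enforces):

```python
# Toy model of hostname anti-affinity: at most one PXC pod per node.
# This is only the counting argument, not a Kubernetes scheduler.
def schedule(pods, nodes, anti_affinity="kubernetes.io/hostname"):
    """Return (running, pending) pod counts under the toy model."""
    if anti_affinity == "none":
        return pods, 0            # no constraint: everything fits
    running = min(pods, nodes)    # one pod per distinct hostname
    return running, pods - running

print(schedule(5, 3))          # (3, 2): extra pods stay Pending
print(schedule(5, 3, "none"))  # (5, 0): all pods scheduled
```

In the post only one pod shows as Pending because the StatefulSet creates pods sequentially and blocks on the first unschedulable one.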
Posted almost 4 years ago by René Cannaò
ProxySQL Firewall Overview

ProxySQL’s flexible query rules engine has many uses, from read/write splitting and sharding to creating firewall blacklists. This allows ProxySQL to be loved by both performance- and security-minded engineers. Starting in ProxySQL 2.0.9, ProxySQL has another security feature: the firewall whitelist. Modeled on MySQL Enterprise Firewall, this allows a security-conscious administrator to tune access to only allow certain queries.

Imagine a situation where your webapp gets hacked, exposing your users’ database credentials. If your webapp connects directly to the database, the malicious user can do whatever they want to your data with the same permissions your webapp has. So perhaps they can’t just DROP TABLE because you’ve smartly removed DDL permissions from your application user. But they can likely do DELETE FROM mytable after saving a copy to hold for ransom. ProxySQL Firewall can prevent this situation.

Implementing ProxySQL Firewall

The documentation explains really well how to implement ProxySQL Firewall, but here are the high-level steps on how it works:

Prerequisite. Point your webapp to connect to ProxySQL rather than MySQL if you haven’t already, and configure ProxySQL with users and servers to connect to the backend MySQL database.

Step 1. Allow ProxySQL to record the application’s traffic for a period of time to ensure all application queries are captured. Protip: ProxySQL does this automatically through stats_mysql_digest and (if configured) stats_history.history_mysql_query_digest.

Step 2. Add your app user to the mysql_firewall_whitelist_users table through the ProxySQL Admin interface. Let’s say our app user is myapp:

ProxySQL Admin> INSERT INTO mysql_firewall_whitelist_users (active, username, client_address, mode) VALUES (1, 'myapp', '', 'PROTECTING');

In this example, PROTECTING means ProxySQL will stop all traffic that is not defined in mysql_firewall_whitelist_rules for this user in the following step.
It’s also important to note that any user not in this table will be blocked once the firewall whitelist is enabled below. You can get around this by adding an entry into mysql_firewall_whitelist_users for every user you want to allow, with mode='OFF'.

Step 3. Add all digests recorded in Step 1 into the mysql_firewall_whitelist_rules table. This assumes you’ve configured stats_history.history_mysql_query_digest, as this persists stats across ProxySQL restarts:

ProxySQL Admin> INSERT INTO mysql_firewall_whitelist_rules (active, username, client_address, schemaname, flagIN, digest, comment) SELECT DISTINCT 1, username, client_address, schemaname, 0, digest, '' FROM stats_history.history_mysql_query_digest WHERE username='myapp';

Step 4. Load your configuration to runtime:

LOAD MYSQL FIREWALL TO RUNTIME; SAVE MYSQL FIREWALL TO DISK;

Step 5. Enable the ProxySQL Firewall:

SET mysql-firewall_whitelist_enabled = 1; LOAD MYSQL VARIABLES TO RUNTIME; SAVE MYSQL VARIABLES TO DISK;

Congratulations! Now the malicious user can only perform the exact queries your application performs. The previous DELETE FROM mytable statement would fail, because your application doesn’t allow DELETEs without WHERE clauses, right?

Keeping firewall rules up to date

If you are familiar with ProxySQL query rules, you’ll notice that the mysql_firewall_whitelist_rules table behaves a little differently. You have to populate it with exact digests, rather than relying on regular expressions. (If you have thousands of mysql_query_rules, you were likely doing this already.) This means you will need to update the firewall rules any time an application deployment introduces new queries or changes query digests. Our recommendation is to implement a process that records application digest changes before they hit production, such as a staging environment, and then update the production firewall rules as part of the deployment process.
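A digest is a normalized form of a query with its literals stripped, so that textually different but structurally identical queries match a single whitelist entry. A very rough sketch of that normalization (ProxySQL's real digest algorithm is more involved and also hashes the result; this is only an assumption-laden illustration of the concept):

```python
import re

# Rough sketch of query-digest normalization: collapse literals to
# '?' and normalize whitespace/case. Not ProxySQL's actual algorithm.
def digest_text(query: str) -> str:
    q = re.sub(r"'[^']*'", "?", query)   # string literals -> ?
    q = re.sub(r"\b\d+\b", "?", q)       # numeric literals -> ?
    q = re.sub(r"\s+", " ", q).strip().lower()
    return q

a = digest_text("SELECT * FROM mytable WHERE id = 42")
b = digest_text("select *  from mytable where id = 7")
print(a)        # select * from mytable where id = ?
print(a == b)   # True
```

This is why the whitelist stays stable across literal values but must be refreshed whenever the query's shape changes.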
Conclusion

If you want to be absolutely certain that the queries hitting your database from your application are not malicious, ProxySQL Firewall can help with this. There is a bit of friction to keep it maintained, but if the security of your data is paramount, the peace of mind ProxySQL Firewall provides is well worth it. Authored by: Derek Downey. The post Protect your data using ProxySQL Firewall appeared first on ProxySQL.
Posted almost 4 years ago by Continuent
Following on from our recent newsletter and the message from Eero Teerikorpi, Founder and CEO, in this blog I will set out the outline and plan for the work we are currently undertaking to make a number of changes to the terminology used in our product and communications. Tags: MySQL, terminology, replica, primary, source, target, tungsten clustering, tungsten replicator, tungsten dashboard, mysql community
Posted almost 4 years ago by Bhuvanesh R
In any MySQL replication or migration, generally we’ll skip the mysql database from the dump file, since it has the current server’s settings, users, etc., and we should not create a mess on the target server with the source’s mysql database dump. But one downside of this approach is that we’ll not get the users on the target MySQL server. Maybe you have tried some approaches that are mentioned here. But whenever I don’t have the credentials list during a migration and don’t have time to explore/install any tools like pt-show-grants, I use this trick.

DISCLAIMER: You should try this approach when you don’t have any user credentials for the source DB. Take a backup of the mysql database on the target DB side. Do this with your DBA’s help. It has been tested on MySQL 5.6 and 5.7.

Step #1: Get the grants:

-- Get grants
mysql -h SOURCE_DB_IP_ADDRESS -u root -p'PASSWORD' --skip-column-names -A -e"SELECT CONCAT('SHOW GRANTS FOR ''',user,'''@''',host,''';') FROM mysql.user WHERE user<>''" | mysql -h SOURCE_DB_IP_ADDRESS -u root -p'PASSWORD' --skip-column-names -A | sed 's/$/;/g' > user_grants.sql

-- clean up the password field
sed -i 's/IDENTIFIED BY PASSWORD //g' user_grants.sql

Step #2: Generate CREATE USER statements:

-- MySQL 5.6
mysql -h source_DB -u root -p'password' --skip-column-names -A mysql -e "SELECT Concat('create user \'', user, '\'@\'', host, '\' IDENTIFIED WITH \'mysql_native_password\' AS \'', password,'\';') FROM mysql.user where user not in ('mysql.session','mysql.sys','debian-sys-maint','root');" > create_user.sql

-- MySQL 5.7
mysql -h source_DB -u root -p'password' --skip-column-names -A mysql -e "SELECT Concat('create user \'', user, '\'@\'', host, '\' IDENTIFIED WITH \'mysql_native_password\' AS \'', authentication_string,'\';') FROM mysql.user where user not in ('mysql.session','mysql.sys','debian-sys-maint','root');" > create_user.sql

Step #3: Restore the users:

mysql -h target_db_ip -u root -p mysql < create_user.sql
mysql -h target_db_ip -u root -p mysql < user_grants.sql

Again, a friendly reminder: keep your DBA with you and do take a backup of the mysql database. If required, you can still use pt-show-grants and mysqlpump as well.
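The Step #2 one-liner is just string concatenation inside SQL; the same generation can be sketched in Python from (user, host, hash) rows. The rows below are made-up examples (a real run would pull them from mysql.user):

```python
# Hypothetical sketch of Step #2: build CREATE USER statements from
# (user, host, password_hash) rows, skipping system accounts just as
# the SQL one-liner does. The rows here are fabricated examples.
SKIP = {"mysql.session", "mysql.sys", "debian-sys-maint", "root"}

def create_user_sql(rows):
    return [
        "create user '{0}'@'{1}' IDENTIFIED WITH "
        "'mysql_native_password' AS '{2}';".format(u, h, pw)
        for (u, h, pw) in rows if u not in SKIP
    ]

rows = [
    ("appuser", "%", "*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19"),
    ("root", "localhost", "*SOMEHASH"),
]
for stmt in create_user_sql(rows):
    print(stmt)
```

The filtered output would be written to create_user.sql and replayed on the target, exactly as in Step #3.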
Posted almost 4 years ago by MySQL Performance Blog
Ever wanted to extend MySQL and add some feature you think it is missing? With MySQL plugins, you can do exactly that. One thing that has bothered me for several years is that you cannot easily retrieve system metrics from within MySQL. Whether I am connecting via a remote connection or looking to add features to monitoring without the need for another interface with the server, I have wanted to retrieve system metrics without leaving the MySQL interface.

So, I started a proof of concept for this. My goal was to get metrics such as RAM (total, used, free), system load, CPU utilization, disk utilization for the file system containing the datadir, and more. My objective was to do this as efficiently within MySQL as possible. For this, I chose to utilize standard C libraries in as few lines of code as possible, without having to scrape system files or run commands to get the data. The data is pulled on demand so as not to increase the load on the system.

The MySQL plugin architecture is one of the most underutilized features in MySQL, in my opinion. It provides so much power, and I feel the MySQL ecosystem would be so much more powerful if more people took advantage of it. Below is an example of a basic plugin I created to pull some system metrics. For this plugin, I chose to access the data via INFORMATION_SCHEMA.OS_METRICS. This is defined in the plugin in the following:

static struct st_mysql_information_schema simple_table_info =
{ MYSQL_INFORMATION_SCHEMA_INTERFACE_VERSION };

static ST_FIELD_INFO simple_table_fields[]=
{
  {"NAME", 15, MYSQL_TYPE_STRING, 0, 0, 0, 0},
  {"VALUE", 6, MYSQL_TYPE_FLOAT, 0, MY_I_S_UNSIGNED, 0, 0},
  {"COMMENT", 50, MYSQL_TYPE_STRING, 0, 0, 0, 0},
  {0, 0, MYSQL_TYPE_NULL, 0, 0, 0, 0}
};

static int simple_fill_table(THD *thd, TABLE_LIST *tables, Item *cond)
{
  struct sysinfo info;
  TABLE *table= tables->table;

This defines the structure of the virtual table as having three columns: NAME, VALUE, and COMMENT.
NAME will be a string up to 15 characters long, followed by a float for VALUE and a string up to 50 characters long for COMMENT.

By invoking the sysinfo() function in C, I am able to pull various metrics.  These metrics are returned in a structure, and can then be passed into the OS_METRICS "table" with the following commands:

```cpp
struct sysinfo info;
sysinfo(&info);

// Total usable main memory size
table->field[0]->store("TOTAL_RAM", 9, system_charset_info);
table->field[1]->store(info.totalram * info.mem_unit);
table->field[2]->store("Total usable main memory size", 29, system_charset_info);
if (schema_table_store_record(thd, table))
  return 1;
```

In the above case, I reference the element totalram from the sysinfo structure and store it in the table.  You can see that there is a line for each column of the table, and the values are stored one by one.

Here is the most basic form of the plugin, which only pulls RAM information and makes it available within INFORMATION_SCHEMA.OS_METRICS:

```cpp
/* Note: the ten #include names were stripped when the original page was
   rendered (angle-bracketed headers were eaten by the HTML).  These are
   typical headers for an INFORMATION_SCHEMA plugin plus sysinfo(2). */
#include <my_global.h>
#include <sql_class.h>
#include <table.h>
#include <mysql/plugin.h>
#include <sys/sysinfo.h>

static struct st_mysql_information_schema simple_table_info =
{ MYSQL_INFORMATION_SCHEMA_INTERFACE_VERSION };

static ST_FIELD_INFO simple_table_fields[]=
{
  {"NAME", 15, MYSQL_TYPE_STRING, 0, 0, 0, 0},
  {"VALUE", 6, MYSQL_TYPE_FLOAT, 0, MY_I_S_UNSIGNED, 0, 0},
  {"COMMENT", 50, MYSQL_TYPE_STRING, 0, 0, 0, 0},
  {0, 0, MYSQL_TYPE_NULL, 0, 0, 0, 0}
};

static int simple_fill_table(THD *thd, TABLE_LIST *tables, Item *cond)
{
  struct sysinfo info;
  TABLE *table= tables->table;

  sysinfo(&info);

  // Total usable main memory size
  table->field[0]->store("TOTAL_RAM", 9, system_charset_info);
  table->field[1]->store(info.totalram * info.mem_unit);
  table->field[2]->store("Total usable main memory size", 29, system_charset_info);
  if (schema_table_store_record(thd, table))
    return 1;

  // Available memory size
  table->field[0]->store("FREE_RAM", 8, system_charset_info);
  table->field[1]->store(info.freeram * info.mem_unit);
  table->field[2]->store("Available memory size", 21, system_charset_info);
  if (schema_table_store_record(thd, table))
    return 1;

  // Used memory size
  table->field[0]->store("USED_RAM", 8, system_charset_info);
  table->field[1]->store((info.totalram - info.freeram) * info.mem_unit);
  table->field[2]->store("Used memory size", 16, system_charset_info);
  if (schema_table_store_record(thd, table))
    return 1;

  // Available memory (percentage)
  // (the "* info.mem_unit" factor is spurious in a percentage; it is
  //  only harmless because mem_unit is 1 on Linux)
  table->field[0]->store("FREE_RAM_PCT", 12, system_charset_info);
  table->field[1]->store((float) info.freeram / info.totalram * 100 * info.mem_unit);
  table->field[2]->store("Available memory as a percentage", 32, system_charset_info);
  if (schema_table_store_record(thd, table))
    return 1;

  // Used memory (percentage)
  // (the label below says "Free" but this row is the used percentage --
  //  a slip that carries through to the sample output)
  table->field[0]->store("USED_RAM_PCT", 12, system_charset_info);
  table->field[1]->store((float) (info.totalram - info.freeram) / info.totalram * 100 * info.mem_unit);
  table->field[2]->store("Free memory as a percentage", 27, system_charset_info);
  if (schema_table_store_record(thd, table))
    return 1;

  // Amount of shared memory
  table->field[0]->store("SHARED_RAM", 10, system_charset_info);
  table->field[1]->store(info.sharedram * info.mem_unit);
  table->field[2]->store("Amount of shared memory", 23, system_charset_info);
  if (schema_table_store_record(thd, table))
    return 1;

  // Memory used by buffers
  table->field[0]->store("BUFFER_RAM", 10, system_charset_info);
  table->field[1]->store(info.bufferram * info.mem_unit);
  table->field[2]->store("Memory used by buffers", 22, system_charset_info);
  if (schema_table_store_record(thd, table))
    return 1;

  return 0;
}

static int simple_table_init(void *ptr)
{
  ST_SCHEMA_TABLE *schema_table= (ST_SCHEMA_TABLE*)ptr;
  schema_table->fields_info= simple_table_fields;
  schema_table->fill_table= simple_fill_table;
  return 0;
}

mysql_declare_plugin(os_metrics)
{
  MYSQL_INFORMATION_SCHEMA_PLUGIN,
  &simple_table_info,                    /* type-specific descriptor */
  "OS_METRICS",                          /* table name */
  "Michael Patrick",                     /* author */
  "OS Metrics INFORMATION_SCHEMA table", /* description */
  PLUGIN_LICENSE_GPL,                    /* license type */
  simple_table_init,                     /* init function */
  NULL,
  0x0100,                                /* version = 1.0 */
  NULL,                                  /* no status variables */
  NULL,                                  /* no system variables */
  NULL,                                  /* no reserved information */
  0                                      /* no flags */
}
mysql_declare_plugin_end;
```

You will need to have the MySQL source code available on a server, along with the libraries needed to compile C code.  I went with the most basic approach of manually compiling the plugin, although I need to update it with cmake so it is easier to compile.  I named my file osmetricsplugin.cc.  In the example below, you will need to replace "{PATH_TO_YOUR_PLUGIN_CODE}" with the path where your code lives.  You can compile the plugin with a command such as the following:

```shell
SRCBASE="../percona-server-5.7.24-27"

g++ -DMYSQL_DYNAMIC_PLUGIN -Wall -fPIC -shared \
  -I/usr/include/mysql -m64 \
  -I${SRCBASE}/sql \
  -I${SRCBASE}/include \
  -I${SRCBASE}/libbinlogevents/export \
  -I${SRCBASE}/libbinlogevents/include \
  -I{PATH_TO_YOUR_PLUGIN_CODE} \
  -o osmetricsplugin.so osmetricsplugin.cc
```

If you are interested in seeing more of what can be done with the above, check out the GitHub page for the plugin I wrote.
Once you compile it, you should get an osmetricsplugin.so file, which can be copied to your MySQL plugin directory with a command such as:

```shell
cp osmetricsplugin.so /usr/lib64/mysql/plugin/
```

Once it is in place, you can tell MySQL to load the plugin:

```sql
mysql> INSTALL PLUGIN OS_METRICS SONAME 'osmetricsplugin.so';
```

You can verify that the plugin is loaded correctly:

```
mysql> SELECT * FROM information_schema.PLUGINS WHERE PLUGIN_NAME LIKE "%OS%";
+-------------+----------------+---------------+--------------------+---------------------+--------------------+------------------------+-----------------+-------------------------------------+----------------+-------------+
| PLUGIN_NAME | PLUGIN_VERSION | PLUGIN_STATUS | PLUGIN_TYPE        | PLUGIN_TYPE_VERSION | PLUGIN_LIBRARY     | PLUGIN_LIBRARY_VERSION | PLUGIN_AUTHOR   | PLUGIN_DESCRIPTION                  | PLUGIN_LICENSE | LOAD_OPTION |
+-------------+----------------+---------------+--------------------+---------------------+--------------------+------------------------+-----------------+-------------------------------------+----------------+-------------+
| OS_METRICS  | 1.0            | ACTIVE        | INFORMATION SCHEMA | 50724.0             | osmetricsplugin.so | 1.7                    | Michael Patrick | OS Metrics INFORMATION_SCHEMA table | GPL            | ON          |
+-------------+----------------+---------------+--------------------+---------------------+--------------------+------------------------+-----------------+-------------------------------------+----------------+-------------+
1 row in set (0.00 sec)
```

To query the data, execute a SQL command such as:

```
mysql> SELECT * FROM information_schema.OS_METRICS;
+--------------+-------------------+----------------------------------+
| NAME         | VALUE             | COMMENT                          |
+--------------+-------------------+----------------------------------+
| TOTAL_RAM    | 1039118336        | Total usable main memory size    |
| FREE_RAM     | 341049344         | Available memory size            |
| USED_RAM     | 698068992         | Used memory size                 |
| FREE_RAM_PCT | 32.82102966308594 | Available memory as a percentage |
| USED_RAM_PCT | 67.17897033691406 | Free memory as a percentage      |
| SHARED_RAM   | 0                 | Amount of shared memory          |
| BUFFER_RAM   | 2158592           | Memory used by buffers           |
+--------------+-------------------+----------------------------------+
7 rows in set (0.00 sec)
```

There is much more work to be done on this plugin, and more that can be done to improve it.  I believe it is very useful to be able to access system metrics from within MySQL, but I am very interested to hear what others think.  If interested, please check out a more advanced version of the plugin, and here you can learn more about MySQL plugins.