
News

Posted 3 days ago by Dave Stokes
The 2018 MySQL Community Reception is October 23rd in a new venue: Samovar Tea, 730 Howard Street in San Francisco, at 7:00 PM, right in the heart of the Moscone Center activities for Oracle OpenWorld and Oracle Code One. The MySQL Community Reception is not part of Oracle OpenWorld or Oracle Code One (you do not need a badge for either event), but you do need to RSVP. Food, drinks, and a really amazing group of attendees! And there will be more than tea to drink. Plus we have a few new surprises this year!
Posted 3 days ago by Percona Community
There are a number of options for generating ID values for your tables. In this post, Alexey Mikotkin of Devart explores your choices for generating identifiers with a look at AUTO_INCREMENT, triggers, UUID and sequences.

AUTO_INCREMENT

Frequently, we happen to need to fill tables with unique identifiers. Naturally, the first example of such identifiers is PRIMARY KEY data. These are usually integer values hidden from the user since their specific values are unimportant. When adding a row to a table, you need to take this new key value from somewhere. You can set up your own process of generating a new identifier, but MySQL comes to the aid of the user with the AUTO_INCREMENT column setting. It is set as a column attribute and allows you to generate unique integer identifiers. As an example, consider the users table, whose primary key includes an id column of type INT:

CREATE TABLE users (
  id int NOT NULL AUTO_INCREMENT,
  first_name varchar(100) NOT NULL,
  last_name varchar(100) NOT NULL,
  email varchar(254) NOT NULL,
  PRIMARY KEY (id)
);

Inserting a NULL value into the id field leads to the generation of a unique value; inserting the value 0 is also possible unless the NO_AUTO_VALUE_ON_ZERO server SQL mode is enabled:

INSERT INTO users(id, first_name, last_name, email) VALUES (NULL, 'Simon', 'Wood', 'simon@testhost.com');
INSERT INTO users(id, first_name, last_name, email) VALUES (0, 'Peter', 'Hopper', 'peter@testhost.com');

It is possible to omit the id column. The same result is obtained with:

INSERT INTO users(first_name, last_name, email) VALUES ('Simon', 'Wood', 'simon@testhost.com');
INSERT INTO users(first_name, last_name, email) VALUES ('Peter', 'Hopper', 'peter@testhost.com');

The selection will provide the following result (screenshot: select from the users table shown in dbForge Studio). You can get the automatically generated value using the LAST_INSERT_ID() session function. This value can be used to insert a new row into a related table.
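The same pattern can be sketched outside MySQL. This is a minimal Python analogue using the standard sqlite3 module, where INTEGER PRIMARY KEY AUTOINCREMENT plays the role of AUTO_INCREMENT and cursor.lastrowid plays the role of LAST_INSERT_ID(); it is an illustration only, not MySQL itself:

```python
import sqlite3

# SQLite analogue of the AUTO_INCREMENT example above: the id column
# auto-generates values, and cursor.lastrowid is the session's
# last-generated key, like LAST_INSERT_ID() in MySQL.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        first_name TEXT NOT NULL,
        last_name TEXT NOT NULL,
        email TEXT NOT NULL
    )
""")

cur = conn.execute(
    "INSERT INTO users(first_name, last_name, email) VALUES (?, ?, ?)",
    ("Simon", "Wood", "simon@testhost.com"),
)
print(cur.lastrowid)  # 1 — the auto-generated key

cur = conn.execute(
    "INSERT INTO users(first_name, last_name, email) VALUES (?, ?, ?)",
    ("Peter", "Hopper", "peter@testhost.com"),
)
print(cur.lastrowid)  # 2
```

As in the MySQL example, the key column is simply omitted from the INSERT and the engine supplies the next value.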
There are aspects to consider when using AUTO_INCREMENT; here are some:

In the case of rollback of a data insertion transaction, no data will be added to the table. However, the AUTO_INCREMENT counter will still increase, so after the next insert, holes will appear in the sequence of identifiers. In the case of multiple data inserts with a single INSERT command, the LAST_INSERT_ID() function will return the automatically generated value for the first row. The problem with the AUTO_INCREMENT counter value is described in Bug #199 – Innodb autoincrement stats los on restart.

For example, let's consider several cases of using AUTO_INCREMENT for table1:

CREATE TABLE table1 (
  id int NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (id)
) ENGINE = INNODB; -- transactional table

-- Insert operations.
INSERT INTO table1 VALUES (NULL); -- 1
INSERT INTO table1 VALUES (NULL); -- 2
INSERT INTO table1 VALUES (NULL); -- 3
SELECT LAST_INSERT_ID() INTO @p1; -- 3

-- Insert operations within committed transaction.
START TRANSACTION;
INSERT INTO table1 VALUES (NULL); -- 4
INSERT INTO table1 VALUES (NULL); -- 5
INSERT INTO table1 VALUES (NULL); -- 6
COMMIT;
SELECT LAST_INSERT_ID() INTO @p3; -- 6

-- Insert operations within rolled back transaction.
START TRANSACTION;
INSERT INTO table1 VALUES (NULL); -- 7 won't be inserted (hole)
INSERT INTO table1 VALUES (NULL); -- 8 won't be inserted (hole)
INSERT INTO table1 VALUES (NULL); -- 9 won't be inserted (hole)
ROLLBACK;
SELECT LAST_INSERT_ID() INTO @p2; -- 9

-- Insert multiple rows operation.
INSERT INTO table1 VALUES (NULL), (NULL), (NULL); -- 10, 11, 12
SELECT LAST_INSERT_ID() INTO @p4; -- 10

-- Let's check which LAST_INSERT_ID() values were seen at different stages of the script execution:
SELECT @p1, @p2, @p3, @p4;
+------+------+------+------+
| @p1  | @p2  | @p3  | @p4  |
+------+------+------+------+
|    3 |    9 |    6 |   10 |
+------+------+------+------+

-- The data selection from the table shows that there are holes in the identifier values:
SELECT * FROM table1;
+----+
| id |
+----+
|  1 |
|  2 |
|  3 |
|  4 |
|  5 |
|  6 |
| 10 |
| 11 |
| 12 |
+----+

Note: The next AUTO_INCREMENT value for the table can be parsed from the SHOW CREATE TABLE result or read from the AUTO_INCREMENT field of the INFORMATION_SCHEMA TABLES table.

A rarer case is when the primary key is composite — it consists of two columns. The MyISAM engine has an interesting solution that provides the possibility of generating values for such keys: the AUTO_INCREMENT column is numbered independently within each group defined by the leading key column. Let's consider the example:

CREATE TABLE roomdetails (
  room char(30) NOT NULL,
  id int NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (room, id)
) ENGINE = MYISAM;

INSERT INTO roomdetails VALUES ('ManClothing', NULL);
INSERT INTO roomdetails VALUES ('WomanClothing', NULL);
INSERT INTO roomdetails VALUES ('WomanClothing', NULL);
INSERT INTO roomdetails VALUES ('WomanClothing', NULL);
INSERT INTO roomdetails VALUES ('Fitting', NULL);
INSERT INTO roomdetails VALUES ('ManClothing', NULL);

It is quite a convenient solution.

Special values auto generation

The possibilities of the AUTO_INCREMENT attribute are limited because it can be used only for generating simple integer values. But what about complex identifier values, for example ones depending on the date/time, or like [A0001, A0002, B0150…]? To be sure, such values should not be used in primary keys, but they might be used for some auxiliary identifiers. The generation of such unique values can be automated, but it will be necessary to write code for such purposes.
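The counter behavior demonstrated above can be modeled in a few lines of plain Python. This is a toy sketch (not MySQL, and the class name is invented for illustration) showing why holes appear: the counter advances on every insert, even when the transaction later rolls back:

```python
class AutoIncrementTable:
    """Toy model of an InnoDB-style AUTO_INCREMENT counter: the counter
    advances on every insert and is NOT wound back on rollback."""

    def __init__(self):
        self.counter = 0          # last id value handed out
        self.rows = []            # committed rows (their ids)
        self.pending = []         # rows in the open transaction
        self.last_insert_id = None

    def insert(self):
        self.counter += 1
        self.pending.append(self.counter)
        self.last_insert_id = self.counter

    def commit(self):
        self.rows.extend(self.pending)
        self.pending = []

    def rollback(self):
        # the rows vanish, but the counter keeps its advanced value
        self.pending = []

t = AutoIncrementTable()
for _ in range(3):
    t.insert()
t.commit()                        # ids 1, 2, 3 committed
for _ in range(3):
    t.insert()
t.rollback()                      # ids 4, 5, 6 consumed but discarded
for _ in range(3):
    t.insert()
t.commit()                        # ids 7, 8, 9 committed
print(t.rows)                     # [1, 2, 3, 7, 8, 9] — holes at 4-6
```

The committed ids reproduce the gap pattern from the SELECT output above: values consumed inside the rolled-back transaction are simply lost.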
We can use the BEFORE INSERT trigger to perform the actions we need. Let's consider a simple example. We have the sensors table for sensor registration. Each sensor in the table has its own name, location, and type: 1 – analog, 2 – discrete, 3 – valve. Moreover, each sensor should be marked with a unique label like [symbolic representation of the sensor type + a unique 4-digit number], where the symbolic representation corresponds to one of the values [AN, DS, VL]. In our case, it is necessary to form values like [DS0001, DS0002…] and insert them into the label column.

When the trigger is executed, it is necessary to understand whether any sensors of this type already exist in the table. It is enough to assign number "1" to the first sensor of a certain type when it is added to the table. In case such sensors already exist, it is necessary to find the maximum value of the identifier in this group and form a new one by incrementing the value by 1. Naturally, it is necessary to take into account that the label should start with the desired symbol and the number should be 4-digit. So, here is the table and the trigger creation script:

CREATE TABLE sensors (
  id int NOT NULL AUTO_INCREMENT,
  type int NOT NULL,
  name varchar(255) DEFAULT NULL,
  `position` int DEFAULT NULL,
  label char(6) NOT NULL,
  PRIMARY KEY (id)
);

DELIMITER $$
CREATE TRIGGER trigger_sensors
BEFORE INSERT ON sensors
FOR EACH ROW
BEGIN
  IF (NEW.label IS NULL) THEN
    -- Find max existing label for the specified sensor type
    SELECT MAX(label) INTO @max_label FROM sensors WHERE type = NEW.type;
    IF (@max_label IS NULL) THEN
      SET @label = CASE NEW.type
        WHEN 1 THEN 'AN'
        WHEN 2 THEN 'DS'
        WHEN 3 THEN 'VL'
        ELSE 'UNKNOWN' END;
      -- Set first sensor label
      SET NEW.label = CONCAT(@label, '0001');
    ELSE
      -- Set next sensor label
      SET NEW.label = CONCAT(SUBSTR(@max_label, 1, 2), LPAD(SUBSTR(@max_label, 3) + 1, 4, '0'));
    END IF;
  END IF;
END$$
DELIMITER ;

The code for generating a new identifier can, of course, be more complex.
In this case, it is desirable to implement some of the code as a stored procedure/function. Let's try to add several sensors to the table and look at the result of the label generation:

INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 1, 'temperature 1', 10, 'AN0025'); -- Set exact label value 'AN0025'
INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 1, 'temperature 2', 11, NULL);
INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 1, 'pressure 1', 15, NULL);
INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 2, 'door 1', 10, NULL);
INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 2, 'door 2', 11, NULL);
INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 3, 'valve 1', 20, NULL);
INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 3, 'valve 2', 21, NULL);

Using UUID

Another kind of identification data is worth mentioning – the Universally Unique Identifier (UUID), also known as GUID. This is a 128-bit number suitable for use in primary keys. A UUID value can be represented as a string – CHAR(36)/VARCHAR(36) – or a binary value – BINARY(16).

Benefits: the ability to generate values from the outside, for example from an application; UUID values are unique across tables and databases, since the standard assumes uniqueness in space and time. There is a specification – A Universally Unique IDentifier (UUID) URN Namespace.

Disadvantages: possible performance problems; data size increase; more complex data analysis (debugging).

To generate this value, the MySQL function UUID() is used. New functions have been added to the Oracle MySQL 8.0 server to work with UUID values – UUID_TO_BIN, BIN_TO_UUID, IS_UUID.
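The trigger's label arithmetic can be checked outside the database. This Python sketch mirrors the same rule (the helper name next_label is hypothetical; the two-letter prefixes and 4-digit padding follow the trigger above):

```python
PREFIX = {1: "AN", 2: "DS", 3: "VL"}

def next_label(sensor_type, max_label):
    """Mirror of the BEFORE INSERT trigger logic: the first label for a
    type is <prefix>0001; otherwise take the numeric tail of the current
    maximum, add 1, and left-pad back to 4 digits (LPAD in the SQL)."""
    if max_label is None:
        return PREFIX.get(sensor_type, "UNKNOWN") + "0001"
    prefix, number = max_label[:2], int(max_label[2:])
    return prefix + str(number + 1).zfill(4)

print(next_label(2, None))      # DS0001 — first discrete sensor
print(next_label(2, "DS0001"))  # DS0002
print(next_label(1, "AN0025"))  # AN0026 — continues after the explicit AN0025
```

The last call shows why inserting the explicit label 'AN0025' in the first INSERT above makes the next generated analog label AN0026.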
Learn more about it at the Oracle MySQL website – UUID(). The code shows the use of UUID values:

CREATE TABLE table_uuid (id binary(16) PRIMARY KEY);
INSERT INTO table_uuid VALUES (UUID_TO_BIN(UUID()));
INSERT INTO table_uuid VALUES (UUID_TO_BIN(UUID()));
INSERT INTO table_uuid VALUES (UUID_TO_BIN(UUID()));

SELECT BIN_TO_UUID(id) FROM table_uuid;
+--------------------------------------+
| BIN_TO_UUID(id)                      |
+--------------------------------------+
| d9008d47-cdf4-11e8-8d6f-0242ac11001b |
| d900e2b2-cdf4-11e8-8d6f-0242ac11001b |
| d9015ce9-cdf4-11e8-8d6f-0242ac11001b |
+--------------------------------------+

You may also find the following article useful – Store UUID in an optimized way.

Using sequences

Some databases support an object type called Sequence that allows generating sequences of numbers. The Oracle MySQL server does not support this object type yet, but the MariaDB 10.3 server has the Sequence engine that allows working with the Sequence object. The Sequence engine provides DDL commands for creating and modifying sequences, as well as several auxiliary functions for working with the values. It is possible to specify the following parameters while creating a named sequence: START – a start value; INCREMENT – a step; MINVALUE/MAXVALUE – the minimum and maximum values; CACHE – the size of the cache of values; CYCLE/NOCYCLE – whether the sequence is cyclic. For more information, see the CREATE SEQUENCE documentation.

Moreover, a sequence can be used to generate unique numeric values. This possibility can be considered an alternative to AUTO_INCREMENT, but a sequence additionally provides an opportunity to specify the step between values. Let's take a look at this by using the users table. The sequence object users_seq will be used to fill the values of the primary key.
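The CHAR(36) versus BINARY(16) trade-off can be seen with Python's standard uuid module; this is a sketch of the same conversion that UUID_TO_BIN/BIN_TO_UUID perform (without MySQL 8.0's optional byte-swap flag):

```python
import uuid

# uuid1() is time-based, like MySQL's UUID(); uuid4() would be random.
u = uuid.uuid1()

text_form = str(u)       # 36-character string, what CHAR(36) would store
binary_form = u.bytes    # 16 raw bytes, what BINARY(16) would store

print(len(text_form))    # 36
print(len(binary_form))  # 16

# Round trip, analogous to BIN_TO_UUID(UUID_TO_BIN(...)):
assert uuid.UUID(bytes=binary_form) == u
```

Storing the 16-byte form rather than the 36-character string is what makes the BINARY(16) column more than half the size smaller, which matters for the performance and data-size disadvantages listed above.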
It is enough to specify the NEXT VALUE FOR expression in the DEFAULT property of the column:

CREATE SEQUENCE users_seq;

CREATE TABLE users (
  id int NOT NULL DEFAULT (NEXT VALUE FOR users_seq),
  first_name varchar(100) NOT NULL,
  last_name varchar(100) NOT NULL,
  email varchar(254) NOT NULL,
  PRIMARY KEY (id)
);

INSERT INTO users (first_name, last_name, email) VALUES ('Simon', 'Wood', 'simon@testhost.com');
INSERT INTO users (first_name, last_name, email) VALUES ('Peter', 'Hopper', 'peter@testhost.com');

Table content output (screenshot).

Information: the images for this article were produced using dbForge Studio for MySQL Express Edition, a download of which is available from https://www.devart.com/dbforge/mysql/studio/dbforgemysql80exp.exe. It's free!

Thank you to community reviewer Jean-François Gagné for his review and suggestions for this post.

The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors' own. When using the advice from this or any other online resource test ideas before applying them to your production systems, and always secure a working backup.

The post Generating Identifiers – from AUTO_INCREMENT to Sequence appeared first on Percona Community Blog.
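The sequence parameters listed above (START, INCREMENT, MAXVALUE, CYCLE) map naturally onto a small generator. This Python sketch only illustrates the semantics described in the post, not MariaDB's actual implementation (which, for example, caches values and wraps to MINVALUE):

```python
def sequence(start=1, increment=1, maxvalue=None, cycle=False):
    """Yield values the way a named sequence would: begin at START,
    advance by INCREMENT, and at MAXVALUE either wrap around (CYCLE)
    or stop producing values (NOCYCLE)."""
    value = start
    while True:
        if maxvalue is not None and value > maxvalue:
            if cycle:
                value = start   # wrap, as CYCLE does
            else:
                return          # exhausted, as NOCYCLE does
        yield value
        value += increment

users_seq = sequence(start=1, increment=1)   # like users_seq in the post
print(next(users_seq))  # 1
print(next(users_seq))  # 2

# A custom step, the main advantage over AUTO_INCREMENT:
evens = sequence(start=2, increment=2, maxvalue=6)
print(list(evens))      # [2, 4, 6]
```

Each call to next() corresponds to NEXT VALUE FOR users_seq filling the id column's DEFAULT.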
Posted 3 days ago by Bhuvanesh R
Credit: RunDeck. Rundeck is one of my favorite automation tools. Here we are going to see how we can install and configure Rundeck on a CentOS server with MySQL as a backend. I like Jenkins too, but as a sysadmin I like Rundeck a lot. You may think both can do automation. But as … The post Rundeck Series: Install And Configure RunDeck 3.0 On CentOS 7 appeared first on SQLgossip.
Posted 3 days ago by MyDBOPS
Mydbops has partnered with OSI Days for the second consecutive year. OSI Days is one of Asia's leading open source conferences.

Presentations on MySQL:

Topic: Evolution of DBAs in Cloud
Presenters: Manosh Malai, Senior DevOps / DB Consultant, Mydbops; Kabilesh P R, Co-Founder / DB Consultant, Mydbops
As Cloud is more widely adopted by the industry now, DBAs should focus on ramping up their skills in core optimisation and designing more scalable databases. Our consultants emphasise the role of the DBA in a cloud environment and share their experience in handling large-scale systems.

Topic: MySQL 8.0 = NoSQL + SQL
Presenter: Tomas Ulin, Vice President MySQL Engineering, Oracle

Topic: High Availability framework for MySQL with Semi-Synchronous replication
Presenter: Prasad Nagaraj, VP, Engineering, Scalegrid
Posted 4 days ago by Percona Community
In the context of providing managed WordPress hosting services, at Presslabs we operate with lots of small to medium-sized databases, in a DB-per-service model, as we call it. The workloads are mostly reads, so we need to efficiently scale that. The MySQL® asynchronous replication model fits the bill very well, allowing us to scale horizontally from one server—with the obvious availability pitfalls—to tens of nodes. The next release of the stack is going to be open-sourced.

As we were already using Kubernetes, we were looking for an operator that could automate our DB deployments and auto-scaling. Those available were doing synchronous replication using MySQL group replication or Galera-based replication. Therefore, we decided to write our own operator.

Solution architecture

The MySQL operator, released under the Apache 2.0 license, is based on Percona Server for MySQL for its operational improvements—like the utility user and backup locks—and relies on the tried and tested Orchestrator to do the automatic failovers. We've been using Percona Server in production for about four years, with very good results, thus encouraging us to continue implementing it in the operator as well.

The MySQL Operator-Orchestrator integration is highly important for topology, as well as for cluster healing and system failover. Orchestrator is a MySQL high availability and replication management tool that was coded and open-sourced by GitHub.

As we're writing this, the operator is undergoing a full rewrite to implement it using the Kubebuilder framework, which is a pretty logical step to simplify and standardize the operator and make it more readable to contributors and users.

Aims for the project

We've built the MySQL operator with several considerations in mind, generated by needs that no other operator could satisfy at the time we started working on it, last year.
Here are some of them:

- Easily deployable MySQL clusters in Kubernetes, following the cluster-per-service model
- DevOps-friendly, critical to basic operations such as monitoring, availability, scalability, and backup stories
- Out-of-the-box backups, scheduled or on-demand, and point-in-time recovery
- Support for cloning, both inside a cluster and across clusters

It's good to know that the MySQL operator is now in beta version and can be tested in production workloads. However, you can take a spin and decide for yourself—we're already successfully using it for a part of our production workloads at Presslabs, for our customer dashboard services. Going further to some more practical info, we've successfully installed and tested the operator on AWS, Google Cloud Platform, and Microsoft Azure, and covered the step-by-step process in three tutorials here.

Set up and configuration

It's fairly simple to use the operator. Prerequisites would be the ubiquitous Helm and kubectl. The first step is to install the controller. Two commands should be run, to make use of the Helm chart bundled in the operator:

$ helm repo add presslabs https://presslabs.github.io/charts
$ helm install presslabs/mysql-operator --name mysql-operator

These commands will deploy the controller together with an Orchestrator cluster.
The configuration parameters of the Helm chart for the operator and their default values are as follows:

Parameter | Description | Default value
replicaCount | replicas for controller | 1
image | controller container image | quay.io/presslabs/mysql-operator:v0.1.5
imagePullPolicy | controller image pull policy | IfNotPresent
helperImage | mysql helper image | quay.io/presslabs/mysql-helper:v0.1.5
installCRDs | whether or not to install CRDs | true
resources | controller pod resources | {}
nodeSelector | controller pod nodeSelector | {}
tolerations | controller pod tolerations | {}
affinity | controller pod affinity | {}
extraArgs | args that are passed to controller | []
rbac.create | whether or not to create rbac service account, role and roleBinding | true
rbac.serviceAccountName | if rbac.create is false then this service account is used | default
orchestrator.replicas | Orchestrator replicas | 3
orchestrator.image | Orchestrator container image | quay.io/presslabs/orchestrator:latest

Further Orchestrator values can be tuned by checking the values.yaml config file.

Cluster deployment

The next step is to deploy a cluster. For this, you need to create a Kubernetes secret that contains the MySQL credentials (root password, database name, user name, user password) to initialize the cluster, and a custom resource MySQL cluster, as you can see below.

An example of a secret (example-cluster-secret.yaml):

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  ROOT_PASSWORD: # root password, base_64 encoded

An example of a simple cluster (example-cluster.yaml):

apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: my-cluster
spec:
  replicas: 2
  secretName: my-secret

The usual kubectl commands can be used to do various operations, such as a basic listing:

$ kubectl get mysql

or detailed cluster information:

$ kubectl describe mysql my-cluster

Backups

A further step could be setting up backups on an object storage service.
Creating a backup is as simple as creating a MysqlBackup resource, as seen in this example (example-backup.yaml):

apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlBackup
metadata:
  name: my-cluster-backup
spec:
  clusterName: my-cluster
  backupUri: gs://bucket_name/path/to/backup.xtrabackup.gz
  backupSecretName: my-cluster-backup-secret

To provide credentials for a storage service, you have to create a secret and specify your credentials for your provider; we currently support AWS, GCS or HTTP, as in this example (example-backup-secret.yaml):

apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-backup-secret
type: Opaque
data:
  # AWS
  AWS_ACCESS_KEY_ID: # add here your key, base_64 encoded
  AWS_SECRET_KEY: # and your secret, base_64 encoded
  # or Google Cloud, base_64 encoded
  # GCS_SERVICE_ACCOUNT_JSON_KEY: # your key, base_64 encoded
  # GCS_PROJECT_ID: # your ID, base_64 encoded

Also, recurrent cluster backups and cluster initialization from a backup are some additional operations you can opt for. For more details, head to our documentation page. Further operations and new usage information are kept up to date on the project homepage. Our future plans include developing the MySQL operator and integrating it with Percona Management & Monitoring for better exposing the internals of the Kubernetes DB cluster.

Open source community

Community contributions are highly appreciated; we should mention the pull requests from Platform9, so far, but also the sharp questions on the channel we've opened on Gitter, for which we do our best to answer in detail, as well as issue reports from early users of the operator.

Come and talk to us about the project

Along with my colleague Calin Don, I'll be talking about this at Percona Live Europe in November. It would be great to have the chance to meet other enthusiasts and talk about what we've discovered so far!

The content in this blog is provided in good faith by members of the open source community.
The content is not edited or tested by Percona, and views expressed are the authors' own. When using the advice from this or any other online resource test ideas before applying them to your production systems, and always secure a working backup.

The post Deploying MySQL on Kubernetes with a Percona-based Operator appeared first on Percona Community Blog.
Posted 4 days ago by MySQL Performance Blog
With the exception of the three configuration variables described here, ProxySQL will only parse the configuration files the first time it is started, or if the proxysql.db file is missing for some other reason. If we want to change any of this data we need to do so via ProxySQL's admin interface and then save the changes to disk. That's fine if ProxySQL is running, but what if it won't start because of these values?

For example, perhaps we accidentally configured ProxySQL to run on port 3306 and restarted it, but there's already a production MySQL instance running on this port. ProxySQL won't start, so we can't edit the value that way:

2018-10-02 09:18:33 network.cpp:53:listen_on_port(): [ERROR] bind(): Address already in use

We could delete proxysql.db and have it reload the configuration files, but that would mean any changes we didn't mirror into the configuration files will be lost. Another option is to edit ProxySQL's database file using sqlite3:

[root@centos7-pxc57-4 ~]# cd /var/lib/proxysql/
[root@centos7-pxc57-4 proxysql]# sqlite3 proxysql.db
sqlite> SELECT * FROM global_variables WHERE variable_name='mysql-interfaces';
mysql-interfaces|127.0.0.1:3306
sqlite> UPDATE global_variables SET variable_value='127.0.0.1:6033' WHERE variable_name='mysql-interfaces';
sqlite> SELECT * FROM global_variables WHERE variable_name='mysql-interfaces';
mysql-interfaces|127.0.0.1:6033

Or if we have a few edits to make we may prefer to do so with a text editor:

[root@centos7-pxc57-4 ~]# cd /var/lib/proxysql/
[root@centos7-pxc57-4 proxysql]# sqlite3 proxysql.db
sqlite> .output /tmp/global_variables
sqlite> .dump global_variables
sqlite> .exit

The above commands will dump the global_variables table into a file in SQL format, which we can then edit:

[root@centos7-pxc57-4 proxysql]# grep mysql-interfaces /tmp/global_variables
INSERT INTO "global_variables" VALUES('mysql-interfaces','127.0.0.1:3306');
[root@centos7-pxc57-4 proxysql]# vim /tmp/global_variables
[root@centos7-pxc57-4 proxysql]# grep mysql-interfaces /tmp/global_variables
INSERT INTO "global_variables" VALUES('mysql-interfaces','127.0.0.1:6033');

Now we need to restore this data. We'll use the .restore command to empty the table (as we're restoring from a missing backup), then read the edited dump back in:

[root@centos7-pxc57-4 proxysql]# sqlite3 proxysql.db
sqlite> .restore global_variables
sqlite> .read /tmp/global_variables
sqlite> .exit

Once we've made the change, we should be able to start ProxySQL again:

[root@centos7-pxc57-4 proxysql]# /etc/init.d/proxysql start
Starting ProxySQL: DONE!
[root@centos7-pxc57-4 proxysql]# lsof -i | grep proxysql
proxysql 15171 proxysql 19u IPv4 265881 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 20u IPv4 265882 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 21u IPv4 265883 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 22u IPv4 265884 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 23u IPv4 266635 0t0 TCP *:6032 (LISTEN)

While you are here, you might enjoy my recent post Using ProxySQL to connect to IPV6-only databases over IPV4. You can download ProxySQL from Percona repositories, and you might also want to check out our recorded webinars that feature ProxySQL too.
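The same dump-edit-reload cycle can be exercised safely on a throwaway database. This Python sketch (using the standard sqlite3 module rather than the sqlite3 shell; the table layout mimics ProxySQL's global_variables) reproduces the workflow in miniature:

```python
import sqlite3

# Miniature of the workflow above: a ProxySQL-style global_variables
# table is dumped to SQL text, the value is edited, the table is
# emptied, and the edited dump is replayed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE global_variables (variable_name TEXT, variable_value TEXT)")
conn.execute("INSERT INTO global_variables VALUES ('mysql-interfaces', '127.0.0.1:3306')")

# The ".dump global_variables" step: iterdump() emits SQL statements.
dump = "\n".join(conn.iterdump())

# The "vim /tmp/global_variables" step: change the port in the dump text.
dump = dump.replace("127.0.0.1:3306", "127.0.0.1:6033")

# Empty the table, then replay the edited dump (the ".read" step).
conn.execute("DROP TABLE global_variables")
conn.executescript(dump)

row = conn.execute(
    "SELECT variable_value FROM global_variables WHERE variable_name='mysql-interfaces'"
).fetchone()
print(row[0])  # 127.0.0.1:6033
```

Trying this on a copy first is a cheap way to confirm your edited dump replays cleanly before touching the real proxysql.db.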
The post How to Fix ProxySQL Configuration When it Won’t Start appeared first on Percona Database Performance Blog.
Posted 4 days ago by Keith Hollman
So, if we're applying GDPR to our system, and we're already making use of MySQL Transparent Data Encryption / keyring, then here's an example of how to migrate from the file-based keyring to the encrypted keyring. Online. If you're looking to go deeper into TDE then I suggest reading the MySQL Server Team's InnoDB Transparent Tablespace Encryption blog.

You'd already have your environment running, whereas I have to create one... give me a minute please, 8.0.12 here we come:

mysqld --defaults-file=my_okv.cnf --initialize-insecure --user=khollman
mysqld --defaults-file=my_okv.cnf --user=khollman &
mysql --defaults-file=my_okv.cnf -uroot

show plugins;
show variables like 'keyring%';
alter user 'root'@'localhost' identified by 'oracle';
create database nexus;
create table nexus.replicant (
  id INT(11) NOT NULL AUTO_INCREMENT,
  `First name` varchar(40) not null default '',
  `Last name` varchar(40) not null default '',
  `Replicant` enum('Yes','No') not null default 'Yes',
  PRIMARY KEY (id)
) engine=InnoDB row_format=COMPACT ENCRYPTION='Y';
INSERT INTO nexus.`replicant` (`First name`,`Last name`,`Replicant`) VALUES
  ('Roy','Hauer','Yes'), ('Rutger','Batty','Yes'), ('Voight','Kampff','Yes'),
  ('Pris','Hannah','Yes'), ('Daryl','Stratton','Yes'), ('Rachael','Young','Yes'),
  ('Sean','Tyrell','Yes'), ('Rick','Ford','No'), ('Harrison','Deckard','Yes');
select * from nexus.replicant;

Now we have an environment using the file-based keyring TDE. Before migrating the key store, there are a few things we need to be aware of, as well as reading the manual on this topic:

mysqld. Yes, we start up another mysqld process, but it's not a fully functioning server, far from it. It is just a means to migrate the keys from the old file-based keyring to the new encrypted file. So don't worry about the defaults-file, the innodb_xxxx params nor anything else. We actually need to reuse the existing datadir.

datadir.
As just mentioned, don't try to use another datadir, as the process won't find any files there to encrypt with the new key and won't be successful. Use the existing online server datadir. (Of course, I recommend this process be run in a non-production test environment first!)

-source & -destination. I think this is quite obvious: the plugin we're coming from, and going to.

keyring_file_data is the existing file-based keyring being used. keyring_encrypted_file_data & _password are the location of the new encrypted keyring file and its password.

keyring-migration- params. We need to connect to the existing instance with super user privileges. As it's local to the instance, we can use -socket.

mysqld --basedir=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64 \
  --plugin-dir=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/lib/plugin \
  --lc_messages_dir=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/share \
  --datadir=/opt/mysql/okv/data \
  --keyring-migration-source=keyring_file.so \
  --keyring_file_data=/opt/mysql/okv/keyring \
  --keyring-migration-destination=keyring_encrypted_file.so \
  --keyring_encrypted_file_data=/opt/mysql/okv/keyring_enc \
  --keyring_encrypted_file_password=oracle2018 \
  --keyring-migration-socket=/opt/mysql/okv/mysql.socket \
  --keyring-migration-user=root \
  --keyring-migration-password=oracle

And if, and only if, the migration is successful, you should see output like the following. Anything else, i.e. if no output comes back, or some of the lines don't appear in your scenario, double-check the parameters in the previous command, as it's more than likely something there is impeding a successful key migration:

2018-10-08T11:26:22.227161Z 0 [Note] [MY-010098] [Server] --secure-file-priv is set to NULL. Operations related to importing and exporting data are disabled
2018-10-08T11:26:22.227219Z 0 [Note] [MY-010949] [Server] Basedir set to /usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/.
2018-10-08T11:26:22.227226Z 0 [System] [MY-010116] [Server] mysqld (mysqld 8.0.12-commercial) starting as process 13758
2018-10-08T11:26:22.254234Z 0 [Note] [MY-011085] [Server] Keyring migration successful.
2018-10-08T11:26:22.254381Z 0 [Note] [MY-010120] [Server] Binlog end
2018-10-08T11:26:22.254465Z 0 [Note] [MY-010733] [Server] Shutting down plugin 'keyring_encrypted_file'
2018-10-08T11:26:22.254642Z 0 [Note] [MY-010733] [Server] Shutting down plugin 'keyring_file'
2018-10-08T11:26:22.255757Z 0 [System] [MY-010910] [Server] mysqld: Shutdown complete (mysqld 8.0.12-commercial) MySQL Enterprise Server - Commercial.

Migrated.

To make sure the instance picks up the new parameters, and before any risk of restarting it, we’ll need to add the new ‘encrypted’ params to the my.cnf:

[mysqld]
plugin_dir =/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/lib/plugin
#early-plugin-load =keyring_file.so
#keyring_file_data =/opt/mysql/okv/keyring
early-plugin-load =keyring_encrypted_file.so
keyring_encrypted_file_data =/opt/mysql/okv/keyring_enc
keyring_encrypted_file_password =oracle2018
...

And at the next most convenient (or least inconvenient) moment, restart the instance:

mysqladmin --defaults-file=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/my_okv.cnf -uroot -poracle shutdown
mysqld --defaults-file=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/my_okv.cnf --user=khollman &

And let’s double-check which keyring plugin we’re using:

select * from information_schema.plugins where plugin_name like '%keyring%' \G
*************************** 1.
row ***************************
          PLUGIN_NAME: keyring_encrypted_file
       PLUGIN_VERSION: 1.0
        PLUGIN_STATUS: ACTIVE
          PLUGIN_TYPE: KEYRING
  PLUGIN_TYPE_VERSION: 1.1
       PLUGIN_LIBRARY: keyring_encrypted_file.so
PLUGIN_LIBRARY_VERSION: 1.9
        PLUGIN_AUTHOR: Oracle Corporation
   PLUGIN_DESCRIPTION: store/fetch authentication data to/from an encrypted file
       PLUGIN_LICENSE: PROPRIETARY
          LOAD_OPTION: ON
1 row in set (0,00 sec)

And also that we can still select the data from the encrypted tablespace:

select * from nexus.replicant;
+----+------------+-----------+-----------+
| id | First name | Last name | Replicant |
+----+------------+-----------+-----------+
|  1 | Roy        | Hauer     | Yes       |
|  2 | Rutger     | Batty     | Yes       |
|  3 | Voight     | Kampff    | Yes       |
|  4 | Pris       | Hannah    | Yes       |
|  5 | Daryl      | Stratton  | Yes       |
|  6 | Rachael    | Young     | Yes       |
|  7 | Sean       | Tyrell    | Yes       |
|  8 | Rick       | Ford      | No        |
|  9 | Harrison   | Deckard   | Yes       |
+----+------------+-----------+-----------+
9 rows in set (0,00 sec)

Seems quite straightforward. Well, how about changing the keyring_encrypted_file_password value to something different (in a test environment!), restarting the instance, and running the same select on the same table. Hey presto:

select * from nexus.replicant;
ERROR 3185 (HY000): Can't find master key from keyring, please check in the server log if a keyring plugin is loaded and initialized successfully.
Error (Code 3185): Can't find master key from keyring, please check in the server log if a keyring plugin is loaded and initialized successfully.
Error (Code 1877): Operation cannot be performed. The table 'nexus.replicant' is missing, corrupt or contains bad data.

Hope this helps someone out there. Enjoy encrypting! Now we can run encrypted backups safely, without worrying about moving those files around between different systems.
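As a follow-up to the password-change experiment above, two statements are worth keeping handy once the encrypted keyring is in place. This is a minimal sketch using the schema from this post’s example; the LIKE pattern on CREATE_OPTIONS is just an illustrative filter:

```sql
-- Confirm which tables were created with the ENCRYPTION option
SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS
FROM INFORMATION_SCHEMA.TABLES
WHERE CREATE_OPTIONS LIKE '%ENCRYPTION%';

-- Rotate the InnoDB master key periodically, or after any suspected
-- exposure. Only the tablespace keys are re-encrypted with the new
-- master key; the table data itself is not rewritten.
ALTER INSTANCE ROTATE INNODB MASTER KEY;
```

Master key rotation is cheap precisely because of that two-tier key design, so there is little reason not to schedule it.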
Posted 4 days ago by MySQL Performance Blog
After much speculation following the announcement in Santa Clara earlier this year, we are delighted to announce Percona Live 2019 will be taking place in Austin, Texas. Save the dates in your diary for May 28-30, 2019! The conference will take place just after Memorial Day at The Hyatt Regency, Austin on the shores of Lady Bird Lake. This is also an ideal central location for those who wish to extend their stay and explore what Austin has to offer! Call for papers, ticket sales and sponsorship opportunities will be announced soon, so stay tuned! In other Percona Live news, we’re less than four weeks away from this year’s European conference, taking place in Frankfurt, Germany on 5-7 November. The tutorials and breakout sessions have been announced, and you can view the full schedule here. Tickets are still on sale so don’t miss out, book yours here today! Register Now for Percona Live Europe 2018