Posted 1 day ago by Open Query
In the Amazon space, any EC2 or service instance can "disappear" at any time. Depending on which service is affected, the service will be automatically restarted. In EC2 you can choose whether an interrupted instance will be restarted, or left shut down. For an Aurora instance, an interrupted instance is always restarted. Makes sense. The restart timing, and other consequences during the process, are noted in our post on Aurora Failovers.

Aurora Testing Limitations

As mentioned earlier, we love testing "uncontrolled" failovers. That is, we want to be able to pull any plug on any service, and see that the environment as a whole continues to do its job. We can't do that with Aurora, because we can't control the essentials: power button, reset switch, the ability to kill processes on a server, and the ability to change firewall settings. In Aurora, an instance is either running, or will (again) be running shortly. That much we know.

Aurora MySQL also offers some commands that simulate various failure scenarios, but since they are built in, we can presume that those scenarios are both very well tested and covered by the automation around the environment. Those clearly defined cases are exactly the situations we're not interested in. What if, for instance, a server accepts new connections but is otherwise unresponsive? We've seen MySQL do this on occasion. Does Aurora catch this? We don't know, and we have no way of testing that, or many other possible problem scenarios. That irks.

The Need to Know

If an automated system is able to catch a situation, that's great. But if your environment can end up in a state such as described above and the automated systems don't catch and handle it, you could be dead in the water for an undefined amount of time.

If you have scripts to catch cases such as these, but the automated systems catch them as well, you want to be sure that you don't trigger "double failovers" or otherwise interfere with a failover-in-progress. So either way, you need to know whether a situation is caught and handled, and you need to be able to test specific scenarios.

In summary: when you know the facts, you can assess the risk in relation to your particular needs, and mitigate where and as desired. A corporate guarantee of "everything is handled and it'll be fine" (or as we say in Australia, "She'll be right, mate!") is wholly unsatisfactory for this type of risk analysis and mitigation exercise. Guarantees and promises, and even legal documents, don't keep environments online. Consequently, promises and legalities don't keep a company alive.

So what does? In this case, engineers. But to be able to do their job, engineers need to know what parameters they're working with, and have the ability to test any unknowns. Unfortunately Aurora is, also in this respect, a black box. You have to trust, and can't comprehensively verify. Sigh.
Posted 1 day ago by MyDBOPS
This is part 4 of the MaxScale blog series:

Maxscale and Galera
Maxscale Basic Administration
Maxscale for Replication

MaxScale started supporting Amazon Aurora from version 2.1, which comes with a BSL license; we are fine as long as we use only 3 nodes. Amazon Aurora (see our previous blog) is a brilliant technology built by the AWS team that imitates the features of MySQL. Aurora keeps getting better with regard to scaling and features with each release of its engine; the current version is 1.16 (at the time of writing). Aurora's architecture and features can be seen here. In this blog I will be explaining a MaxScale deployment for Aurora.

MaxScale version: maxscale-2.1.13-1.x86_64
OS version: Amazon Linux AMI 2016.09
Cores: 4
RAM: 8GB

Note: Make sure to place the EC2 machine in the same Availability Zone (AZ) as the Aurora cluster, which greatly helps in reducing network latency. For the purpose of this blog I have used 3 instances of Aurora (1 master + 2 read replicas).

Now let's speak about the endpoints that Aurora provides.

Aurora Endpoints

Endpoints are the connection URIs provided by AWS to connect to the Aurora database. Aurora provides the following endpoints:

Cluster Endpoint
Reader Endpoint
Instance Endpoint

Cluster Endpoint: An endpoint for an Aurora DB cluster that connects to the current primary instance for that DB cluster. The cluster endpoint provides failover support for read/write connections to the DB cluster. If the current primary instance of a DB cluster fails, Aurora automatically fails over to a new primary instance.

Reader Endpoint: An endpoint for an Aurora DB cluster that connects to one of the available Aurora Replicas for that DB cluster. Each Aurora DB cluster has a reader endpoint. The reader endpoint provides load balancing support for read-only connections to the DB cluster.

Instance Endpoint: An endpoint for a DB instance in an Aurora DB cluster that connects to that specific DB instance. Each DB instance in a DB cluster, regardless of instance type, has its own unique instance endpoint.

Among these different endpoints, we will be using the "Instance Endpoint", i.e., the individual endpoints, in the MaxScale config. The problem with the cluster and reader endpoints is that the application must split reads and writes at the application layer in order to use them efficiently. If a user migrates to Aurora for scalability, we instead need an intelligent proxy like MaxScale or ProxySQL. Currently MaxScale has built-in support for Aurora.

How is Aurora monitored by MaxScale?

MaxScale uses a special monitor module called 'auroramon', since Aurora does not follow the standard MySQL replication protocol for replicating data to its replicas.

How does 'auroramon' identify master and replica from the 'Instance Endpoint'?

Each node inside the Aurora cluster (in our use case, 1 master + 2 replicas) has an aurora_server_id (@@aurora_server_id), which is a unique identifier for each instance/node. Aurora also stores all the relevant information about replication, including the aurora_server_id, inside the table information_schema.replica_host_status. Below is the structure of the table.
+----------------------------------------+---------------------+------+-----+---------------------+-------+
| Field                                  | Type                | Null | Key | Default             | Extra |
+----------------------------------------+---------------------+------+-----+---------------------+-------+
| SERVER_ID                              | varchar(100)        | NO   |     |                     |       |
| SESSION_ID                             | varchar(100)        | NO   |     |                     |       |
| IOPS                                   | int(10) unsigned    | NO   |     | 0                   |       |
| READ_IOS                               | bigint(10) unsigned | NO   |     | 0                   |       |
| PENDING_READ_IOS                       | int(10) unsigned    | NO   |     | 0                   |       |
| CPU                                    | double              | NO   |     | 0                   |       |
| DURABLE_LSN                            | bigint(20) unsigned | NO   |     | 0                   |       |
| ACTIVE_LSN                             | bigint(20) unsigned | NO   |     | 0                   |       |
| LAST_TRANSPORT_ERROR                   | int(10)             | NO   |     | 0                   |       |
| LAST_ERROR_TIMESTAMP                   | datetime            | NO   |     | 0000-00-00 00:00:00 |       |
| LAST_UPDATE_TIMESTAMP                  | datetime            | NO   |     | 0000-00-00 00:00:00 |       |
| MASTER_SLAVE_LATENCY_IN_MICROSECONDS   | bigint(10) unsigned | NO   |     | 0                   |       |
| REPLICA_LAG_IN_MILLISECONDS            | double              | NO   |     | 0                   |       |
| LOG_STREAM_SPEED_IN_KiB_PER_SECOND     | double              | NO   |     | 0                   |       |
| LOG_BUFFER_SEQUENCE_NUMBER             | bigint(10) unsigned | NO   |     | 0                   |       |
| IS_CURRENT                             | tinyint(1) unsigned | NO   |     | 0                   |       |
| OLDEST_READ_VIEW_TRX_ID                | bigint(10) unsigned | NO   |     | 0                   |       |
| OLDEST_READ_VIEW_LSN                   | bigint(10) unsigned | NO   |     | 0                   |       |
| HIGHEST_LSN_RECEIVED                   | bigint(1) unsigned  | NO   |     | 0                   |       |
| CURRENT_READ_POINT                     | bigint(1) unsigned  | NO   |     | 0                   |       |
| CURRENT_REPLAY_LATENCY_IN_MICROSECONDS | bigint(1) unsigned  | NO   |     | 0                   |       |
| AVERAGE_REPLAY_LATENCY_IN_MICROSECONDS | bigint(1) unsigned  | NO   |     | 0                   |       |
| MAX_REPLAY_LATENCY_IN_MICROSECONDS     | bigint(1) unsigned  | NO   |     | 0                   |       |
+----------------------------------------+---------------------+------+-----+---------------------+-------+

The above table structure is subject to change with the Aurora version. Another important column is SESSION_ID, which holds a unique identifier for replica nodes, but for the master server it is set to 'MASTER_SESSION_ID'.
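Based on those columns, the current writer can be picked out with a simple query against any instance endpoint. The sketch below illustrates the idea; the exact query auroramon runs internally may differ:

```sql
-- The row whose SESSION_ID equals 'MASTER_SESSION_ID' identifies the
-- current writer; all other rows are replicas.
SELECT server_id,
       session_id,
       replica_lag_in_milliseconds
FROM information_schema.replica_host_status
WHERE session_id = 'MASTER_SESSION_ID';

-- Compare server_id in that row with @@aurora_server_id on the node you
-- are connected to, to learn whether this node is the master.
SELECT @@aurora_server_id;
```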
Based on these two columns, MaxScale's monitor segregates master and replicas and sets the status flags based on which the router sends traffic to the nodes.

Now let's get into the MaxScale configuration for Aurora. Installation and administration have been covered in the previous blogs, part 1 and part 2. Below is the Aurora monitor module configuration:

[Aurora-Monitor]
type=monitor
module=auroramon
servers=nodeA,nodeB,nodeC
user=USERXXXX
passwd=9BE2F1F3B182F061CEA59799AA758D1DAE6B8ADF32845517C13EA0122A5BA7F5
monitor_interval=2500

Below are the server definitions. I have named the nodes nodeA, nodeB and nodeC, and provided each one's instance endpoint:

[nodeA]
type=server
address=prodtest-rr-tier1XX.xxxxxx.us-east-1.rds.amazonaws.com
port=3306
protocol=MySQLBackend
persistpoolmax=200
persistmaxtime=3600

[nodeB]
type=server
address=proddtest-rr-tier1YY.xxxxxxx.us-east-1.rds.amazonaws.com
port=3306
protocol=MySQLBackend
persistpoolmax=200
persistmaxtime=3600

[nodeC]
type=server
address=proddtest-rr-tier1ZZ.XXXXXXX.us-east-1.rds.amazonaws.com
port=3306
protocol=MySQLBackend
persistpoolmax=200
persistmaxtime=3600

In the server definitions above I have enabled connection pooling by defining persistpoolmax and persistmaxtime; this greatly helps in riding out instance restarts caused by Aurora's memory handling. Once all the config is done, you can reload the MaxScale config or restart the MaxScale service.
-------------------+----------------------------------------------------------+-------+-------------+--------------------
Server             | Address                                                  | Port  | Connections | Status
-------------------+----------------------------------------------------------+-------+-------------+--------------------
nodeA              | proddtest-rr-tier1XX.xxxxxx.us-east-1.rds.amazonaws.com  | 3306  |           0 | Slave, Running
nodeB              | proddtest-rr-tier1YY.xxxxxxx.us-east-1.rds.amazonaws.com | 3306  |           0 | Slave, Running
nodeC              | proddtest-rr-tier1ZZ.XXXXXXX.us-east-1.rds.amazonaws.com | 3306  |           0 | Master, Running
-------------------+----------------------------------------------------------+-------+-------------+--------------------

Architecture

Below is the method to check the read/write split from the command line. By default, a query inside 'start transaction;' goes to the master node. Since I am connecting directly from the MaxScale node, I have used the local socket; you can also use IP and port instead.

[root@ip-XXXXXX mydbops]# mysql -u mydbops -pXXXXXX -S /tmp/ClusterMaster -e "show global variables like '%aurora_server_id%';"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------------+----------------------+
| Variable_name    | Value                |
+------------------+----------------------+
| aurora_server_id | proddtest-rr-tier1ZZ |
+------------------+----------------------+

[root@ip-XXXXXXX mydbops]# mysql -u mydbops -pXXXXX -S /tmp/ClusterMaster -e "start transaction;show global variables like '%aurora_server_id%';commit;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------------+----------------------+
| Variable_name    | Value                |
+------------------+----------------------+
| aurora_server_id | proddtest-rr-tier1XX |
+------------------+----------------------+

To monitor the percentage of read/write split between master and replicas, and the router configuration stats, see below.
MaxScale> show service "Splitter Service"

Service:                              Splitter Service
Router:                               readwritesplit
State:                                Started

use_sql_variables_in:      all
slave_selection_criteria:  LEAST_BEHIND_MASTER
master_failure_mode:       fail_instantly
max_slave_replication_lag: 30
retry_failed_reads:        true
strict_multi_stmt:         true
strict_sp_calls:           false
disable_sescmd_history:    true
max_sescmd_history:        0
master_accept_reads:       true

Number of router sessions:             5
Current no. of router sessions:        2
Number of queries forwarded:          20
Number of queries forwarded to master: 8 (40.00%)
Number of queries forwarded to slave: 12 (60.00%)
Number of queries forwarded to all:    4 (20.00%)

Started:                              Mon Feb 19 15:53:48 2018
Root user access:                     Disabled

Backend databases:
[prodtest-rr-tier1XX.xxxxxx.us-east-1.rds.amazonaws.com]:3306    Protocol: MySQLBackend    Name: nodeA
[proddtest-rr-tier1YY.xxxxxxx.us-east-1.rds.amazonaws.com]:3306    Protocol: MySQLBackend    Name: nodeB
[proddtest-rr-tier1ZZ.XXXXXXX.us-east-1.rds.amazonaws.com]:3306    Protocol: MySQLBackend    Name: nodeC

Total connections:                    7
Currently connected:                  2

Now you can start using MaxScale to scale your queries across the Aurora cluster. It is no longer mandatory to segregate read and write queries in the application for Aurora.
Posted 1 day ago by Reggie Burnett
Open source is at the foundation of MySQL, and the biggest and best part of open source is the legion of developers and users who use and contribute to our great product. It has always been of incredible importance to us to interact with our friends in the MySQL space, and one of the great ways of doing that is via IRC (Internet Relay Chat) on Freenode. While that still remains a great option, many other systems have been developed that offer other advantages. One of those is Slack.

We wanted to create a Slack space where our users could hang out, help each other, and interact with MySQL developers. We're in no way wanting to replace IRC, but just wanting to make it even easier to solve your MySQL problems and learn about the many great things we are working on. Head on over to http://mysqlcommunity.slack.com to join in on the fun!
Posted 2 days ago by MariaDB
MariaDB Connector/J 2.2.2 and 1.7.2 now available

dbart Wed, 02/21/2018 - 11:40

The MariaDB project is pleased to announce the immediate availability of MariaDB Connector/J 2.2.2 and MariaDB Connector/J 1.7.2. See the release notes and changelogs for details, and visit mariadb.com/downloads/connector to download.

Download MariaDB Connector/J 2.2.2: Release Notes, Changelog
Download MariaDB Connector/J 1.7.2: Release Notes, Changelog
Posted 2 days ago by Ramesh Sivaraman
In this blog post, I'll look at how to make Percona XtraDB Cluster and SELinux work together. Recently, I encountered an issue with Percona XtraDB Cluster startup. We tried to set up a three-node cluster using Percona XtraDB Cluster with a Vagrant CentOS box, but somehow node2 was not starting. I did not get enough information to debug the issue in the donor/joiner error log. I got only the following error messages:

2018-02-08 16:58:48 7910 [Note] WSREP: Running: 'wsrep_sst_xtrabackup-v2 --role 'joiner' --address '' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '7910' --binlog 'mysql-bin' '
2018-02-08 16:58:48 7910 [ERROR] WSREP: Failed to read 'ready ' from: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '7910' --binlog 'mysql-bin'
        Read: '(null)'
2018-02-08 16:58:48 7910 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --address '' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '7910' --binlog 'mysql-bin' : 2 (No such file or directory)
2018-02-08 16:58:48 7910 [ERROR] WSREP: Failed to prepare for 'xtrabackup-v2' SST. Unrecoverable.
2018-02-08 16:58:48 7910 [ERROR] Aborting
2018-02-08 16:58:50 7910 [Note] WSREP: Closing send monitor...

The donor node error log also failed to give any information to debug the issue. After spending a few hours on the problem, one of our developers (Krunal) found that the error is due to SELinux. By default, SELinux is enabled in Vagrant CentOS boxes. We have already documented how to disable SELinux when installing Percona XtraDB Cluster. Since we did not find any SELinux-related error in the error log, we had to spend a few hours finding the root cause. You should also disable SELinux on the donor node to start the joiner node.
Otherwise, the SST script starts but startup will fail with this error:

2018-02-09T06:55:06.099021Z 0 [Note] WSREP: Initiating SST/IST transfer on DONOR side (wsrep_sst_xtrabackup-v2 --role 'donor' --address '' --socket '/var/lib/mysql/mysql.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' '' --gtid '0dc70996-0d60-11e8-b008-074abdb3291a:1')
2018-02-09T06:55:06.099556Z 2 [Note] WSREP: DONOR thread signaled with 0
2018-02-09T06:55:06.099722Z 0 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'donor' --address '' --socket '/var/lib/mysql/mysql.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' '' --gtid '0dc70996-0d60-11e8-b008-074abdb3291a:1': 2 (No such file or directory)
2018-02-09T06:55:06.099781Z 0 [ERROR] WSREP: Command did not run: wsrep_sst_xtrabackup-v2 --role 'donor' --address '' --socket '/var/lib/mysql/mysql.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' '' --gtid '0dc70996-0d60-11e8-b008-074abdb3291a:1'

Disable SELinux on all nodes to start Percona XtraDB Cluster. The Percona XtraDB Cluster development team is working on providing the proper error message for SELinux issues.
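For reference, a common way to disable SELinux on a CentOS box looks like the following. This is a generic sketch, not taken from the Percona documentation; consider whether a permissive policy is acceptable in your environment before applying it:

```shell
# Switch SELinux to permissive mode immediately (takes effect at once,
# but does not survive a reboot).
sudo setenforce 0

# Make the change persistent across reboots via /etc/selinux/config.
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Verify the current mode.
getenforce
```

Run this on every node (donor and joiner) before starting the cluster.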
Posted 2 days ago by MySQL Performance Blog
The conference session schedule for the seventh annual Percona Live 2018 Open Source Database Conference, taking place April 23-25 at the Santa Clara Convention Center in Santa Clara, CA, is now live and available for review! Advance Registration Discounts can be purchased through March 4, 2018, 11:30 p.m. PST.

Percona Live Open Source Database Conference 2018 is the premier open source database event. With a theme of "Championing Open Source Databases," the conference will feature multiple tracks, including MySQL, MongoDB, Cloud, PostgreSQL, Containers and Automation, Monitoring and Ops, and Database Security. Once again, Percona will be offering a low-cost database 101 track for beginning users who want to start learning how to use and operate open source databases.

Major areas of focus at the conference include:

Database operations and automation at scale, featuring speakers from Facebook, Slack, Github and more
Databases in the cloud - how database-as-a-service (DBaaS) is changing the DB landscape, featuring speakers from AWS, Microsoft, Alibaba and more
Security and compliance - how GDPR and other government regulations are changing the way we manage databases, featuring speakers from Fastly, Facebook, Pythian, Percona and more
Bridging the gap between developers and DBAs - finding common ground, featuring speakers from Square, Oracle, Percona and more

Conference Session Schedule

Conference sessions take place April 24-25 and will feature 90+ in-depth talks by industry experts related to each of the key areas. Several sessions from Oracle and Percona will focus on how the new features and enhancements in the upcoming release of MySQL 8.0 will impact businesses. Conference session examples include:

What Are the Main New Key Features in MySQL 8.0? - Geir Høydalsvik, Oracle
Securing Access to Facebook's Databases - Andrew Regner, Facebook
Designing and Launching the Next-Generation Database System @ Slack: Whiteboard to Production - Guido Iaquinti, Slack
MongoDB for a High Volume Logistics Application - Eric Potvin, Shipwire
MongoDB Cluster Topology, Management and Optimization - Steven Wang, Tesla
MySQL at Scale at Square - Daniel Nichter, Square
How Microsoft Built MySQL, PostgreSQL and MariaDB for the Cloud - Jun Su, Microsoft
Tuning PostgreSQL for High-Write Workloads - Grant McAlister, Amazon Web Services
Containerizing Databases at New Relic: What We Learned - Joshua Galbraith and Bryant Vinisky, New Relic
GDPR and Security Compliance for the DBA - Tyler Duzan and Jeff Sandstrom, Percona
MySQL at Twitter: No More Forkin' - Migrating to MySQL Community version - Ronald Ramon Francisco and Jojo Antonio, Twitter
Database Security as a Function: Scaling to Your Organization's Needs - Laine Campbell, Fastly
Securing Your Data on PostgreSQL - Payal Singh, OmniTI Computer Consulting Inc.
MySQL Performance Optimization and Troubleshooting with Percona Monitoring and Management - Peter Zaitsev, Percona

Sponsorships

Sponsorship opportunities for Percona Live Open Source Database Conference 2018 are available and offer the opportunity to interact with the DBAs, sysadmins, developers, CTOs, CEOs, business managers, technology evangelists, solution vendors and entrepreneurs who typically attend the event. Contact live@percona.com for sponsorship details.

Diamond Sponsors - Continuent, VividCortex
Platinum - Microsoft
Gold Sponsors - Facebook, Grafana
Bronze Sponsors - Altinity, BlazingDB, SolarWinds, Timescale, TwinDB, Yelp
Other Sponsors - cPanel
Media Sponsors - Database Trends & Applications, Datanami, EnterpriseTech, HPCWire, ODBMS.org, Packt

Hyatt Regency Santa Clara & The Santa Clara Convention Center

Percona Live 2018 Open Source Database Conference is held at the Hyatt Regency Santa Clara & The Santa Clara Convention Center, at 5101 Great America Parkway, Santa Clara, CA 95054. The venue is a prime location in the heart of Silicon Valley. Enjoy this spacious venue with complimentary wifi, on-site expert staff and three great restaurants. You can reserve a room by booking through the Hyatt's dedicated Percona Live reservation site. Book your hotel using Percona's special room block rate!
Posted 2 days ago by Severalnines
Databases usually work in a secure environment. It may be a datacenter with a dedicated VLAN for database traffic. It may be a VPC in EC2. If your network spreads across multiple datacenters in different regions, you'd usually use some kind of Virtual Private Network or SSH tunneling to connect these locations in a secure manner. With data privacy and security being hot topics these days, you might feel better with an additional layer of security.

MySQL supports SSL as a means to encrypt traffic both between MySQL servers (replication) and between MySQL servers and clients. If you use Galera Cluster, similar features are available - both intra-cluster communication and connections with clients can be encrypted using SSL.

A common way of implementing SSL encryption is to use self-signed certificates. Most of the time, it is not necessary to purchase an SSL certificate issued by a Certificate Authority. Anybody who's been through the process of generating a self-signed certificate will probably agree that it is not the most straightforward process - most of the time, you end up searching the internet for howtos and instructions. This is especially true if you are a DBA and only go through this process every few months or even years. This is why we added a ClusterControl feature to help you manage SSL keys across your database cluster. In this blog post, we'll be making use of ClusterControl 1.5.1.

Key Management in ClusterControl

You can enter Key Management by going to Side Menu -> Key Management. You will be presented with the following screen:

You can see two certificates generated, one being a CA and the other one a regular certificate. To generate more certificates, switch to the 'Generate Key' tab:

A certificate can be generated in two ways - you can first create a self-signed CA and then use it to sign a certificate.
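For comparison, doing the same by hand with OpenSSL typically looks something like the sketch below. File names and subject fields are illustrative only, not what ClusterControl generates:

```shell
# Create a self-signed CA (key plus certificate), valid for one year.
openssl req -new -x509 -nodes -days 365 \
  -subj "/CN=my-db-ca" \
  -keyout ca-key.pem -out ca-cert.pem

# Create a server key and a certificate signing request.
openssl req -new -nodes \
  -subj "/CN=db-server-1" \
  -keyout server-key.pem -out server-req.pem

# Sign the server certificate with the CA.
openssl x509 -req -in server-req.pem -days 365 \
  -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 \
  -out server-cert.pem

# Verify that the server certificate chains back to the CA.
openssl verify -CAfile ca-cert.pem server-cert.pem
```

The same dance would be repeated for a client certificate, which is exactly the kind of repetitive work the Key Management screen is meant to spare you.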
Or you can go directly to the 'Client/Server Certificates and Key' tab and create a certificate; the required CA will be created for you in the background. Last but not least, you can import an existing certificate (for example, a certificate you bought from one of the many companies which sell SSL certificates). To do that, upload your certificate, key and CA to your ClusterControl node and store them in the /var/lib/cmon/ca directory. Then you fill in the paths to those files, and the certificate will be imported.

If you decided to generate a CA or a new certificate, there's another form to fill in - you need to pass details about your organization, common name and email, and pick the key length and expiration date. Once you have everything in place, you can start using your new certificates. ClusterControl currently supports deployment of SSL encryption between clients and MySQL databases, and SSL encryption of intra-cluster traffic in Galera Cluster. We plan to extend the variety of supported deployments in future releases of ClusterControl.

Full SSL encryption for Galera Cluster

Now let's assume we have our SSL keys ready, and we have a Galera Cluster which needs SSL encryption, deployed through our ClusterControl instance. We can easily secure it in two steps.

First - encrypt Galera traffic using SSL. From your cluster view, one of the cluster actions is 'Enable SSL Galera Encryption'. You'll be presented with the following options:

If you do not have a certificate, you can generate one here. But if you already generated or imported an SSL certificate, you should be able to see it in the list and use it to encrypt Galera replication traffic. Please keep in mind that this operation requires a cluster restart - all nodes will have to stop at the same time, apply config changes and then restart. Before you proceed, make sure you are prepared for some downtime while the cluster restarts.

Once intra-cluster traffic has been secured, we want to cover client-server connections. To do that, pick the 'Enable SSL Encryption' job and you'll see the following dialog:

It's pretty similar - you can either pick an existing certificate or generate a new one. The main difference is that applying client-server encryption does not require downtime - a rolling restart will suffice. Once restarted, you will find a lock icon right under the encrypted host on the Overview page:

The label 'Galera' means Galera encryption is enabled, while 'SSL' means client-server encryption is enabled for that particular host. Of course, enabling SSL on the database is not enough - you have to copy the certificates to the clients which are supposed to use SSL to connect to the database. All certificates can be found in the /var/lib/cmon/ca directory on the ClusterControl node. You also have to remember to change grants for users and make sure you've added REQUIRE SSL to them if you want to enforce only secure connections.

We hope you'll find these options easy to use and that they help you secure your MySQL environment. If you have any questions or suggestions regarding this feature, we'd love to hear from you.
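As a sketch of that last step, enforcing SSL for an application account might look like this (the user and host here are placeholders):

```sql
-- Require an encrypted connection for this account (MySQL 5.7+ syntax).
ALTER USER 'app_user'@'%' REQUIRE SSL;

-- On older servers, the same requirement is attached via GRANT instead:
-- GRANT USAGE ON *.* TO 'app_user'@'%' REQUIRE SSL;

-- Confirm the requirement took effect.
SELECT user, host, ssl_type FROM mysql.user WHERE user = 'app_user';
```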
Posted 2 days ago by Giuseppe Maxia
How it happened

A few years ago I started thinking about refactoring MySQL-Sandbox. I got lots of ideas and a name for the project (dbdeployer) but went no further. The initial idea (this was 2013!) was to rewrite the project in Ruby: I had been using Ruby at work and it looked like a decent replacement for Perl. My main problem was the difficulty of installation in an uncontrolled environment. If you have control over your environment (it's your laptop, or you are in charge of the server configuration via Puppet or similar) then the task is easy. But if you ever need to deploy somewhere with little or no notice, it becomes a problem: there are servers where Perl is not installed, and it is common that servers have a policy forbidding all scripting languages from being deployed. Soon I found out that Ruby has the same problem as Perl. In the meantime, my work also required heavy involvement with Python, and I started thinking that maybe it would be a better choice than Ruby.

My adventures with deployment continued. In some places, I would find old versions of Perl, Ruby and Python, and no way of replacing them easily. I also realized that, if I bit the bullet and wrote my tools in C or C++, my distribution problems would not end, as I would have to deal with library dependencies and conflicts with existing ones.

At the end of 2017 I finally did what I had postponed for so long: I took a serious look at Go, and I decided that it was the best candidate for solving the distribution problem. I had a few adjustment problems, as the Go philosophy is different from my previously used languages, but the advantages were so immediate that I was hooked. Here's what I found compelling:

Shift in responsibility: with all the other languages I have used, the user is responsible for providing the working environment, such as installing libraries and the language itself, solving conflicts, and so on, until the program can work. With Go, the responsibility is on the developers only: they are supposed to know how to collect the necessary packages and produce a sound executable. Users only need to download the executable and run it.
Ease of deployment: a Go executable doesn't have dependencies. Binaries can be compiled for several platforms from a single origin (I can build Linux executables on my Mac and vice versa) and they just work.
Ease of development: Go is a strongly typed language, and has a different approach to code structure than Perl or Python. But this doesn't slow down my coding: it forces me to write better code, resulting in something that is at the same time more robust and easier to extend.
Wealth of packages: Go has an amazingly active community, and there is an enormous amount of packages ready for anything.

What is dbdeployer?

The first goal of dbdeployer is to replace MySQL-Sandbox completely. As such, it has all the main features of MySQL-Sandbox, and many more (see the full list of features at the end of this text). You can deploy a single sandbox, or multiple unrelated sandboxes, or several servers in replication. That you could do also with MySQL-Sandbox.
The first difference is in the command structure:

$ dbdeployer
dbdeployer makes MySQL server installation an easy task.
Runs single, multiple, and replicated sandboxes.

Usage:
  dbdeployer [command]

Available Commands:
  admin       administrative tasks
  delete      delete an installed sandbox
  global      Runs a given command in every sandbox
  help        Help about any command
  multiple    create multiple sandbox
  replication create replication sandbox
  sandboxes   List installed sandboxes
  single      deploys a single sandbox
  templates   Admin operations on templates
  unpack      unpack a tarball into the binary directory
  usage       Shows usage of installed sandboxes
  versions    List available versions

Flags:
      --base-port int                Overrides default base-port (for multiple sandboxes)
      --bind-address string          defines the database bind-address (default "")
      --config string                configuration file (default "$HOME/.dbdeployer/config.json")
      --custom-mysqld string         Uses an alternative mysqld (must be in the same directory as regular mysqld)
  -p, --db-password string           database password (default "msandbox")
  -u, --db-user string               database user (default "msandbox")
      --expose-dd-tables             In MySQL 8.0+ shows data dictionary tables
      --force                        If a destination sandbox already exists, it will be overwritten
      --gtid                         enables GTID
  -h, --help                         help for dbdeployer
  -i, --init-options strings         mysqld options to run during initialization
      --keep-auth-plugin             in 8.0.4+, does not change the auth plugin
      --keep-server-uuid             Does not change the server UUID
      --my-cnf-file string           Alternative source file for my.sandbox.cnf
  -c, --my-cnf-options strings       mysqld options to add to my.sandbox.cnf
      --port int                     Overrides default port
      --post-grants-sql strings      SQL queries to run after loading grants
      --post-grants-sql-file string  SQL file to run after loading grants
      --pre-grants-sql strings       SQL queries to run before loading grants
      --pre-grants-sql-file string   SQL file to run before loading grants
      --remote-access string         defines the database access (default "127.%")
      --rpl-password string          replication password (default "rsandbox")
      --rpl-user string              replication user (default "rsandbox")
      --sandbox-binary string        Binary repository (default "$HOME/opt/mysql")
      --sandbox-directory string     Changes the default sandbox directory
      --sandbox-home string          Sandbox deployment directory (default "$HOME/sandboxes")
      --skip-load-grants             Does not load the grants
      --use-template strings         [template_name:file_name] Replace existing template with one from file
      --version                      version for dbdeployer

Use "dbdeployer [command] --help" for more information about a command.

MySQL-Sandbox was created in 2006, and its structure changed as needed, without a real plan. dbdeployer, instead, was designed with a hierarchical command structure, similar to git or docker, to give users a more familiar feel. As a result, it has a leaner set of commands, a less awkward way of using options, and better control of the operations out of the box. For example, here's how we would start running sandboxes:

$ dbdeployer --unpack-version=8.0.4 unpack mysql-8.0.4-rc-linux-glibc2.12-x86_64.tar.gz
Unpacking tarball mysql-8.0.4-rc-linux-glibc2.12-x86_64.tar.gz to $HOME/opt/mysql/8.0.4
.........100.........200.........292

The first (mandatory) operation is to expand the binaries from a tarball. By default, the files are expanded into $HOME/opt/mysql. Once this is done, we can create sandboxes at will, with simple commands:

$ dbdeployer single 8.0.4
Database installed in $HOME/sandboxes/msb_8_0_4
run 'dbdeployer usage single' for basic instructions'
sandbox server started
$ dbdeployer replication 8.0.4
[...]
Replication directory installed in $HOME/sandboxes/rsandbox_8_0_4
run 'dbdeployer usage multiple' for basic instructions'
$ dbdeployer multiple 8.0.4
[...]
Multiple directory installed in $HOME/sandboxes/multi_msb_8_0_4
run 'dbdeployer usage multiple' for basic instructions'
$ dbdeployer sandboxes
msb_8_0_4       : single 8.0.4 [8004]
multi_msb_8_0_4 : multiple 8.0.4 [24406 24407 24408]
rsandbox_8_0_4  : master-slave 8.0.4 [19405 19406 19407]

Three differences between dbdeployer and MySQL-Sandbox stand out here:

There is only one executable, with different commands.
After each deployment, there is a suggestion on how to get help about the sandbox usage.
There is a command that displays which sandboxes were installed, the kind of deployment, and the ports in use. This becomes useful as the number of ports grows, as with group replication. Here's another take, after deploying group replication:

$ dbdeployer sandboxes
group_msb_8_0_4    : group-multi-primary 8.0.4 [20405 20530 20406 20531 20407 20532]
group_sp_msb_8_0_4 : group-single-primary 8.0.4 [21405 21530 21406 21531 21407 21532]
msb_8_0_4          : single 8.0.4 [8004]
multi_msb_8_0_4    : multiple 8.0.4 [24406 24407 24408]
rsandbox_8_0_4     : master-slave 8.0.4 [19405 19406 19407]

A few more differences from MySQL-Sandbox are the "global" and "delete" commands. The "global" command can broadcast a command to all the sandboxes. You can start, stop, or restart all sandboxes at once, or run a query everywhere.
$ dbdeployer global use "select @@server_id, @@port, @@server_uuid"
# Running "use_all" on group_msb_8_0_4
# server: 1
@@server_id @@port @@server_uuid
100 20405 00020405-1111-1111-1111-111111111111
# server: 2
@@server_id @@port @@server_uuid
200 20406 00020406-2222-2222-2222-222222222222
# server: 3
@@server_id @@port @@server_uuid
300 20407 00020407-3333-3333-3333-333333333333
# Running "use_all" on group_sp_msb_8_0_4
# server: 1
@@server_id @@port @@server_uuid
100 21405 00021405-1111-1111-1111-111111111111
# server: 2
@@server_id @@port @@server_uuid
200 21406 00021406-2222-2222-2222-222222222222
# server: 3
@@server_id @@port @@server_uuid
300 21407 00021407-3333-3333-3333-333333333333
# Running "use" on msb_8_0_4
@@server_id @@port @@server_uuid
1 8004 00008004-0000-0000-0000-000000008004
[...]

You can also run the commands manually. dbdeployer usage shows which commands are available for every sandbox.

$ dbdeployer usage single

	USING A SANDBOX

Change directory to the newly created one (default: $SANDBOX_HOME/msb_VERSION
for single sandboxes)
[ $SANDBOX_HOME = $HOME/sandboxes unless modified with flag --sandbox-home ]

The sandbox directory of the instance you just created contains some handy
scripts to manage your server easily and in isolation.

"./start", "./status", "./restart", and "./stop" do what their name suggests.
start and restart accept parameters that are eventually passed to the server.
e.g.: ./start --server-id=1001
      ./restart --event-scheduler=disabled

"./use" calls the command line client with the appropriate parameters.
Example: ./use -BN -e "select @@server_id"
         ./use -u root

"./clear" stops the server and removes everything from the data directory,
leaving you ready to start from scratch. (Warning! It's irreversible!)
When you don't need the sandboxes anymore, you can dismiss them with a single command:

$ dbdeployer delete ALL
Deleting the following sandboxes
$HOME/sandboxes/group_msb_8_0_4
$HOME/sandboxes/group_sp_msb_8_0_4
$HOME/sandboxes/msb_8_0_4
$HOME/sandboxes/multi_msb_8_0_4
$HOME/sandboxes/rsandbox_8_0_4
Do you confirm? y/[N]

There is an option to skip the confirmation, which is useful for scripting unattended tests.

Customization

One of the biggest problems with MySQL-Sandbox was that most of its behaviour is hard-coded, and the scripts needed to run the sandboxes are generated in different places, so extending or modifying features became more and more difficult. When I designed dbdeployer, I set myself the goal of making the tool easy to change, and the code easy to understand and extend. For this reason, I organized everything related to code generation (the scripts that initialize and run the sandboxes) into a collection of templates and default variables that are publicly visible and modifiable.

$ dbdeployer templates -h
The commands in this section show the templates used
to create and manipulate sandboxes.

Usage:
  dbdeployer templates [command]

Aliases:
  templates, template, tmpl, templ

Available Commands:
  describe    Describe a given template
  export      Exports all templates to a directory
  import      imports all templates from a directory
  list        list available templates
  reset       Removes all template files
  show        Show a given template

You can list the templates on the screen.
$ dbdeployer templates list single
[single] replication_options    : Replication options for my.cnf
[single] load_grants_template   : Loads the grants defined for the sandbox
[single] grants_template57      : Grants for sandboxes from 5.7+
[single] grants_template5x      : Grants for sandboxes up to 5.6
[single] my_template            : Prefix script to run every my* command line tool
[single] show_binlog_template   : Shows a binlog for a single sandbox
[single] use_template           : Invokes the MySQL client with the appropriate options
[single] clear_template         : Remove all data from a single sandbox
[single] restart_template       : Restarts the database (with optional mysqld arguments)
[single] start_template         : starts the database in a single sandbox (with optional mysqld arguments)
[single] stop_template          : Stops a database in a single sandbox
[single] send_kill_template     : Sends a kill signal to the database
[single] show_relaylog_template : Show the relaylog for a single sandbox
[single] Copyright              : Copyright for every sandbox script
[single] expose_dd_tables       : Commands needed to enable data dictionary table usage
[single] init_db_template       : Initialization template for the database
[single] grants_template8x      : Grants for sandboxes from 8.0+
[single] add_option_template    : Adds options to the my.sandbox.cnf file and restarts
[single] test_sb_template       : Tests basic sandbox functionality
[single] sb_include_template    : TBD
[single] gtid_options           : GTID options for my.cnf
[single] my_cnf_template        : Default options file for a sandbox
[single] status_template        : Shows the status of a single sandbox

Then it's possible to examine a template's contents:

$ dbdeployer templates describe --with-contents init_db_template
# Collection  : single
# Name        : init_db_template
# Description : Initialization template for the database
# Notes       : This should normally run only once
# Length      : 656
##START init_db_template
#!/bin/bash
{{.Copyright}}
# Generated by dbdeployer {{.AppVersion}} using {{.TemplateName}} on {{.DateTime}}
BASEDIR={{.Basedir}}
export LD_LIBRARY_PATH=$BASEDIR/lib:$BASEDIR/lib/mysql:$LD_LIBRARY_PATH
export DYLD_LIBRARY_PATH=$BASEDIR/lib:$BASEDIR/lib/mysql:$DYLD_LIBRARY_PATH
SBDIR={{.SandboxDir}}
DATADIR=$SBDIR/data
cd $SBDIR
if [ -d $DATADIR/mysql ]
then
    echo "Initialization already done."
    echo "This script should run only once."
    exit 0
fi
{{.InitScript}} \
    {{.InitDefaults}} \
    --user={{.OsUser}} \
    --basedir=$BASEDIR \
    --datadir=$DATADIR \
    --tmpdir={{.Tmpdir}} {{.ExtraInitFlags}}
##END init_db_template

The one above is the template that generates the initialization script. In MySQL-Sandbox, this was handled in the code, and it was difficult to figure out what went wrong when the initialization failed. The Go language has excellent support for code generation using templates, and with just a fraction of its features I implemented a few dozen scripts which I am able to modify with ease. Here's what the deployed script looks like:

#!/bin/bash
# DBDeployer - The MySQL Sandbox
# Copyright (C) 2006-2018 Giuseppe Maxia
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Generated by dbdeployer 0.1.24 using init_db_template on Tue Feb 20 14:45:29 CET 2018
BASEDIR=$HOME/opt/mysql/8.0.4
export LD_LIBRARY_PATH=$BASEDIR/lib:$BASEDIR/lib/mysql:$LD_LIBRARY_PATH
export DYLD_LIBRARY_PATH=$BASEDIR/lib:$BASEDIR/lib/mysql:$DYLD_LIBRARY_PATH
SBDIR=$HOME/sandboxes/msb_8_0_4
DATADIR=$SBDIR/data
cd $SBDIR
if [ -d $DATADIR/mysql ]
then
    echo "Initialization already done."
    echo "This script should run only once."
    exit 0
fi
$HOME/opt/mysql/8.0.4/bin/mysqld \
    --no-defaults \
    --user=$USER \
    --basedir=$BASEDIR \
    --datadir=$DATADIR \
    --tmpdir=$HOME/sandboxes/msb_8_0_4/tmp \
    --initialize-insecure --default_authentication_plugin=mysql_native_password

Let's see the quick-and-dirty usage first. If you want to change a template and use it just once, do the following:

$ dbdeployer templates show init_db_template

Save the output to a file, init_db.txt, and edit it. Be careful, though: removing or altering essential labels may block the sandbox initialization. Then use the template file in the next command:

$ dbdeployer single 8.0.4 --use-template=init_db_template:init_db.txt

For more permanent results, when you'd like to change one or several templates permanently, you can use the export/import commands:

List the templates related to replication (dbdeployer templates list replication)
Export the templates to the directory "mydir": dbdeployer templates export replication mydir
Edit the templates you want to change inside "mydir/replication"
Import the templates: dbdeployer templates import replication mydir

The templates will end up inside $HOME/.dbdeployer/templates_$DBDEPLOYER_VERSION, and dbdeployer will load them instead of using the ones stored internally. The next time one of those templates is needed, it will be read from the file. If you run dbdeployer templates list or describe, the templates saved to file will be marked with {F}. To go back to the built-in behavior, simply run dbdeployer templates reset.

In addition to templates, dbdeployer uses a set of default values when creating sandboxes. Like templates, this set comes from an internal store, but it can be exported to a configuration file.
$ dbdeployer admin show
# Internal values:
{
    "version": "0.1.24",
    "sandbox-home": "$HOME/sandboxes",
    "sandbox-binary": "$HOME/opt/mysql",
    "master-slave-base-port": 11000,
    "group-replication-base-port": 12000,
    "group-replication-sp-base-port": 13000,
    "multiple-base-port": 16000,
    "group-port-delta": 125,
    "sandbox-prefix": "msb_",
    "master-slave-prefix": "rsandbox_",
    "group-prefix": "group_msb_",
    "group-sp-prefix": "group_sp_msb_",
    "multiple-prefix": "multi_msb_"
}

The values named *-base-port are used to calculate the port for each node in a multiple deployment. The calculation goes:

sandbox_port + base_port + (revision_number * 100)

So, for example, when deploying replication for 5.7.21, the sandbox port would be 5721, and the final base port is calculated as follows:

5721 + 11000 + 21 * 100 = 18821

This number is then incremented for each node in the cluster, so that the master gets 18822, and the first slave 18823. Using the commands dbdeployer admin export and import, you can customize the default values in a way similar to what we saw for the templates.

Thanks

I'd like to thank:

Ronald Bradford and René Cannaò, for priceless advice on usability when the tool was in its early stage of development;
Shlomi Noach, for adopting dbdeployer even before it was feature complete;
Frédéric "lefred" Descamps, for allowing me to present dbdeployer at the pre-FOSDEM MySQL event;
The Go community, for the exciting environment offered to newcomers.

A note about unpacking MySQL tarballs

When using MySQL tarballs, we may run into problems due to the enormous size that the tarballs have reached. Look at this:

690M 5.5.52
1.2G 5.6.39
2.5G 8.0.4

This becomes a serious problem when you want to unpack the tarball inside a low-resource virtual machine or a Docker container. I have asked the MySQL team to provide reduced tarballs, possibly in a fixed location, so that sandbox creation could be fully automated. I was told that something will be done soon.
In the meantime, I provide such reduced tarballs, which have a more reasonable size:

 49M 5.5.52
 61M 5.6.39
346M 5.7.21
447M 8.0.0
462M 8.0.1
254M 8.0.2
270M 8.0.3
244M 8.0.4

Using these reduced tarballs, which are conveniently packed in a Docker image (datacharmer/mysql-sb-full contains all major MySQL versions), I have automated dbdeployer tests with minimal storage involvement, which improves test speed as well.

Detailed list of features

Feature                       MySQL-Sandbox  dbdeployer  dbdeployer planned
Single sandbox deployment     yes            yes
unpack command                sort of [1]    yes
multiple sandboxes            yes            yes
master-slave replication      yes            yes
"force" flag                  yes            yes
pre-post grants SQL action    yes            yes
initialization options        yes            yes
my.cnf options                yes            yes
custom my.cnf                 yes            yes
friendly UUID generation      yes            yes
global commands               yes            yes
test replication flow         yes            yes
delete command                yes [2]        yes
group replication SP          no             yes
group replication MP          no             yes
prevent port collision        no             yes [3]
visible initialization        no             yes [4]
visible script templates      no             yes [5]
replaceable templates         no             yes [6]
configurable defaults         no             yes [7]
list of source binaries       no             yes [8]
list of installed sandboxes   no             yes [9]
test script per sandbox       no             yes [10]
integrated usage help         no             yes [11]
custom abbreviations          no             yes [12]
version flag                  no             yes [13]
fan-in                        no             no          yes [14]
all-masters                   no             no          yes [15]
Galera/PXC/NDB                no             no          yes [18]
finding free ports            yes            no          yes
pre-post grants shell action  yes            no          maybe
getting remote tarballs       yes            no          yes
circular replication          yes            no          no [16]
master-master (circular)      yes            no          no
Windows support               no             no          no [17]

[1] It's achieved using --export_binaries and then abandoning the operation.
[2] Uses the sbtool command.
[3] dbdeployer sandboxes store their ports in a description JSON file, which allows the tool to get a list of used ports and act before a conflict happens.
[4] The initialization happens with a script that is generated and stored in the sandbox itself. Users can inspect the init_db script and see what was executed.
[5] All sandbox scripts are generated using templates, which can be examined and eventually changed and re-imported.
[6] See also note 5. Using the flag --use-template you can replace an existing template on the fly. Groups of templates can be exported and imported after editing.
[7] Defaults can be exported to a file, and eventually re-imported after editing.
[8] This is little more than an O.S. file listing, with the added awareness of the source directory.
[9] Using the description files, this command lists the sandboxes with their topology and used ports.
[10] It's a basic test that checks whether the sandbox is running and is using the expected port.
[11] The "usage" command shows basic commands for single and multiple sandboxes.
[12] The abbreviations file allows users to define custom shortcuts for frequently used commands.
[13] Strangely enough, this simple feature was never implemented for MySQL-Sandbox, while it was one of the first additions to dbdeployer.
[14] Will use the multi-source technology introduced in MySQL 5.7.
[15] Same as note 14.
[16] Circular replication should not be used anymore. There are enough good alternatives (multi-source, group replication) to avoid this old technology.
[17] I don't do Windows, but you can fork the project if you do.
[18] For Galera/PXC and MySQL Cluster I have ideas, but I may need help to implement them.
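As a closing aside on the template machinery described in this post: the generation mechanism leans on Go's standard text/template package. Here is a minimal, self-contained sketch of that approach. The template text and field names (.AppVersion, .SandboxDir, .Client) are made up for this example and are not dbdeployer's actual templates, which you can inspect with "dbdeployer templates show".

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// A toy "use"-style script template, in the spirit of dbdeployer's
// templates. Placeholders are filled in at generation time.
const useTemplate = `#!/bin/bash
# Generated by dbdeployer {{.AppVersion}}
{{.Client}} --defaults-file={{.SandboxDir}}/my.sandbox.cnf "$@"
`

// render parses a template and fills it with the given data,
// returning the generated script text.
func render(tmpl string, data interface{}) (string, error) {
	t, err := template.New("script").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	script, err := render(useTemplate, map[string]string{
		"AppVersion": "0.1.24",
		"SandboxDir": "$HOME/sandboxes/msb_8_0_4",
		"Client":     "$HOME/opt/mysql/8.0.4/bin/mysql",
	})
	if err != nil {
		panic(err)
	}
	fmt.Print(script)
}
```

Because the generated scripts are plain text produced from plain-text templates, the export/edit/import workflow shown earlier needs no recompilation of the tool.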
Posted 2 days ago by Open Query
Right now Aurora only allows a single master, with up to 15 read-only replicas.

Master/Replica Failover

We love testing failure scenarios; however, our options for such tests with Aurora are limited (we might get back to that later). Anyhow, we told the system, through the RDS Aurora dashboard, to do a failover. These were our observations:

Role Change Method

Both master and replica instances are actually restarted (the MySQL uptime resets to 0). This is quite unusual these days: we can do a fully controlled role change in classic asynchronous replication without a restart (CHANGE MASTER TO ...), and Galera doesn't have read/write roles as such (all instances are technically writers), so it doesn't need role changes at all.

Failover Timing

Failover between running instances takes about 30 seconds. This is in line with information provided in the Aurora FAQ. Failover where a new instance needs to be spun up takes 15 minutes according to the FAQ (similar to creating a new instance from the dash).

Instance Availability

During a failover operation, we observed that all connections to the (old) master, and to the replica that is going to be promoted, are first dropped, then refused (the refusals occur during the period that the mysqld process is restarting). According to the FAQ, reads to all replicas are interrupted during failover. We don't know why.

Aurora can deliver a DNS CNAME for your writer instance. In a controlled environment like Amazon, with a guaranteed short TTL, this should work fine and be updated within the 30 seconds that the shortest possible failover scenario takes. We didn't test with the CNAME directly, as we explicitly wanted to observe the "raw" failover time of the instances themselves, and the behaviour surrounding that process.

Caching State

On the promoted replica, the buffer pool is saved and loaded (warmed up) on the restart; good!
Note that this is not special: it's desired and expected to happen, as MySQL and MariaDB have had InnoDB buffer pool save/restore for years. (Credit: Jeremy Cole initially came up with the buffer pool save/restore idea.)

On the old master (now a replica/slave), the buffer pool is left cold (empty). We don't know why; this was a controlled failover from a functional master.

Because of the server restart, other caches are of course cleared as well. I'm not too fussed about the query cache (although, deprecated as it is, it's currently still commonly used), but losing connections is a nuisance. More detail on that later in this article.

Statistics

Because of the instance restarts, the running statistics (SHOW GLOBAL STATUS) are all reset to 0. This is annoying, but should not affect proper external stats gathering, other than for uptime.

On any replica, SHOW ENGINE INNODB STATUS comes up empty. Always. This seems like obscurity to me; I don't see a technical reason not to show it. I suppose that with a replica being purely read-only, most running info is already available through SHOW GLOBAL STATUS LIKE 'innodb%', and you won't get deadlocks on a read-only slave.

Multi-Master

Aurora MySQL multi-master was announced at Amazon re:Invent 2017, and appears to currently be in restricted beta test. No date has been announced for general availability. We'll have to review it when it's available, and see how it works in practice.

Conclusion

Requiring 30 seconds or more for a failover is unfortunate; this is much slower than other MySQL replication setups (writes can fail over within a few seconds, and reads are not interrupted) and Galera cluster environments (which essentially deliver continuity across instance failures; clients talking to the failed instance just need to reconnect to the loadbalancer/cluster to continue).

I don't understand why the old master gets a cold InnoDB buffer pool.
I wouldn’t think a complete server restart should be necessary, but since we don’t have insight in the internals, who knows. On Killing Connections (through the restart) Losing connections across an Aurora cluster is a real nuisance that really impacts applications.  Here’s why: When MySQL C client library (which most MySQL APIs either use or are modelled on) is disconnected, it passes back a specific error to the application.  When the application makes its next query call, the C client will automatically reconnect first (so the client does not have to explicitly reconnect).  So a client only needs to catch the error and re-issue its last command, and all will generally be fine.  Of course, if it relies on different SESSION settings, or was in the middle of a multi-statement transaction, it will need to do a bit more. So, this means that the application has to handle disconnects gracefully without chucking hissy-fits at users, and I know for a fact that that’s not how many (most?) applications are written.  Consequently, an Aurora failover will make the frontend of most applications look like a disaster zone for about 30 seconds (provided functional instances are available for the failover, which is the preferred and best case scenario). I appreciate that this is not directly Aurora’s fault, it’s sloppy application development that causes this, but it’s a real-world fact we have to deal with.  And, perhaps importantly: other cluster and replication options do not trigger this scenario. [Less]
Posted 2 days ago by Colin Charles
It's been a while since I wrote anything MySQL/MariaDB related here, but there's my column on the Percona blog, which has weekly updates. Anyway, I'll be at the developer's unconference this weekend in NYC. I even managed to snag a session on the schedule: MySQL features missing in MariaDB Server (Sunday, 12.15–13.00). Signup is on Meetup. Due to the prevalence of "VIP tickets", I too signed up for M|18. If you need a discount code, I'll happily offer one up to you to see if it still works (though I'm sure a quick Google will solve this problem for you). I'll publish notes, probably in my weekly column. If you're in New York and want to say hi, talk shop, etc., don't hesitate to drop me a line.