News

Posted 3 days ago by Peter Gulutzan
DBMS client applications need to store SQL query results in local memory or local files. The format is flat and the fields are ordered -- that's "serialization". The most important serializer format uses human-readable markup, like [start of field] [value] [end of field], and the important ones in the MySQL/MariaDB world are CSV (what you get with SELECT ... INTO OUTFILE or LOAD DATA INFILE), XML (what you get with --xml or LOAD XML), and JSON (for which there are various solutions if you don't use MySQL 5.7).

The less important serializer format uses length, like [length of value] [value], and this, although it has the silly name "binary serialization", is what I want to talk about. The length alone isn't enough; we also need to know the type, so we can decode it correctly. With CSV there are hints such as "is the value enclosed in quotes", but with binary serializers the value contains no hints. There has to be an indicator that says what the type is. There might be a single list of types for all of the records, in which case the format is said to "have a schema". Or there might be a type attached to each record, like [type] [length of value] [value], in which case the format is often called "TLV" (type-length-value).

Binary serializers are better than markup serializers if you need "traversability" -- the ability to skip to field number 2 without having to read every byte in field number 1. Binary TLV serializers are better than binary with-schema serializers if you need "flexibility" -- when not every record has the same number of fields and not every field has the same type. But of course TLV serializers might require slightly more space.

A "good" binary serializer will have two characteristics:

#1 It is well known, preferably a standard with a clear specification, but otherwise a commonly-used format with a big sponsor. Otherwise you have to write your own library and you will find out all the gotchas by re-inventing a wheel. Also, if you want to ship your file for import by another application, it would be nice if the other application knew how to import it.

#2 It can store anything that comes out of MySQL or MariaDB.

Unfortunately, as we'll see, Characteristic #1 and Characteristic #2 are contradictory. The well-known serializers usually were made with the objective of storing anything that comes out of XML or JSON, or of handling quirky situations when shipping over a wire. So they're ready for things that MySQL and MariaDB don't generate (such as structured arrays) but not ready for things that MySQL and MariaDB might generate (such as ... well, we'll see as I look at each serializer).

To decide "what is well known" I used the Wikipedia article Comparison of data serialization formats. It's missing some formats (for example sereal) but it's the biggest list I know of, from a source that's sometimes neutral. I selected the binary serializers that fit Characteristic #1. I evaluated them according to Characteristic #2. I'll look at each serializer. Then I'll show a chart. Then you'll draw a conclusion.
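Before looking at each format, here is what a TLV record boils down to in practice. This is a minimal sketch in Python of a toy TLV codec -- the type tags and the 1+4-byte header are invented for illustration, not taken from any of the formats below -- showing the "traversability" point: field number 2 is reachable without decoding field number 1.

    import struct

    # Hypothetical type tags for a toy TLV format (not from any real spec)
    T_INT, T_STR = 0x01, 0x02

    def encode_field(type_tag, payload):
        # [type: 1 byte] [length: 4 bytes, big-endian] [value]
        return struct.pack(">BI", type_tag, len(payload)) + payload

    def offset_of_field(buf, n):
        # Hop to field number n (0-based) by reading headers only --
        # earlier values are skipped, never decoded.
        pos = 0
        for _ in range(n):
            _tag, length = struct.unpack_from(">BI", buf, pos)
            pos += 5 + length
        return pos

    record = encode_field(T_STR, b"hello") + encode_field(T_INT, (42).to_bytes(8, "big"))
    print(offset_of_field(record, 1))   # 10 = 5-byte header + 5-byte value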
Avro

Has schemas. Not standard, but sponsored by Apache.

I have a gripe. Look at these two logos. The first one is for the defunct British/Canadian airplane maker A.V.Roe (from Wikipedia). The second one is for the binary serializer format Apache Avro (from their site). Although I guess that the Apache folks somehow have avoided breaking laws, I think that taking A.V.Roe's trademark is like wearing medals that somebody else won.

But putting my gripe aside, let's look at a technical matter. The set of primitive type names includes: "null: no value". Well, of course, in SQL NULL is not a type, and it is a value. This is not a showstopper, because I can declare a union of a null type and a string type if I want to allow nulls and strings in the same field. Um, okay. But then comes the encoding rule: "null is written as zero bytes". I can't read that except as "we're like Oracle 12c, we think empty strings are NULLs".

ASN.1

TLV. Standard.

ASN means "abstract syntax notation" but there are rules for encoding too, and ASN.1 has a huge advantage: it's been around for over twenty years. So whenever any "why re-invent the wheel?" argument starts up on any forum, somebody is bound to ask why all these whippersnapper TLVs are proposed, considering ASN.1 was good enough for grand-pappy, eh? Kidding aside, it's a spec that's been updated as recently as 2015. As usual with official standards, it's hard to find a free-and-legitimate copy, but here it is: the link to a download of "X.690 (08/2015) ITU-T X.690 | ISO/IEC 8825-1:2015 Information technology -- ASN.1 encoding rules: Specification of Basic Encoding Rules (BER), Canonical Encoding Rules (CER) and Distinguished Encoding Rules (DER)" from the International Telecommunication Union site: http://www.itu.int/rec/T-REC-X.690-201508-I/en.

It actually specifies how to handle exotic situations, such as:
** If it is a "raw" string of bits, are there unused bits in the final byte?
** If the string length is greater than 2**32, is there a way to store it?
** Can I have a choice between BMP (like MySQL UCS2) and UTF-8 and other character sets?
** Can an integer value be greater than 2**63?
... You don't always see all these things specified except in ASN.1.

Unfortunately, if you try to think of everything, your spec will be large and your overhead will be large, so competitors will appear saying they have something "simpler" and "more compact". Have a look at trends.google.com to see how ASN.1 once did bestride the narrow world like a colossus, but nowadays is no more popular than all the others.
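To make ASN.1's flavor concrete, here is a minimal sketch (Python, written from the X.690 definite-form rules in the document cited above) of how BER encodes a length: one byte for lengths up to 127, otherwise a count byte followed by big-endian length octets.

    def ber_length(n):
        # X.690 definite form: short form for 0..127; long form is
        # (0x80 | number-of-octets) followed by big-endian length octets.
        if n < 0x80:
            return bytes([n])
        octets = n.to_bytes((n.bit_length() + 7) // 8, "big")
        return bytes([0x80 | len(octets)]) + octets

    print(ber_length(5).hex())     # 05
    print(ber_length(300).hex())   # 82012c  (0x82, then 0x012C)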
BSON

TLV. Sponsored by MongoDB.

Although BSON is "used mainly as a data storage and network transfer format in the MongoDB [DBMS]", anybody can use it. There's a non-Mongo site which refers to independent libraries and discussion groups. BSON is supposed to make you think "binary JSON", but in fact all the binary serializers that I'm discussing (and a few that I'm not discussing, such as UBJSON) can do a fair job of representing JSON-marked-up text in binary format. Some people even claim that MessagePack does a better job of that than BSON does. There is a "date", but it is milliseconds since the epoch, so it might be an okay analogue for MySQL/MariaDB TIMESTAMP but not for DATETIME.

CBOR

TLV. Proposed standard.

CBOR is not well known, but there's an IETF Internet Standards Document for it (RFC 7049, Concise Binary Object Representation), so I reckoned it's worth looking at. I don't give that document much weight, though -- it has been in the proposal phase since 2013. The project site page mentions the JSON data model, schemalessness, raw binary strings, and concise encoding -- but I wanted to see distinguishing features. There are a few. I was kind of surprised that there are two "integer" types: one type is positive integers, the other type is negative integers. In other words, -5 is [type = negative number] [length] [value = 5] rather than the two's-complement style [type = signed number] [length] [value = -5], but that's just an oddness rather than a problem. There was an acknowledgment in the IETF document that "CBOR is inspired by MessagePack". But one of MessagePack's defects (the lack of a raw string type) has been fixed now. That takes away one of the reasons that I'd have for regarding CBOR as a successor to MessagePack.
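That positive/negative split is easy to show. A minimal sketch from RFC 7049's rules for small integers: the initial byte carries a 3-bit major type (0 = unsigned, 1 = negative) and a 5-bit argument, and a negative number n is stored as the argument -1-n.

    def cbor_small_int(n):
        # RFC 7049: major type 0 = unsigned integer, major type 1 = negative
        # integer, where the encoded argument for a negative n is -1 - n.
        # This sketch only handles arguments 0..23, which fit in one byte.
        if n >= 0:
            major, arg = 0, n
        else:
            major, arg = 1, -1 - n
        assert arg <= 23, "larger arguments need additional bytes"
        return bytes([(major << 5) | arg])

    print(cbor_small_int(10).hex())   # 0a
    print(cbor_small_int(-5).hex())   # 24  (major type 1, argument 4)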
Fast Infoset

TLV. Uses a standard.

After seeing so much JSON, it's nice to run into an international standard that specifies "a binary encoding format for the XML Information Set (XML Infoset) as an alternative to the XML document format". Okay, they get points for variety. However, it's using ASN.1's underlying encoding methods, so I won't count it as a separate product.

MessagePack

TLV. Not standard but widely used.

MessagePack, also called MsgPack, is popular and is actually used as a data storage format for Pinterest and Tarantool. It's got a following among people who care a lot about saving bytes; for example see this Uber survey where MessagePack beat out some of the other formats that I'm looking at here. One of the flaws of MessagePack, from my point of view, is its poor handling of character sets other than UTF-8. But I'll admit: when MessagePack's original author is named Sadayuki Furuhashi, I'm wary about arguing that back in Japan UTF-8 is not enough. For some of the arguing that happened about supporting other character sets with MessagePack, see this thread. Still, I think my "The UTF-8 world is not enough" post is valid for the purposes I'm discussing. And the maximum length of a string is 2**32-1 bytes, so you can forget about dumping a LONGBLOB. I'd have the same trouble with BSON, but BSON allows null-terminated strings.

OPC-UA

TLV. Sort of a standard for a particular industry group.

Open Platform Communications - Unified Architecture has a Binary Encoding format. Most of the expected types are there: boolean, integer, float, double, string, raw string, and datetime. The datetime description is a bit weird, though: number of 100-nanosecond intervals since January 1, 1601 (UTC). I've seen strange cutover dates in my time, but this is a new one for me. For strings, there's a way to indicate NULLs (hurrah). I have the impression that OPC is an organization for special purposes (field devices, control systems, etc.) and I'm interested in general-purpose formats, so I didn't look hard at this.

Protocol Buffers

Has schemas. Not standard but sponsored by Google.

Like Avro, Google's Protocol Buffers have a schema for the type, and so they are schema + LV rather than TLV. But MariaDB uses them for its Dynamic Columns feature, so everybody should know about them. Numbers and strings can be long, but there's very little differentiation -- essentially you have integers, double-precision floating-point numbers, and strings. So, since I was objecting earlier when I saw that other serialization formats didn't distinguish (say) character sets, I have to be fair and say: this is worse. When the same "type" tag can be used for multiple different types, it's not specific enough. Supposedly the makers of Protocol Buffers were asked why they didn't use ASN.1 and they answered "We never heard of it before". That's from a totally unreliable biased source, but I did stop and ask myself: is that really so unbelievable? In this benighted age?

Thrift

Can be TLV, but depends on the protocol. Not standard but sponsored by Apache, used a lot by Facebook.

I looked in vain for what one might call a "specification" of Thrift's binary serialization, and finally found an old Stack Overflow discussion that said: er, there isn't any. There's a "Thrift Missing Guide" that tells me the base types, and a Java class describer for one of the protocols to help me guess the size limits. Thrift's big advantage is that it's language neutral, which is why it's popular and there are many libraries and high-level tutorials. That makes it great as a communication format, which is what it's supposed to be. However, the number of options is small and the specification is so vague that I can't call it "good" according to the criteria I stated earlier.

The Chart

I depend on each serializer's specification; I didn't try anything out; I could easily have made some mistakes. For the "NULL is a value" row, I say No (and could have added "Alackaday!") for all the formats that say NULL is a data type. Really the only way to handle NULL is with a flag, so this would be best: [type] [length] [flag] [value] -- and in fact, if I were worried about dynamic schemas, I'd be partial to Codd's "two kinds of NULLs" arguments, in case some application wanted to make a distinction between not-applicable-value and missing-value.

For most of the data-type rows, I say Yes for all the formats that have explicit defined support. This does not mean that it's impossible to store the value -- for example, it's easy to store a BOOLEAN with an integer or with a user-defined extension -- but then you're not using the format specification, so some of its advantages are lost. For dates (including DATETIME, TIMESTAMP, DATE, etc.), I did not worry if the precision and range were less than what MySQL or MariaDB can handle. But for DECIMAL, I say No if the maximum number of digits is 18 or if there are no post-decimal digits. For LONGBLOB, I say No if the maximum number of bytes is 2**32. For VARCHAR, I say Yes if there's any way to store encoded characters (rather than just bytes, which is what BINARY and BLOB are). In the "VARCHAR+" row, I say Yes if there is more than one character set, although this doesn't mean much -- the extra character sets don't match MySQL/MariaDB's variety. I'll say again that specifications allow for "extensions", for example with ASN.1 you can define your own tags, but I'm only looking at what's specific in the specification.

                     Avro   ASN.1  BSON   CBOR   MsgPack  OPC-UA  ProtoBuf  Thrift
    NULL is a value  no     no     no     no     no       YES     no        no
    BOOLEAN          YES    YES    YES    YES    YES      YES     no        YES
    INTEGER          YES    YES    YES    YES    YES      YES     YES       YES
    BIGINT           YES    YES    YES    YES    YES      YES     YES       YES
    FLOAT            YES    YES    YES    YES    YES      YES     no        no
    DOUBLE           YES    YES    YES    YES    YES      YES     YES       YES
    BINARY / BLOB    YES    YES    YES    YES    YES      YES     YES       YES
    VARCHAR          YES    YES    YES    YES    YES      YES     no        YES
    Dates            no     YES    YES    YES    no       YES     no        no
    LONGBLOB         YES    YES    no     YES    no       no      YES       no
    DECIMAL          no     YES    no     YES    no       no      no        no
    VARCHAR+         no     YES    no     no     no       YES     no        no
    BIT              no     YES    no     no     no       no      no        no

(MsgPack = MessagePack, ProtoBuf = Protocol Buffers.)

Your Conclusion

You have multiple choice: (1) Peter Gulutzan is obsessed with standards and exactness, (2) well, might as well use one of these despite its defects, (3) we really need yet another binary serializer format.

ocelotgui news

Recently there were some changes to the ocelot.ca site to give more prominence to the ocelotgui manual, and a minor release -- ocelotgui version 1.02 -- happened on August 15.
Posted 3 days ago by Marco Tusa
This blog discusses how to find and address badly written queries using ProxySQL. All of us are very good at writing good queries. We know this to always be true! But sometimes a bad query escapes our control and hits our database. There is the new guy, the probie, who just joined the company and is writing all his code using SELECT * without a WHERE clause. We've told him "STOP" millions of times, but he refuses to listen. Or there is a new code injection, and it will take developers some time to fix and isolate the part of the code that is sending killing queries to our database.

The above are true stories; things that happen every day in at least a few environments. Isolating the bad query isn't the main problem: that is something that we can do very fast. The issue is identifying the code that is generating the query, and disabling that code without killing the whole application. That part can take days. ProxySQL allows us to act fast and stop any offending query in seconds. I will show you how.

Let us say our offending query does this:

    SELECT * from history;

where history is a 2 TB table partitioned by year in our DWH. That query will definitely create some issues on the database. It's easy to identify this query as badly designed. Unfortunately, it was inserted in the ETL process, which uses a multi-thread approach and auto-recovery. Now when you kill it, the process restarts it. After that, it takes developers some time to stop that code. In the meantime, your reporting system serving your company in real time is so slooow (or down).

With ProxySQL, you can stop that query in one second:

    INSERT INTO mysql_query_rules (rule_id, active, match_pattern, error_msg, apply)
    VALUES (89, 1, '^SELECT \* from history$', 'Query not allowed', 1);
    LOAD MYSQL QUERY RULES TO RUNTIME;
    SAVE MYSQL QUERY RULES TO DISK;

Done, your database never receives that query again! Now the application gets a message saying that the query is not allowed. And look, it's possible to do things even better:

    INSERT INTO mysql_query_rules (rule_id, active, match_digest, flagOUT, apply)
    VALUES (89, 1, '^SELECT \* FROM history', 100, 0);
    INSERT INTO mysql_query_rules (rule_id, active, flagIN, match_digest, destination_hostgroup, apply)
    VALUES (1001, 1, 100, 'WHERE', 502, 1);
    INSERT INTO mysql_query_rules (rule_id, active, flagIN, error_msg, apply)
    VALUES (1002, 1, 100, 'Query not allowed', 1);
    LOAD MYSQL QUERY RULES TO RUNTIME;
    SAVE MYSQL QUERY RULES TO DISK;

In this case, ProxySQL checks for any query having SELECT * FROM history. If the query has a WHERE clause, then it redirects it to the server for execution. If the query does not have a WHERE, it stops the query and sends an error message to the application. (For the "find" half of the problem, see the addendum after the references.)

Conclusion

This is a very basic example of an offending query. But I think it makes clear how ProxySQL helps any DBA in stopping them quickly in the case of an emergency. This gives the DBAs and the developers time to coordinate a better plan of action to permanently fix the issue.

References
https://github.com/sysown/proxysql
http://www.proxysql.com/2015/09/proxysql-tutorial-setup-in-mysql.html
https://github.com/sysown/proxysql/blob/v1.2.2/doc/configuration_howto.md
https://github.com/sysown/proxysql/blob/v1.2.2/INSTALL.md
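Addendum: to find offending queries in the first place, ProxySQL's admin interface keeps per-digest counters. A minimal sketch, assuming the stats_mysql_query_digest table as documented for ProxySQL 1.2 (run it against the admin port, and verify the column names for your version):

    -- top 10 digests by total execution time, with average time per call
    SELECT hostgroup, digest_text, count_star,
           sum_time / count_star AS avg_time_us
    FROM stats_mysql_query_digest
    ORDER BY sum_time DESC
    LIMIT 10;

The digest_text of a suspicious entry is a natural starting point for a match_digest rule like the ones above.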
Posted 4 days ago by MySQL Performance Blog
Percona announces the GA release of Percona Server 5.7.14-7 on August 23, 2016. Download the latest version from the Percona web site or the Percona Software Repositories. Based on MySQL 5.7.14, including all the bug fixes in it, Percona Server 5.7.14-7 is the current GA release in the Percona Server 5.7 series. Percona provides completely open-source and free software. Find release details in the 5.7.14-7 milestone at Launchpad.

New Features:

Percona Server Audit Log Plugin now supports filtering by user, database, and sql_command.
Percona Server now supports the tree map file block allocation strategy for TokuDB.

Bugs Fixed:

Fixed a potential cardinality-0 issue for TokuDB tables if ANALYZE TABLE finds only deleted rows and no actual logical rows before it times out. Bug fixed #1607300 (#1006, #732).
TokuDB database.table.index names longer than 256 characters could cause a server crash if background analyze table status was checked while running. Bug fixed #1005.
PAM Authentication Plugin would abort authentication while checking UNIX user group membership if there were more than a thousand members. Bug fixed #1608902.
If DROP DATABASE failed to delete some of the tables in the database, the partially-executed command was logged in the binlog as DROP TABLE t1, t2, ... for the tables for which the drop succeeded. A slave might fail to replicate such a DROP TABLE statement if there exist foreign key relationships to any of the dropped tables and the slave has a different schema from the master. Fixed by checking, on the master, whether any of the tables in the database to be dropped participate in a foreign key relationship, and failing the DROP DATABASE statement immediately. Bug fixed #1525407 (upstream #79610).
PAM Authentication Plugin didn't support spaces in UNIX user group names. Bug fixed #1544443.
For security reasons, ld_preload libraries can now only be loaded from the system directories (/usr/lib64, /usr/lib) and the MySQL installation base directory.
In the client library, any EINTR received during network I/O was not handled correctly. Bug fixed #1591202 (upstream #82019).
SHOW GLOBAL STATUS was locking more than the upstream implementation, which made it less suitable to be called with high frequency. Bug fixed #1592290.
The included .gitignore in the percona-server source distribution had a line *.spec, which meant someone trying to check in a copy of the percona-server source would be missing the spec file required to build the RPMs. Bug fixed #1600051.
Audit Log Plugin did not transcode queries. Bug fixed #1602986.
If the changed page bitmap redo log tracking thread stopped for any reason, shutdown would wait a long time for the log tracker thread to quit, which it never did. Bug fixed #1606821.
Changed page tracking was initialized too late by InnoDB. Bug fixed #1612574.
Fixed a stack buffer overflow if --ssl-cipher had more than 4000 characters. Bug fixed #1596845 (upstream #82026).
Audit Log Plugin events did not report the default database. Bug fixed #1435099.
Canceling the TokuDB background ANALYZE TABLE job twice, or while it was in the queue, could lead to a server assertion. Bug fixed #1004.
Fixed various spelling errors in comments and function names. Bug fixed #728 (Otto Kekäläinen).
Implemented a set of fixes to make PerconaFT build and run on the AArch64 (64-bit ARMv8) architecture. Bug fixed #726 (Alexey Kopytov).
Other bugs fixed: #1542874 (upstream #80296), #1610242, #1604462 (upstream #82283), #1604774 (upstream #82307), #1606782, #1607359, #1607606, #1607607, #1607671, #1609422, #1610858, #1612551, #1613663, #1613986, #1455430, #1455432, #1581195, #998, #1003, and #730.

The release notes for Percona Server 5.7.14-7 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.
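As an aside on the new audit filtering feature above: in the Percona Server audit log plugin this kind of filtering is exposed through system variables. A hedged sketch -- the names below follow the plugin's documented audit_log_include_*/audit_log_exclude_* naming, but treat them as an assumption and verify them against your build:

    -- log only this account, and only these statement types
    -- (variable names assumed from the Percona audit log plugin docs)
    SET GLOBAL audit_log_include_accounts = 'probie@%';
    SET GLOBAL audit_log_include_commands = 'select,insert';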
Posted 4 days ago by Yann Larrivee
ConFoo Montreal: March 8th-10th, 2017. Want to get your web development ideas in front of a live audience? The call for papers for the ConFoo Montreal 2017 web developer conference is open! If you have a burning desire to hold forth about PHP, Java, Ruby, Python, or any other web development topics, we want to see your proposals. The window is open only from August 21 to September 20, 2016, so hurry. An added benefit: if your proposal is selected and you live outside of the Montreal area, we will cover your travel and hotel. You'll have 45 minutes to wow the crowd, with 35 minutes for your topic and 10 minutes for Q&A. We can't wait to see your proposals. Knock us out! ConFoo Montreal will be held on March 8-10, 2017. For those of you who already know about our conference, be aware that this annual tradition will still be running in addition to ConFoo Vancouver. Visit our site to learn more about both events.
Posted 4 days ago by Balazs Pocze
I just created the Budapest MySQL Meetup group. I hope there will be interest; the first event is being organized. Check it out if you are near Budapest!
Posted 4 days ago by Severalnines
Remember to join us Tuesday, August 30th for the first part of our upcoming webinar trilogy on MySQL Query Tuning. This first of three in-depth webinar sessions, led by Krzysztof Książek, Senior Support Engineer at Severalnines, covers the MySQL query tuning process and tools. When done right, tuning MySQL queries and indexes can increase the performance of your application and decrease response times. We will be covering this complex topic over the course of three webinars of 60 minutes each, so feel free to also register for parts 2 & 3 here.

In this first part of the trilogy we will discuss the building, collecting, analysing, tuning and testing processes, as well as the main tools involved, tcpdump and pt-query-digest. Register below to join us and get your questions answered around MySQL query tuning.

Date & Registration
Part 1: Query tuning process and tools -- Tuesday, August 30th. Register. Feel free to also register for Parts 2 & 3.

Agenda
MySQL Query Tuning Trilogy: Process and tools
Query tuning process: Build, Collect, Analyse, Tune, Test
Tools: tcpdump, pt-query-digest

Speaker
Krzysztof Książek, Senior Support Engineer at Severalnines, is a MySQL DBA with experience in managing complex database environments for companies like Zendesk, Chegg, Pinterest and Flipboard. He's the main author of the Severalnines blog and webinar series: Become a MySQL DBA.

We look forward to "seeing" you there!

Tags: MySQL, query tuning, sql tuning, tcpdump, pt-query-digest, MariaDB, performance, webinar
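For the "Tools" part of the agenda, the two utilities are commonly combined: capture traffic with tcpdump, then aggregate it into a query report with pt-query-digest. A sketch based on pt-query-digest's documented --type tcpdump mode (the interface, packet count, port and file name are placeholders to adjust for your setup):

    # capture MySQL traffic on port 3306, then summarize the queries
    tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
    pt-query-digest --type tcpdump mysql.tcp.txt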
Posted 4 days ago by Mark Callaghan
I spent a few years at Facebook where I was extremely busy helping to make MySQL better at web-scale. I worked a lot with Domas. He found so many problems and I helped fix them along with a few others (the MySQL db-eng team was small). Domas made it easy to understand what was broken, and there was a lot of low-hanging fruit. This slide deck is one perspective on what we did. I doubt I have the energy to go through another few years like that, but it was a great time. The timing was also right, as there were many people at Oracle/MySQL pushing to make MySQL scale on modern hardware.
Posted 4 days ago by MySQL Performance Blog
Percona announces the release of Percona Server 5.6.32-78.0 on August 22nd, 2016. Download the latest version from the Percona web site or the Percona Software Repositories. Based on MySQL 5.6.32, including all the bug fixes in it, Percona Server 5.6.32-78.0 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free -- this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release are available in the 5.6.32-78.0 milestone on Launchpad.

New Features:

Percona Server Audit Log Plugin now supports filtering by user and SQL command.
Percona Server now supports the tree map file block allocation strategy for TokuDB.

Bugs Fixed:

Fixed a potential cardinality-0 issue for TokuDB tables if ANALYZE TABLE finds only deleted rows and no actual logical rows before it times out. Bug fixed #1607300 (#1006, #732).
TokuDB database.table.index names longer than 256 characters could cause a server crash if background analyze table status was checked while running. Bug fixed #1005.
PAM Authentication Plugin would abort authentication while checking UNIX user group membership if there were more than a thousand members. Bug fixed #1608902.
If DROP DATABASE failed to delete some of the tables in the database, the partially-executed command was logged in the binlog as DROP TABLE t1, t2, ... for the tables for which the drop succeeded. A slave might fail to replicate such a DROP TABLE statement if there exist foreign key relationships to any of the dropped tables and the slave has a different schema from the master. Fixed by checking, on the master, whether any of the tables in the database to be dropped participate in a foreign key relationship, and failing the DROP DATABASE statement immediately. Bug fixed #1525407 (upstream #79610).
PAM Authentication Plugin didn't support spaces in UNIX user group names. Bug fixed #1544443.
For security reasons, ld_preload libraries can now only be loaded from the system directories (/usr/lib64, /usr/lib) and the MySQL installation base directory.
Percona Server 5.6 could not be built with the -DMYSQL_MAINTAINER_MODE=ON option. Bug fixed #1590454.
In the client library, any EINTR received during network I/O was not handled correctly. Bug fixed #1591202 (upstream #82019).
The included .gitignore in the percona-server source distribution had a line *.spec, which meant someone trying to check in a copy of the percona-server source would be missing the spec file required to build the RPMs. Bug fixed #1600051.
Audit Log Plugin did not transcode queries. Bug fixed #1602986.
A LeakSanitizer-enabled build failed to bootstrap the server for MTR. Bug fixed #1603978 (upstream #81674).
Fixed a MYSQL_SERVER_PUBLIC_KEY connection option memory leak. Bug fixed #1604419.
The fix for bug #1341067 added a call to free some of the heap memory allocated by OpenSSL. This is not safe for repeated calls if OpenSSL is linked twice through different libraries and each is trying to free the same memory. Bug fixed #1604676.
If the changed page bitmap redo log tracking thread stopped for any reason, shutdown would wait a long time for the log tracker thread to quit, which it never did. Bug fixed #1606821.
Audit Log Plugin events did not report the default database. Bug fixed #1435099.
Canceling the TokuDB background ANALYZE TABLE job twice, or while it was in the queue, could lead to a server assertion. Bug fixed #1004.
Fixed various spelling errors in comments and function names. Bug fixed #728 (Otto Kekäläinen).
Implemented a set of fixes to make PerconaFT build and run on the AArch64 (64-bit ARMv8) architecture. Bug fixed #726 (Alexey Kopytov).

Other bugs fixed: #1603073, #1604323, #1604364, #1604462, #1604774, #1606782, #1607224, #1607359, #1607606, #1607607, #1607671, #1608385, #1608437, #1608845, #1609422, #1610858, #1612084, #1612551, #1455430, #1455432, #1610242, #998, #1003, #729, and #730.

Release notes for Percona Server 5.6.32-78.0 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.
Posted 4 days ago by MariaDB
The following is a guest blog post from Subodh Kumar, head of technology at Magicbricks, India's largest online property portal.

To support our growing online traffic, Magicbricks migrated from a proprietary database to MariaDB (version 10.1.x). With this migration, we've re-factored our application architecture to separate read and write database calls. This has allowed us to load-balance our heavy read calls across multiple slave instances without any worries about lag during data syncs. Using MariaDB, we are now able to serve approximately 7 million page views (from our web and mobile sites) and approximately 6 million API calls per day.

MariaDB has not only helped us support this high volume of traffic but has also smoothed our database-related operations. We were easily able to set up multi-master, near real-time replication. Not to mention, this is with no additional license requirements, which was a primary consideration with the proprietary database servers that we had previously deployed. This deployment has enabled Magicbricks to scale its applications with any number of database instances, as desired. The average load factor with the previous proprietary database was around 15 to 20, which has now been reduced tremendously to approximately three after the MariaDB deployment.

Tags: Community, Scaling
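The post doesn't show configuration, but for readers who want to try the read/write-split pattern: a slave in MariaDB 10.1 is pointed at its master roughly like this (standard MariaDB replication syntax; the host, account, and password are placeholders):

    -- on the slave: point it at the master, then start replicating
    CHANGE MASTER TO
      MASTER_HOST = 'master.example.com',   -- placeholder host
      MASTER_USER = 'repl',                 -- placeholder account
      MASTER_PASSWORD = '...',
      MASTER_USE_GTID = slave_pos;          -- MariaDB GTID position tracking
    START SLAVE;
    SHOW SLAVE STATUS \G

Read-heavy application calls can then be spread across any number of such slaves, which is the load-balancing arrangement the post describes.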
Posted 4 days ago by Trent Hornibrook
Yesterday I wrote a blog on some metrics I like to start with when running a delivery team. There are two items that I missed in the blog: building the right thing, and checking those metrics.

"Building the thing right" is not "building the right thing"

    @mysqldbahelp nice write up. I think they cover "are we building the thing right" but need others for "are we building the right thing".
    -- mark barber (@mark_barbs) 22 August 2016

Agile coach Mark Barber reminded me that measurement is all well and good, but if the item being built is not 'right' then it's waste. Having showcases with appropriate stakeholders should ensure that the thing being built is indeed right. The closer you can get the business into the workings of the delivery team, the faster that feedback will be, consequently reducing the time wasted 'doing the wrong thing'.

Invest in looking at your metrics

Capturing metrics is all well and good, but pointless if the data is not looked at at a regular cadence. Set aside time every two weeks to a month to look at the data and try to diagnose what it means. Better yet, take it to the team and have them provide input into the diagnosis (which will generate buy-in on capturing the data). Look to make a single change at that cadence and checkpoint at the next heartbeat. Not every change will be positive, but doing nothing is significantly worse than trying something and failing (fast).