
News

Posted about 10 years ago
Plpgunit started out of curiosity: why can't a unit testing framework be simple and easy to use? Unit testing frameworks for databases are not among the most widely available tools for developers, particularly if you are building an application … Continue reading → The post PostgreSQL Unit Testing Framework (plpgunit) appeared first on MixERP. Copyright © MixERP, 2014.
Posted about 10 years ago
If you've been following the bitcoin saga, you may have heard about Mt. Gox's halting of currency withdrawals. Well, it has come out that, due to their (completely preventable) improper transaction tracking, bad actors have been gaming them. Oops. Mt. Gox (bitcoin exchange) uses surrogate keys improperly and pays the price. I determined a long time ago that overuse of surrogate keys is a huge …
Posted about 10 years ago
Paris, France - Feb. 10, 2014

DALIBO is proud to announce the release of pgBadger 5, a PostgreSQL performance analyzer built for speed, with fully detailed reports based on your Postgres log files. This major version comes with a bunch of new metrics, such as a histogram of SQL query times, some fixes to the new HTML5 design, and the ability to build cumulative reports.

New Incremental Mode

The incremental mode is an old request, first raised at PgCon Ottawa 2012, concerning the ability to build incremental reports from successive runs of pgBadger. It is now possible to run pgBadger once a day (or even every hour) and have cumulative reports per day and per week. A top-level index page lets you go directly to the weekly and daily reports. Here's a screenshot of the new index page: http://dalibo.github.io/pgbadger/screenshots/pgbadgerv5_index.png

This mode has been built with simplicity in mind. You just need to run pgBadger from cron as follows:

0 23 * * * pgbadger -q -I -O /var/www/pgbadger/ /var/log/postgresql.log

This is enough to have daily and weekly reports viewable in your browser. Take a look at our demo here: http://dalibo.github.io/pgbadger/demov5/

There is also a useful improvement that allows pgBadger to seek directly to the last position in the same log file after a successive execution. This feature is only available in incremental mode or with the -l option, and only when parsing a single log file. Say you have a weekly rotated log file and want to run pgBadger every day: with 2 GB of log per day, pgBadger used to spend 5 minutes per 2 GB block just to reach the last position in the log, so by the end of the week this feature saves you 35 minutes. Now pgBadger starts parsing new log entries immediately. This feature is compatible with the multiprocess mode using the -j option (n processes for one log file).
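As a sketch of the cron setup just described (the schedule, flags and paths come from the announcement; the file location and the "postgres" user field are illustrative assumptions for a system crontab), the whole incremental configuration fits in one crontab fragment:

```shell
# /etc/cron.d/pgbadger -- illustrative location and cron user.
# Daily incremental run at 23:00: -q quiet, -I incremental mode,
# -O output directory receiving the cumulative daily/weekly HTML reports.
0 23 * * * postgres pgbadger -q -I -O /var/www/pgbadger/ /var/log/postgresql.log
```

Pointing a browser at the output directory then gives the per-day and per-week reports without any further manual step.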
New Histograms

This new major release adds some new metrics, such as an hourly graphical representation of the average count and duration of the top normalized queries. The same goes for errors and events: you can see graphically at which hours they occur most often. For example: http://dalibo.github.io/pgbadger/screenshots/pgbadgerv5_histogram.png

There's also a new "Histogram of query times", a new graph in the top queries slide that shows the distribution of query times over the analyzed period. For example: http://dalibo.github.io/pgbadger/screenshots/pgbadgerv5_histogram_2.png

There are also some graphic and report improvements. The mouse tracker formatting has been reworked: it now shows a vertical crosshair and all dataset values at once when the mouse pointer moves over a series. Automatic query formatting has also been changed: it is now triggered on double click, as a single click was painful when you wanted to copy part of a query. Autovacuum reports now associate the database name with the autovacuum and autoanalyze entries. Statistics now refer to "dbname.schema.table"; previous versions only showed the pair "schema.table". This release also adds "Session peak" information and a report on "Simultaneous sessions". The parameters log_connections and log_disconnections must be enabled in postgresql.conf for this.

Links & Credits

DALIBO would like to thank the developers who submitted patches and the users who reported bugs and feature requests, especially Martin Prochazka, Herve Werner, tmihail, Reeshna Ramakrishnan, Guillaume Smet, Alexander Korotkov and Casey Allen Shobe. pgBadger is an open project: any contribution to building a better tool is welcome. Just send your ideas, feature requests or patches using the GitHub tools or directly on our mailing list.
Download: http://dalibo.github.io/pgbadger/
Mailing list: https://listes.dalibo.com/cgi-bin/mailman/listinfo/pgbagder

About pgBadger: pgBadger is a new-generation log analyzer for PostgreSQL, created by Gilles Darold (also the author of ora2pg, the powerful migration tool). pgBadger is a fast and easy tool to analyze your SQL traffic and create HTML5 reports with dynamic graphs. It is the perfect tool to understand the behavior of your PostgreSQL servers and identify which SQL queries need to be optimized. Docs, download & demo at http://dalibo.github.io/pgbadger/

About DALIBO: DALIBO is the leading PostgreSQL company in France, providing support, training and consulting to its customers since 2005. The company contributes to the PostgreSQL community in various ways, including code, articles, translations, free conferences and workshops. Check out DALIBO's open source projects at http://dalibo.github.io
Posted about 10 years ago
Nuptse summit by Flickr user François Bianco

I am happy to announce that version 3.0.0 of DBD::Pg, the Perl interface to Postgres, was released on February 3, 2014. This represents a major release, mostly due to the way it now handles UTF-8. I will try to blog soon with more details about that and some other major changes in this version. The new version is available from CPAN. Please make sure that this is the latest version, as new versions may have come out since this post was written.

Checksums for 3.0.0:

58c2613bcb241279aca4c111ba16db48  DBD-Pg-3.0.0.tar.gz
03ded628d453718cbceaea906da3412df5a7137a  DBD-Pg-3.0.0.tar.gz

The complete list of changes is below. Thank you to everyone who sent in patches, helped debug, wrote bug reports, and helped me get this version out the door!

Version 3.0.0 Released February 3, 2014 (git commit 9725314f27a8d65fc05bdeda3da8ce9c251f79bd)

- Major change in UTF-8 handling. If client_encoding is set to UTF-8, always mark returned Perl strings as utf8. See the pg_enable_utf8 docs for more information. [Greg Sabino Mullane, David E. Wheeler, David Christensen]
- Bump DBI requirement to 1.614
- Bump Perl requirement to 5.8.1
- Add new handle attribute, switch_prepared, to control when we stop using PQexecParams and start using PQexecPrepared. The default is 2: in previous versions, the effective behavior was 1 (i.e. PQexecParams was never used). [Greg Sabino Mullane]
- Better handling of items inside of arrays, particularly bytea arrays. [Greg Sabino Mullane] (CPAN bug #91454)
- Map SQL_CHAR back to bpchar, not char [Greg Sabino Mullane, reported by H.Merijn Brand]
- Do not force oids to Perl ints [Greg Sabino Mullane] (CPAN bug #85836)
- Return better sqlstate codes on fatal errors [Rainer Weikusat]
- Better prepared statement names to avoid bug [Spencer Sun] (CPAN bug #88827)
- Add pg_expression field to statistics_info output to show functional index information [Greg Sabino Mullane] (CPAN bug #76608)
- Adjust lo_import_with_oid check for 8.3 (CPAN bug #83145)
- Better handling of libpq errors to return SQLSTATE 08000 [Stephen Keller]
- Make sure CREATE TABLE .. AS SELECT returns rows in non do() cases
- Add support for AutoInactiveDestroy [David Dick] (CPAN bug #68893)
- Fix ORDINAL_POSITION in foreign_key_info [Dagfinn Ilmari Mannsåker] (CPAN bug #88794)
- Fix foreign_key_info with unspecified schema [Dagfinn Ilmari Mannsåker] (CPAN bug #88787)
- Allow foreign_key_info to work when pg_expand_array is off [Greg Sabino Mullane and Tim Bunce] (CPAN bug #51780)
- Remove math.h linking, as we no longer need it (CPAN bug #79256)
- Spelling fixes (CPAN bug #78168)
- Better wording for the AutoCommit docs (CPAN bug #82536)
- Change NOTICE to DEBUG1 in t/02attribs.t test for handle attribute "PrintWarn": implicit index creation is now quieter in Postgres. [Erik Rijkers]
- Use correct SQL_BIGINT constant for int8 [Dagfinn Ilmari Mannsåker]
- Fix assertion when binding array columns on debug perls >= 5.16 [Dagfinn Ilmari Mannsåker]
- Adjust test to use 3 digit exponential values [Greg Sabino Mullane] (CPAN bug #59449)
- Avoid reinstalling driver methods in threads [Dagfinn Ilmari Mannsåker] (CPAN bug #83638)
- Make sure App::Info does not prompt for pg_config location if AUTOMATED_TESTING or PERL_MM_USE_DEFAULT is set [David E. Wheeler] (CPAN bug #90799)
- Fix typo in docs for pg_placeholder_dollaronly [Bryan Carpenter] (CPAN bug #91400)
- Cleanup dangling largeobjects in tests [Fitz Elliott] (CPAN bug #92212)
- Fix skip test counting in t/09arrays.t [Greg Sabino Mullane] (CPAN bug #79544)
- Explicitly specify en_US for spell checking [Dagfinn Ilmari Mannsåker] (CPAN bug #91804)
Posted about 10 years ago
Several years back, a new fellow on a ride with my bike club had a rather serious crash. I’ll spare you the gory details here, other than to say that he was very lucky we had two Wilderness First Responders, a nurse, and an Army medic with us. The experience made quite an impression on […]
Posted about 10 years ago
Version 2.0RC1 of repmgr, the Replication Manager for PostgreSQL clusters, has been released. This release introduces a new experimental feature: autofailover. With autofailover, repmgr is able to automatically promote a standby and let the other standbys follow the new master, without intervention from the DBA. The release also adds a lot of bug fixes and several new features. Read the full announcement at http://repmgr.org/release-notes.html.

About repmgr

repmgr is a set of open source tools that helps DBAs and system administrators manage a cluster of PostgreSQL databases. By taking advantage of the Hot Standby capability introduced in PostgreSQL 9, repmgr greatly simplifies the process of setting up and managing databases with high availability and scalability requirements. repmgr simplifies administration and daily management, enhances productivity and reduces the overall costs of a PostgreSQL cluster by monitoring the replication process and allowing DBAs to issue high availability operations such as switch-overs and fail-overs.
Posted about 10 years ago
When: 7-9pm Thu Feb 20, 2014
Where: Iovation
Who: Dave Kerr
What: Monitoring Postgres at New Relic

You already know that New Relic can give you really good insight into your applications. But how about your PostgreSQL database? Join Dave Kerr as he shows that New Relic isn’t just for developers anymore! We’ll demo using New Relic’s system monitoring along with its plugin system where you can get in-depth database information such as slow queries, number of backends, checkpoint info – just about everything a DBA needs! Dave Kerr is a recovering DBA, PostgreSQL evangelist and is currently working as a Software Engineer on the Site Engineering team at New Relic.

Our meeting will be held at Iovation, on the 32nd floor of the US Bancorp Tower at 111 SW 5th (5th & Oak). It’s right on the Green & Yellow Max lines. Underground bike parking is available in the parking garage; outdoors all around the block in the usual spots. No bikes in the office, sorry! Building security will close access to the floor at 7:30. See you there!
Posted about 10 years ago
Hi. Isn't this a nice way to represent the Oracle architecture? Soon I will be posting the PostgreSQL architecture as well. దినేష్ కుమార్ (Dinesh Kumar)
Posted about 10 years ago
When writing an extension or module for PostgreSQL, having proper regression tests and documentation (along with actually useful features!) is important to facilitate its acceptance. When it comes to regression tests, PGXS provides the necessary infrastructure, mainly through the variable REGRESS in the Makefile, allowing an author to specify a list of tests that can be kicked off with "make check" or "make installcheck". Using this flag has the advantage of relying on pg_regress when it is necessary to compare expected and generated output, which is also useful for alternative outputs on multiple platforms (like select_having, select_implicit or select_views in the core regression tests). In terms of documentation, it is possible to specify a list of raw files with the flag DOCS, which will install the documentation in prefix/doc/$MODULEDIR (by default prefix is $PGINSTALL/share/, configurable with --docdir in ./configure). Note that the list of Makefile variables of PGXS is here.

From experience, having everything managed in a single place (tests, code and documentation) greatly facilitates the maintenance and consistency of a project, and it usually does not help a project to have only a single format of documentation. For example, having only a wiki or html pages on a web server as documentation might be fine in the short term, but proves difficult in the long term if systems need to be migrated, and such docs easily get out of sync with the code itself. Some people prefer man pages, some html, and others simple README files, and reaching the maximum number of users is important. Also, having everything centralized is a real time-saver, especially when you are the only maintainer of a project.
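To make the REGRESS and DOCS mechanics just described concrete, here is a minimal PGXS Makefile sketch; the extension name "my_ext" and the test name "basic" are invented for illustration:

```make
# Minimal PGXS Makefile for a hypothetical extension "my_ext".
EXTENSION = my_ext
DATA = my_ext--1.0.sql

# "make installcheck" runs pg_regress on sql/basic.sql and
# compares its output against expected/basic.out.
REGRESS = basic

# Raw documentation files installed under prefix/doc/$MODULEDIR.
DOCS = README.my_ext

PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
```

With this in place, "make installcheck" against a running server diffs the generated output with the expected file and leaves a regression.diffs behind on mismatch.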
(As a side note, GitHub actually manages this pretty well by helping people keep documentation in a dedicated branch of their git repository and have it appear automatically on a site, or in a dedicated git repository altogether, even if it is easy to forget to update another branch or an additional repository for the documentation.)

DOCS makes such centralized documentation management somewhat difficult, and from experience I find it hard to manage a project with raw man pages or html pages, as it takes time to put such things into a nice shape and to understand them. However, what you can do is trick DOCS by auto-generating extra documentation (each project maintainer has their own way of doing this as well!), and here is an example of how to do that with asciidoc. This is something that I have done for a small module called pg_arman. (Yes, this name, somewhat close to its parent's name, is for a fork, except that it is a light-weight version keeping only the necessary things and dropping the weird stuff... That's another topic though.) By the way, using asciidoc with xmlto has proved to facilitate both project maintenance and documentation readability, for both html and man output.

Extra-documentation generation with asciidoc and xmlto is controlled through some environment variables: for this example, ASCIIDOC and XMLTO (incredible imagination). If one of those variables is not set, the extra documentation will simply not be generated. A simple way to set them is in, for example, your bashrc (adapt it to your environment or build machine):

export ASCIIDOC=asciidoc
export XMLTO=xmlto

Or directly pass those values to the make command:

XMLTO=xmlto ASCIIDOC=asciidoc make USE_PGXS=1 [install]

The first approach is better for developers, the second better for automated builds. Then, with all the documentation in doc/, generated from doc/pg_arman.txt, here is how the documentation part of the Makefile at the root of the project looks:
DOCS = doc/pg_arman.txt

ifneq ($(ASCIIDOC),)
ifneq ($(XMLTO),)
man_DOCS = doc/pg_arman.1
DOCS = doc/pg_arman.html doc/README.html
endif # XMLTO
endif # ASCIIDOC

[extra process blabla]

ifneq ($(ASCIIDOC),)
ifneq ($(XMLTO),)
all: docs

docs:
	$(MAKE) -C doc/

# Special handling for man pages, they need to be in a dedicated folder
install: install-man

install-man:
	$(MKDIR_P) '$(DESTDIR)$(mandir)/man1/'
	$(INSTALL_DATA) $(man_DOCS) '$(DESTDIR)$(mandir)/man1/'
endif # XMLTO
endif # ASCIIDOC

# Clean up documentation as well
clean: clean-docs

clean-docs:
	$(MAKE) -C doc/ clean

There are three things to note here:
- Documentation generation is driven by the rule "docs", which kicks off the build in the subfolder doc/.
- Installation of man pages needs an extra rule, here "install-man", to redirect them to the same folder as the Postgres man documentation.
- Documentation cleanup is handled by a new rule, "clean-docs" (this could be done better though): generated documentation is cleaned up even if ASCIIDOC or XMLTO is not defined.

Then, here is how doc/Makefile looks:

manpages = pg_arman.1
EXTRA_DIST = pg_arman.txt Makefile $(manpages)
htmls = pg_arman.html README.html

# We have asciidoc and xmlto, so build everything and define correct
# rules for build.
ifneq ($(ASCIIDOC),)
ifneq ($(XMLTO),)
dist_man_MANS = $(manpages)
doc_DATA = $(htmls)

pg_arman.1: pg_arman.xml $(doc_DATA)
	$(XMLTO) man $<

%.xml: %.txt
	$(ASCIIDOC) -b docbook -d manpage -o $@ $<

%.html: %.txt
	$(ASCIIDOC) -a toc -o $@ $<

README.html: ../README
	$(ASCIIDOC) -a toc -o $@ $<
endif # XMLTO
endif # ASCIIDOC

clean:
	rm -rf $(manpages) *.html *.xml

What this does is generate, of course, the man documentation, but also a set of html pages that can be used for the project website, README included. And of course this is skipped if ASCIIDOC or XMLTO is not defined.
The source of inspiration for this was actually pgbouncer, the challenge being to simplify what was there enough to have it working with a Postgres extension and PGXS, without any ./configure step and with minimal settings. As a side note, be sure to set XML_CATALOG_FILES correctly on OSX; with Homebrew for example, use:

export XML_CATALOG_FILES="/usr/local/etc/xml/catalog"

That's something I ran into during my own hacking :)