PostgreSQL Weekly News - November 7, 2021
PG Build 2021 will be held online on 30 November and 1 December 2021 09:00-17:00 GMT.
Details.
PostgreSQL Product News
PostgresDAC 3.11, a direct access component suite for PostgreSQL, released.
http://microolap.com/products/connectivity/postgresdac/download/
JDBC 42.3.1
released.
ODB C++ ORM version 2.5.0-b.21 released.
DynamoDB FDW 1.0.0 released.
Babelfish, a MS SQL Server compatibility layer for PostgreSQL,
released.
PostgreSQL Jobs for November
https://archives.postgresql.org/pgsql-jobs/2021-11/
PostgreSQL in the News
Planet PostgreSQL: https://planet.postgresql.org/
PostgreSQL Weekly News is brought to you this week by David Fetter
Submit news and announcements by Sunday at 3:00pm PST8PDT to [email protected].
Applied Patches
Tom Lane pushed:
plpgsql: report proper line number for errors in variable initialization.
Previously, we pointed at the surrounding block's BEGIN keyword. If there are
multiple variables being initialized in a DECLARE section, this isn't good
enough: it can be quite confusing and unhelpful. We do know where the
variable's declaration started, so it just takes a tiny bit more
error-reporting infrastructure to use that. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/acb2d7d5d2301f07d5857ee252995e62ce9e7055
Avoid O(N^2) behavior when the standby process releases many locks. When
replaying a transaction that held many exclusive locks on the primary, a
standby server's startup process would expend O(N^2) effort on manipulating
the list of locks. This code was fine when written, but commit 1cff1b95a made
repetitive list_delete_first() calls inefficient, as explained in its commit
message. Fix by just iterating the list normally, and releasing storage only
when done. (This'd be inadequate if we needed to recover from an error
occurring partway through; but we don't.) Back-patch to v13 where 1cff1b95a
came in. Nathan Bossart Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/6301c3adabd947394682e37c933b0f3f83353b28
Doc: improve README files associated with TAP tests. Rearrange
src/test/perl/README so that the first section is more clearly "how to run
these tests", and the rest "how to write new tests". Add some basic info
there about debugging test failures. Then, add cross-refs to that README from
other READMEs that describe how to run TAP tests. Per suggestion from Kevin
Burke, though this is not his original patch. Discussion:
https://postgr.es/m/CAKcy5eiSbwiQnmCfnOnDCVC7B8fYyev3E=6pvvECP9pLE-Fcuw@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/b21415595cace7f3a45cfeb3023359b4b4d56b85
Don't try to read a multi-GB pg_stat_statements file in one call. Windows
fails on a request to read() more than INT_MAX bytes, and perhaps other
platforms could have similar issues. Let's adjust this code to read at most
1GB per call. (One would not have thought the file could get that big, but
now we have a field report of trouble, so it can. We likely ought to add some
mechanism to limit the size of the query-texts file separately from the size
of the hash table. That is not this patch, though.) Per bug #17254 from
Yusuke Egashira. It's been like this for a while, so back-patch to all
supported branches. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/a667b066837849c5e55e0d626f1f7c93e267b8b7
Avoid some other O(N^2) hazards in list manipulation. In the same spirit as
6301c3ada, fix some more places where we were using list_delete_first() in a
loop and thereby risking O(N^2) behavior. It's not clear that the lists
manipulated in these spots can get long enough to be really problematic ...
but it's not clear that they can't, either, and the fixes are simple enough.
As before, back-patch to v13. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/e9d9ba2a4ddc39e331dd8461b511aa59ec0dc8af
Avoid O(N^2) behavior in SyncPostCheckpoint(). As in commits 6301c3ada and
e9d9ba2a4, avoid doing repetitive list_delete_first() operations, since that
would be expensive when there are many files waiting to be unlinked. This is
a slightly larger change than in those cases. We have to keep the list state
valid for calls to AbsorbSyncRequests(), so it's necessary to invent a
"canceled" field instead of immediately deleting PendingUnlinkEntry entries.
Also, because we might not be able to process all the entries, we need a new
list primitive list_delete_first_n(). list_delete_first_n() is almost
list_copy_tail(), but it modifies the input List instead of making a new copy.
I found a couple of existing uses of the latter that could profitably use the
new function. (There might be more, but the other callers look like they
probably shouldn't overwrite the input List.) As before, back-patch to v13.
Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/65c6cab1365ac33b11a49a2a193f6b3f9c53e487
Doc: be more precise about conflicts between relation names. Use verbiage like
"The name of the table must be distinct from the name of any other relation
(table, sequence, index, view, materialized view, or foreign table) in the
same schema." in the reference pages for all those object types. The main
change here is to mention materialized views explicitly, although a couple of
these pages failed to say anything at all about name conflicts. Per
suggestion from Daniel Westermann. Discussion:
https://postgr.es/m/ZR0P278MB0920D0946509233459AF0DEFD2889@ZR0P278MB0920.CHEP278.PROD.OUTLOOK.COM
https://git.postgresql.org/pg/commitdiff/af8c580e5cf32bb85dde70083a260c93a1f783eb
Doc: clean up some places that mentioned template1 but not template0. Improve
old text that wasn't updated when we added template0 to the standard database
set. Per suggestion from P. Luzanov. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/7d9ec0754afeabb9f336c5220ef415c3ea27a0b6
Fix variable lifespan in ExecInitCoerceToDomain(). This undoes a mistake in
1ec7679f1: domainval and domainnull were meant to live across loop iterations,
but they were incorrectly moved inside the loop. The effect was only to emit
useless extra EEOP_MAKE_READONLY steps, so it's not a big deal; nonetheless,
back-patch to v13 where the mistake was introduced. Ranier Vilela
Discussion:
https://postgr.es/m/CAEudQAqXuhbkaAp-sGH6dR6Nsq7v28_0TPexHOm6FiDYqwQD-w@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/01fc6527034a6f70ed44a078af8f636b1ab64947
Ensure consistent logical replication of datetime and float8 values. In
walreceiver, set the publisher's relevant GUCs (datestyle, intervalstyle,
extra_float_digits) to the same values that pg_dump uses, and for the same
reason: we need the output to be read the same way regardless of the
receiver's settings. Without this, it's possible for subscribers to
misinterpret transmitted values. Although this is clearly a bug fix, it's not
without downsides: subscribers that are storing values into some other
datatype, such as text, could get different results than before, and perhaps
be unhappy about that. Given the lack of previous complaints, it seems best
to change this only in HEAD, and to call it out as an incompatible change in
v15. Japin Li, per report from Sadhuprasad Patro Discussion:
https://postgr.es/m/CAFF0-CF=D7pc6st-3A9f1JnOt0qmc+BcBPVzD6fLYisKyAjkGA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/f3d4019da5d026f2c3fe5bd258becf6fbb6b4673
Blind attempt to silence SSL compile failures on hamerkop. Buildfarm member
hamerkop has been failing for the last few days with errors that look like
OpenSSL's X509-related symbols have not been imported into
be-secure-openssl.c. It's unclear why this should be, but let's try adding an
explicit #include of &lt;openssl/x509v3.h&gt;, as there has long been in
fe-secure-openssl.c. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/24f9e49e430b4173d75a570e06abef8e3fd12c5e
Un-break pg_basebackup's MSVC build. Commit 23a1c6578 thought it'd be cute to
refactor pg_basebackup/Makefile with a new variable BBOBJS, but our MSVC build
system knows nothing of that. Per buildfarm.
https://git.postgresql.org/pg/commitdiff/d8bf0a1c1d3429cafb3019f2773e2f3aa68f3b65
Second attempt to silence SSL compile failures on hamerkop. After further
investigation, it seems the cause of the problem is our recent decision to
start defining WIN32_LEAN_AND_MEAN. That causes &lt;windows.h&gt; to no longer
include &lt;wincrypt.h&gt;, which means that the OpenSSL headers are unable to
prevent conflicts with that header by #undef'ing the conflicting macros.
Apparently, some other system header that be-secure-openssl.c #includes after
the OpenSSL headers is pulling in &lt;wincrypt.h&gt;. It's obscure just where that
happens and why we're not seeing it on other Windows buildfarm animals.
However, it should work to move the OpenSSL #includes to the end of the list.
For the sake of future-proofing, do likewise in fe-secure-openssl.c. In
passing, remove useless double inclusions of &lt;openssl/ssl.h&gt;. Thanks to
Thomas Munro for running down the relevant information. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/1241fcbd7e649414f09f9858ba73e63975dcff64
Disallow making an empty lexeme via array_to_tsvector(). The tsvector data
type has always forbidden lexemes to be empty. However, array_to_tsvector()
didn't get that memo, and would allow an empty-string array element to become
an empty lexeme. This could result in dump/restore failures later, not to
mention whatever semantic issues might be behind the original prohibition.
However, other functions that take a plain text input directly as a lexeme
value do not need a similar restriction, because they only match the string
against existing tsvector entries. In particular it'd be a bad idea to make
ts_delete() reject empty strings, since that is the most convenient way to
clean up any bad data that might have gotten into a tsvector column via this
bug. Reflecting on that, let's also remove the prohibition against NULL array
elements in tsvector_delete_arr and tsvector_setweight_by_filter. It seems
more consistent to ignore them, as an empty-string element would be ignored.
There's a case for back-patching this, since it's clearly a bug fix. On
balance though, it doesn't seem like something to change in a minor release.
Jean-Christophe Arnu Discussion:
https://postgr.es/m/CAHZmTm1YVndPgUVRoag2WL0w900XcoiivDDj-gTTYBsG25c65A@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/cbe25dcff73a297adbada9dc1d6cad3df18014e9
Blind attempt to fix MSVC pgcrypto build. Commit db7d1a7b0 pulled out
Mkvcbuild.pm's custom support for building contrib/pgcrypto, but neglected to
inform it that that module can now be built normally. Or at least I guess it
can now be built normally. But this is definitely causing bowerbird to fail,
since it's trying to test a module it hasn't built.
https://git.postgresql.org/pg/commitdiff/3c2c391dc9f82fae181508ebcc2f7621ffefd024
Doc: add some notes about performance of the List functions. Per suggestion
from Andres Freund. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/27ef132a805c8633ed8bb94ed70be995c681ab1f
contrib/sslinfo needs a fix too to make hamerkop happy. Re-ordering the
#include's is a bit problematic here because libpq/libpq-be.h needs to include
&lt;openssl/ssl.h&gt;. Instead, let's #undef the unwanted macro after all the
#includes. This is definitely uglier than the other way, but it should work
despite possible future header rearrangements. (A look at the openssl headers
indicates that X509_NAME is the only conflicting symbol that we use.) In
passing, remove a related but long-incorrect comment in pg_backup_archiver.h.
Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/568620dfd6912351b4127435eca5309f823abde8
Silence uninitialized-variable warning. Quite a few buildfarm animals are
warning about this, and lapwing is actually failing (because -Werror). It's a
false positive AFAICS, so no need to do more than zero the variable to start
with. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/c3ec4f8fe867807613c08fe16789434ab1a74a15
Michaël Paquier pushed:
Preserve opclass parameters across REINDEX CONCURRENTLY. The opclass parameter
Datums from the old index are fetched in the same way as for predicates and
expressions, by grabbing them directly from the system catalogs. They are
then copied into the new IndexInfo that will be used for the creation of the
new copy. This caused the new index to be rebuilt with default parameters
rather than the ones pre-defined by a user. The only way to get back a new
index with correct opclass parameters would be to recreate a new index from
scratch. The issue has been introduced by 911e702. Author: Michael Paquier
Reviewed-by: Zhihong Yu Discussion:
https://postgr.es/m/YX0CG/[email protected]
Backpatch-through: 13
https://git.postgresql.org/pg/commitdiff/add5cf28d48149459466b9aff374d78aebf17482
Add TAP test for pg_receivewal with timeline switch. pg_receivewal is able to
follow a timeline switch, but this was not tested. This test uses an empty
archive location with a restart done from a slot, making its implementation a
tad simpler than if we would reuse an existing archive directory. Author:
Ronan Dunklau Reviewed-by: Kyotaro Horiguchi, Michael Paquier Discussion:
https://postgr.es/m/18708360.4lzOvYHigE@aivenronan
https://git.postgresql.org/pg/commitdiff/0f9b9938a0367313fcf6a32fcb7fb5be9e281198
Rework compression options of pg_receivewal. Since cada1af, pg_receivewal has
included the option --compress, allowing compression of WAL segments using
gzip, with a value of 0 (the default) meaning that no compression can be used.
This commit introduces a new option, called --compression-method, able to use
as values "none", the default, and "gzip", to make things more extensible.
The case of --compress=0 becomes fuzzy with this option layer, so we have made
the choice to make pg_receivewal return an error when using "none" and a
non-zero compression level, meaning that the authorized values of --compress
are now [1,9] instead of [0,9]. Not specifying --compress with "gzip" as
compression method makes pg_receivewal use the default of zlib instead
(Z_DEFAULT_COMPRESSION). The code in charge of finding the streaming start
LSN when scanning the existing archives is refactored and made more
extensible. While on it, rename "compression" to "compression_level" in
walmethods.c, to reduce the confusion with the introduction of the compression
method, even if the tar method used by pg_basebackup does not rely on the
compression method (yet, at least), but just on the compression level (this
area could be improved more, actually). This is in preparation for an
upcoming patch that adds LZ4 support to pg_receivewal. Author: Georgios
Kokolatos Reviewed-by: Michael Paquier, Jian Guo, Magnus Hagander, Dilip
Kumar, Robert Haas Discussion:
https://postgr.es/m/ZCm1J5vfyQ2E6dYvXz8si39HQ2gwxSZ3IpYaVgYa3lUwY88SLapx9EEnOf5uEwrddhx2twG7zYKjVeuP5MwZXCNPybtsGouDsAD1o2L_I5E=@pm.me
https://git.postgresql.org/pg/commitdiff/d62bcc8b07f921bad105c7a826702c117ea7be58
Fix some thinkos with pg_receivewal --compression-method. The option name was
incorrect in one of the error messages, and the short option 'I' was used in
the code but we did not intend things to be this way. While on it, fix the
documentation to refer to a "method", and not a "level". Oversights in commit
d62bcc8, that I have detected after more review of the LZ4 patch for
pg_receivewal.
https://git.postgresql.org/pg/commitdiff/9588622945754305836555273a6a3be814db315c
Add support for LZ4 compression in pg_receivewal. pg_receivewal gains a new
option, --compression-method=lz4, available when the code is compiled with
--with-lz4. Similarly to gzip, this gives the possibility to compress
archived WAL segments with LZ4. This option is not compatible with
--compress. The implementation uses LZ4 frames, and is compatible with simple
lz4 commands. Like gzip, using --synchronous ensures that any data will be
flushed to disk within the current .partial segment, so as it is possible to
retrieve as much WAL data as possible even from a non-completed segment (this
requires completing the partial file with zeros up to the WAL segment size
supported by the backend after decompression, but this is the same as gzip).
The calculation of the streaming start LSN is able to transparently find and
check LZ4-compressed segments. Contrary to gzip where the uncompressed size
is directly stored in the object read, the LZ4 chunk protocol does not store
the uncompressed size by default. There is contentSize that can be used with
LZ4 frames, but that would not help if using an archive that includes segments
compressed with the defaults of a "lz4" command, where this is not stored.
So, this commit has taken the most extensible approach by decompressing the
already-archived segment to check its uncompressed size, through a blank
output buffer in chunks of 64kB (no actual performance difference noticed with
8kB, 16kB or 32kB, and the operation in itself is actually fast). Tests have
been added to verify the creation and correctness of the generated LZ4 files.
The latter is achieved by the use of command "lz4", if found in the
environment. The tar-based WAL method in walmethods.c, used now only by
pg_basebackup, does not know yet about LZ4. Its code could be extended for
this purpose. Author: Georgios Kokolatos Reviewed-by: Michael Paquier, Jian
Guo, Magnus Hagander, Dilip Kumar Discussion:
https://postgr.es/m/ZCm1J5vfyQ2E6dYvXz8si39HQ2gwxSZ3IpYaVgYa3lUwY88SLapx9EEnOf5uEwrddhx2twG7zYKjVeuP5MwZXCNPybtsGouDsAD1o2L_I5E=@pm.me
https://git.postgresql.org/pg/commitdiff/babbbb595d2322da095a1e6703171b3f1f2815cb
Improve psql tab completion for COMMENT. Completion is added for more object
types, like domain constraints, text search-ish objects or policies.
Moreover, the area is reorganized, changing the list of objects supported by
COMMENT to be in the same order as the documentation to ease future additions.
Author: Ken Kato Reviewed-by: Fujii Masao, Shinya Kato, Suraj Khamkar, Michael
Paquier Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/a5b336b8b9e04a93e7c8526302504d2e5201eb80
Álvaro Herrera pushed:
Handle XLOG_OVERWRITE_CONTRECORD in DecodeXLogOp. Failing to do so results in
the inability of logical decoding to process the WAL stream. Handle it by doing
nothing. Backpatch all the way back. Reported-by: Petr Jelínek
[email protected]
https://git.postgresql.org/pg/commitdiff/40c516bba864395c77bcfb1bae65ba9562ba8f71
Reword doc blurb for vacuumdb --analyze-in-stages. Make users aware that using
it in a database with existing stats might cause transient problems. Author:
Nikolai Berkoff [email protected] Discussion:
https://postgr.es/m/s-kSljtWXMWgMfGTztPTPcS80R8FHdOrBxDTnrQI6GMZbT7au1A4b0fzaSFtKwCI8nwN0MhgPLfVOTvJ7DwTjkip4P3d0o4VgrMJs4OLN-o=@pm.me
https://git.postgresql.org/pg/commitdiff/00a354a13560dc529ac34a303c85c265aaf033b7
Document default and changeability of log_startup_progress_interval. Review
for 9ce346eabf35. Author: Álvaro Herrera [email protected]
Reviewed-by: Robert Haas [email protected] Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/e543906e217509ad95c1e341de4e874f027f871b
Pipeline mode disallows multicommand strings. ... so mention that in
appropriate places of the libpq docs. Backpatch to 14. Reported-by: RekGRpth
[email protected] Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/105c1de0197473dac8ada55dc8cf773d782224cb
Document that ALTER TABLE .. TYPE removes statistics. Co-authored-by: Nikolai
Berkoff [email protected] Discussion:
https://postgr.es/m/vCc8XnwDmlP4ZnHBQLIVxzD405BiYHVC9qZlhIF7IsfxK0gC9mZ4PUUOH0-3y6kv5p-87-3_ljqT1KvQVAnb8OoWhPU3kcqWn2ZpmxRBCQg=@pm.me
https://git.postgresql.org/pg/commitdiff/df80f9da5c6541e744eeb20eaca919c7fc189999
Avoid crash in rare case of concurrent DROP. When a role being dropped is
referenced by catalog objects that are concurrently also being dropped, a crash
dropped, a crash can result while trying to construct the string that
describes the objects. Suppress that by ignoring objects whose descriptions
are returned as NULL. The majority of relevant code sites were already
cautious about this; we had just missed a couple. This is an old bug,
so backpatch all the way back. Reported-by: Alexander Lakhin
[email protected] Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/d74b54b3ddf710926a44bf3f9c87c00e6f82d825
Daniel Gustafsson pushed:
Replace unicode characters in comments with ascii. The unicode characters,
while in comments and not code, caused MSVC to emit compiler warning C4819:
The file contains a character that cannot be represented in the current code
page (number). Save the file in Unicode format to prevent data loss. Fix
by replacing the characters in print.c with descriptive comments containing
the codepoints and symbol names, and remove the character in brin_bloom.c
which was a footnote reference copied from the paper citation. Per report
from hamerkop in the buildfarm. Reviewed-by: Tom Lane [email protected]
Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/43a134f28b350c4b731db9dddf2f53c407a7077f
Amit Kapila pushed:
Replace XLOG_INCLUDE_XID flag with a more localized flag. Commit 0bead9af484c
introduced XLOG_INCLUDE_XID flag to indicate that the WAL record contains
subXID-to-topXID association. It uses that flag later to mark in
CurrentTransactionState that top-xid is logged so that we should not try to
log it again with the next WAL record in the current subtransaction. However,
we can use a localized variable to pass that information. In passing, change
the related function and variable names to make them consistent with what the
code is actually doing. Author: Dilip Kumar Reviewed-by: Alvaro Herrera, Amit
Kapila Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/71db6459e6e4ef623e98f3b1e3e9fed1bfb0ae3b
Move MarkCurrentTransactionIdLoggedIfAny() out of the critical section. We
don't modify any shared state in this function which could cause problems for
any concurrent session. This will make it look similar to the other updates
for the same structure (TransactionState) which avoids confusion for future
readers of code. Author: Dilip Kumar Reviewed-by: Amit Kapila Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/335397456b7e3f9f619038cb322fbfc9dd649d4f
Fujii Masao pushed:
pgbench: Improve error-handling in pgbench. Previously, failures of the initial
connection or of logfile open caused pgbench to proceed with the benchmark,
report the incomplete results, and exit with status 2. It made no sense to
proceed with the benchmark at all when pgbench could not start as prescribed.
This commit improves pgbench so that early errors that occur when starting the
benchmark, such as those failures, make pgbench exit immediately with status 1. Author:
Yugo Nagata Reviewed-by: Fabien COELHO, Kyotaro Horiguchi, Fujii Masao
Discussion:
https://postgr.es/m/TYCPR01MB5870057375ACA8A73099C649F5349@TYCPR01MB5870.jpnprd01.prod.outlook.com
https://git.postgresql.org/pg/commitdiff/cd29be5459f0e138c0f19d49ee588feeda78e3c9
pgbench: Fix typo in comment. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/d8dba4d03068eeee1ea3ffc8e7c7b4fa3e35a7f4
Peter Geoghegan pushed:
Don't overlook indexes during parallel VACUUM. Commit b4af70cb, which
simplified state managed by VACUUM, performed refactoring of parallel VACUUM
in passing. Confusion about the exact details of the tasks that the leader
process is responsible for led to code that made it possible for parallel
VACUUM to miss a subset of the table's indexes entirely. Specifically,
indexes that fell under the min_parallel_index_scan_size size cutoff were
missed. These indexes are supposed to be vacuumed by the leader (alongside
any parallel unsafe indexes), but weren't vacuumed at all. Affected indexes
could easily end up with duplicate heap TIDs, once heap TIDs were recycled for
new heap tuples. This had generic symptoms that might be seen with almost any
index corruption involving structural inconsistencies between an index and its
table. To fix, make sure that the parallel VACUUM leader process performs any
required index vacuuming for indexes that happen to be below the size cutoff.
Also document the design of parallel VACUUM with these below-size-cutoff
indexes. It's unclear how many users might be affected by this bug. There
had to be at least three indexes on the table to hit the bug: a smaller index,
plus at least two additional indexes that themselves exceed the size cutoff.
Cases with just one additional index would not run into trouble, since the
parallel VACUUM cost model requires two larger-than-cutoff indexes on the
table to apply any parallel processing. Note also that autovacuum was not
affected, since it never uses parallel processing. Test case based on tests
from a larger patch to test parallel VACUUM by Masahiko Sawada. Many thanks
to Kamigishi Rei for her invaluable help with tracking this problem down.
Author: Peter Geoghegan [email protected] Author: Masahiko Sawada
[email protected] Reported-By: Kamigishi Rei [email protected]
Reported-By: Andrew Gierth [email protected] Diagnosed-By: Andres
Freund [email protected] Bug: #17245 Discussion:
https://postgr.es/m/[email protected]
Discussion:
https://postgr.es/m/[email protected]
Backpatch: 14-, where the refactoring commit appears.
https://git.postgresql.org/pg/commitdiff/9bacec15b67d1a643915858f054790f36b2b7871
Fix parallel amvacuumcleanup safety bug. Commit b4af70cb inverted the return
value of the function parallel_processing_is_safe(), but missed the
amvacuumcleanup test. Index AMs that don't support parallel cleanup at all
were affected. The practical consequences of this bug were not very serious.
Hash indexes are affected, but since they just return the number of blocks
during hashvacuumcleanup anyway, it can't have had much impact. Author:
Masahiko Sawada [email protected] Discussion:
https://postgr.es/m/CAD21AoA-Em+aeVPmBbL_s1V-ghsJQSxYL-i3JP8nTfPiD1wjKw@mail.gmail.com
Backpatch: 14-, where commit b4af70cb appears.
https://git.postgresql.org/pg/commitdiff/c59278a1aa5ef2ee8a6d5d83bd987a7ce5c89e84
Add another old commit to git-blame-ignore-revs. Add another historic pgindent
commit that was missed by the initial work done in commit 8e638845.
https://git.postgresql.org/pg/commitdiff/581055c32fbb5018431265877754cbd8019bc012
Add various assertions to heap pruning code. These assertions document (and
verify) our high level assumptions about how pruning can and cannot affect
existing items from target heap pages. For example, one of the new assertions
verifies that pruning does not set a heap-only tuple to LP_DEAD. Author:
Peter Geoghegan [email protected] Reviewed-By: Andres Freund [email protected]
Discussion:
https://postgr.es/m/CAH2-Wz=vhvBx1GjF+oueHh8YQcHoQYrMi0F0zFMHEr8yc4sCoA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/5cd7eb1f1c32e1b95894f28b277b4e4b89add772
Add hardening to catch invalid TIDs in indexes. Add hardening to the heapam
index tuple deletion path to catch TIDs in index pages that point to a heap
item that index tuples should never point to. The corruption we're trying to
catch here is particularly tricky to detect, since it typically involves
"extra" (corrupt) index tuples, as opposed to the absence of required index
tuples in the index. For example, a heap TID from an index page that turns
out to point to an LP_UNUSED item in the heap page has a good chance of being
caught by one of the new checks. There is a decent chance that the recently
fixed parallel VACUUM bug (see commit 9bacec15) would have been caught had
that particular check been in place for Postgres 14. No backpatch of this
extra hardening for now, though. Author: Peter Geoghegan [email protected]
Reviewed-By: Andres Freund [email protected] Discussion:
https://postgr.es/m/CAH2-Wzk-4_raTzawWGaiqNvkpwDXxv3y1AQhQyUeHfkU=tFCeA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/e7428a99a13f973549aab30c57ec8380ddda1869
Update obsolete heap pruning comments. Add new comments that spell out what
VACUUM expects from heap pruning: pruning must never leave behind DEAD tuples
that still have tuple storage. This has at least been the case since commit
8523492d, which established the principle that vacuumlazy.c doesn't have to
deal with DEAD tuples that still have tuple storage directly, except perhaps
by simply retrying pruning (to handle a rare corner case involving concurrent
transaction abort). In passing, update some references to old symbol names
that were missed by the snapshot scalability work (specifically commit
dc7420c2c9).
https://git.postgresql.org/pg/commitdiff/f214960adde6028a39ba3014b1ab2b224faeefed
Update obsolete reference in vacuumlazy.c. Oversight in commit 7ab96cf6.
https://git.postgresql.org/pg/commitdiff/02f9fd129432cab565b2a3cb9f3b3a5000dfe540
Peter Eisentraut pushed:
Fix incorrect format placeholder.
https://git.postgresql.org/pg/commitdiff/ef6f047d2c87b91318364341c058dd6b715951b2
pgcrypto: Remove non-OpenSSL support. pgcrypto had internal implementations of
some encryption algorithms, as an alternative to calling out to OpenSSL.
These were rarely used, since most production installations are built with
OpenSSL. Moreover, maintaining parallel code paths makes the code more
complex and difficult to maintain. This patch removes these internal
implementations. Now, pgcrypto is only built if OpenSSL support is
configured. Reviewed-by: Daniel Gustafsson [email protected] Discussion:
https://www.postgresql.org/message-id/flat/0b42f1df-8cba-6a30-77d7-acc241cc88c1%40enterprisedb.com
https://git.postgresql.org/pg/commitdiff/db7d1a7b0530e8cbd045744e1c75b0e63fb6916f
Heikki Linnakangas pushed:
Fix snapshot reference leak if lo_export fails. If lo_export() fails to open
the target file or to write to it, it leaks the created LargeObjectDesc and
its snapshot in the top-transaction context and resource owner. That's pretty
harmless, it's a small leak after all, but it gives the user a "Snapshot
reference leak" warning. Fix by using a short-lived memory context and no
resource owner for transient LargeObjectDescs that are opened and closed
within one function call. The leak is easiest to reproduce with lo_export() on
a directory that doesn't exist, but in principle the other lo_* functions
could also fail. Backpatch to all supported versions. Reported-by: Andrew B
Reviewed-by: Alvaro Herrera Discussion:
https://www.postgresql.org/message-id/[email protected]
https://git.postgresql.org/pg/commitdiff/6b1b405ebfdce9da47f59d8d4144b1168709fbce
Update alternative expected output file. Previous commit added a test to
'largeobject', but neglected the alternative expected output file
'largeobject_1.source'. Per failure on buildfarm animal 'hamerkop'.
Discussion:
https://www.postgresql.org/message-id/[email protected]
https://git.postgresql.org/pg/commitdiff/d5ab0681bf1bbf6c0c2cba9a2d55fe8e080597b6
Robert Haas pushed:
amcheck: Add additional TOAST pointer checks. Expand the checks of toasted
attributes to complain if the rawsize is overlarge. For compressed
attributes, also complain if compression appears to have expanded the
attribute or if the compression method is invalid. Mark Dilger, reviewed by
Justin Pryzby, Alexander Alekseev, Heikki Linnakangas, Greg Stark, and me.
Discussion:
http://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/bd807be6935929bdefe74d1258ca08048f0aafa3
Introduce 'bbsink' abstraction to modularize base backup code. The base backup
code has accumulated a healthy number of new features over the years, but it's
becoming increasingly difficult to maintain and further enhance that code
because there's no real separation of concerns. For example, the code that
knows the details of how we send data to the client using the
libpq protocol is scattered throughout basebackup.c, rather than being
centralized in one place. To try to improve this situation, introduce a new
'bbsink' object which acts as a recipient for archives generated during the
base backup process and also for the backup manifest. This commit introduces
three types of bbsink: a 'copytblspc' bbsink forwards the backup to the client
using one COPY OUT operation per tablespace and another for the manifest, a
'progress' bbsink performs command progress reporting, and a 'throttle' bbsink
performs rate-limiting. The 'progress' and 'throttle' bbsink types also
forward the data to a successor bbsink; at present, the last bbsink in the
chain will always be of type 'copytblspc'. There are plans to add more types
of 'bbsink' in future commits. This abstraction is a bit leaky in the case of
progress reporting, but this still seems cleaner than what we had before.
Patch by me, reviewed and tested by Andres Freund, Sumanta Mukherjee, Dilip
Kumar, Suraj Kharage, Dipesh Pandit, Tushar Ahuja, Mark Dilger, and Jeevan
Ladhe. Discussion:
https://postgr.es/m/CA+TgmoZGwR=ZVWFeecncubEyPdwghnvfkkdBe9BLccLSiqdf9Q@mail.gmail.com
Discussion:
https://postgr.es/m/CA+TgmoZvqk7UuzxsX1xjJRmMGkqoUGYTZLDCH8SmU1xTPr1Xig@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/bef47ff85df18bf4a3a9b13bd2a54820e27f3614
Introduce 'bbstreamer' abstraction to modularize pg_basebackup. pg_basebackup
knows how to do quite a few things with a backup that it gets from the server,
like just write out the files, or compress them first, or even parse the tar
format and inject a modified postgresql.auto.conf file into the archive
generated by the server. Unfortunately, this makes pg_basebackup.c a very large
source file, and also somewhat difficult to enhance, because for example the
knowledge that the server is sending us a 'tar' file rather than some other
sort of archive is spread all over the place rather than centralized. In an
effort to improve this situation, this commit invents a new 'bbstreamer'
abstraction. Each archive received from the server is fed to a bbstreamer
which may choose to dispose of it or pass it along to some other bbstreamer.
Chunks may also be "labelled" according to whether they are part of the
payload data of a file in the archive or part of the archive metadata. So,
for example, if we want to take a tar file, modify the postgresql.auto.conf
file it contains, and then gzip the result and write it out, we can use a
bbstreamer_tar_parser to parse the tar file received from the server, a
bbstreamer_recovery_injector to modify the contents of postgresql.auto.conf, a
bbstreamer_tar_archiver to replace the tar headers for the file modified in
the previous step with newly-built ones that are correct for the modified
file, and a bbstreamer_gzip_writer to gzip and write the resulting data. Only
the objects with "tar" in the name know anything about the tar archive format,
and in theory we could re-archive using some other format rather than "tar" if
somebody wanted to write the code. These changes do add a substantial amount
of code, but I think the result is a lot more maintainable and extensible.
pg_basebackup.c itself shrinks by roughly a third, with a lot of the
complexity previously contained there moving into the newly-added files.
Patch by me. The larger patch series of which this is a part has been reviewed
and tested at various times by Andres Freund, Sumanta Mukherjee, Dilip Kumar,
Suraj Kharage, Dipesh Pandit, Tushar Ahuja, Mark Dilger, Sergei Kornilov, and
Jeevan Ladhe. Discussion:
https://postgr.es/m/CA+TgmoZGwR=ZVWFeecncubEyPdwghnvfkkdBe9BLccLSiqdf9Q@mail.gmail.com
Discussion:
https://postgr.es/m/CA+TgmoZvqk7UuzxsX1xjJRmMGkqoUGYTZLDCH8SmU1xTPr1Xig@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/23a1c6578c87fca0e361c4f5f9a07df5ae1f9858
Don't set ThisTimeLineID when there's no reason to do so. In slotfuncs.c,
pg_replication_slot_advance() needs to determine the LSN up to which the slot
should be advanced, but that doesn't require us to update ThisTimeLineID,
because none of the code called from here depends on it. If the replication
slot is logical, pg_logical_replication_slot_advance will call
read_local_xlog_page, which does use ThisTimeLineID, but also takes care of
making sure it's up to date. If the replication slot is physical, the timeline
isn't used for anything at all. In logicalfuncs.c,
pg_logical_slot_get_changes_guts() has the same issue: the only code we're
going to run that cares about timelines is in or downstream of
read_local_xlog_page, which already makes sure that the correct value gets
set. Hence, don't do it here. Patch by me, reviewed and tested by Michael
Paquier, Amul Sul, and Álvaro Herrera. Discussion:
https://postgr.es/m/CA+TgmobfAAqhfWa1kaFBBFvX+5CjM=7TE=n4r4Q1o2bjbGYBpA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/caf1f675b88d1aa67ea3fb642e8f38b470cc911e
Remove all use of ThisTimeLineID global variable outside of xlog.c. All such
code deals with this global variable in one of three ways. Sometimes the same
functions use it in more than one of these ways at the same time. First,
sometimes it's an implicit argument to one or more functions being called in
xlog.c or elsewhere, and must be set to the appropriate value before calling
those functions lest they misbehave. In those cases, it is now passed as an
explicit argument instead. Second, sometimes it's used to obtain the current
timeline after the end of recovery, i.e. the timeline to which WAL is being
written and flushed. Such code now calls GetWALInsertionTimeLine() or relies
on the new out parameter added to GetFlushRecPtr(). Third, sometimes it's
used during recovery to store the current replay timeline. That can change, so
such code must generally update the value before each use. It can still do
that, but must now use a local variable instead. The net effect of these
changes is to reduce by a fair amount the amount of code that is directly
accessing this global variable. That's good, because history has shown that we
don't always think clearly about which timeline ID it's supposed to contain at
any given point in time, or indeed, whether it has been or needs to be
initialized at any given point in the code. Patch by me, reviewed and tested
by Michael Paquier, Amul Sul, and Álvaro Herrera. Discussion:
https://postgr.es/m/CA+TgmobfAAqhfWa1kaFBBFvX+5CjM=7TE=n4r4Q1o2bjbGYBpA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/e997a0c642860a96df0151cbeccfecbdf0450d08
Change ThisTimeLineID from a global variable to a local variable.
StartupXLOG() still has ThisTimeLineID as a local variable, but the remaining
code in xlog.c now needs to get the relevant TimeLineID by some other means.
Mostly, this means that we now pass it as a function parameter to a bunch of
functions where we didn't previously. However, a few cases require special
handling: - In functions that might be called by outside callers who
wouldn't necessarily know what timeline to specify, we get the timeline ID
from shared memory. XLogCtl->ThisTimeLineID can be used in most cases since
recovery is known to have completed by the time those functions are called.
In xlog_redo(), we can use XLogCtl->replayEndTLI. - XLogFileClose() needs
to know the TLI of the open logfile. Do that with a new global variable
openLogTLI. While someone could argue that this is just trading one global
variable for another, the new one has a far narrower purpose and is
referenced in just a few places. - read_backup_label() now returns the TLI
that it obtains by parsing the backup_label file. Previously, ReadRecord()
could be called to parse the checkpoint record without ThisTimeLineID having
been initialized. Now, the timeline is passed down, and I didn't want to
pass an uninitialized variable; this change lets us avoid that. The old
coding didn't seem to have any practical consequences that we need to
worry about, but this is cleaner. - In BootstrapXLOG(), it's just a constant.
Patch by me, reviewed and tested by Michael Paquier, Amul Sul, and Álvaro
Herrera. Discussion:
https://postgr.es/m/CA+TgmobfAAqhfWa1kaFBBFvX+5CjM=7TE=n4r4Q1o2bjbGYBpA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/4a92a1c3d1c361ffb031ed05bf65b801241d7cdd
Remove tests added by bd807be6935929bdefe74d1258ca08048f0aafa3. The buildfarm
is unhappy. It's not obvious why it doesn't like these tests, but let's remove
them until we figure it out. Discussion:
http://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/ccf289745d3e50360653181dce6a277a1fc79730
Tomáš Vondra pushed:
Fix handling of NaN values in BRIN minmax multi. When calculating distance
between float4/float8 values, we need to be a bit more careful about NaN
values in order not to trigger an assertion failure. We consider NaN values to
be equal (distance 0.0) and infinitely distant from all other values. On builds
without asserts, this issue is mostly harmless - the ranges may be merged in a
less efficient order, but the index is still correct. Per report from Andreas
Seltenreich. Backpatch to 14, where this new BRIN opclass was introduced.
Reported-by: Andreas Seltenreich Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/d91353f4b21f10417d142e6ac17a0490adae867c
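The chosen convention mirrors PostgreSQL's own ordering of float values, where NaN compares equal to itself and sorts above every other value:

```sql
-- Both return true: NaN equals NaN, and NaN sorts above Infinity.
SELECT 'NaN'::float8 = 'NaN'::float8       AS nan_equals_nan,
       'NaN'::float8 > 'Infinity'::float8  AS nan_sorts_last;
```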
Mark mystreamer variable as PG_USED_FOR_ASSERTS_ONLY. Silences warnings about
an unused variable when building without asserts.
https://git.postgresql.org/pg/commitdiff/dafcf887daa472b0a49bee7e07042372bc37cee4
Add bool GiST opclass to btree_gist. Adds bool opclass to btree_gist
extension, to allow creating GiST indexes on bool columns. GiST indexes on a
single bool column don't seem particularly useful, but this allows defining
exclusion constraints involving a bool column, for example. Author: Emre
Hasegeli Reviewed-by: Andrey Borodin Discussion:
https://postgr.es/m/CAE2gYzyDKJBZngssR84VGZEN=Ux=V9FV23QfPgo+7-yYnKKg4g@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/57e3c5160b24e61758f817feb7aac152cd695c6f
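As a sketch of the use case (the table and column names here are hypothetical), a bool column can now take part in an exclusion constraint together with a range column:

```sql
CREATE EXTENSION IF NOT EXISTS btree_gist;

-- Reject two "active" bookings whose time ranges overlap:
CREATE TABLE room_booking (
    active bool,
    during tsrange,
    EXCLUDE USING gist (active WITH =, during WITH &&)
);
```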
Fix gist_bool_ops to use gbtreekey2. Commit 57e3c5160b added a new GiST bool
opclass, but it used gbtreekey4 to store the data, which left two bytes
undefined, as reported by skink, our valgrind animal. There was a bit more
confusion, because the opclass also used gbtreekey8 in the definition. Fix by
defining a new gbtreekey2 struct, and using it in all the places. Discussion:
https://postgr.es/m/CAE2gYzyDKJBZngssR84VGZEN=Ux=V9FV23QfPgo+7-yYnKKg4g@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/e2fbb883720aa222f61eb9f3affad1c63bac7cbb
Alexander Korotkov pushed:
Reset lastOverflowedXid on standby when needed. Currently, lastOverflowedXid
is never reset. It's just adjusted on new transactions known to be
overflowed. But if there are no overflowed transactions for a long time,
snapshots could be mistakenly marked as suboverflowed due to wraparound. This
commit fixes this issue by resetting lastOverflowedXid when needed, together
with KnownAssignedXids. Backpatch to all supported versions. Reported-by:
Stan Hu Discussion:
https://postgr.es/m/CAMBWrQ%3DFp5UAsU_nATY7EMY7NHczG4-DTDU%3DmCvBQZAQ6wa2xQ%40mail.gmail.com
Author: Kyotaro Horiguchi, Alexander Korotkov Reviewed-by: Stan Hu, Simon
Riggs, Nikolay Samokhvalov, Andrey Borodin, Dmitry Dolgov
https://git.postgresql.org/pg/commitdiff/05e6e78c1840d07154a4b52092178a2d1ad39445
Andres Freund pushed:
windows: Remove use of WIN32_LEAN_AND_MEAN from crashdump.c. Since 8162464a25e
we do so in win32_port.h. But it likely didn't do much before that either,
because at that point windows.h was already included via win32_port.h.
Reported-By: Tom Lane Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/87bb606b20ae4e52fa45eda2d9914c19eb7eea5e
The just published ODB C++ ORM version 2.5.0-b.21 adds support for bulk operations in PostgreSQL using the new pipeline mode introduced in libpq 14.
For background on bulk operations (which until now were only available in ODB when using Oracle and
SQL Server) see Section 15.3, "Bulk Database Operations" in the ODB manual.
Note that while this functionality requires libpq version 14 or later, it should be usable with PostgreSQL servers version 7.4 or later. The development of this support was sponsored by Qube Research & Technologies Limited.
Both ODB 2.5.0-b.21 and libpq 14.0.0 packages are available from the cppget.org package repository. For instructions on building this version of ODB, see Installation Instructions.
DynamoDB FDW 1.0.0 is newly released.
We have just released a Foreign Data Wrapper for DynamoDB.
This release works with PostgreSQL 13.
The FDW is implemented in C using the AWS C++ library.
This release supports the following features:
SELECT support for retrieving data from DynamoDB.
Selection of columns or nested attribute objects using the -> and ->> jsonb operators.
INSERT support.
UPDATE support.
DELETE support.
WHERE clause push-down (including nested attribute objects).
Push-down of the DynamoDB SIZE function.
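A sketch of what such queries could look like (the foreign table and attribute names here are hypothetical):

```sql
-- Nested attributes are addressed with the jsonb operators; the WHERE
-- clause can be pushed down to DynamoDB.
SELECT doc->>'name'             AS name,
       doc->'address'->>'city'  AS city
FROM   dynamodb_customers
WHERE  doc->>'status' = 'active';
```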
This is developed by Toshiba Software Engineering & Technology Center.
Please see the repository for details.
Source repository : https://github.com/pgspider/dynamodb_fdw
Best Regards,
Shigeo Hirose
This email was sent to you from Toshiba. It was delivered on their behalf by the PostgreSQL project. Any questions about the content of the message should be sent to Toshiba.
The new milestone PostgresDAC 3.11 release is out!
Now with PostgreSQL 14 and RAD Studio 11 Alexandria support
PostgresDAC is a direct access component suite for RAD Studio (Delphi and C++Builder)/FreePascal/Lazarus and PostgreSQL, EnterpriseDB
, Amazon RDS, PostgresPro, and Heroku Postgres.
Full changelog:
[!] RAD Studio 11 Alexandria (Delphi and C++ Builder) support introduced
[!] v14 client libraries added
[!] v14 dump & restore libraries (pg_dump.dll, pg_restore.dll) added
[*] TPSQLDump will warn if postfix operators are being dumped
Download
You're welcome to download PostgresDAC v3.11 right now at http://microolap.com/products/connectivity/postgresdac/download/ or login to your private area on our site at http://microolap.com/my/downloads/.
Feedback
Please don't hesitate to ask any questions or report bugs with our Support Ticketing system available at http://www.microolap.com/support/.
Registration now open
After the huge success of our Postgres Build 2020 (PG Build) online conference, the event returns in 2021 with an expanded agenda, more speakers, more tracks, and the latest in-depth talks around PostgreSQL.
PG Build 2021 will
bring together the world’s leading PostgreSQL experts, enterprises, community members, and thought leaders to connect and explore the latest in PostgreSQL technology and use cases.
The conference will be held online on 30 November and 1 December 2021, offering live events between 9:00 am and 5:00 pm GMT, as well as recorded sessions. Registration for Postgres Build is free.
You can attend the talks you want and ask live questions to the speakers and expert panels, yet still make the key meetings in your workday without the need to jump on a plane!
Agenda and Conference Tracks
Featuring more than 30 talks, keynotes, and live panel discussions, PG Build 2021 will gather leading PostgreSQL experts and users from around Europe and the world:
Nov 30 - Opening Keynote: “Future Postgres Challenges” by Bruce Momjian, VP, Postgres Evangelist, at EDB
Dec 1 - Opening Keynote: "Quo vadis, Postgres in the cloud? Kubernetes or DBaaS?" by Marc Linster, CTO, at EDB
Dec 1 - Closing Keynote: “Where the DBMS Market is Going” by Michael Stonebraker
This year’s agenda highlights PostgreSQL user success stories and includes tracks on PostgreSQL for the Enterprise, PostgreSQL tools and features, and PostgreSQL use cases.
For more information and to register for free, visit the Postgres Build 2021 website.
AWS is pleased to announce the release of Babelfish for PostgreSQL.
What is Babelfish?
Babelfish for PostgreSQL is a dual-licensed (Apache-2.0 and PostgreSQL) open
source project that adds syntax, functions, data types, and more to
PostgreSQL to help in migrating from SQL Server. It includes a network
end-point added to PostgreSQL to enable your PostgreSQL database to understand
the SQL Server wire protocol and commonly used SQL Server commands. With
Babelfish, applications that were originally built for SQL Server can work
directly with PostgreSQL, with little to no code changes, and without changing
database drivers.
For more information, please see our launch announcement.
You can view and download the source code on GitHub.
PostgreSQL Weekly News - October 31, 2021
Happy Hallowe'en!
PostgreSQL Product News
pg_statement_rollback v1.3, an extension that adds server-side
transactions with statement-level rollback,
released.
PostgreSQL Jobs for October
https://archives.postgresql.org/pgsql-jobs/2021-10/
PostgreSQL in the News
Planet PostgreSQL: https://planet.postgresql.org/
PostgreSQL Weekly News is brought to you this week by David Fetter
Submit news and announcements by Sunday at 3:00pm PST8PDT to [email protected].
Applied Patches
Michaël Paquier pushed:
Add replication command READ_REPLICATION_SLOT. The command is supported for
physical slots for now, and returns the type of slot, its restart_lsn and its
restart_tli. This will be useful for an upcoming patch related to
pg_receivewal, to allow the tool to be able to stream from the position of a
slot, rather than the last WAL position flushed by the backend (as reported by
IDENTIFY_SYSTEM) if the archive directory is empty, which would be an
advantage when switching to a different archive location with the
same slot used to avoid holes in WAL segment archives. Author: Ronan Dunklau
Reviewed-by: Kyotaro Horiguchi, Michael Paquier, Bharath Rupireddy Discussion:
https://postgr.es/m/18708360.4lzOvYHigE@aivenronan
https://git.postgresql.org/pg/commitdiff/b4ada4e19fd7bedb433e46516ccd0ca4213d2719
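On a replication connection, the new command might be exercised like this (the slot name is illustrative):

```sql
-- psql "dbname=postgres replication=database"
-- Returns the slot's type, restart_lsn, and restart_tli:
READ_REPLICATION_SLOT my_physical_slot;
```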
Allow pg_receivewal to stream from a slot's restart LSN. Prior to this patch,
when running pg_receivewal, the streaming start point would be the current
location of the archives if anything is found in the local directory where WAL
segments are written, and pg_receivewal would fall back to the current WAL
flush location if there are no archives, as of the result of an
IDENTIFY_SYSTEM command. If for some reason the WAL files from pg_receivewal
were moved, it is better to restart where we left off, which is the
replication slot's restart_lsn instead of skipping right to the current flush
location, to avoid holes in the WAL backed up. This commit changes
pg_receivewal to use the following sequence of methods to determine the
starting streaming LSN: - Scan the local archives. - Use the slot's
restart_lsn, if supported by the backend and if a slot is defined. - Fallback
to the current flush LSN as reported by IDENTIFY_SYSTEM. To keep
compatibility with older server versions, we only attempt to use
READ_REPLICATION_SLOT if the backend version is at least 15, and fallback to
the older behavior of streaming from the current flush LSN if the command is
not supported. Some TAP tests are added to cover this feature. Author: Ronan
Dunklau Reviewed-by: Kyotaro Horiguchi, Michael Paquier, Bharath Rupireddy
Discussion:
https://postgr.es/m/18708360.4lzOvYHigE@aivenronan
https://git.postgresql.org/pg/commitdiff/f61e1dd2cee6b1a1da75c2bb0ca3bc72f18748c1
Fix overly-lax regex pattern in TAP test of READ_REPLICATION_SLOT. The case
checking for a NULL output when a slot does not exist was too lax, as it was
passing for any output generated by the query. This fixes the matching
pattern to be what it should be, matching only on "||". Oversight in b4ada4e.
https://git.postgresql.org/pg/commitdiff/0db343dc13bc8657976c39ddbf7e0c7db8b2efff
doc: Fix grammar in page of pg_receivewal. Introduced by f61e1dd. Author:
Kyotaro Horiguchi Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/8af09daf5629e9b85f37cc23983819b8ccd11b43
Add test for copy of shared dependencies from template database. As 98ec35b
has proved, there has never been any coverage in this area of the code. This
commit adds a new TAP test with a template database that includes a small set
of shared dependencies copied to a new database. The test is added in
createdb, where we have never tested that -T generates a query with TEMPLATE,
either. Reviewed-by: Tom Lane Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/70bfc5ae537c8bfeed4849b7d9f814de89a155fe
doc: Fix link to SELinux user guide in sepgsql page. Reported-by: Anton
Voloshin Discussion:
https://postgr.es/m/[email protected]
Backpatch-through: 10
https://git.postgresql.org/pg/commitdiff/cc1853b30048307d93f8aa30f4d64f88b527f04d
Add TAP test for archive_cleanup_command and recovery_end_command. This adds
tests checking for the execution of both commands. The recovery test
002_archiving.pl is nicely adapted to that, as promotion is triggered already
twice there, and even if any of those commands fail they don't affect recovery
or promotion. A command success is checked using a file generated by an
"echo" command, that should be able to work in all the buildfarm environments,
even Msys (but we'll know soon about that). Command failure is tested with an
"echo" command that points to a path that does not exist, scanning the backend
logs to make sure that the failure happens. Both rely on the backend
triggering the commands from the root of the data folder, making its logic
more robust. Thanks to Neha Sharma for the extra tests on Windows. Author:
Amul Sul, Michael Paquier Reviewed-by: Andres Freund, Euler Taveira
Discussion:
https://postgr.es/m/CAAJ_b95R_c4T5moq30qsybSU=eDzDHm=4SPiAWaiMWc2OW7=1Q@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/46dea2419ee7895a4eb3d048317682e6f18a17e1
Speed up TAP tests of pg_receivewal. This commit improves the speed of those
tests by 25~30%, using some simple ideas to reduce the amount of data written
by pg_receivewal: - Use a segment size of 1MB. While reducing the amount of
data zeroed by pg_receivewal for the new segments, this improves the code
coverage with a non-default segment size. - In the last test involving a
slot's restart_lsn, generate a checkpoint to advance the redo LSN and the WAL
retained by the slot created, reducing the number of segments that need to be
archived. This counts for most of the gain. - Minimize the amount of data
inserted into the dummy table. Reviewed-by: Ronan Dunklau Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/d680992af5406245f769b697fbb4e130e6220664
Heikki Linnakangas pushed:
Clarify the logic in a few places in the new balanced merge code. In
selectnewtape(), use 'nOutputTapes' rather than 'nOutputRuns' in the check for
whether to start a new tape or to append a new run to an existing tape. Until
'maxTapes' is reached, nOutputTapes is always equal to nOutputRuns, so it
doesn't change the logic, but it seems more logical to compare # of tapes with
# of tapes. Also, currently maxTapes is never modified after the merging
begins, but written this way, the code would still work if it was. (Although
the nOutputRuns == nOutputTapes assertion would need to be removed and using
nOutputRuns % nOutputTapes to distribute the runs evenly across the tapes
wouldn't do a good job anymore). Similarly in mergeruns(), change to
USEMEM(state->tape_buffer_mem) to account for the memory used for tape
buffers. It's equal to availMem currently, but tape_buffer_mem is more direct
and future-proof. For example, if we changed the logic to only allocate half
of the remaining memory to tape buffers, USEMEM(state->tape_buffer_mem) would
still be correct. Coverity complained about these. Hopefully this patch helps
it to understand the logic better. Thanks to Tom Lane for initial analysis.
https://git.postgresql.org/pg/commitdiff/166f94377c886516ca986ef8a623cd2e854fe911
Robert Haas pushed:
StartupXLOG: Call CleanupAfterArchiveRecovery after XLogReportParameters. This
does a better job grouping related operations together, since all of the WAL
records that we need to write prior to allowing WAL writes generally are
written by a single uninterrupted stretch of code. Since
CleanupAfterArchiveRecovery() just (1) runs recovery_end_command, (2) removes
non-parent xlog files, and (3) archives any final partial segment, this should
be safe, because all of those things are pretty much unrelated to the WAL
record written by XLogReportParameters(). Amul Sul, per a suggestion from me
Discussion:
http://postgr.es/m/CAAJ_b97fysj6sRSQEfOHj-y8Jfd5uPqOgO74qast89B4WfD+TA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/a75dbf7f9ee6ff0c0e2ab4b224b04fc50c4e6577
StartupXLOG: Don't repeatedly disable/enable local xlog insertion. All the
code that runs in the startup process to write WAL records before that's
allowed generally is now consecutive, so there's no reason to shut the
facility to write WAL locally off and then turn it on again three times in a
row. Unfortunately, this requires a slight kludge in the checkpointer, which
needs to separately enable writing WAL in order to write the checkpoint
record. Because that code might run in the same process as StartupXLOG() if we
are in single-user mode, we must save/restore the state of the
LocalXLogInsertAllowed flag. Hopefully, we'll be able to eliminate this wart
in further refactoring, but it's not too bad anyway. Amul Sul, with
modifications by me. Discussion:
http://postgr.es/m/CAAJ_b97fysj6sRSQEfOHj-y8Jfd5uPqOgO74qast89B4WfD+TA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/18e0913a420349d373cfd8e45b91b4777501fb74
Remove useless code from CreateReplicationSlot. According to the comments, we
initialize sendTimeLineIsHistoric and sendTimeLine here for the benefit of
WalSndSegmentOpen. However, the only way that can happen is if
logical_read_xlog_page calls WALRead. And since logical_read_xlog_page
initializes the same global variables internally, we don't need to also do it
here. These initializations have been here since replication slots were
introduced in commit 858ec11858a914d4c380971985709b6d6b7dd6fc. They were
certainly useless at that time, too, because logical decoding didn't yet exist
then, and physical replication doesn't examine any WAL at the time of slot
creation. I haven't checked all the intermediate versions, but I suspect
there's no point at which this code ever did anything useful. To reduce
future confusion, remove the code. Since there's no functional defect, no
back-patch. Discussion:
http://postgr.es/m/CA+TgmobSWzacEs+r6C-7DrOPDHoDar4i9gzxB3SCBr5qjnLmVQ@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/902a2c280012557b85c7e0fce3f6f0e355cb2d69
Add enable_timeout_every() to fire the same timeout repeatedly.
enable_timeout_at() and enable_timeout_after() can still be used when you want
to fire a timeout just once. Patch by me, per a suggestion from Tom Lane.
Discussion:
http://postgr.es/m/[email protected]
Discussion:
http://postgr.es/m/CA+TgmoYqSF5sCNrgTom9r3Nh=at4WmYFD=gsV-omStZ60S0ZUQ@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/732e6677a667c03b1551a855e3216644b0f125ec
Report progress of startup operations that take a long time. Users sometimes
get concerned when they start the server and it emits a few messages and then
doesn't emit any more messages for a long time. Generally, what's happening is
either that the system is taking a long time to apply WAL, or it's taking a
long time to reset unlogged relations, or it's taking a long time to fsync the
data directory, but it's not easy to tell which is the case. To fix that, add
a new 'log_startup_progress_interval' setting, by default 10s. When an
operation that is known to be potentially long-running takes more than this
amount of time, we'll log a status update each time this interval elapses. To
avoid undesirable log chatter, don't log anything about WAL replay when in
standby mode. Nitin Jadhav and Robert Haas, reviewed by Amul Sul, Bharath
Rupireddy, Justin Pryzby, Michael Paquier, and Álvaro Herrera. Discussion:
https://postgr.es/m/CA+TgmoaHQrgDFOBwgY16XCoMtXxsrVGFB2jNCvb7-ubuEe1MGg@mail.gmail.com
Discussion:
https://postgr.es/m/CAMm1aWaHF7VE69572_OLQ+MgpT5RUiUDgF1x5RrtkJBLdpRj3Q@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/9ce346eabf350a130bba46be3f8c50ba28506969
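A minimal postgresql.conf sketch:

```
# Emit a progress message every 10 seconds (the default) while a potentially
# long-running startup operation is underway; 0 disables the feature.
log_startup_progress_interval = 10s
```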
Initialize variable to placate compiler. Per Nathan Bossart. Discussion:
http://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/a030a0c5ccb113ccd09d0f0b82f1edb5e49ed607
When fetching WAL for a basebackup, report errors with a sensible TLI. The
previous code used ThisTimeLineID, which need not even be initialized here,
although it usually was in practice, because pg_basebackup issues
IDENTIFY_SYSTEM before calling BASE_BACKUP, and that initializes
ThisTimeLineID as a side effect. That's not really good enough, though, not
only because we shouldn't be counting on side effects like that, but also
because the TLI could change meanwhile. Fortunately, we have convenient access
to more meaningful TLI values, so use those instead. Because of the way this
logic is coded, the consequences of using a possibly-incorrect TLI here are no
worse than a slightly confusing error message, but I don't want to take any risk
here, so no back-patch at least for now. Patch by me, reviewed by Kyotaro
Horiguchi and Michael Paquier Discussion:
http://postgr.es/m/CA+TgmoZRNWGWYDX9RgTXMG6_nwSdB=PB-PPRUbvMUTGfmL2sHQ@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/2f5c4397c39dea49c5608ba583868e26d767fc32
Fix race condition in startup progress reporting. Commit
9ce346eabf350a130bba46be3f8c50ba28506969 added startup progress reporting, but
begin_startup_progress_phase has a race condition: the timeout for the
previous phase might fire just before we reschedule the interrupt for the next
phase. To avoid the race, disable the timeout, clear the flag, and then
re-enable the timeout. Patch by me, reviewed by Nitin Jadhav. Discussion:
https://postgr.es/m/CA+TgmoYq38i6iAzfRLVxA6Cm+wMCf4WM8wC3o_a+X_JvWC8bJg@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/5ccceb2946d4104804f8dca67515b602f5e78cdd
Thomas Munro pushed:
Reject huge_pages=on if shared_memory_type=sysv. It doesn't work (it could,
but hasn't been implemented). Back-patch to 12, where shared_memory_type
arrived. Reported-by: Alexander Lakhin [email protected] Reviewed-by:
Alexander Lakhin [email protected] Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/8781b0ce25e702ba4a4f032d00da7acdef8dbfe1
Daniel Gustafsson pushed:
Ensure that slots are zeroed before use. The previous coding relied on the
memory for the slots being zeroed elsewhere, which, while true in this case,
is not a contract guaranteed to hold. Explicitly clear the
tts_isnull array to ensure that the slots are filled from a known state.
Backpatch to v14 where the catalog multi-inserts were introduced.
Reviewed-by: Michael Paquier [email protected] Discussion:
https://postgr.es/m/CAJ7c6TP0AowkUgNL6zcAK-s5HYsVHVBRWfu69FRubPpfwZGM9A@mail.gmail.com
Backpatch-through: 14
https://git.postgresql.org/pg/commitdiff/e63ce9e8d6ac8dced20592c4134004640f9f5644
Fix VPATH builds for src/test/ssl targets. Commit b4c4a00ea refactored the
gist of the sslfiles target into a separate makefile in order to override
settings in Makefile.global. The invocation of this file didn't however
include the absolute path for VPATH builds, resulting in "make clean" failing.
Fix by providing the path to the new makefile. Reported-by: Andres Freund
[email protected] Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/349cd8c582a1e666c9c804850cf5b532b86cd1b4
Fix typos in comments. Author: Peter Smith [email protected] Discussion:
https://postgr.es/m/CAHut+PsN_gmKu-KfeEb9NDARoTPbs4AN4PPu=6LZXFZRJ13SEw@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/8af57ad81578f825ac8c46840c841833db205106
Fujii Masao pushed:
Improve HINT message that FDW reports when there are no valid options. The
foreign data wrapper's validator function provides a HINT message with list of
valid options for the object specified in CREATE or ALTER command, when the
option given in the command is invalid. Previously, postgresql_fdw_validator()
and the validator functions for postgres_fdw and dblink_fdw worked that way
even when there were no valid options for the object, which could lead to a
HINT message with an empty list. For example,
ALTER FOREIGN DATA WRAPPER postgres_fdw OPTIONS (format 'csv') reported the
following confusing ERROR and HINT messages: ERROR:
invalid option "format" HINT: Valid options in this context are: (an empty
list). There is no such issue in file_fdw, whose validator function instead
reports the HINT message "There are no valid options in this context." in that
case. This commit improves postgresql_fdw_validator() and the validator
functions for postgres_fdw and dblink_fdw so that they do likewise. For
example, this change causes the above ALTER FOREIGN DATA WRAPPER command to
report the following messages instead. ERROR: invalid option "format"
HINT: There are no valid options in this context. Author: Kosei Masumura
Reviewed-by: Bharath Rupireddy, Fujii Masao Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/5fedf7417b69295294b154a219edd8a26eaa6ab6
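The improved behavior can be reproduced with a session like the following (a
sketch; it assumes the postgres_fdw extension is installed, and the option name
is arbitrary since foreign data wrappers themselves accept no options here):

```sql
-- Assumes the postgres_fdw extension is available and superuser privileges.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- Any option is invalid at the wrapper level, so the HINT is exercised:
ALTER FOREIGN DATA WRAPPER postgres_fdw OPTIONS (format 'csv');
-- Before the fix: HINT: Valid options in this context are:   (empty list)
-- After the fix:  HINT: There are no valid options in this context.
```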
Jeff Davis pushed:
Allow GRANT on pg_log_backend_memory_contexts(). Remove superuser check,
allowing any user granted permissions on pg_log_backend_memory_contexts() to
log the memory contexts of any backend. Note that this could allow a
privileged non-superuser to log the memory contexts of a superuser backend,
but as discussed, that does not seem to be a problem. Reviewed-by: Nathan
Bossart, Bharath Rupireddy, Michael Paquier, Kyotaro Horiguchi, Andres Freund
Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/f0b051e322d530a340e62f2ae16d99acdbcb3d05
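With the hard-coded superuser check gone, access can now be delegated with an
ordinary GRANT. A minimal sketch (the role name and PID below are illustrative):

```sql
-- Allow a monitoring role to log memory contexts of any backend.
GRANT EXECUTE ON FUNCTION pg_log_backend_memory_contexts(integer)
  TO monitor_role;

-- As monitor_role: log the contexts of the backend with PID 12345.
-- The output goes to the server log, not to the client.
SELECT pg_log_backend_memory_contexts(12345);
```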
Grant memory views to pg_read_all_stats. Grant privileges on views
pg_backend_memory_contexts and pg_shmem_allocations to the role
pg_read_all_stats. Also grant on the underlying functions that those views
depend on. Author: Bharath Rupireddy [email protected]
Reviewed-by: Nathan Bossart [email protected] Discussion:
https://postgr.es/m/CALj2ACWAZo3Ar_EVsn2Zf9irG+hYK3cmh1KWhZS_Od45nd01RA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/77ea4f94393eb4a16df32b573bf053bedaef2e09
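After this change, membership in pg_read_all_stats suffices to query both
views. A sketch (role name illustrative):

```sql
GRANT pg_read_all_stats TO monitor_role;

-- Now usable without superuser:
SELECT name, used_bytes
  FROM pg_backend_memory_contexts
 ORDER BY used_bytes DESC LIMIT 5;

SELECT name, allocated_size
  FROM pg_shmem_allocations
 ORDER BY allocated_size DESC LIMIT 5;
```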
Amit Kapila pushed:
Allow publishing the tables of schema. A new option "FOR ALL TABLES IN SCHEMA"
in Create/Alter Publication allows one or more schemas to be specified, whose
tables are selected by the publisher for sending the data to the subscriber.
The new syntax allows specifying both the tables and schemas. For example:
CREATE PUBLICATION pub1 FOR TABLE t1,t2,t3, ALL TABLES IN SCHEMA s1,s2; OR
ALTER PUBLICATION pub1 ADD TABLE t1,t2,t3, ALL TABLES IN SCHEMA s1,s2; A new
system table "pg_publication_namespace" has been added, to maintain the
schemas that the user wants to publish through the publication. Modified the
output plugin (pgoutput) to publish the changes if the relation is part of a
schema publication. Updates pg_dump to identify and dump schema publications.
Updates the \d family of commands to display schema publications and \dRp+
variant will now display associated schemas if any. Author: Vignesh C, Hou
Zhijie, Amit Kapila Syntax-Suggested-by: Tom Lane, Alvaro Herrera Reviewed-by:
Greg Nancarrow, Masahiko Sawada, Hou Zhijie, Amit Kapila, Haiying Tang, Ajin
Cherian, Rahila Syed, Bharath Rupireddy, Mark Dilger Tested-by: Haiying Tang
Discussion:
https://www.postgresql.org/message-id/CALDaNm0OANxuJ6RXqwZsM1MSY4s19nuH3734j4a72etDwvBETQ@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/5a2832465fd8984d089e8c44c094e6900d987fcd
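The commit message's examples, written out as complete statements (schema and
table names are illustrative):

```sql
-- Publish three tables plus every table in schemas s1 and s2:
CREATE PUBLICATION pub1 FOR TABLE t1, t2, t3, ALL TABLES IN SCHEMA s1, s2;

-- Or extend an existing publication with another schema:
ALTER PUBLICATION pub1 ADD ALL TABLES IN SCHEMA s3;

-- In psql, \dRp+ pub1 now also lists the publication's schemas.
```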
Add tap tests for the schema publications. This adds additional tests for
commit 5a2832465f ("Allow publishing the tables of schema."). This allows
testing streaming of data in tables that are published via schema
publications. Author: Vignesh C, Haiying Tang Reviewed-by: Greg Nancarrow,
Hou Zhijie, Amit Kapila Discussion:
https://www.postgresql.org/message-id/CALDaNm0OANxuJ6RXqwZsM1MSY4s19nuH3734j4a72etDwvBETQ%40mail.gmail.com
https://git.postgresql.org/pg/commitdiff/6b0f6f79eef2168ce38a8ee99c3ed76e3df5d7ad
Magnus Hagander pushed:
Clarify that --system reindexes system catalogs only. Make this more clear
both in the help message and docs. Reviewed-By: Michael Paquier
Backpatch-through: 9.6 Discussion:
https://postgr.es/m/CABUevEw6Je0WUFTLhPKOk4+BoBuDrE-fKw3N4ckqgDBMFu4paA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/eff61383b982be8dc71d942340a839bea88a9eab
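The distinction the clarified docs draw can be sketched with the equivalent
SQL-level commands (database name illustrative):

```sql
-- reindexdb --system runs REINDEX SYSTEM, which rebuilds indexes on
-- the system catalogs of the current database only:
REINDEX SYSTEM postgres;

-- User tables are covered by REINDEX DATABASE (or per-table REINDEX):
REINDEX DATABASE postgres;
```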
Peter Geoghegan pushed:
Further harden nbtree posting split code. Add more defensive checks around
posting list split code. These should detect corruption involving duplicate
table TIDs earlier and more reliably than any existing check. Follow up to
commit 8f72bbac. Discussion:
https://postgr.es/m/CAH2-WzkrSY_kjyd1_M5xJK1uM0govJXMxPn8JUSvwcUOiHuWVw@mail.gmail.com
Backpatch: 13-, where nbtree deduplication was introduced.
https://git.postgresql.org/pg/commitdiff/a5213adf3d351a31c5f5eae1a756a9d3555dc31c
Fix ordering of items in nbtree error message. Oversight in commit a5213adf.
Backpatch: 13-, just like commit a5213adf.
https://git.postgresql.org/pg/commitdiff/c2381b51049bad5dd1863ab1116b315bd7693b7c
Remove obsolete nbtree LP_DEAD item comments. Comments above
_bt_findinsertloc() that talk about LP_DEAD items are now out of place. We
already discuss index tuple deletion at an earlier point in the same comment
block. Oversight in commit d168b666.
https://git.postgresql.org/pg/commitdiff/4c6afd805b8db3492c8f409ecdba192d853fd571
Demote pg_unreachable() in heapam to an assertion. Commit d168b66682, which
overhauled index deletion, added a pg_unreachable() to the end of a sort
comparator used when sorting heap TIDs from an index page. This allows the
compiler to apply optimizations that assume that the heap TIDs from the index
AM must always be unique. That doesn't seem like a good idea now, given
recent reports of corruption involving duplicate TIDs in indexes on Postgres
14. Demote to an assertion, just in case. Backpatch: 14-, where index
deletion was overhauled.
https://git.postgresql.org/pg/commitdiff/5f55fc5a346e1ab54f3d756e368d276b95be8c4a
Tom Lane pushed:
Improve contrib/amcheck's tests for CREATE INDEX CONCURRENTLY. Commits
fdd965d07 and 3cd9c3b92 tested CREATE INDEX CONCURRENTLY by launching two
separate pgbench runs concurrently. This was needed so that only a single
client thread would run CREATE INDEX CONCURRENTLY, avoiding deadlock between
two CICs. However, there's a better way, which is to use an advisory lock to
prevent concurrent CICs. That's better in part because the test code is
shorter and more readable, but mostly because it automatically scales things
to launch an appropriate number of CICs relative to the number of INSERT
transactions. As committed, typically half to three-quarters of the CIC
transactions were pointless because the INSERT transactions had already
stopped. In passing, remove background_pgbench, which was added to support
these tests and isn't needed anymore. We can always put it back if we find a
use for it later. Back-patch to v12; older pgbench versions lack the
conditional-execution features needed for this method. Tom Lane and Andrey
Borodin Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/7f580aa5d88a9b03d66fcb9a1d7c4fcd69d9e126
Speed up printing of integers in snprintf.c. Since the only possible divisors
are 8, 10, and 16, it doesn't cost much code space to replace the division
loop with three copies using constant divisors. On most machines, division by
a constant can be done a lot more cheaply than division by an arbitrary value.
A microbenchmark testing just snprintf("foo %d") with a 9-digit value showed
about a 2X speedup for me (tgl). Most of Postgres isn't too dependent on the
speed of snprintf, so that the effect in real-world cases is barely
measurable. Still, a cycle saved is a cycle earned. Arjan van de Ven
Discussion:
https://postgr.es/m/[email protected]
Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/3c17926eedd51c4094db7c62f59950918044ab1c
Update time zone data files to tzdata release 2021e. DST law changes in Fiji,
Jordan, Palestine, and Samoa. Historical corrections for Barbados, Cook
Islands, Guyana, Niue, Portugal, and Tonga. Also, the Pacific/Enderbury zone
has been renamed to Pacific/Kanton. The following zones have been merged into
nearby, more-populous zones whose clocks have agreed since 1970: Africa/Accra,
America/Atikokan, America/Blanc-Sablon, America/Creston, America/Curacao,
America/Nassau, America/Port_of_Spain, Antarctica/DumontDUrville, and
Antarctica/Syowa.
https://git.postgresql.org/pg/commitdiff/937aafd6d5580b81134c7f303d04cf7561ad0309
Test and document the behavior of initialization cross-refs in plpgsql. We had
a test showing that a variable isn't referenceable in its own initialization
expression, nor in prior ones in the same block. It is referenceable in
later expressions in the same block, but AFAICS there is no test case
exercising that. Add one, and also add some error cases. Also, document that
this is possible, since the docs failed to cover the point. Per question from
tomás at tuxteam. I don't feel any need to back-patch this, but we should
ensure we don't break it in future. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/a2a731d6c9db0ba650aa6f7c4fe349ccf712f74d
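The behavior now tested and documented can be illustrated with a small sketch:
a DECLARE initializer may reference variables declared earlier in the same
block, but not the variable being declared or later ones.

```sql
DO $$
DECLARE
  x int := 10;
  y int := x + 1;  -- OK: may reference variables declared earlier
                   -- in the same DECLARE section
  -- z int := z;   -- ERROR: a variable is not visible in its own initializer
BEGIN
  RAISE NOTICE 'y = %', y;  -- y = 11
END
$$;
```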
Peter Eisentraut pushed:
Remove unused chunk from standalone-profile.xsl. unused since
1707a0d2aa6b2bcfe78f63836c769943a1a6b9e0
https://git.postgresql.org/pg/commitdiff/b8b62b4be28b8acd36d32d5db65162bbbcd3a754
uuid-ossp: Remove obsolete build connection with pgcrypto. unused since
a8ed6bb8f4cf259b95c1bff5da09a8f4c79dca46
https://git.postgresql.org/pg/commitdiff/237c12aabe39a58f3f5364fd94e0ca8ae8824957
doc: Remove some obsolete pgcrypto documentation. The pgcrypto documentation
contained acknowledgments of used external code, but some of this code has
been moved to src/common/, so mentioning it with pgcrypto no longer makes
sense, so remove it.
https://git.postgresql.org/pg/commitdiff/e6c60719e6c6ee9bd396f430879e1de9079bf74c
pg_dump: Refactor messages. This reduces the number of separate messages for
translation.
https://git.postgresql.org/pg/commitdiff/fd2706589a7da4be6f6998befdf8e5fdea1565b8
The PGJDBC team are proud to announce release 42.3.1 of the JDBC driver for PostgreSQL.
This is mainly a regression release fixing an issue with binary transfers of numeric values. The details can be found in this issue.
There are a number of other performance tweaks; the entire change log can be found in the Changelog.
The team would like to thank all that contributed.
JDBC Team
PostgreSQL Weekly News - October 24, 2021
PostgreSQL Product News
JDBC 42.3.0
released.
pgmetrics 1.12, a command-line tool for PostgreSQL metrics,
released.
StackGres 1.0.0, a platform for running PostgreSQL on Kubernetes, released.
https://stackgres.io/
pgexporter 0.2.0, a Prometheus exporter for PostgreSQL,
released
pgAdmin4 6.1, a web- and native GUI control center for PostgreSQL,
released.
PostgreSQL Jobs for October
https://archives.postgresql.org/pgsql-jobs/2021-10/
PostgreSQL in the News
Planet PostgreSQL: https://planet.postgresql.org/
PostgreSQL Weekly News is brought to you this week by David Fetter
Submit news and announcements by Sunday at 3:00pm PST8PDT to [email protected].
Applied Patches
Michaël Paquier pushed:
Fix portability issues in new TAP tests of psql. The tests added by c0280bc
and d9ddc50 in 001_basic.pl have introduced commands calling directly psql,
making them sensitive to the environment. One issue was that those commands
forgot -X to not use a local .psqlrc, causing all those tests to fail if psql
cannot properly parse this file. TAP tests should be designed so that they run
in an isolated fashion, without any dependency on the environment where they
are run. As PostgresNode::psql already provides all the facilities these new
tests need, switch to that instead of calling plain psql commands where
interactions with a backend are needed. The test is slightly refactored to
check for the expected patterns on stdout and stderr, keeping the
same amount of coverage as previously. Reported-by: Peter Geoghegan
Discussion:
https://postgr.es/m/CAH2-Wzn8ftvcDPwomn+y04JJzbT=TG7TN=QsmSEATUOW-ZuvQQ@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/384f1abdb9b0f669279fcd57ba2173eb31724740
Reset properly snapshot export state during transaction abort. During a
replication slot creation, an ERROR generated in the same transaction as the
one creating a to-be-exported snapshot would have left the backend in an
inconsistent state, as the associated static export snapshot state was not
being reset on transaction abort, but only on the follow-up command received
by the WAL sender that created this snapshot on replication slot creation.
This would trigger inconsistency failures if this session tried to export
again a snapshot, like during the creation of a replication slot. Note that a
snapshot export cannot happen in a transaction block, so there is no need to
worry about resetting this state for subtransaction aborts. Also, this
inconsistent state would be very unlikely to show up for users. For example, one case where this
could happen is an out-of-memory error when building the initial snapshot
to-be-exported. Dilip found this problem while poking at a different patch,
that caused an error in this code path for reasons unrelated to HEAD. Author:
Dilip Kumar Reviewed-by: Michael Paquier, Zhihong Yu Discussion:
https://postgr.es/m/CAFiTN-s0zA1Kj0ozGHwkYkHwa5U0zUE94RSc_g81WrpcETB5=w@mail.gmail.com
Backpatch-through: 9.6
https://git.postgresql.org/pg/commitdiff/409f9ca4471331be0f77b665ff3a1836a41de5b3
Block ALTER INDEX/TABLE index_name ALTER COLUMN colname SET (options). The
grammar of this command run on indexes with column names has always been
authorized by the parser, and it has never been documented. Since 911e702, it
is possible to define opclass parameters as of CREATE INDEX, which actually
broke the old case of ALTER INDEX/TABLE where relation-level parameters
n_distinct and n_distinct_inherited could be defined for an index (see 76a47c0
and its thread, where this point was touched on; these remained unused).
Attempting to do that in v13~ would cause the index to become unusable, as
there is a new dedicated code path to load opclass parameters instead of the
relation-level ones previously available. Note that it is possible to fix
things with a manual catalog update to bring the relation back online. This
commit disables this command for now as the use of column names for indexes
does not make sense anyway, particularly when it comes to index expressions
where names are automatically computed. One way to support this case
properly in the future would be to use column numbers when it comes to
indexes, in the same way as ALTER INDEX .. ALTER COLUMN .. SET STATISTICS.
Partitioned indexes were already blocked, but plain indexes were not. Some tests are
added for both cases. There was some code in ANALYZE to enforce n_distinct to
be used for an index expression if the parameter was defined, but just remove
it for now until/if there is support for this (note that index-level
parameters never had support in pg_dump either, previously), so this was just
dead code. Reported-by: Matthijs van der Vleuten Author: Nathan Bossart,
Michael Paquier Reviewed-by: Vik Fearing, Dilip Kumar Discussion:
https://postgr.es/m/[email protected]
Backpatch-through: 13
https://git.postgresql.org/pg/commitdiff/fdd88571454e2b00dbe446e8609c6e4294ca89ae
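The now-rejected form versus the still-supported per-column statistics
setting, as a sketch (table and index names illustrative):

```sql
CREATE TABLE t (a int, b int);
CREATE INDEX t_expr_idx ON t ((a + b));

-- Rejected as of this commit (column names on an index):
ALTER INDEX t_expr_idx ALTER COLUMN expr SET (n_distinct = 100);  -- ERROR

-- Still supported: addressing index columns by number.
ALTER INDEX t_expr_idx ALTER COLUMN 1 SET STATISTICS 1000;
```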
Fix build of MSVC with OpenSSL 3.0.0. The build scripts of Visual Studio would
fail to detect properly a 3.0.0 build as the check on the second digit was
failing. This is adjusted where needed, allowing the builds to complete.
Note that the MSIs of OpenSSL mentioned in the documentation have not changed
any library names for Win32 and Win64, making this change straight-forward.
Reported-by: htalaco, via github Reviewed-by: Daniel Gustafsson Discussion:
https://postgr.es/m/[email protected]
Backpatch-through: 9.6
https://git.postgresql.org/pg/commitdiff/41f30ecc29c89285d3eecd435906c4e9cb048be4
Fix corruption of pg_shdepend when copying deps from template database.
Creating a new database from a template database with shared dependencies that
needed to be copied over was corrupting pg_shdepend because of an
off-by-one computation error in the index number used for the values inserted
with a slot. Issue introduced by e3931d0. A scan of the rest of the code
found no similar mistakes. Reported-by: Sven Klemm Author: Aleksander
Alekseev Reviewed-by: Daniel Gustafsson, Michael Paquier Discussion:
https://postgr.es/m/CAJ7c6TP0AowkUgNL6zcAK-s5HYsVHVBRWfu69FRubPpfwZGM9A@mail.gmail.com
Backpatch-through: 14
https://git.postgresql.org/pg/commitdiff/98ec35b0bbf6003e89fc06aa140e12fd90bbad47
doc: Describe calculation method of streaming start for pg_receivewal. The
documentation was imprecise about the starting LSN used for WAL streaming if
nothing can be found in the local archive directory defined with the
pg_receivewal command, so be more talkative on this matter. Extracted from a
larger patch by the same author. Author: Ronan Dunklau, Michael Paquier
Discussion:
https://postgr.es/m/18708360.4lzOvYHigE@aivenronan
Backpatch-through: 10
https://git.postgresql.org/pg/commitdiff/1e9475694b0ae2cf1204d01d2ef6ad86f3c7cac8
Heikki Linnakangas pushed:
Replace polyphase merge algorithm with a simple balanced k-way merge. The
advantage of polyphase merge is that it can reuse the input tapes as output
tapes efficiently, but that is irrelevant on modern hardware, when we can
easily emulate any number of tape drives. The number of input tapes we
can/should use during merging is limited by work_mem, but output tapes that we
are not currently writing to only cost a little bit of memory, so there is no
need to skimp on them. This makes sorts that need multiple merge passes
faster. Discussion:
https://www.postgresql.org/message-id/420a0ec7-602c-d406-1e75-1ef7ddc58d83%40iki.fi
Reviewed-by: Peter Geoghegan, Zhihong Yu, John Naylor
https://git.postgresql.org/pg/commitdiff/65014000b351d5725eb00d133416ab1b4f8245b1
Refactor LogicalTapeSet/LogicalTape interface. All the tape functions, like
LogicalTapeRead and LogicalTapeWrite, now take a LogicalTape as argument,
instead of LogicalTapeSet+tape number. You can create any number of
LogicalTapes in a single LogicalTapeSet, and you don't need to decide the
number upfront, when you create the tape set. This makes the tape management
in hash agg spilling in nodeAgg.c simpler. Discussion:
https://www.postgresql.org/message-id/420a0ec7-602c-d406-1e75-1ef7ddc58d83%40iki.fi
Reviewed-by: Peter Geoghegan, Zhihong Yu, John Naylor
https://git.postgresql.org/pg/commitdiff/c4649cce39a41b27db874e75ddd47adaec1b0ea4
Fix format modifier used in elog. The previous commit 65014000b3 changed the
variable passed to elog from an int64 to a size_t variable, but neglected to
change the modifier in the format string accordingly. Per failure on
buildfarm member lapwing.
https://git.postgresql.org/pg/commitdiff/0bd65a3905706927cdd6b3158b6457c1c854471b
Fix duplicate typedef LogicalTape. To make buildfarm member locust happy.
https://git.postgresql.org/pg/commitdiff/aa3ac6453b28049b3198433b75228271b7612d4a
Fix parallel sort, broken by the balanced merge patch. The code for
initializing the tapes on each merge iteration was skipped in a parallel
worker. I put the !WORKER(state) check in the wrong place while rebasing the
patch. That caused failures in the index build in 'multiple-row-versions'
isolation test, in multiple buildfarm members. On my laptop it was easier to
reproduce by building an index on a larger table, so that you got a parallel
sort more reliably.
https://git.postgresql.org/pg/commitdiff/fc0f3b4cb0e882a9c5d51c302d4aa3591e4f80fd
Álvaro Herrera pushed:
Invalidate partitions of table being attached/detached. Failing to do that,
any direct inserts/updates of those partitions would fail to enforce the
correct constraint, that is, one that considers the new partition constraint
of their parent table. Backpatch to 10. Reported by: Hou Zhijie
[email protected] Author: Amit Langote [email protected]
Author: Álvaro Herrera [email protected] Reviewed-by: Nitin Jadhav
[email protected] Reviewed-by: Pavel Borisov
[email protected] Discussion:
https://postgr.es/m/OS3PR01MB5718DA1C4609A25186D1FBF194089%40OS3PR01MB5718.jpnprd01.prod.outlook.com
https://git.postgresql.org/pg/commitdiff/d6f1e16c8fe27100e371a15aeeb498faa680ceed
Ensure correct lock level is used in ALTER ... RENAME. Commit 1b5d797cd4f7
intended to relax the lock level used to rename indexes, but inadvertently
allowed any relation to be renamed with a lowered lock level, as long as the
command is spelled ALTER INDEX. That's undesirable for other relation types,
so retry the operation with the higher lock if the relation turns out not to
be an index. After this fix, ALTER INDEX RENAME will require
access exclusive lock, which it didn't before. Author: Nathan Bossart
[email protected] Author: Álvaro Herrera [email protected]
Reported-by: Onder Kalaci [email protected] Discussion:
https://postgr.es/m/PH0PR21MB1328189E2821CDEC646F8178D8AE9@PH0PR21MB1328.namprd21.prod.outlook.com
https://git.postgresql.org/pg/commitdiff/c2c618ff1137f9ef58827f57e4ec0f97453e454e
Protect against collation variations in test. Discussion:
https://postgr.es/m/YW/[email protected]
https://git.postgresql.org/pg/commitdiff/cd124d205c42a623b68cd155ace94cc376851b78
Daniel Gustafsson pushed:
Fix sscanf limits in pg_basebackup and pg_dump. Make sure that the string
parsing is limited by the size of the destination buffer. In pg_basebackup,
the available values sent from the server are limited to two characters, so
there was no risk of overflow. In pg_dump the buffer is bounded by MAXPGPATH,
and thus the limit must be inserted via preprocessor expansion and the buffer
increased by one to account for the terminator. There is no risk of overflow
here, since in this case, the buffer scanned is smaller than the destination
buffer. Backpatch the pg_basebackup fix to 11 where it was introduced, and
the pg_dump fix all the way down to 9.6. Reviewed-by: Tom Lane Discussion:
https://postgr.es/m/[email protected]
Backpatch-through: 11 and 9.6
https://git.postgresql.org/pg/commitdiff/1d7641d51a51aa00dff685022fab6c03be8f8af8
Fix bug in TOC file error message printing. If the blob TOC file cannot be
parsed, the error message was failing to print the filename as the variable
holding it was shadowed by the destination buffer for parsing. When the
filename fails to parse, the error printed an empty string:
  ./pg_restore -d foo -F d dump
  pg_restore: error: invalid line in large object TOC file "": ...
instead of the intended error message:
  ./pg_restore -d foo -F d dump
  pg_restore: error: invalid line in large object TOC file "dump/blobs.toc": ...
Fix by renaming both variables as the shared name was too generic to store
either and still convey what the variable held. Backpatch all the way down to
9.6. Reviewed-by: Tom Lane Discussion:
https://postgr.es/m/[email protected]
Backpatch-through: 9.6
https://git.postgresql.org/pg/commitdiff/998d060f3db79c6918cb4a547695be150833f9a4
Refactor the sslfiles Makefile target for ease of use. The Makefile handling
of certificate and keypairs used for TLS testing had become quite difficult to
work with. Adding a new cert without the need to regenerate everything was too
complicated. This patch refactors the sslfiles make target such that adding a
new certificate requires only adding a .config file, adding it to the top of
the Makefile, and running make sslfiles. Improvements: - Interfile
dependencies should be fixed, with the exception of the CRL dirs. - New
certificates have serial numbers based on the current time, reducing the
chance of collision. - The CA index state is created on demand and cleaned up
automatically at the end of the Make run. - *.config files are now
self-contained; one certificate needs one config file instead of two. -
Duplication is reduced, and along with it some unneeded code (and possible
copy-paste errors). - All configuration files are now kept underneath the conf/ directory.
The target is moved to its own makefile in order to avoid colliding with
global make settings. Author: Jacob Champion [email protected]
Reviewed-by: Michael Paquier [email protected] Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/b4c4a00eada3c512e819e9163114a5ad1606bc7e
Fix SSL tests on 32-bit Perl. The certificate serial number generation was
changed in b4c4a00ea to use the current timestamp. The testharness must thus
interrogate the cert for the serial number using "openssl x509", which emits the
serial in hex format. Converting the serial to integer format to match what's
in pg_stat_ssl requires a 64-bit capable Perl. This adds a fallback to
checking for an integer when the tests run with a 32-bit Perl. Per failure on
buildfarm member prairiedog. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/0c04342b1d3dd5b24f795f94874163be8e21710e
Tom Lane pushed:
Remove bogus assertion in transformExpressionList(). I think when I added this
assertion (in commit 8f889b108), I was only thinking of the use of
transformExpressionList at top level of INSERT and VALUES. But it's also
called by transformRowExpr(), which can certainly occur in an UPDATE
targetlist, so it's inappropriate to suppose that p_multiassign_exprs must be
empty. Besides, since the input is not expected to contain ResTargets,
there's no reason it should contain MultiAssignRefs either. Hence this code
need not be concerned about the state of p_multiassign_exprs, and we should
just drop the assertion. Per bug #17236 from ocean_li_996. It's been wrong
for years, so back-patch to all supported branches. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/697dd1925f418c9f54ee1fd1cefbc613d6504b1f
Fix assignment to array of domain over composite. An update such as "UPDATE
... SET fld[n].subfld = whatever" failed if the array elements were domains
rather than plain composites. That's because isAssignmentIndirectionExpr()
failed to cope with the CoerceToDomain node that would appear in the
expression tree in this case. The result would typically be a crash, and even
if we accidentally didn't crash, we'd not correctly preserve other fields of
the same array element. Per report from Onder Kalaci. Back-patch to v11
where arrays of domains came in. Discussion:
https://postgr.es/m/PH0PR21MB132823A46AA36F0685B7A29AD8BD9@PH0PR21MB1328.namprd21.prod.outlook.com
https://git.postgresql.org/pg/commitdiff/3e310d837a9b3de8ad977c0a3e2a769bcdf61cc9
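The failing case from the commit, as a reproducible script (type, table, and
field names are illustrative):

```sql
CREATE TYPE comp AS (subfld int, other text);
CREATE DOMAIN dcomp AS comp;
CREATE TABLE tbl (fld dcomp[]);

INSERT INTO tbl VALUES (ARRAY[ROW(1, 'one')::dcomp]);

-- Before the fix this could crash, or clobber sibling fields of the
-- same array element; now it updates just the targeted subfield:
UPDATE tbl SET fld[1].subfld = 42;
```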
pg_dump: Reorganize getTables(). Along the same lines as 047329624, ed2c7f65b
and daa9fe8a5, reduce code duplication by having just one copy of the parts of
the query that are the same across all server versions; and make the
conditionals control the smallest possible amount of code. This also gets rid
of the confusing assortment of different ways to accomplish the same result
that we had here before. While at it, make sure all three relevant parts of
the function list the fields in the same order. This is just neatnik-ism, of
course. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/4438eb4a495c977d8ac485dd6e544c2b6e077deb
Improve pg_regress.c's infrastructure for issuing psql commands. Support
issuing more than one "-c command" switch to a single psql invocation. This
allows combining some things that formerly required two or more backend
launches into a single session. In particular, we can issue DROP DATABASE as
one of the -c commands without getting "DROP DATABASE cannot run inside a
transaction block". In addition to reducing the number of sessions needed,
this patch also suppresses "NOTICE: database "foo" does not exist, skipping"
chatter that was formerly generated during pg_regress's DROP DATABASE (or
ROLE) IF EXISTS calls. That moves us another step closer to the ideal of
not seeing any messages during successful build/test. This also eliminates
some hard-coded restrictions on the length of the commands issued. I don't
think we were anywhere near hitting those, but getting rid of the limit is
comforting. Patch by me, but thanks to Nathan Bossart for starting the
discussion. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/f45dc59a38cab1d2af6baaedb79559fe2e9b3781
Doc: clarify a critical and undocumented aspect of simplehash.h. I just got
burnt by trying to use pg_malloc instead of pg_malloc0 with this. Save the
next hacker some time by not leaving this API detail undocumented.
https://git.postgresql.org/pg/commitdiff/b1ce6c284366ce1dae120f5d10dd59e8804322ee
pg_dump: fix mis-dumping of non-global default privileges. Non-global default
privilege entries should be dumped as-is, not made relative to the default ACL
for their object type. This would typically only matter if one had revoked
some on-by-default privileges in a global entry, and then wanted to grant them
again in a non-global entry. Per report from Boris Korzun. This is an old
bug, so back-patch to all supported branches. Neil Chen, test case by
Masahiko Sawada Discussion:
https://postgr.es/m/[email protected]
Discussion:
https://postgr.es/m/CAA3qoJnr2+1dVJObNtfec=qW4Z0nz=A9+r5bZKoTSy5RDjskMw@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/2acc84c6fd299125702c8a8af13820abcc0d4891
Fix frontend version of sh_error() in simplehash.h. The code does not expect
sh_error() to return, but the patch that made this header usable in frontend
didn't get that memo. While here, plaster unlikely() on the tests that decide
whether to invoke sh_error(), and add our standard copyright notice. Noted by
Andres Freund. Back-patch to v13 where this frontend support came in.
Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/974aedcea46dfd0119eea2fbb2eeacd232596f05
In pg_dump, use simplehash.h to look up dumpable objects by OID. Create a hash
table that indexes dumpable objects by CatalogId (that is, catalog OID +
object OID). Use this to replace the former catalogIdMap array, as well as
various other single- catalog index arrays, and also the extension membership
map. In principle this should be faster for databases with many objects,
since lookups are now O(1) not O(log N). However, it seems that these lookups
are pretty much negligible in context, so that no overall performance change
can be measured. But having only one lookup data structure to maintain makes
the code simpler and more flexible, so let's do it anyway. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/92316a4582a5714d4e494aaf90360860e7fec37a
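The idea behind this change can be sketched in Python (names here are illustrative; pg_dump's actual implementation is in C using simplehash.h):

```python
# Sketch of indexing dumpable objects by CatalogId, i.e. the pair
# (catalog OID, object OID), so a lookup is one O(1) hash probe
# instead of an O(log N) binary search over a sorted array.

class DumpableObject:
    def __init__(self, catalog_oid, object_oid, name):
        self.catalog_id = (catalog_oid, object_oid)  # composite key
        self.name = name

def build_catalog_id_index(objects):
    """One hash table replaces several per-catalog sorted arrays."""
    return {obj.catalog_id: obj for obj in objects}

objects = [
    DumpableObject(1259, 16384, "my_table"),     # a pg_class entry
    DumpableObject(1255, 16390, "my_function"),  # a pg_proc entry
]
index = build_catalog_id_index(objects)

# Lookup by (catalog OID, object OID):
assert index[(1259, 16384)].name == "my_table"
```

Because the key already includes the catalog OID, one structure serves every catalog, which is the simplification the commit message describes.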
Fix minor memory leaks in pg_dump. I found these by running pg_dump under
"valgrind --leak-check=full". The changes in flagInhIndexes() and
getIndexes() replace allocation of an array of which we use only some elements
by individual allocations of just the actually-needed objects. The previous
coding wasted some memory, but more importantly it confused valgrind's leak
tracking. collectComments() and collectSecLabels() remain major blots on the
valgrind report, because they don't PQclear their query results, in order to
avoid a lot of strdup's. That's a dubious tradeoff, but I'll leave it alone
here; an upcoming patch will modify those functions enough to justify changing
the tradeoff.
https://git.postgresql.org/pg/commitdiff/70bef494000e4dbbeca0f0a40347ca1747aea701
Andres Freund pushed:
Adapt src/test/ldap/t/001_auth.pl to work with openldap 2.5. ldapsearch's
deprecated -h/-p arguments were removed; one must now use -H, which has been
around for over 20 years. As perltidy insists on reflowing the parameters
anyway, change order and "phrasing" to yield a less confusing layout (per
suggestion from Tom Lane). Discussion:
https://postgr.es/m/[email protected]
Backpatch: 11-, where the tests were added.
https://git.postgresql.org/pg/commitdiff/984f460e2f29e7ba9174cabb9f43a0d1dce543bf
Amit Kapila pushed:
Remove unused wait events. Commit 464824323e introduced wait events that were
used neither by that commit nor by follow-up commits for that work.
Author: Masahiro Ikeda Backpatch-through: 14, where they were introduced
Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/1607cd0b6c9919bf765198882ea48a98e901e1bc
Andrew Dunstan pushed:
Add module build directory to the PATH for TAP tests. For non-MSVC builds this
is make's $(CURDIR), while for MSVC builds it is $topdir/$Config/$module. The
directory is added as the second element in the PATH, so that the install
location takes precedence, but the added PATH element takes precedence over
the rest of the PATH. The reason for this is to allow tests to find built
products that are not installed, such as the libpq_pipeline test driver. The
libpq_pipeline test is adjusted to take advantage of this. Based on a
suggestion from Andres Freund. Backpatch to release 14. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/f4ce6c4d3a30ec3a12c7f64b90a6fc82887ddd7b
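The PATH ordering described above can be sketched as follows (illustrative only; the actual change lives in the Perl TAP infrastructure):

```python
import os

def add_module_dir_to_path(path, module_build_dir):
    """Insert the module build directory as the second PATH element:
    the install location (first element) still takes precedence, but
    the build directory beats everything after it, so tests can find
    built-but-uninstalled products such as libpq_pipeline."""
    parts = path.split(os.pathsep)
    return os.pathsep.join([parts[0], module_build_dir] + parts[1:])

path = os.pathsep.join(["/usr/local/pgsql/bin", "/usr/bin", "/bin"])
new_path = add_module_dir_to_path(
    path, "/build/src/test/modules/libpq_pipeline")
```

The resulting search order is: install location, module build directory, then the rest of the original PATH.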
Move Perl test modules to a better namespace. The five modules in our TAP test
framework all had names in the top-level namespace. This is unwise because,
even though we're not exporting them to CPAN, the names can leak, for example
if they are exported by the RPM build process. We therefore move the modules
to the PostgreSQL::Test namespace. In the process PostgresNode is renamed to
Cluster, and TestLib is renamed to Utils. PostgresVersion becomes simply
PostgreSQL::Version, to avoid possible confusion about what it's the version
of. Discussion:
https://postgr.es/m/[email protected]
Reviewed by Erik Rijkers and Michael Paquier
https://git.postgresql.org/pg/commitdiff/b3b4d8e68ae83f432f43f035c7eb481ef93e1583
Noah Misch pushed:
Avoid race in RelationBuildDesc() affecting CREATE INDEX CONCURRENTLY. CIC and
REINDEX CONCURRENTLY assume backends see their catalog changes no later than
each backend's next transaction start. That failed to hold when a backend
absorbed a relevant invalidation in the middle of running RelationBuildDesc()
on the CIC index. Queries that use the resulting index can silently fail to
find rows. Fix this for future index builds by making RelationBuildDesc()
loop until it finishes without accepting a relevant invalidation. It may be
necessary to reindex to recover from past occurrences; REINDEX CONCURRENTLY
suffices. Back-patch to 9.6 (all supported versions). Noah Misch and Andrey
Borodin, reviewed (in earlier versions) by Andres Freund. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/fdd965d074d46765c295223b119ca437dbcac973
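The fix's retry pattern can be sketched in Python (hypothetical names; the real code is C in PostgreSQL's relcache):

```python
# Optimistic rebuild loop: if a relevant invalidation arrives while the
# descriptor is being built, discard the result and start over, so we
# never return a descriptor based on a torn view of the catalogs.

invalidation_counter = 0  # bumped by (simulated) invalidation messages

def receive_invalidation():
    global invalidation_counter
    invalidation_counter += 1

def build_descriptor_once(rel_oid):
    # Stand-in for reading catalog entries; an invalidation may arrive
    # at any point during this work.
    return {"oid": rel_oid, "built_at": invalidation_counter}

def relation_build_desc(rel_oid):
    while True:
        seen = invalidation_counter            # snapshot before building
        desc = build_descriptor_once(rel_oid)
        if invalidation_counter == seen:       # no invalidation absorbed
            return desc                        # descriptor is consistent
        # else: loop and rebuild against the newer catalog state

desc = relation_build_desc(16384)
receive_invalidation()        # a catalog change (e.g. from CIC) arrives
desc2 = relation_build_desc(16384)  # rebuilt with the newer state
```

The loop terminates as soon as one full build completes with no intervening invalidation, which is the guarantee the commit restores.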
Fix CREATE INDEX CONCURRENTLY for the newest prepared transactions. The
purpose of commit 8a54e12a38d1545d249f1402f66c8cde2837d97c was to fix this,
and it sufficed when the PREPARE TRANSACTION completed before the CIC looked
for lock conflicts. Otherwise, things still broke. As before, in a cluster
that has used CIC while prepared transactions were enabled, queries that use
the resulting index can silently fail to find rows. It may be necessary to
reindex to recover from past occurrences; REINDEX CONCURRENTLY suffices. Fix
this for future index builds by making CIC wait for arbitrarily-recent
prepared transactions and for ordinary transactions that may yet PREPARE
TRANSACTION. As part of that, have PREPARE TRANSACTION transfer locks to its
dummy PGPROC before it calls ProcArrayClearTransaction(). Back-patch to 9.6
(all supported versions). Andrey Borodin, reviewed (in earlier versions) by
Andres Freund. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/3cd9c3b921977272e6650a5efbeade4203c4bca2
Zurich, Switzerland - Oct 22nd, 2021
Server side rollback at statement level for PostgreSQL
pg_statement_rollback is a PostgreSQL extension that adds server-side
transactions with rollback at the statement level, as in Oracle or DB2.
Release v1.3 of pg_statement_rollback has been released. This is
a maintenance release that adds support for PostgreSQL 14.
See ChangeLog for a complete list of changes.
Links & Credits
pg_statement_rollback is an open project from LzLabs (https://www.lzlabs.com/).
Any contribution to build a better tool is welcome; just send
your ideas, feature requests, or patches using the GitHub tools.
Links:
Documentation: https://github.com/lzlabs/pg_statement_rollback/#readme
Download: https://github.com/lzlabs/pg_statement_rollback/releases/
Support: use GitHub report tool at https://github.com/lzlabs/pg_statement_rollback/issues
Authors: https://github.com/lzlabs/pg_statement_rollback/#authors