Rel9_4_stable_first #6
Conversation
The table-rewriting forms of ALTER TABLE are MVCC-unsafe, in much the same way as TRUNCATE, because they replace all rows of the table with newly-made rows with a new xmin. (Ideally, concurrent transactions with old snapshots would continue to see the old table contents, but the data is not there anymore --- and if it were there, it would be inconsistent with the table's updated rowtype, so there would be serious implementation problems to fix.) This was documented nowhere for ALTER TABLE, though, and for TRUNCATE only in a note on its reference page. Create a new "Caveats" section in the MVCC chapter that can be home to this and other limitations on serializable consistency. In passing, fix a mistaken statement that VACUUM and CLUSTER would reclaim space occupied by a dropped column. They don't reconstruct existing tuples so they couldn't do that. Back-patch to all supported branches.
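To make the hazard concrete, here is a hedged two-session sketch (table and column names invented): after the rewrite, every row carries an xmin newer than the old snapshot, so the table can appear empty rather than showing its old contents.

```sql
-- Session 1: take a snapshot before the rewrite happens.
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT 1;                                  -- first query fixes the snapshot

-- Session 2, meanwhile: rewrite the table (and commit).
ALTER TABLE t ALTER COLUMN c TYPE bigint;

-- Session 1: the rewritten rows all have a new xmin, invisible to the
-- old snapshot, so the table looks empty instead of showing old contents.
SELECT count(*) FROM t;
COMMIT;
```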
This behavior wasn't documented, but it should be because it's user-visible in triggers and other functions executed on the remote server. Per question from Adam Fuchs. Back-patch to 9.3 where postgres_fdw was added.
plpgsql's error location context messages ("PL/pgSQL function fn-name line line-no at stmt-type") would misreport a CONTINUE statement as being an EXIT, and misreport a MOVE statement as being a FETCH. These are clear bugs that have been there a long time, so back-patch to all supported branches. In addition, in 9.5 and HEAD, change the description of EXECUTE from "EXECUTE statement" to just plain EXECUTE; there seems no good reason why this statement type should be described differently from others that have a well-defined head keyword. And distinguish GET STACKED DIAGNOSTICS from plain GET DIAGNOSTICS. These are a bit more of a judgment call, and also affect existing regression-test outputs, so I did not back-patch into stable branches. Pavel Stehule and Tom Lane
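As a hedged illustration of the misreporting (function name and line number invented), an error raised while executing a CONTINUE used to be attributed to an EXIT in the context message:

```sql
CREATE OR REPLACE FUNCTION continue_ctx() RETURNS void AS $$
BEGIN
  <<outer_loop>>
  LOOP
    -- dividing by zero makes the error occur at the CONTINUE statement
    CONTINUE outer_loop WHEN 1/0 > 0;
  END LOOP;
END;
$$ LANGUAGE plpgsql;

SELECT continue_ctx();
-- ERROR:  division by zero
-- CONTEXT:  PL/pgSQL function continue_ctx() line 6 at EXIT
--           (before the fix; "at CONTINUE" afterwards)
```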
If we have the typmod that identifies a registered record type, there's no reason that record_in() should refuse to perform input conversion for it. Now, in direct SQL usage, record_in() will always be passed typmod = -1 with type OID RECORDOID, because no typmodin exists for type RECORD, so the case can't arise. However, some InputFunctionCall users such as PLs may be able to supply the right typmod, so we should allow this to support them. Note: the previous coding and comment here predate commit 59c016a. There has been no case since 8.1 in which the passed type OID wouldn't be valid; and if it weren't, this error message wouldn't be apropos anyway. Better to let lookup_rowtype_tupdesc complain about it. Back-patch to 9.1, as this is necessary for my upcoming plpython fix. I'm committing it separately just to make it a bit more visible in the commit history.
…esult. PLyString_ToComposite() blithely overwrote proc->result.out.d, even though for a composite result type the other union variant proc->result.out.r is the one that should be valid. This could result in a crash if out.r had in fact been filled in (proc->result.is_rowtype == 1) and then somebody later attempted to use that data; as per bug #13579 from Paweł Michalak. Just to add insult to injury, it didn't work for RECORD results anyway, because record_in() would refuse the case. Fix by doing the I/O function lookup in a local PLyTypeInfo variable, as we were doing already in PLyObject_ToComposite(). This is not a great technique because any fn_extra data allocated by the input function will be leaked permanently (thanks to using TopMemoryContext as fn_mcxt). But that's a pre-existing issue that is much less serious than a crash, so leave it to be fixed separately. This bug would be a potential security issue, except that plpython is only available to superusers and the crash requires coding the function in a way that didn't work before today's patches. Add regression test cases covering all the supported methods of converting composite results. Back-patch to 9.1 where the faulty coding was introduced.
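A hedged reconstruction of the crashing shape (type and function names invented): returning a composite as a Python tuple fills in the rowtype variant of the result union, and a later string-typed return then clobbered it.

```sql
CREATE TYPE pair AS (a int, b text);

CREATE OR REPLACE FUNCTION make_pair(how text) RETURNS pair AS $$
if how == 'tuple':
    return (1, 'one')      # composite path: fills proc->result.out.r
return '(1,one)'           # string path: overwrote proc->result.out.d
$$ LANGUAGE plpythonu;

SELECT make_pair('tuple');   -- caches the composite output info
SELECT make_pair('string');  -- clobbers the union's other variant
SELECT make_pair('tuple');   -- a subsequent use could then crash
```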
For no obvious reason, spi_printtup() was coded to enlarge the tuple pointer table by just 256 slots at a time, rather than doubling the size at each reallocation, as is our usual habit. For very large SPI results, this makes for O(N^2) time spent in repalloc(), which of course soon comes to dominate the runtime. Use the standard doubling approach instead. This is a longstanding performance bug, so back-patch to all active branches. Neil Conway
The default argument, if given, has to be of exactly the same datatype as the first argument; but this was not stated in so many words, and the error message you get about it might not lead your thought in the right direction. Per bug #13587 from Robert McGehee. A quick scan says that these are the only two built-in functions with two anyelement arguments and no other polymorphic arguments. There are plenty of cases of, eg, anyarray and anyelement, but those seem less likely to confuse. For instance this doesn't seem terribly hard to figure out: "function array_remove(integer[], numeric) does not exist". So I've contented myself with fixing these two cases.
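The commit does not name them, but the two functions are presumably lag() and lead(). A quick sketch of the confusion a user hits:

```sql
-- The third (default) argument must have exactly the first argument's type.
SELECT lag(x, 1, 0) OVER (ORDER BY x)     -- x is numeric, default is integer
FROM (VALUES (1.5), (2.5)) AS v(x);
-- ERROR:  function lag(numeric, integer, integer) does not exist

SELECT lag(x, 1, 0.0) OVER (ORDER BY x)   -- both numeric: works
FROM (VALUES (1.5), (2.5)) AS v(x);
```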
This makes the parameter names match the documented prototype names.

Report by Erwin Brandstetter
Backpatch through 9.0
…bler. On recent AIX it's necessary to configure gcc to use the native assembler (because the GNU assembler hasn't been updated to handle AIX 6+). This caused PG builds to fail with assembler syntax errors, because we'd try to compile s_lock.h's gcc asm fragment for PPC, and that assembly code relied on GNU-style local labels. We can't substitute normal labels because it would fail in any file containing more than one inlined use of tas(). Fortunately, that code is stable enough, and the PPC ISA is simple enough, that it doesn't seem like too much of a maintenance burden to just hand-code the branch offsets, removing the need for any labels. Note that the AIX assembler only accepts "$" for the location counter pseudo-symbol. The usual GNU convention is "."; but it appears that all versions of gas for PPC also accept "$", so in theory this patch will not break any other PPC platforms. This has been reported by a few people, but Steve Underwood gets the credit for being the first to pursue the problem far enough to understand why it was failing. Thanks also to Noah Misch for additional testing.
It's only necessary to fix this in 9.4; later versions don't have this code (because we ripped out Alpha support entirely), while earlier ones aren't actually using pg_read_barrier() anywhere. Per rather belated report from Christoph Berg.
The regression tests for sepgsql were broken by changes in the base distro as-shipped policies. Specifically, the definition of unconfined_t in the system default policy was changed to bypass multi-category rules, which the regression test depended on. Fix that by defining a custom privileged domain (sepgsql_regtest_superuser_t) and using it instead of the system's unconfined_t domain. The new sepgsql_regtest_superuser_t domain performs almost like the current unconfined_t, but is restricted by multi-category policy as the traditional unconfined_t was. The custom policy module is a self-defined domain, and so should not be affected by future changes to related system policies. However, it still uses the unconfined_u:unconfined_r pair for selinux-user and role. Those definitions have not been changed for several years and seem less risky to rely on than the unconfined_t domain. Additionally, if we defined a custom user/role, they would need to be manually defined at the operating system level, adding more complexity to an already non-standard and complex regression test. Back-patch to 9.3. The regression tests will need more work before working correctly on 9.2. Starting with 9.2, sepgsql has had dependencies on libselinux versions that are only available on newer distros with the changed set of policies (e.g. RHEL 7.x). On 9.1 sepgsql works fine with the older distros with the original policy set (e.g. RHEL 6.x), on which the existing regression tests work fine. We might eventually want to change the 9.1 sepgsql regression tests to be more independent from the underlying OS policies, however more work will be needed to make that happen and it is not clear that it is worth the effort. Kohei KaiGai with review by Adam Brightwell and me, commentary by Stephen, Alvaro, Tom, Robert, and others.
For some reason this message was not updated when the longtable option was added. Backpatch through 9.3
The values of some parameters, including max_worker_processes, must be equal to or higher than their values on the master. However, previously max_worker_processes was not listed as such a parameter in the documentation. So this commit adds it to that list. Back-patch to 9.4 where max_worker_processes was added.
Formerly, we treated only portals created in the current subtransaction as having failed during subtransaction abort. However, if the error occurred while running a portal created in an outer subtransaction (ie, a cursor declared before the last savepoint), that has to be considered broken too. To allow reliable detection of which ones those are, add a bookkeeping field to struct Portal that tracks the innermost subtransaction in which each portal has actually been executed. (Without this, we'd end up failing portals containing functions that had called the subtransaction, thereby breaking plpgsql exception blocks completely.) In addition, when we fail an outer-subtransaction Portal, transfer its resources into the subtransaction's resource owner, so that they're released early in cleanup of the subxact. This fixes a problem reported by Jim Nasby in which a function executed in an outer-subtransaction cursor could cause an Assert failure or crash by referencing a relation created within the inner subtransaction. The proximate cause of the Assert failure is that AtEOSubXact_RelationCache assumed it could blow away a relcache entry without first checking that the entry had zero refcount. That was a bad idea on its own terms, so add such a check there, and to the similar coding in AtEOXact_RelationCache. This provides an independent safety measure in case there are still ways to provoke the situation despite the Portal-level changes. This has been broken since subtransactions were invented, so back-patch to all supported branches. Tom Lane and Michael Paquier
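A loose sketch of the failure shape (not the exact reported case; object names invented): a cursor declared before a savepoint, but executed inside it, must be treated as broken when the subtransaction aborts.

```sql
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM some_table;
SAVEPOINT s;                  -- cursor c predates this savepoint
FETCH c;                      -- but c is actually executed inside the subxact
SELECT 1/0;                   -- some error aborts the subtransaction
ROLLBACK TO s;                -- c must now be considered broken too, even
                              -- though it was created outside the savepoint
FETCH c;                      -- previously this could touch resources the
                              -- subxact abort had already released; now the
                              -- cursor is treated as failed
COMMIT;
```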
Oskari Saarenmaa. Backpatch to stable branches where applicable.
This has been broken since 9.3 (commit 82b1b21 to be exact), which suggests that nobody is still using a Windows build system that doesn't provide symlink emulation. Still, it's wrong on its own terms, so repair. Yuriy Zhuravlev
RESERV. RESERV is meant for tokens like "now" and having them in that category throws errors like these when used as an input date:

```sql
stark=# SELECT 'doy'::timestamptz;
ERROR:  unexpected dtype 33 while parsing timestamptz "doy"
LINE 1: SELECT 'doy'::timestamptz;
               ^
stark=# SELECT 'dow'::timestamptz;
ERROR:  unexpected dtype 32 while parsing timestamptz "dow"
LINE 1: SELECT 'dow'::timestamptz;
               ^
```

Found by LLVM's libFuzzer
Cleanup process could be called by ordinary insert/update and could take a lot of time. Add vacuum_delay_point() to make this process interruptible. Under vacuum, this call will also throttle the vacuum process to decrease system load; called from insert/update it will not throttle, which keeps latency down. Backpatch to all supported branches. Jeff Janes <[email protected]>
We were missing a few return checks on OpenSSL calls. Should be pretty harmless, since we haven't seen any user reports about problems, and this is not a high-traffic module anyway; still, a bug is a bug, so backpatch this all the way back to 9.0. Author: Michael Paquier, while reviewing another sslinfo patch
Even views considered "simple" enough to be automatically updatable may have multiple relations involved (eg: in a WHERE clause). We need to make sure to lock those relations when rewriting the query. Back-patch to 9.3 where updatable views were added. Pointed out by Andres, patch thanks to Dean Rasheed.
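A hedged sketch of the shape described (names invented; the WHERE clause drags in a second relation that the rewriter must also lock):

```sql
CREATE TABLE accounts  (id int PRIMARY KEY, owner text);
CREATE TABLE blacklist (owner text);

-- Simple enough to be auto-updatable, yet it references two relations.
CREATE VIEW active_accounts AS
  SELECT id, owner FROM accounts
  WHERE owner NOT IN (SELECT owner FROM blacklist);

-- Rewriting this UPDATE must lock "blacklist" as well as "accounts".
UPDATE active_accounts SET owner = 'alice' WHERE id = 1;
```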
Commit fa4a4df attempted to backpatch to 9.4 a change to improve the logging of TAP tests. Unfortunately, due to how 9.4 and 9.5 had diverged, the backpatch to 9.4 depended on a few things which didn't exist in 9.4 (but did in 9.5), specifically, the 'temp-check' production and the 'with_temp_install' definition (which also required 'abs_top_builddir'). Add these definitions into REL9_4_STABLE to allow the TAP tests to run correctly under 'make check'.
This commit makes postmaster forcibly remove the files signaling a standby promotion request. Otherwise, the existence of those files can trigger a promotion too early, whether a user wants that or not. This removal of files is usually unnecessary because they can exist only for a few moments during a standby promotion. However there is a race condition: if pg_ctl promote is executed and creates the files during a promotion, the files can stay around even after the server is brought up as the new master. Then, if a new standby starts by using a backup taken from that master, the files can exist at server startup and must be removed in order to avoid an unexpected promotion. Back-patch to 9.1 where the promote signal file was introduced. Problem reported by Feike Steenbergen. Original patch by Michael Paquier, modified by me. Discussion: [email protected]
The list-wrangling here was done wrong, allowing the same state to get put into the list twice. The following loop then would clone it twice. The second clone would wind up with no inarcs, so that there was no observable misbehavior AFAICT, but a useless state in the finished NFA isn't an especially good thing.
We're adding OIDs, not TIDs, to invalItems. Pointed out by Etsuro Fujita. Back-patch to all supported branches.
The "typo" alleged in commit 1e460d4 was actually a comment that was correct when written, but I missed updating it in commit b5282aa. Use a slightly less specific (and hopefully more future-proof) description of what is collected. Back-patch to 9.2 where that commit appeared, and revert the comment to its then-entirely-correct state before that.
Otherwise running installcheck-force on a server with synchronous_commit=off will result in the tests failing. All the other tests already do so... Backpatch: 9.4, where logical decoding was added
This makes the flag more visible for testers using the default file as a template, increasing the likelihood that the test suite will be run. Also have the flag be displayed in the fake "configure" output, if set. This patch adds only two new lines, but perltidy decides to shift things around, which makes it appear a bit bigger. Author: Michaël Paquier Reviewed-by: Craig Ringer Discussion: https://www.postgresql.org/message-id/CAB7nPqRet6UAP2APhZAZw%3DVhJ6w-Q-gGLdZkrOqFgd2vc9-ZDw%40mail.gmail.com
The existing code confuses the byte length of the string (which is relevant when passing it to pg_strncasecmp) with the character length of the string (which is relevant when it is used with the SQL substring function). Separate those two concepts. Report and patch by Kyotaro Horiguchi, reviewed by Thomas Munro and reviewed and further revised by me.
I wasn't careful enough when back-patching.
… state. Previously recovery_min_apply_delay was applied even before recovery had reached consistency. This could cause us to wait a long time unexpectedly for read-only connections to be allowed. It's problematic because the standby was useless during that wait time. This patch changes recovery_min_apply_delay so that it's applied once the database has reached the consistent state. That is, even if the delay is set, the standby tries to replay WAL records as fast as possible until it has reached consistency. Author: Michael Paquier Reviewed-By: Julien Rouhaud Reported-By: Greg Clough Backpatch: 9.4, where recovery_min_apply_delay was added Bug: #13770 Discussion: http://www.postgresql.org/message-id/[email protected]
Logical decoding's reorderbuffer keeps transactions in an LSN ordered list for efficiency. To make that efficiently possible, upper-level xids are forced to be logged before nested subtransaction xids. That only works though if these records are all looked at: unfortunately we didn't do so for e.g. row level locks, which are otherwise uninteresting for logical decoding. This could lead to errors like: "ERROR: subxact logged without previous toplevel record". It's not sufficient to just look at row locking records; the xid could appear first due to a lot of other types of records (which will trigger the transaction to be marked logged with MarkCurrentTransactionIdLoggedIfAny). So invent infrastructure to tell reorderbuffer about xids seen, when they'd otherwise not pass through reorderbuffer.c. Reported-By: Jarred Ward Bug: #13844 Discussion: [email protected] Backpatch: 9.4, where logical decoding was added
… around. Somehow I managed to flip the order of restoring old & new tuples when de-spooling a change in a large transaction from disk. This happens to only take effect when a change is spooled to disk which has old/new versions of the tuple. That is only the case for UPDATEs where the primary key changed or where replica identity is changed to FULL. The tests didn't catch this because they covered either spooled updates or updates that changed primary keys, but not both at the same time. Found while adding tests for the following commit. Backpatch: 9.4, where logical decoding was added
…ity full. When decoding the old version of an UPDATE or DELETE change, and if that tuple was bigger than MaxHeapTupleSize, we either Assert'ed out, or failed in more subtle ways in non-assert builds. Normally individual tuples aren't bigger than MaxHeapTupleSize, with big datums toasted. But that's not the case for the old version of a tuple for logical decoding; the replica identity is logged as one piece. With the default replica identity, btree limits that to small tuples, but that's not the case for FULL. Change the tuple buffer infrastructure to separately allocate over-large tuples, instead of always going through the slab cache. This unfortunately requires changing the ReorderBufferTupleBuf definition; we need to store the allocated size someplace. To avoid requiring output plugins to recompile, don't store HeapTupleHeaderData directly after HeapTupleData, but point to it via t_data; that leaves room for the allocated size. As there's no reason for an output plugin to look at ReorderBufferTupleBuf->t_data.header, remove the field. It was just a minor convenience having it directly accessible. Reported-By: Adam Dratwiński Discussion: CAKg6ypLd7773AOX4DiOGRwQk1TVOQKhNwjYiVjJnpq8Wo+i62Q@mail.gmail.com
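A hedged sketch of how such an over-large old tuple arises (names invented; the payload needs to be large and not too compressible, since the identity tuple is logged flattened):

```sql
CREATE TABLE big (id int PRIMARY KEY, payload text);
ALTER TABLE big REPLICA IDENTITY FULL;   -- whole old row is logged

-- ~96kB of hard-to-compress data, far beyond MaxHeapTupleSize (~8kB)
INSERT INTO big
  SELECT 1, string_agg(md5(i::text), '') FROM generate_series(1, 3000) i;

-- Decoding the old version of this row previously overran the slab-cache
-- sized tuple buffer.
UPDATE big SET payload = 'y' WHERE id = 1;
```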
…es(). There were two places in spell.c that supposed that they could search for a location in a string produced by lowerstr() and then transpose the offset into the original string. But this fails completely if lowerstr() transforms any characters into characters of different byte length, as can happen in Turkish UTF8 for instance. We'd added some comments about this coding in commit 51e78ab, but failed to realize that it was not merely confusing but wrong. Coverity complained about this code years ago, but in such an opaque fashion that nobody understood what it was on about. I'm not entirely sure that this issue *is* what it's on about, actually, but perhaps this patch will shut it up -- and in any case the problem is clear. Back-patch to all supported branches.
In c8f621c I forgot to account for MAXALIGN when allocating a new tuplebuf in ReorderBufferGetTupleBuf(). That happens not to cause active problems on a number of platforms right now because the affected pointer is already aligned, but others, like ppc and hppa, trigger it in the regression tests due to a debug memset clearing memory. Fix that. Backpatch: 9.4, like the previous commit.
A thinko in a967613 caused pg_ctl to get it exactly backwards when deciding whether to report problems to the Windows eventlog or to stderr. Per bug #14001 from Manuel Mathar, who also identified the fix. Like the previous patch, back-patch to all supported branches.
Coverity and inspection for the issue addressed in fd45d16 found some questionable code. Specifically coverity noticed that the wrong length was added in ReorderBufferSerializeChange() - without immediate negative consequences as the variable isn't used afterwards. During code-review and testing I noticed that a bit of space was wasted when allocating tuple bufs in several places. Thirdly, the debug memset()s in ReorderBufferGetTupleBuf() reduce the error checking valgrind can do. Backpatch: 9.4, like c8f621c.
David Rowley
plperl_ref_from_pg_array() didn't consider the case that postgres arrays can have 0 dimensions (when they're empty) and accessed the first dimension without a check. Fix that by special-casing the empty-array case. Author: Alex Hunsaker Reported-By: Andres Freund / valgrind / buildfarm animal skink Discussion: [email protected] Backpatch: 9.1-
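A minimal reproducer sketch (function name invented), assuming plperl hands SQL arrays to the function as Perl array references:

```sql
CREATE OR REPLACE FUNCTION perl_count(int[]) RETURNS int AS $$
    my $arr = $_[0];          -- array reference built by plperl
    return scalar(@$arr);     -- 0 for the empty array
$$ LANGUAGE plperl;

-- '{}' has zero dimensions; this call could previously crash the backend
SELECT perl_count('{}'::int[]);
```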
…le data. ltree/ltree_gist/ltxtquery's headers store data at MAXALIGN alignment, requiring some padding bytes. So far we left these uninitialized. Zero those by using palloc0. Author: Andres Freund Reported-By: Andres Freund / valgrind / buildfarm animal skink Backpatch: 9.1-
Author: Andres Freund Backpatch: 9.4, where we started to maintain valgrind suppressions
Python's allocator does some low-level tricks for efficiency; unfortunately they trigger valgrind errors. Those tricks can be disabled, making instrumentation easier, but few people testing postgres will have such a build of Python. So add broad suppressions of the resulting errors. See also https://svn.python.org/projects/python/trunk/Misc/README.valgrind This possibly will suppress valid errors, but without it it's basically impossible to use valgrind with plpython code. Author: Andres Freund Backpatch: 9.4, where we started to maintain valgrind suppressions
…sons. An index search using a row comparison such as ROW(a, b) > ROW('x', 'y') would stop upon reaching a NULL entry in the "b" column, ignoring the fact that there might be non-NULL "b" values associated with later values of "a". This happens because _bt_mark_scankey_required() marks the subsidiary scankey for "b" as required, which is just wrong: it's for a column after the one with the first inequality key (namely "a"), and thus can't be considered a required match. This bit of brain fade dates back to the very beginnings of our support for indexed ROW() comparisons, in 2006. Kind of astonishing that no one came across it before Glen Takahashi, in bug #14010. Back-patch to all supported versions. Note: the given test case doesn't actually fail in unpatched 9.1, evidently because the fix for bug #6278 (i.e., stopping at nulls in either scan direction) is required to make it fail. I'm sure I could devise a case that fails in 9.1 as well, perhaps with something involving making a cursor back up; but it doesn't seem worth the trouble.
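A hedged reconstruction of the failure shape from bug #14010 (table and column names invented; an index scan must be forced to observe it on a tiny table):

```sql
CREATE TABLE rowcmp (a text, b text);
CREATE INDEX ON rowcmp (a, b);
INSERT INTO rowcmp VALUES ('x', NULL), ('y', 'b');

-- ('y','b') satisfies the row comparison, but the buggy scan could stop
-- at the ('x', NULL) index entry and never return it.
SET enable_seqscan = off;
SELECT * FROM rowcmp WHERE ROW(a, b) > ROW('x', 'y');
```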
Renaming a file using rename(2) is not guaranteed to be durable in the face of crashes, especially on filesystems like xfs and ext4 when mounted with data=writeback. To be certain that a rename() atomically replaces the previous file contents in the face of crashes and different filesystems, one has to fsync the old filename, rename the file, fsync the new filename, and fsync the containing directory. This sequence is not generally adhered to currently, which exposes us to data loss risks. To avoid having to repeat this arduous sequence, introduce durable_rename(), which wraps all that. Also add durable_link_or_rename(). Several places use link() (with a fallback to rename()) to rename a file, trying to avoid replacing the target file out of paranoia. Some of those rename sequences need to be durable as well. There seems little reason to extend several copies of the same logic, so centralize the link() callers. This commit does not yet make use of the new functions; they're used in a followup commit. Author: Michael Paquier, Andres Freund Discussion: [email protected] Backpatch: All supported branches
Renaming a file using rename(2) is not guaranteed to be durable in the face of crashes. Use the previously added durable_rename()/durable_link_or_rename() in various places where we previously just renamed files. Most of the changed call sites are arguably not critical, but it seems better to err on the side of too much durability. The most prominent known case where the previously missing fsyncs could cause data loss is crashes at the end of a checkpoint. After the actual checkpoint has been performed, old WAL files are recycled. When they're filled, their contents are fdatasynced, but we did not fsync the containing directory. An OS/hardware crash in an unfortunate moment could then end up leaving that file with its old name, but new content; WAL replay would thus not replay it. Reported-By: Tomas Vondra Author: Michael Paquier, Tomas Vondra, Andres Freund Discussion: [email protected] Backpatch: All supported branches
The Visual Studio 2013 CRT generates invalid code when it makes a 64-bit build that is later used on a CPU that supports AVX2 instructions using a version of Windows before 7SP1/2008R2SP1. Detect this combination, and in those cases turn off the generation of FMA3, per recommendation from the Visual Studio team. The bug is actually in the CRT shipping with Visual Studio 2013, but Microsoft have stated they're only fixing it in newer major versions. The fix is therefore conditioned specifically on being built with this version of Visual Studio, and not previous or later versions. Author: Christian Ullrich
This is just a mirror of PostgreSQL, and not the development repo. Please see http://wiki.postgresql.org/wiki/Submitting_a_Patch for how to submit a patch to PostgreSQL.
When a subtransaction is aborted in plpython because of an SPI exception, it tries to find a matching Python exception in a hash `PLy_spi_exceptions` and to make the Python VM raise it. That hash is generated during module initialization, but the exception objects are not marked to prevent the garbage collector from collecting them, which can lead to a segmentation fault when processing any SPI exception. PoC to reproduce the issue:

```sql
CREATE OR REPLACE FUNCTION server_crashes() RETURNS VOID AS $$
import gc
gc.collect()
plan = plpy.prepare('SELECT raises_an_spi_exception();', [])
plpy.execute(plan)
$$ LANGUAGE plpythonu;

CREATE OR REPLACE FUNCTION raises_an_spi_exception() RETURNS VOID AS $$
DECLARE
  sql TEXT;
BEGIN
  sql = format('%I', NULL); -- NullValueNotAllowed
END
$$ LANGUAGE plpgsql;

SELECT server_crashes(); -- segfault here
```

Stacktrace of the problem (using PostgreSQL `REL9_5_STABLE` and python `2.7.3-0ubuntu3.8` on Ubuntu 12.04):

```
Program received signal SIGSEGV, Segmentation fault.
0x00007f3155c7670b in PyObject_Call (func=0x7f31b7db2a30, arg=0x7f31b87d17d0, kw=0x0) at ../Objects/abstract.c:2525
2525    ../Objects/abstract.c: No such file or directory.
(gdb) bt
#0  0x00007f3155c7670b in PyObject_Call (func=0x7f31b7db2a30, arg=0x7f31b87d17d0, kw=0x0) at ../Objects/abstract.c:2525
#1  0x00007f3155d81ab1 in PyEval_CallObjectWithKeywords (func=0x7f31b7db2a30, arg=0x7f31b87d17d0, kw=0x0) at ../Python/ceval.c:3890
#2  0x00007f3155c766ed in PyObject_CallObject (o=0x7f31b7db2a30, a=0x7f31b87d17d0) at ../Objects/abstract.c:2517
#3  0x00007f31561e112b in PLy_spi_exception_set (edata=0x7f31b8743d78, excclass=0x7f31b7db2a30) at plpy_spi.c:547
#4  PLy_spi_subtransaction_abort (oldcontext=<optimized out>, oldowner=<optimized out>) at plpy_spi.c:527
#5  0x00007f31561e2185 in PLy_spi_execute_plan (ob=0x7f31b87d0cd8, list=0x7f31b7c530d8, limit=0) at plpy_spi.c:307
#6  0x00007f31561e22d4 in PLy_spi_execute (self=<optimized out>, args=0x7f31b87a6d08) at plpy_spi.c:180
#7  0x00007f3155cda4d6 in PyCFunction_Call (func=0x7f31b7d29600, arg=0x7f31b87a6d08, kw=0x0) at ../Objects/methodobject.c:81
#8  0x00007f3155d82383 in call_function (pp_stack=0x7fff9207e710, oparg=2) at ../Python/ceval.c:4021
#9  0x00007f3155d7cda4 in PyEval_EvalFrameEx (f=0x7f31b8805be0, throwflag=0) at ../Python/ceval.c:2666
#10 0x00007f3155d82898 in fast_function (func=0x7f31b88b5ed0, pp_stack=0x7fff9207ea70, n=0, na=0, nk=0) at ../Python/ceval.c:4107
#11 0x00007f3155d82584 in call_function (pp_stack=0x7fff9207ea70, oparg=0) at ../Python/ceval.c:4042
#12 0x00007f3155d7cda4 in PyEval_EvalFrameEx (f=0x7f31b8805a00, throwflag=0) at ../Python/ceval.c:2666
#13 0x00007f3155d7f8a9 in PyEval_EvalCodeEx (co=0x7f31b88aa460, globals=0x7f31b8727ea0, locals=0x7f31b8727ea0, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ../Python/ceval.c:3253
#14 0x00007f3155d74ff4 in PyEval_EvalCode (co=0x7f31b88aa460, globals=0x7f31b8727ea0, locals=0x7f31b8727ea0) at ../Python/ceval.c:667
#15 0x00007f31561dc476 in PLy_procedure_call (kargs=kargs@entry=0x7f31561e5690 "args", vargs=<optimized out>, proc=0x7f31b873b2d0, proc=0x7f31b873b2d0) at plpy_exec.c:801
#16 0x00007f31561dd9c6 in PLy_exec_function (fcinfo=fcinfo@entry=0x7f31b7c1f870, proc=0x7f31b873b2d0) at plpy_exec.c:61
#17 0x00007f31561de9f9 in plpython_call_handler (fcinfo=0x7f31b7c1f870) at plpy_main.c:291
```
Thanks for your Pull Request! 😄 This repo on GitHub is just a mirror of our real git repositories though, and can't really handle PRs. 😦 Hopefully you can redo the PR, and direct it to the git.postgresql.org repos? We have a developer guide, if that helps: https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F. If this was a PR for pgAdmin, please visit https://www.pgadmin.org/docs/pgadmin4/dev/submitting_patches.html.