pg_dump may fail with an OOM error due to plugindata table size

Platform notice: Server and Data Center only. This article only applies to Atlassian products on the Server and Data Center platforms.

Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Problem

The plugindata table in Confluence stores the user-installed plugin .jar files. Each row holds an entire .jar file in the data column; during startup, Confluence reads this table to populate its plugins-cache folder.

In PostgreSQL, the data column of this table uses the bytea (binary) data type. Reading a large bytea value in text mode requires more memory than the stored value itself, which may cause an out-of-memory (OOM) error:

select data from plugindata;
ERROR: out of memory
DETAIL: Failed on request of size 268435456.

Backup error:
pg_dump: Dumping the contents of table "plugindata" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR: out of memory
DETAIL: Failed on request of size 159380181.
pg_dump: The command was: COPY public.plugindata (plugindataid, pluginkey, filename, lastmoddate, data) TO stdout;
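To see which rows are large enough to trigger the failure, you can measure the stored jar sizes directly. This is a diagnostic sketch; octet_length returns the byte length of a bytea value without materializing its text representation:

```sql
-- List the largest plugin jars stored in plugindata.
SELECT plugindataid,
       pluginkey,
       filename,
       octet_length(data) AS jar_bytes
FROM plugindata
ORDER BY octet_length(data) DESC
LIMIT 10;
```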

Diagnosis

Environment

  • PostgreSQL 9.6.x

Cause

  • The root cause of the memory issue is that reading the output of a bytea column in text mode (whether via SELECT or pg_dump) requires more memory than the size of the underlying datum.
  • The limit for such memory allocations in PostgreSQL is 1 GB.
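The overhead comes from the text representation: in PostgreSQL's default hex output format, each stored byte becomes two hex characters (plus a \x prefix), so the text form is roughly twice the size of the raw datum. A minimal sketch of the difference:

```sql
-- Compare the raw size of a bytea value with the size of its
-- text (hex) representation: the text form is about 2x larger.
SELECT octet_length(data)  AS raw_bytes,
       length(data::text)  AS text_chars  -- '\x' prefix + 2 chars per byte
FROM plugindata
LIMIT 1;
```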

Workaround

  • If you hit this problem when running pg_dump, use lo_import and lo_export to create the dump successfully.
  • Clearing the Confluence plugin cache folders can help temporarily.
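One way to apply the large-object approach is to copy an oversized bytea value into a temporary large object and export it server-side, bypassing the text-mode allocation entirely. This is a hypothetical sketch, not from the article: the plugin key and output path are placeholders, and lo_export writes to the database server's filesystem (typically requiring superuser rights):

```sql
-- Sketch: export one plugin jar via the large-object facility.
-- lo_from_bytea creates a large object from the bytea value,
-- lo_export writes it to a file on the database server, and
-- lo_unlink removes the temporary large object afterwards.
DO $$
DECLARE
  loid oid;
BEGIN
  SELECT lo_from_bytea(0, data) INTO loid
  FROM plugindata
  WHERE pluginkey = 'com.example.plugin';   -- placeholder key

  PERFORM lo_export(loid, '/tmp/plugin.jar');  -- placeholder path
  PERFORM lo_unlink(loid);
END $$;
```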


Description: plugindata psql
Product: Confluence
Last modified on Aug 20, 2019
