Major Version Upgrade Guide for OCI Database with PostgreSQL

Upgrade major versions for OCI Database with PostgreSQL using the pg_dump/pg_restore or pglogical approach.

OCI Database with PostgreSQL is a fully managed service for running PostgreSQL on Oracle Cloud, designed to reduce the operational overhead associated with managing underlying infrastructure. It streamlines day-to-day administration through automated provisioning, supported patching, automated backups, and integrated monitoring capabilities.

Maintaining up-to-date OCI PostgreSQL environments is important for taking advantage of the latest performance improvements, security updates, and new features.

The following commonly adopted approaches for major version upgrades offer structured and repeatable methods for migrating data and transitioning applications to newer versions.

pg_dump & pg_restore

Upgrading OCI PostgreSQL using pg_dump and pg_restore is a simple, reliable approach, especially when performing major version upgrades or migrating between environments. With proper handling of roles, parallelism, and validation, this method ensures a clean and controlled migration of both database objects and data.

This document walks through the complete upgrade process, including the exact commands to run.

Important Notes

  • pg_dump and pg_restore are logical backup tools.
  • Suitable for major version upgrades.
  • Requires downtime during final switchover.
  • Roles must always be handled separately.

Upgrade Workflow

The upgrade process using pg_dump and pg_restore consists of the following steps.

  1. Step 1: Dump Global Objects (Roles)

    This step helps ensure that all roles, permissions, and access controls are preserved, allowing users and applications to maintain consistent access after the upgrade.

  2. Step 2: Dump the Database

    This step creates a complete logical backup of the database, providing a reliable way to transfer data into the upgraded environment.

  3. Step 3: Modify Global Dump File

    This step allows us to review and adjust the dump as needed, helping to ensure compatibility with the target PostgreSQL version and a smoother restore process.

  4. Step 4: Restore Global Objects

    Restoring roles first helps ensure that the appropriate users and permissions are in place before data is reintroduced, supporting a consistent and secure environment.

  5. Step 5: Restore the Database

    This step brings the data into the upgraded system, completing the transition and making the database ready for use on the new PostgreSQL version.
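Taken together, the steps above can be sketched as a single shell session. The hostnames, database name, and admin user below are hypothetical placeholders, and the commands are only assembled and printed here rather than executed:

```shell
# End-to-end sketch of the pg_dump/pg_restore workflow.
# Hypothetical values -- substitute your own endpoints and credentials.
SRC=10.0.0.5; TGT=10.0.1.7; DB=sampledb; ADMIN=admin
JOBS=$(nproc 2>/dev/null || echo 4)   # size parallel jobs to available CPUs

# Step 1: dump roles and other global objects from the source.
STEP1="pg_dumpall -U $ADMIN -h $SRC -g --no-role-passwords --no-tablespaces -f all_roles.sql"
# Step 2: dump the database in directory format, in parallel, uncompressed.
STEP2="pg_dump -v -h $SRC -U $ADMIN -d $DB -Fd -C -j $JOBS -Z0 -f ${DB}_dir_format"
# Step 3: edit all_roles.sql by hand (remove oci_* roles and SUPERUSER attributes).
# Step 4: restore the edited globals on the target.
STEP4="psql -U $ADMIN -d $DB -h $TGT -f all_roles.sql"
# Step 5: restore the database dump in parallel.
STEP5="pg_restore -v -h $TGT -U $ADMIN -j $JOBS -C -d $DB ${DB}_dir_format"
printf '%s\n' "$STEP1" "$STEP2" "$STEP4" "$STEP5"
```

Sizing -j with nproc is one common heuristic; the right value depends on the CPU and I/O headroom of both the database system and the host running the dump.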

Step 1: Dump Global Objects (Roles)

First, export roles and global objects from the source database (earlier version):

/usr/lib/postgresql/bin/pg_dumpall -U <username> -h <IP_of_Source_DB> -g --no-role-passwords --no-tablespaces -f <all_roles>.sql
  • -g: Dumps global objects such as roles and users

    This option focuses on extracting cluster-wide objects such as roles, users, and their associated privileges, rather than database-specific data. Capturing these elements separately helps ensure that access controls can be recreated consistently in the target environment.

  • --no-role-passwords: Avoids password-related issues

    This excludes role password definitions from the dump, which can help simplify the restore process, especially when password policies or authentication methods differ between environments. It also provides flexibility to reconfigure credentials as needed after the upgrade.

  • --no-tablespaces: Prevents tablespace dependency issues

    This omits tablespace definitions from the dump, helping to avoid dependencies on specific storage configurations that might not exist in the target environment. This can make the restore process more portable and easier to adapt to OCI-managed storage.

  • -f <all_roles>.sql: Writes output to a file for reuse

    This directs the output to a SQL file, making it easier to review, version, and reuse during the restore phase. Having a separate file for global objects also supports a more controlled and stepwise upgrade process.

Step 2: Dump the Database

Take a database dump using directory format:

/usr/lib/postgresql/bin/pg_dump -v -h <IP_of_Source_DB> -U <username> -d <databasename> -Fd -C -j <Num of parallel jobs> -Z0 -f sampledb_dir_format
  • -Fd: Directory format (creates multiple files)

    The -Fd (directory format) option stores the dump as a directory containing multiple files rather than a single archive. It is chosen because it supports parallel dumps (-j), improving performance for large databases, and allows more granular and flexible restores of individual objects.

    Alternatively, the custom format (-Fc) can be used when a single-file backup is preferred, while still supporting compression and selective restore capabilities.

  • -C: Includes database creation command

    This option adds the necessary statements to create the database during restore, helping streamline the setup process in the target environment.

  • -j: Enables parallel jobs for a faster dump

    This allows the dump operation to run multiple jobs concurrently, which can significantly reduce the overall time required for large datasets.

  • -Z0: No compression (improves performance)

    Disabling compression can improve dump performance by reducing CPU overhead, which might be beneficial when speed is a priority and storage constraints are minimal.

  • -v: Verbose mode to monitor progress

    This option provides detailed output during execution, making it easier to track progress and identify any issues early in the dump process.

Step 3: Modify Global Dump File

Before restoring roles:

Modify the <all_roles>.sql file as required based on the following guidance.

Remove the following from the role dump file:

  • All CREATE ROLE, ALTER ROLE, and GRANT statements for roles starting with oci_*
  • Any unsupported role attributes (especially in managed services such as OCI PostgreSQL)
  • All CREATE and ALTER ROLE statements for the admin user that was created during the initial OCI PostgreSQL database setup, as it needs to be recreated when provisioning the later version database. If you plan to use a different admin username, ensure the appropriate admin user is specified during the creation of the upgraded OCI PostgreSQL database.

Additionally, update the role dump file to remove or modify any commands that require SUPERUSER privileges, as these aren't supported in managed environments.

For example:

Original:

ALTER ROLE test WITH NOSUPERUSER INHERIT CREATEROLE CREATEDB LOGIN NOREPLICATION NOBYPASSRLS PASSWORD 'test';

Modified:

ALTER ROLE test WITH CREATEROLE CREATEDB LOGIN PASSWORD 'test';
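One way to automate these edits is a small sed pass over the roles file. The sample dump content below is illustrative only, and the exact attribute list to strip may vary with your source version:

```shell
# Illustrative roles dump -- in practice this is the file produced by pg_dumpall -g.
cat > all_roles.sql <<'EOF'
CREATE ROLE oci_superuser;
ALTER ROLE oci_superuser WITH SUPERUSER;
CREATE ROLE test;
ALTER ROLE test WITH NOSUPERUSER INHERIT CREATEROLE CREATEDB LOGIN NOREPLICATION NOBYPASSRLS PASSWORD 'test';
EOF

# Drop statements for oci_* roles, then strip attributes that managed
# services typically reject (SUPERUSER and related flags).
sed -i -e '/oci_/d' \
       -e 's/ NOSUPERUSER//; s/ SUPERUSER//; s/ INHERIT//; s/ NOREPLICATION//; s/ NOBYPASSRLS//' \
       all_roles.sql
cat all_roles.sql
```

Review the resulting file before restoring it; automated stripping is a starting point, not a substitute for checking what each role actually needs.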

Step 4: Restore Global Objects

Restore roles on the target database system (later version):

/usr/lib/postgresql/bin/psql -U <username> -d <databasename> -h <IP_of_Target_Database_System> -f <all_roles>.sql

This step ensures:

  • All users and roles are created
  • Permissions are available before data restore

Step 5: Restore the Database

When restoring from the dump (toc.dat), ignore failures for the following:

  • CREATE, ALTER ROLE, and GRANT statements that fail for roles whose names start with oci_*

Restore the database on the target database system using:

/usr/lib/postgresql/bin/pg_restore -v -h <IP_of_Target_Database_System> -U <username> -j <Num of parallel jobs> -C -d <databasename> sampledb_dir_format
  • -C: Creates the database, connects to it, and restores into it
  • -j: Parallel restore for faster execution
  • -v: Verbose output to track progress

Verify the restore:

  • Check the verbose output for any ERROR: messages and ensure the command completed with a shell exit code of 0. Errors related to oci_* roles can be ignored.
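A quick way to screen a captured restore log for unexpected failures is to filter out the ignorable oci_* errors. The log content below is a made-up sample; in practice you would capture the real output with something like pg_restore ... 2>&1 | tee restore.log:

```shell
# Made-up restore log standing in for real pg_restore verbose output.
cat > restore.log <<'EOF'
pg_restore: creating TABLE "public.orders"
pg_restore: error: could not execute query: ERROR:  role "oci_superuser" does not exist
pg_restore: processing data for table "public.orders"
EOF

# Count ERROR lines that do NOT mention oci_* roles; only these need investigation.
real_errors=$(grep "ERROR:" restore.log | grep -vc 'oci_' || true)
echo "unexpected errors: $real_errors"
```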
Note

For environments with multiple databases, we recommend running the previous commands separately for each database to ensure a complete and consistent upgrade process.

Performance Considerations

  • Use higher -j values based on available CPU
  • Use -Z0 during dump for faster performance
  • Ensure sufficient disk I/O capacity
  • Monitor restore progress using verbose logs

Post-Restore Validation

After completing the restore, do the following tasks.

  • Verify row counts

    This task helps confirm that all data has been successfully migrated and that there are no discrepancies between the source and target databases.

  • Check application connectivity

    This task ensures that applications can connect and interact with the upgraded database as expected.

  • Validate roles and permissions

    This task helps confirm that users and roles have the appropriate access and privileges in the new environment.

  • Run the following command while connected to each database
    VACUUM ANALYZE;

    This command helps update internal statistics used by the query planner, allowing OCI PostgreSQL to generate more efficient execution plans based on the current data. It can also help optimize overall performance following the restore, particularly for larger databases or workloads with complex queries.
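For the row-count check, one approach is to pull per-table counts from both systems and diff the results. The hosts and credentials below are placeholders, so the psql invocations are shown commented out; note that n_live_tup is a statistics-based estimate, so run VACUUM ANALYZE first or use count(*) for exact comparisons:

```shell
# Per-table row estimates from the statistics views (approximate counts).
COUNT_SQL="SELECT relname, n_live_tup FROM pg_stat_user_tables ORDER BY relname;"
echo "$COUNT_SQL"

# Against live systems (placeholder hosts), compare the two outputs:
# psql -h 10.0.0.5 -U admin -d sampledb -At -c "$COUNT_SQL" > src_counts.txt
# psql -h 10.0.1.7 -U admin -d sampledb -At -c "$COUNT_SQL" > tgt_counts.txt
# diff src_counts.txt tgt_counts.txt && echo "row counts match"
```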

pglogical

The pglogical extension enables logical replication across multiple Oracle Cloud Infrastructure (OCI) PostgreSQL database systems. This extension supports cross-version PostgreSQL replication, making it a suitable, robust approach for performing database upgrades with minimal downtime. By enabling logical replication between source and target systems, you can seamlessly migrate data across versions while maintaining application availability.

pglogical is particularly useful for the following situations:

  • Major version upgrades
  • Cross-environment migrations

pglogical allows seamless data synchronization between:

  • PostgreSQL instances across different OCI regions and availability domains
  • Databases deployed in different Virtual Cloud Networks (VCNs)
  • Managed PostgreSQL services across multiple cloud providers
  • Self-managed PostgreSQL instances (cloud or on-premises)

This flexibility makes pglogical a powerful solution for database upgrades, migrations, and hybrid deployments.

Network Connectivity Requirements

  • If both database systems are within the same VCN, connectivity is automatically available.
  • If databases are in different VCNs within the same region, configure a Local Peering Gateway (LPG) to establish communication.
  • If databases are in different VCNs in different regions, configure a Dynamic Routing Gateway (DRG) to establish communication.

Enable pglogical Extension on Source and Target Database System

Perform the following steps on both source (earlier version) and target (later version) databases:

  1. Sign in to the Oracle Cloud Console and navigate to your OCI PostgreSQL database system.
  2. Modify the configuration by accessing the configuration file.
  3. Select Copy Configuration and update the required parameters. Under User Variables (Read/Write):
    • wal_level = logical
    • track_commit_timestamp = 1
  4. Enable Extension: Under Configure Extensions, select pglogical and create the configuration.
  5. Navigate to your database system, select Edit under Configuration, and apply the previously created configuration to your database system.

    The database system state is initially Updating. Wait until it becomes Active before proceeding.

    This configuration prepares both environments for logical replication by enabling the required settings and extensions, allowing pglogical to capture and synchronize changes reliably between the source and target databases during the upgrade process.

  6. After your database system is active, connect to your database and verify the enabled extensions using the following query.
    SHOW oci.admin_enabled_extensions;
  7. Create the pglogical extension using the following command:
    CREATE EXTENSION pglogical;

Configure Source Database

Log in as the admin user (the user specified during OCI PostgreSQL database creation) and grant the necessary privileges to enable logical replication.

ALTER ROLE xxx WITH REPLICATION;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_replication_origin_session_reset() TO xxx;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_replication_origin_session_setup TO xxx;
GRANT ALL ON FUNCTION pg_catalog.pg_replication_origin_session_setup TO xxx;

xxx is the sample user created during database system provisioning.

Configure pglogical on Source (Publisher)

  1. Create the publisher node on the source database.
    SELECT pglogical.create_node(node_name := 'provider1', dsn := 'host=<source_database_fqdn> port=5432 user=xxx password=xxxxx dbname=pglogical_source');
    • node_name: Specify the name of the publisher node to create on the source database.
    • host: Enter the fully qualified domain name (FQDN) of the source database.
    • port: Provide the port on which the source database is running.
    • dbname: Specify the database to create the publication in.
  2. Add all tables in the public schema to the default replication set.
    SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public']);

Configure Target Database

Log in as the admin user (the user specified during OCI PostgreSQL database creation) and grant the necessary privileges to enable logical replication.

ALTER ROLE xxx WITH REPLICATION;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_replication_origin_session_reset() TO xxx;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_replication_origin_session_setup TO xxx;
GRANT ALL ON FUNCTION pg_catalog.pg_replication_origin_session_setup TO xxx;

xxx is the sample user created during database setup.

Note

Skip the ALTER ROLE ... REPLICATION command for the admin user on OCI PostgreSQL database version 16 and later, as the admin user inherits the replication privilege by default.

Before proceeding, ensure that:

  • All required schemas and objects exist on the target

    Items include schemas, tables, indexes, extensions, and any required database objects. Having these in place helps ensure that replication or restore operations can proceed without errors from missing dependencies. Existence can be confirmed by comparing schema listings (for example, using \dn and \dt in psql) and validating that required extensions are installed on both source and target.

  • Database structure matches the source

    The target database should closely align with the source in terms of schema definitions, object names, and overall structure. This consistency helps support a smooth data migration and reduces the likelihood of conflicts or inconsistencies during synchronization. Verify by reviewing schema definitions (such as using pg_dump --schema-only) or by comparing table structures and object counts between the source and target systems.
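One way to verify structural parity, sketched here on tiny sample files, is to diff schema-only dumps of the two systems. In practice the files would come from pg_dump --schema-only against each endpoint:

```shell
# Illustrative schema snapshots -- normally produced by, for example:
#   pg_dump --schema-only -h <source> -U admin -d sampledb > src_schema.sql
#   pg_dump --schema-only -h <target> -U admin -d sampledb > tgt_schema.sql
cat > src_schema.sql <<'EOF'
CREATE TABLE public.orders (id integer NOT NULL, total numeric);
CREATE INDEX orders_id_idx ON public.orders (id);
EOF
cp src_schema.sql tgt_schema.sql

# An empty diff means the target structure matches the source.
if diff -q src_schema.sql tgt_schema.sql >/dev/null; then
  echo "schemas match"
else
  echo "schemas differ -- review before starting replication"
fi
```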

Configure pglogical on Target (Subscriber)

  1. Create the subscriber node on the target database:
    SELECT pglogical.create_node(node_name := 'subscriber1', dsn := 'host=<target_database_fqdn> port=5432 user=xxx password=xxxxx dbname=pglogical_target');
    • node_name: Define the name of the subscriber node on the target database.
    • host: Enter the fully qualified domain name (FQDN) of the target database.
    • port: Enter the port on which the target database is running.
    • dbname: Provide the name of the database where the subscription will be created.
  2. Create the subscription to start the background synchronization and replication processes:
    SELECT pglogical.create_subscription(subscription_name := 'subscription1', provider_dsn := 'host=<source_database_fqdn> port=5432 user=xxx password=xxxxxx dbname=pglogical_source sslmode=require');
    • subscription_name: Provide the name of the subscription.
    • host: Provide the FQDN of the source database.
    • port: Provide the port on which the source database is running.
    • dbname: Provide the name of the source database.
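Taken together, the subscriber setup can be assembled into a single SQL script. The DSN values below are placeholders matching the examples above, and the statements are only printed here rather than executed:

```shell
# Subscriber-side pglogical setup assembled into one SQL script (placeholder DSNs).
SUBSCRIBER_SQL=$(cat <<'EOF'
SELECT pglogical.create_node(
  node_name := 'subscriber1',
  dsn := 'host=<target_database_fqdn> port=5432 user=xxx password=xxxxx dbname=pglogical_target');
SELECT pglogical.create_subscription(
  subscription_name := 'subscription1',
  provider_dsn := 'host=<source_database_fqdn> port=5432 user=xxx password=xxxxx dbname=pglogical_source sslmode=require');
EOF
)
echo "$SUBSCRIBER_SQL"

# To apply against a live target (placeholder host):
# psql -h <target_database_fqdn> -U xxx -d pglogical_target <<< "$SUBSCRIBER_SQL"
```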

Verify Replication

  1. Check subscription status (target): Run the following statement to check the status of your subscription on your target database.
    SELECT * FROM pglogical.show_subscription_status();
  2. Check replication status (source): Run the following statement to check the status of your replication on your source database.
    SELECT * FROM pg_stat_replication;

If both checks indicate an active status with no lag or pending changes, this result suggests that replication is complete and the target database is fully synchronized with the source.
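To watch replication catch up on the source, you can poll pg_stat_replication for the byte gap between WAL sent and WAL replayed. The query uses standard catalog columns; the polling loop is shown commented out since it needs a live source:

```shell
# Byte lag between WAL sent by the source and WAL replayed by the subscriber.
LAG_SQL="SELECT application_name, pg_wal_lsn_diff(sent_lsn, replay_lsn) AS lag_bytes FROM pg_stat_replication;"
echo "$LAG_SQL"

# Example polling loop against a live source (placeholder host):
# while true; do
#   psql -h <source_database_fqdn> -U xxx -d pglogical_source -c "$LAG_SQL"
#   sleep 10
# done
```

A lag_bytes value of 0 (or steadily near 0) corresponds to the "no lag or pending changes" condition described above.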

For more information about pglogical, including detailed steps and explanations, see Cross-Region Synchronization of OCI Database with PostgreSQL using the pglogical Extension.

Conclusion

Major version upgrades using pg_dump/pg_restore and pglogical each offer distinct advantages depending on the use case. The pg_dump/pg_restore approach is straightforward and reliable for smaller databases or when downtime is acceptable, but it can be time-consuming for large datasets. In contrast, pglogical enables near-zero-downtime upgrades by replicating data between versions, making it better suited for large or mission-critical systems.

Choosing the right method depends on factors such as database size, acceptable downtime, and operational complexity. Regardless of the approach, thorough planning, testing, and post-upgrade validation are critical to ensure data integrity and a smooth transition to the newer OCI PostgreSQL version.