Oracle Data Guard Per Pluggable Database (DGPDB): Technical Deep Dive & Comparison
I. Introduction to Data Guard per Pluggable Database (DGPDB)
Defining DGPDB: PDB-Granular Protection
Oracle Data Guard per Pluggable Database (DGPDB) is an Oracle Data Guard capability enabling disaster recovery (DR) and high availability (HA) configurations specifically for individual Pluggable Databases (PDBs) within an Oracle Multitenant Container Database (CDB) environment. Unlike traditional Data Guard, which protects the entire CDB, DGPDB focuses protection on specific PDBs. This allows a PDB to undergo failover or switchover operations independently of other PDBs within the same CDB. This feature was introduced in Oracle Database 21c (specifically Release Update 21.7) and enhanced in the 23ai release.
Core Architectural Shift: Two Primary CDBs
The fundamental architectural difference in DGPDB is the presence of two primary CDBs, typically both open read-write, instead of the traditional primary-standby CDB pair. Each primary CDB hosts source PDBs (open read-write, being protected) and target PDBs (in mount state, acting as standbys for PDBs in the peer CDB). This architecture is key to enabling PDB-level independence for role transitions.
This shift to a two-primary-CDB model fundamentally alters the HA/DR topology. Traditional Data Guard involves one primary CDB and one or more standby CDBs. DGPDB explicitly uses two primary CDBs. In traditional Data Guard, the standby CDB is usually a full physical or logical copy, often in mount or read-only mode. In DGPDB, both CDBs are active and can host their own primary PDBs alongside the standby PDBs for their peers. This not only provides PDB-level protection but presents a fundamentally different active-active (at the CDB level) or active-passive (at the PDB level) configuration model compared to the traditional active-passive CDB model. Consequently, management complexity increases. Administrators must now manage two fully active CDBs, their respective primary PDBs, and the standby PDBs within them. Resource planning must account for potentially co-located primary and standby PDB workloads within the same CDB instance, unlike traditional Data Guard where the standby CDB primarily handles recovery and potentially offloaded read-only workloads.
II. Architectural Comparison: DGPDB vs. Traditional Data Guard
Scope of Protection: PDB vs. Entire CDB
Traditional Data Guard replicates and protects the entire CDB, meaning all PDBs within it share the same role (primary or standby) and transition together. In contrast, DGPDB defines and manages protection for individual PDBs, allowing different PDBs within the same CDB to have different roles (some primary, some standby for peers).
Role Management and Flexibility
In traditional Data Guard, role transitions (switchover, failover) occur at the CDB level, affecting all contained PDBs simultaneously. This can be disruptive in highly consolidated environments. DGPDB enables switchover and failover specifically for individual PDBs without impacting other PDBs in the same CDB, offering significant operational flexibility.
Redo Transport and Apply Mechanisms
DGPDB utilizes the same fundamental redo transport architecture (LGWR -> ORL -> Transport Process -> RFS -> SRL) as traditional Data Guard. The entire redo stream from the source CDB is shipped to the target CDB. The key difference lies in the apply process: DGPDB employs a dedicated apply process per target PDB. This process filters the incoming redo stream and applies only the changes relevant to that specific target PDB. This contrasts with traditional Data Guard, which typically has a single apply process (MRP or SQL Apply) for the entire standby CDB.
Shipping the entire CDB redo stream but applying only PDB-specific changes implies potential overhead and filtering complexity. DGPDB sends the full CDB redo but applies changes per PDB. This means the target CDB receives redo for all PDBs from the source CDB, including those not participating in the DGPDB relationship or whose target PDBs reside elsewhere. The per-PDB apply process must actively filter this comprehensive redo stream to identify and apply only the relevant records. This filtering adds computational overhead on the target CDB compared to a traditional standby that applies everything it receives. Consequently, while network bandwidth requirements between the two primary CDBs in a DGPDB setup might be similar to a traditional DG setup (since full CDB redo is sent), the CPU and potentially I/O overhead for the apply process on the target CDB might be higher or exhibit different characteristics due to the filtering logic, especially in CDBs where numerous PDBs generate significant redo. Performance monitoring must account for this per-PDB apply behavior.
Technical Differences in Redo Log Processing
The core technical distinctions between traditional CDB-level Data Guard and PDB-level Data Guard (DGPDB) manifest primarily during the redo apply phase. Redo generation and transport are largely similar:
- Redo Generation: In both scenarios, redo data is generated at the source (primary) CDB level. The Log Writer (LGWR) process in the source CDB writes redo information encompassing changes for all PDBs within that CDB to the Online Redo Logs (ORLs). There’s no difference in how redo is generated for a PDB based on the Data Guard type.
- Redo Transport: The transport mechanism is fundamentally the same for both architectures. The entire redo stream generated by the source CDB (containing changes for all PDBs, protected or not) is captured and sent to the target site. This uses standard Data Guard transport processes (e.g., ASYNC TTnn process).
- Redo Reception: At the target site, the Remote File Server (RFS) process receives the incoming redo stream and writes it to the Standby Redo Logs (SRLs). In a DGPDB configuration, there is a single set of SRLs on the target CDB to receive the entire redo stream from the source CDB. This mirrors traditional Data Guard where the standby CDB receives the full redo stream.
- Redo Apply (Key Difference):
- Traditional Data Guard: Typically involves a single Managed Recovery Process (MRP0) or SQL Apply process responsible for applying the entire received redo stream to the standby CDB. This means all PDBs within the standby CDB are updated concurrently.
- Data Guard per PDB (DGPDB): This architecture employs a separate apply process (TTnn) for each target PDB. Each PDB-specific apply process reads the entire redo stream from the SRLs but actively filters it. It identifies and applies only those redo records relevant to its specific target PDB. Redo records belonging to other PDBs in the source CDB are ignored by that specific PDB’s apply process. Recovery (redo apply) can be started and stopped independently for each target PDB.
This per-PDB filtering and apply mechanism is the core technical underpinning that enables DGPDB’s capability for independent PDB-level role transitions (switchover/failover). However, it also implies potential overhead, as the target CDB must process the entire incoming redo stream, and each PDB apply process must filter for its relevant records.
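To make this independence concrete, redo apply can be toggled per target PDB through the broker. The following sketch uses hypothetical names (target CDB chicago, PDB salespdb) and the EDIT PLUGGABLE DATABASE command covered later in the configuration section; the exact form of the AT <CDB> qualifier should be verified against the DGMGRL reference for your release:

```
-- Stop redo apply for one target PDB only; other target PDBs in the
-- same CDB keep applying their own filtered portions of the redo stream.
DGMGRL> EDIT PLUGGABLE DATABASE salespdb AT chicago SET STATE='APPLY-OFF';

-- Resume apply for that PDB when ready.
DGMGRL> EDIT PLUGGABLE DATABASE salespdb AT chicago SET STATE='APPLY-ON';
```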
Table: Key Differences Between DGPDB and Traditional Data Guard
The following table summarizes the core architectural and operational differences:
Feature | Traditional Data Guard | Data Guard per PDB (DGPDB) |
---|---|---|
Architecture | Primary CDB / Standby CDB (Active-Passive) | Two Primary CDBs (Active-Active or Active-Passive at PDB level) |
Protection Scope | Entire CDB | Individual PDBs |
Role Transition Unit | CDB | PDB |
Primary Databases | One | Two |
Standby Role | Entire CDB | Target PDB (for source PDB in peer CDB) |
Redo Apply Mechanism | CDB Level (Single Process) | PDB Level (Per-PDB Process with Filtering) |
Supported Protection Modes | Max Protection, Max Availability, Max Performance | Maximum Performance (ASYNC) Only |
Primary Use Case | Highest data protection, CDB-level HA/DR | PDB-level flexibility (maintenance, testing, failover) |
Licensing Considerations | Enterprise Edition required. ADG for some features. | Enterprise Edition required. ADG for features like Real-Time Query. |
III. DGPDB Purpose and Strategic Use Cases
Providing Granular High Availability and Disaster Recovery
The primary goal of DGPDB is to offer HA/DR capabilities at a more granular level than the entire CDB, aligning DR strategies with individual applications or PDBs.
Facilitating Planned Maintenance with Minimal Disruption
DGPDB allows administrators to perform maintenance on one of the primary CDBs by switching over its PDBs individually to the peer CDB. This avoids a full CDB outage, which is crucial in highly consolidated environments with varying PDB maintenance windows or SLAs.
Enabling Flexible Workload Rebalancing Across Sites
DGPDB makes it possible to move specific PDBs (and their associated workloads) between the two primary CDBs via switchover in order to optimize resource utilization or balance load across data centers or sites.
Isolating Faults: “Sick PDB” Protection
DGPDB allows a single problematic or failed PDB to failover to its standby counterpart on the peer CDB without impacting the availability of other healthy PDBs in the original source CDB. This differs from traditional Data Guard where a PDB-level issue might necessitate a full CDB failover.
Simplifying Disaster Recovery Testing per Application/PDB
DR plans for individual applications can be tested by failing over only their specific PDB(s), reducing the scope, risk, and downtime associated with DR tests compared to full CDB failovers.
DGPDB enables a tiered HA/DR strategy within a consolidated environment. DGPDB allows selective protection and role transitions for individual PDBs. Traditional DG protects the entire CDB. Organizations often consolidate PDBs with varying criticality and SLA requirements into a single CDB. DGPDB allows administrators to apply robust (though not maximum protection) DR specifically to critical PDBs within a CDB, while potentially leaving less critical PDBs unprotected or using other HA methods. This is impossible with traditional DG, where all PDBs inherit the CDB’s protection level. Consequently, a single CDB could host “Gold” PDBs protected via DGPDB to a peer primary CDB, “Silver” PDBs perhaps using application-level replication or less frequent backups, and “Bronze” PDBs with basic backup strategies. This enables more cost-effective and resource-efficient HA/DR solutions in consolidated environments. Instead of needing a full standby CDB (requiring potentially significant resources and licensing) to protect only a few critical PDBs, DGPDB provides targeted protection. However, this also increases management complexity in tracking the protection status and procedures for each PDB tier.
IV. Configuration Details
Prerequisites
- Licensing: Requires Oracle Database Enterprise Edition. The Active Data Guard (ADG) option license is needed for features such as Real-Time Query on standby PDBs. Core DGPDB functionality (Maximum Performance mode, standby creation and redo apply) can be used without ADG provided the target PDB is never opened read-only while redo apply is active; opening it for Real-Time Query explicitly requires the ADG license on both primary and standby sites.
- Software Versions: Introduced in Oracle Database 21c RU 21.7. Enhanced in Oracle Database 23ai. Both involved CDBs should ideally be at the same version or follow standard Data Guard compatibility rules.
- Database Configuration: Both CDBs must be in ARCHIVELOG mode with FORCE LOGGING enabled. STANDBY_FILE_MANAGEMENT should be set to AUTO, DG_BROKER_START must be TRUE, Flashback Database should be enabled, and SPFILEs must be used (a minimal SQL sketch follows this list).
- Network Setup: Reliable network connectivity between the two primary CDB hosts is essential. TNS entries for cross-CDB communication must be correctly configured on both sites. Firewall rules must allow traffic on the listener port (e.g., 1521) and potentially other Data Guard-related ports. Passwordless connectivity via Oracle Wallet for Broker operations is recommended.
- TDE Wallets: Transparent Data Encryption (TDE) is a common prerequisite, especially in cloud environments. Ensure wallets are correctly configured and keys can be managed/transferred between sites.
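As referenced in the Database Configuration item above, the following is a minimal SQL*Plus sketch of those settings, run in each CDB root. Enabling ARCHIVELOG mode itself requires a restart into MOUNT state and is not shown, and the parameter scopes are an assumption for illustration:

```
-- Verify the current settings first.
SQL> SELECT log_mode, force_logging, flashback_on FROM v$database;

-- Enable forced logging and Flashback Database if not already on.
SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER DATABASE FLASHBACK ON;

-- Broker- and standby-related parameters (an SPFILE must be in use).
SQL> ALTER SYSTEM SET standby_file_management = AUTO SCOPE=BOTH;
SQL> ALTER SYSTEM SET dg_broker_start = TRUE SCOPE=BOTH;
```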
Basic Configuration Steps with DGMGRL (Data Guard Broker)
- Broker Setup: Create separate broker configurations for each primary CDB using CREATE CONFIGURATION.
- Link Configurations: Add the peer configuration to each local configuration using ADD CONFIGURATION.
- Enable Configurations: Enable both configurations using ENABLE CONFIGURATION ALL.
- Prepare for DGPDB: Run EDIT CONFIGURATION PREPARE DGPDB on both configurations, providing a password for the internal DGPDB_INT user when prompted.
- Add PDB Protection: Use ADD PLUGGABLE DATABASE, specifying the source PDB, target PDB, source CDB, target CDB, and, crucially, the PDBFILENAMECONVERT parameter for datafile path translation. The keystore password must also be provided if TDE is used. A combined sketch of these steps follows.
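The DGMGRL sketch below strings these steps together for two hypothetical CDBs, boston and chicago, protecting a hypothetical PDB salespdb from boston to chicago. All names, connect identifiers, and paths are placeholders, and the exact clause syntax (notably for ADD CONFIGURATION, PDBFILENAMECONVERT, and the keystore password) should be validated against the broker documentation for your release:

```
-- On the first primary CDB (boston): create and enable its configuration.
DGMGRL> CONNECT sys@boston
DGMGRL> CREATE CONFIGURATION 'boston_cfg' AS PRIMARY DATABASE IS 'boston' CONNECT IDENTIFIER IS boston;
DGMGRL> ENABLE CONFIGURATION;

-- On the second primary CDB (chicago): create and enable its own configuration,
-- then link in the peer configuration and prepare both for DGPDB
-- (repeat the ADD CONFIGURATION / PREPARE steps from the boston session as required).
DGMGRL> CONNECT sys@chicago
DGMGRL> CREATE CONFIGURATION 'chicago_cfg' AS PRIMARY DATABASE IS 'chicago' CONNECT IDENTIFIER IS chicago;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> ADD CONFIGURATION 'boston_cfg' CONNECT IDENTIFIER IS boston;
DGMGRL> ENABLE CONFIGURATION ALL;
DGMGRL> EDIT CONFIGURATION PREPARE DGPDB;    -- prompts for the DGPDB_INT password

-- Add protection for one PDB: salespdb at boston is the source,
-- a new salespdb at chicago is the target (still to be instantiated).
-- If TDE is used, the keystore password clause is also required (omitted here).
DGMGRL> ADD PLUGGABLE DATABASE salespdb AT chicago
          SOURCE IS salespdb AT boston
          PDBFILENAMECONVERT IS "'/u02/oradata/BOSTON/salespdb/','/u02/oradata/CHICAGO/salespdb/'";
```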
Datafile Management and TDE Key Handling
The ADD PLUGGABLE DATABASE command does not automatically copy datafiles. The manual process is as follows: identify the source PDB datafiles (e.g., via V$DATAFILE), then use RMAN BACKUP AS COPY... AUXILIARY FORMAT NEW to copy the files to the target CDB's storage location. The copied datafiles must then be renamed on the target PDB using ALTER DATABASE RENAME FILE so that they carry their correct names. If TDE is in use, the encryption keys for both the CDB root and the specific PDB must be exported from the source site and imported at the target site using ADMINISTER KEY MANAGEMENT EXPORT/IMPORT ENCRYPTION KEYS.
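A hedged sketch of these instantiation steps, again using the hypothetical boston/chicago CDBs and salespdb: file paths, secrets, and wallet passwords are placeholders, the rename direction depends on how the copies were named on the target, and the exact RMAN and key-management clauses should be checked against your release:

```
-- RMAN: connect to the source CDB as TARGET and the peer CDB as AUXILIARY,
-- then copy the source PDB's datafiles to the target CDB's storage.
$ rman TARGET sys@boston AUXILIARY sys@chicago
RMAN> BACKUP AS COPY PLUGGABLE DATABASE salespdb AUXILIARY FORMAT NEW;

-- SQL*Plus on the target CDB: point each target PDB datafile entry at its copy
-- (both names below are placeholders).
SQL> ALTER DATABASE RENAME FILE '/u02/oradata/CHICAGO/salespdb/users01.dbf'
       TO '/u02/oradata/CHICAGO/salespdb/users01_copy.dbf';

-- TDE: export the keys on the source (run for the CDB root and inside the PDB) ...
SQL> ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS
       WITH SECRET "transport_secret"
       TO '/tmp/salespdb_keys.exp'
       FORCE KEYSTORE IDENTIFIED BY "source_wallet_pwd";

-- ... copy the export file to the target host, then import on the target.
SQL> ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS
       WITH SECRET "transport_secret"
       FROM '/tmp/salespdb_keys.exp'
       FORCE KEYSTORE IDENTIFIED BY "target_wallet_pwd" WITH BACKUP;
```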
Basic DGMGRL Management Commands
Key commands include SHOW CONFIGURATION, SHOW DATABASE, SHOW ALL PLUGGABLE DATABASE AT <CDB>, SHOW PLUGGABLE DATABASE <PDB> AT <CDB>, VALIDATE PLUGGABLE DATABASE, EDIT PLUGGABLE DATABASE... SET STATE='APPLY-ON'/'APPLY-OFF', SWITCHOVER TO PLUGGABLE DATABASE, FAILOVER TO PLUGGABLE DATABASE, REMOVE PLUGGABLE DATABASE, and REMOVE CONFIGURATION. The SHOW and VALIDATE commands report configuration, role, and readiness information at the CDB and PDB level; EDIT PLUGGABLE DATABASE controls redo apply for an individual target PDB; SWITCHOVER and FAILOVER perform PDB-level role transitions; and the REMOVE commands delete a protected PDB from the configuration or dismantle the configuration itself.
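A short DGMGRL sketch of these commands against the hypothetical configuration used earlier (output omitted; the AT <CDB> qualifiers mirror the command forms listed above and should be verified for your release):

```
-- Overall broker configuration and per-CDB status.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW DATABASE chicago;

-- All protected PDBs known to one CDB, then one PDB in detail.
DGMGRL> SHOW ALL PLUGGABLE DATABASE AT chicago;
DGMGRL> SHOW PLUGGABLE DATABASE salespdb AT chicago;

-- Health check before a role transition, and apply-state control.
DGMGRL> VALIDATE PLUGGABLE DATABASE salespdb AT chicago;
DGMGRL> EDIT PLUGGABLE DATABASE salespdb AT chicago SET STATE='APPLY-OFF';

-- Removing a PDB from the configuration, or the whole configuration.
DGMGRL> REMOVE PLUGGABLE DATABASE salespdb AT chicago;
DGMGRL> REMOVE CONFIGURATION;
```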
The manual steps required for datafile copying and TDE key management represent a significant operational difference and potential point of failure compared to traditional DG physical standby creation, which often uses RMAN DUPLICATE or broker-driven instantiation. DGPDB setup requires manual RMAN copy and TDE key export/import. Traditional physical standby creation can often be fully automated by the broker or RMAN DUPLICATE. Manual steps increase the likelihood of human error (wrong file paths, incorrect passwords, missing files/keys). They also add significant time to the PDB standby instantiation process compared to more automated methods. Consequently, DGPDB setup is operationally more complex than traditional Data Guard and potentially more error-prone for the initial PDB instantiation phase. Organizations adopting DGPDB need robust, well-tested procedures and potentially automation scripts to handle PDB instantiation reliably and efficiently, especially when dealing with numerous PDBs. The lack of full automation here might be a barrier to adoption for some.
V. Analysis: Advantages and Limitations
Advantages
- PDB-Level Flexibility: The core benefit – independent management, switchover, failover, and testing for individual PDBs.
- Granular Control: Aligning HA/DR with specific application needs in a consolidated environment.
- Faster Role Transitions (per PDB): Switching or failing over a single PDB is significantly faster than transitioning an entire CDB.
- Isolated Operations: Maintenance or failure of one PDB does not force downtime or role changes for other PDBs in the same CDB.
- Workload Balancing: Ability to actively distribute PDBs between two sites.
- Real-Time Query (with ADG License): Standby PDBs can be opened read-only for query offloading while redo apply continues.
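For example, with the ADG option licensed on both sites, the target PDB can be opened read-only on the peer CDB for reporting, as sketched below with the hypothetical names used earlier. Whether redo apply must be briefly stopped and restarted around the open depends on the release, so the exact procedure should be confirmed in the documentation:

```
-- On the target CDB (chicago): open the standby PDB for read-only queries.
SQL> ALTER PLUGGABLE DATABASE salespdb OPEN READ ONLY;

-- Redo apply for the PDB remains managed through the broker.
DGMGRL> SHOW PLUGGABLE DATABASE salespdb AT chicago;
```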
Limitations and Unsupported Features
- Protection Modes: Only Maximum Performance (ASYNC redo transport) is supported. Maximum Availability and Maximum Protection modes are not supported. This implies potential data loss in some failover scenarios.
- Unsupported Topologies/Features: Far Sync instances, Snapshot Standby databases, and bystander standby databases are not supported. Rolling upgrades using DBMS_ROLLING are not supported. Data Guard broker external destinations and ZDLRA integration/backups are not supported. Application Containers are not supported. Only one target CDB per source CDB is allowed. Downstream capture for GoldenGate is not supported.
- GoldenGate Integration: Integration is limited, although 23ai allows a DGPDB source and GoldenGate per-PDB capture to coexist, with a seamless transition on role change.
Potential Complexity and Performance Considerations
- Increased Management Overhead: Managing two primary CDBs, individual PDB states, and potentially complex failover/switchover scenarios across multiple PDBs can increase administrative burden compared to managing a single primary/standby CDB pair.
- Performance Impact: While redo transport is similar, the per-PDB filtering and apply process on the target CDB might introduce performance overhead, especially with high redo rates or many PDBs. Careful network bandwidth assessment is needed, and resource contention within a target CDB hosting both primary and standby PDBs must be managed. General Data Guard performance tuning guidance remains relevant but needs to be adapted for the DGPDB apply model, and early releases of the feature have shown potential bugs and robustness concerns.
Table: Summary of DGPDB Advantages and Disadvantages
Advantages | Disadvantages / Limitations |
---|---|
PDB-level independent management & flexibility | Maximum Performance mode only (potential data loss) |
Granular HA/DR and maintenance operations | Maximum Availability/Protection modes not supported |
Faster per-PDB role transitions | Far Sync, Snapshot Standby not supported |
Isolated failover for “sick PDB” | Rolling upgrade with DBMS_ROLLING not supported |
Easier per-application DR testing | ZDLRA integration/backups not supported |
Workload balancing capability across sites | Application Containers not supported |
Real-Time Query (with ADG license) | Limited GoldenGate integration |
Beneficial in highly consolidated environments | Increased management complexity (two primary CDBs) |
| | Potential performance overhead in per-PDB apply process |
| | Manual steps for PDB instantiation (datafiles/TDE keys) |
| | Only one target CDB per source CDB |
VI. Operational Considerations
PDB-Level Switchover and Failover Procedures
PDB-level role transitions are performed with the DGMGRL commands SWITCHOVER TO PLUGGABLE DATABASE and FAILOVER TO PLUGGABLE DATABASE. For a switchover, the broker validates the prerequisites (for example, that the target PDB is enabled and current on redo apply) and then coordinates the role exchange between the source and target PDBs. Two failover variants exist: complete failover (the default), which incurs no data loss if the source CDB is still available, and immediate failover, which accepts potential data loss and is used when the source CDB is unavailable. After a failover, the old source PDB must be reinstated manually, for example by re-enabling redo apply with EDIT PLUGGABLE DATABASE... SET STATE=APPLY-ON.
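A condensed DGMGRL sketch of these role transitions, using the same hypothetical names; the IMMEDIATE keyword on the PDB-level FAILOVER command is shown as an assumption and should be confirmed against the command reference for your release:

```
-- Planned role reversal: salespdb becomes primary at chicago,
-- the old source PDB at boston becomes its target.
DGMGRL> SWITCHOVER TO PLUGGABLE DATABASE salespdb AT chicago;

-- Unplanned: complete failover by default; IMMEDIATE accepts data loss
-- when the source CDB is unreachable.
DGMGRL> FAILOVER TO PLUGGABLE DATABASE salespdb AT chicago;
DGMGRL> FAILOVER TO PLUGGABLE DATABASE salespdb AT chicago IMMEDIATE;

-- After a failover, reinstate the old source PDB manually by
-- restarting redo apply towards it.
DGMGRL> EDIT PLUGGABLE DATABASE salespdb AT boston SET STATE='APPLY-ON';
```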
Monitoring DGPDB Environments
The DGMGRL SHOW commands (SHOW CONFIGURATION, SHOW ALL PLUGGABLE DATABASE AT <CDB>, SHOW PLUGGABLE DATABASE <PDB> AT <CDB>) report the status and roles of the configurations and of individual PDBs. The standard V$ views for monitoring Data Guard status remain relevant, but their output must be interpreted per PDB, since each target PDB has its own apply process and therefore its own apply lag.
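A minimal monitoring sketch combining the broker commands above with a V$ query; the DGMGRL commands come from this document, while the V$DATAGUARD_PROCESS query is a general Data Guard view offered as one assumed way to observe apply-related processes on the target CDB:

```
-- Broker view of configuration health and per-PDB roles and lag.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW ALL PLUGGABLE DATABASE AT chicago;
DGMGRL> SHOW PLUGGABLE DATABASE salespdb AT chicago;

-- On the target CDB: list Data Guard background processes and their
-- current activity; interpret apply-related rows per target PDB.
SQL> SELECT name, role, action
     FROM   v$dataguard_process
     ORDER  BY name;
```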
Key Management Differences from Traditional Data Guard
Instead of a primary/standby CDB pair, two primary CDBs must be managed, and apply states (APPLY-ON/APPLY-OFF) must be tracked at the PDB level. PDB instantiation involves manual steps (datafile copy, TDE key synchronization). Different PDBs within the same CDB may have different DR targets, or none at all, which requires careful per-PDB monitoring.
The operational model for DGPDB requires a shift in DBA mindset from CDB-centric management to PDB-centric HA/DR management. Commands, states, and role transitions are PDB-specific in DGPDB. Traditional DG operations are CDB-focused. DBAs accustomed to CDB-level Data Guard management need to adapt runbooks, monitoring scripts, and troubleshooting procedures to account for individual PDB states, roles, and potential issues. A failure or maintenance operation might involve checking and manipulating multiple PDBs individually within the broker configuration, rather than issuing a single command for the entire CDB. Monitoring must track per-PDB apply lag and status. This increases potential operational complexity and requires more sophisticated monitoring and automation to manage effectively at scale. DBAs need specific training and updated procedures for DGPDB environments. The risk of misconfiguration or operational error might be higher initially due to the increased granularity.
VII. Conclusion and Recommendations
Summarizing DGPDB’s Role in Multitenant Environments
DGPDB is a specialized Data Guard architecture that provides PDB-level flexibility within the Multitenant framework. It is best positioned as a complement to, not a complete replacement for, traditional Data Guard.
Guidance: Choosing Between DGPDB and Traditional Data Guard
The analysis above leads to the following recommendations:
- Choose Traditional Data Guard: For critical databases with the strictest data protection requirements (RPO = 0, requiring Maximum Availability or Maximum Protection), for complex topologies (multiple standbys, Far Sync), and where CDB-level consistency in DR is paramount.
- Choose DGPDB: For highly consolidated environments with varying PDB SLAs/maintenance needs, where PDB-level operational flexibility (maintenance, testing, workload balancing) is the primary driver, and the Maximum Performance protection mode (potential minimal data loss) is acceptable.
Implementation Best Practices
- Thoroughly test configuration and role transition procedures in a non-production environment.
- Develop clear operational runbooks for PDB switchover, failover, and reinstatement.
- Implement robust monitoring for PDB-level apply lag and status.
- Carefully manage TDE keys and wallet synchronization.
- Consider scripting/automating the manual PDB instantiation steps.
- Ensure adequate network bandwidth and target CDB resources for redo transport and per-PDB apply processing.
- Stay current with patches and releases, as the feature is relatively new and evolving.