Search This Blog

Wednesday, 4 December 2024

ORA-00494: enqueue [CF] held for too long (more than 900 seconds)

ORA-00494: enqueue [CF] held for too long (more than 900 seconds) by 'inst 1, osid 4460'

Error:
The following error caused the Oracle instance to crash. The solutions below were tried against it.

Errors in file E:\ORACLE\diag\rdbms\icfisc\icfisc\trace\icfisc_arc2_4388.trc (incident=482225):
ORA-00494: enqueue [CF] held for too long (more than 900 seconds) by 'inst 1, osid 4460'
Incident details in: E:\ORACLE\diag\rdbms\icfisc\icfisc\incident\incdir_482225\icfisc_arc2_4388_i482225.trc
2019-02-03T20:24:43.252616-06:00
Killing enqueue blocker (pid=4460) on resource CF-00000000-00000000-00000000-00000000 by (pid=4388) by killing session 190.32607
KILL SESSION for sid=(190, 32607):
Reason = RAC enqueue blocker]
2019-02-03T20:24:43.721432-06:00
Mode = KILL SOFT -/-/-
Requestor = ARC2 (orapid = 38, ospid = 4388, inst = 1)
Owner = Process: LGWR (orapid = 17, ospid = 4460)
Result = ORA-29
Killing enqueue blocker (pid=4460) on resource CF-00000000-00000000-00000000-00000000 by (pid=4388)
by terminating the process
2019-02-03T20:32:10.146244-06:00
Errors in file E:\ORACLE\diag\rdbms\icfisc\icfisc\trace\icfisc_vkrm_4784.trc:
ORA-27300:OS system dependent operation:WaitForSingleObject failed with status:0
ORA-27301: OS failure message: The operation completed successfully.
ORA-27302: failure occurred at: sssxcpttcs5
2019-02-03T20:34:39.313336-06:00
Errors in file E:\ORACLE\diag\rdbms\icfisc\icfisc\trace\icfisc_pman_4616.trc:

Cause
The cause of the error is that a process did not release the CF (controlfile) enqueue within the maximum allowed time. ORA-00494 generally appears during periods of high system resource utilization, when the instance becomes sluggish or unresponsive due to overloaded disk I/O, CPU, or memory.

When any process holds the CF enqueue for a long time, the other processes that need to perform a controlfile transaction wait to acquire the enqueue. Holding the enqueue for too long can lead to a database hang, so there is a timeout on holding the controlfile enqueue: 900 seconds (15 minutes). If the holder exceeds this timeout, a waiting process normally kills it, first by killing its session and, if that fails, by terminating the process. The error is then logged in the alert log.
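While the hang is in progress, the CF enqueue holder and waiters can usually be identified from V$LOCK (a diagnostic sketch; the join to V$SESSION is only there to show the program name):

```sql
-- Who holds (BLOCK=1) and who waits for (REQUEST>0) the controlfile enqueue.
SELECT l.sid, s.program, l.lmode, l.request, l.block
FROM   v$lock l, v$session s
WHERE  l.sid = s.sid
AND    l.type = 'CF'
AND    (l.request > 0 OR l.block = 1)
ORDER BY l.block DESC;
```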

Solution
The following hidden (underscore) parameters may help in this situation. As with any underscore parameter, change them only under the guidance of Oracle Support.

Check the current values of the hidden parameters before changing anything:

COLUMN ksppinm FORMAT A35
COLUMN ksppstvl FORMAT A10

SELECT ksppinm, ksppstvl
FROM   x$ksppi a, x$ksppsv b
WHERE  a.indx = b.indx
AND    ksppinm IN ('_kill_controlfile_enqueue_blocker',
                   '_kill_enqueue_blocker',
                   '_controlfile_enqueue_timeout')
ORDER BY ksppinm;

_kill_controlfile_enqueue_blocker = { TRUE | FALSE }
Workaround: Disable the parameter

alter system set "_kill_controlfile_enqueue_blocker"=false;

Note:
TRUE: enables this mechanism; the blocker process on the CF enqueue is killed (default value).
FALSE: disables this mechanism; no blocker process on the CF enqueue will be killed.

_kill_enqueue_blocker = { 0 | 1 | 2 | 3 }
Workaround: set this parameter to 1

alter system set "_kill_enqueue_blocker"=1 scope=spfile;

0: disables this mechanism; no foreground or background blocker process is killed.
1: enables this mechanism; only a foreground blocker process is killed, background processes are not affected.
2: enables this mechanism; only a background blocker process is killed.
3: enables this mechanism; both foreground and background blocker processes are killed.
Note: the default value is 3 for 10g and 2 for 11g databases; setting it to 1 protects background processes such as LGWR from being killed.

_controlfile_enqueue_timeout = { INTEGER }
Workaround: Increase value

ALTER SYSTEM SET "_controlfile_enqueue_timeout"=1800;

Note:
900 is the default value. Increasing it to 1800 gives the holder more time before the kill mechanism triggers; it does not fix the underlying I/O or CPU bottleneck.

To revert to the defaults later, reset or remove these parameters:

alter system reset "_kill_controlfile_enqueue_blocker";
alter system reset "_kill_enqueue_blocker";

Friday, 15 November 2024

ORA-39358: Export Dump File Version 12.1.0.2.0 Not Compatible With Target Version 11.2.0.4.0 (Doc ID 2121317.1)

 



Applies to:

Oracle Database - Enterprise Edition - Version 12.1.0.2 to 12.1.0.2 [Release 12.1]
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Information in this document applies to any platform.

Symptoms

After upgrading a database from 11.2.0.4 to 12.1.0.2, the following errors are reported when executing a Data Pump import:

impdp <user>/<password> tables=<OWNER>.<TABLE NAME>

 

Import: Release 12.1.0.2.0 - Production on Mon Mar 28 09:52:37 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Advanced Analytics
and Real Application Testing options
ORA-39002: invalid operation
ORA-39358: Export dump file version 12.1.0.2.0 not compatible with target version 11.2.0.4.0

 

Changes

 The importing database was migrated from 11.2.0.4 to 12.1.0.2.0

Cause

The issue is because the COMPATIBLE parameter for the 12c importing database is still 11.2.0.4 (value before the upgrade).
 
It was confirmed based on the information provided:

1. Version of the exporting database 12c (12.1.0.2.0), compatible 12c (12.1.0.2.0)
2. Version of the importing database 12c (12.1.0.2.0), compatible 11g (11.2.0.4.0)
 

Solution

In this case, there are two possible solutions:

  • Use the VERSION parameter during the export process. It needs to be set equal to the COMPATIBLE parameter value in the target database where the import is going to be executed.

    1. Execute the export using parameter version equal to 11.2.0.4:
      $ expdp <user>/<password> tables=<OWNER>.<TABLE NAME> version=11.2.0.4
        
    2. Then, execute the import process:
      $ impdp <user>/<password> tables=<OWNER>.<TABLE NAME>

    Using this option, the dump file version will match the COMPATIBLE parameter on the target system (11.2.0.4).

    - OR -

  • Set compatible to 12.1.0.2.0 on target system:
    sql> alter system set compatible='12.1.0.2.0' scope=spfile;
    sql> shutdown immediate;
    sql> startup;
    sql> show parameter compatible;
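Before choosing either option, it helps to confirm the COMPATIBLE setting on both source and target, since the dump file is written for the source's COMPATIBLE value unless VERSION is specified:

```sql
-- Run on both the source and the target database.
SELECT name, value
FROM   v$parameter
WHERE  name = 'compatible';
```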

Wednesday, 9 October 2024

RFS[4]: No standby redo logfiles created for T-1 dataguard

 

RFS[4]: No standby redo logfiles created for T-1

In the standby environment of a Data Guard configuration, the following errors appear in the alert log:

RFS[4]: No standby redo logfiles created for T-1
RFS[4]: Opened log for T-1.S-57586 dbid 618415567 branch 953210132

The message says that no standby redo log file exists for thread 1 (T-1). Check the thread numbers on the primary and standby databases using the THREAD# column of the following views:

1. Check Primary Redo Thread Number:

SELECT thread#, group#, sequence#, bytes, archived ,status FROM v$log ORDER BY thread#, group#;

THREAD#  GROUP#  SEQUENCE#  BYTES      ARC  STATUS
-------  ------  ---------  ---------  ---  --------
      1       1      57586  209715200  NO   CURRENT
      1       2      57584  209715200  YES  INACTIVE
      1       3      57585  209715200  YES  INACTIVE

2. Check the thread number of the standby redo logs configured on the DR server:

SELECT thread#, group#, sequence#, bytes, archived, status FROM v$standby_log order by thread#, group#;

THREAD#  GROUP#  SEQUENCE#  BYTES     ARC  STATUS
-------  ------  ---------  --------  ---  ----------
      0       4          0  52428800  YES  UNASSIGNED
      0       5          0  52428800  YES  UNASSIGNED
      0       6          0  52428800  YES  UNASSIGNED
      0       7          0  52428800  YES  UNASSIGNED
      0       8          0  52428800  YES  UNASSIGNED
      0       9          0  52428800  YES  UNASSIGNED

Note: the thread numbers differ (the standby redo logs show THREAD# 0), so we need to drop and recreate the standby redo logs, this time specifying the THREAD clause.
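A quick way to spot the mismatch is to compare the redo configuration per thread on the standby (a diagnostic sketch):

```sql
-- Standby redo logs should exist for every thread that has
-- online redo logs on the primary.
SELECT 'online'  AS log_type, thread#, COUNT(*) AS groups
FROM   v$log GROUP BY thread#
UNION ALL
SELECT 'standby' AS log_type, thread#, COUNT(*) AS groups
FROM   v$standby_log GROUP BY thread#;
```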

3. Stop the Data Guard recovery process on the standby database:

-- for standby db which is under recovery, recovery needs to be stopped first
alter database recover managed standby database cancel;

4. Drop the existing standby redo log files:
Note: the rule of thumb for standby redo logs is N + 1 groups per thread, where N is the number of online redo log groups on the primary. The primary here has 3 groups, so we create 4 standby redo log groups.

ALTER DATABASE DROP STANDBY LOGFILE GROUP 4;
ALTER DATABASE DROP STANDBY LOGFILE GROUP 5;
ALTER DATABASE DROP STANDBY LOGFILE GROUP 6;
ALTER DATABASE DROP STANDBY LOGFILE GROUP 7;
ALTER DATABASE DROP STANDBY LOGFILE GROUP 8;
ALTER DATABASE DROP STANDBY LOGFILE GROUP 9;
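When there are many groups, the DROP statements can be generated from V$STANDBY_LOG instead of typing them (a convenience sketch):

```sql
-- Generate one DROP statement per existing standby redo log group.
SELECT 'ALTER DATABASE DROP STANDBY LOGFILE GROUP ' || group# || ';' AS cmd
FROM   v$standby_log
ORDER BY group#;
```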

5. Create the new standby log files with the THREAD clause:

-- If db_create_file_dest is set (OMF), the file name can be omitted:
-- alter database add standby logfile thread 1 group 4 size 200m;

alter database add standby logfile THREAD 1 group 4 ('D:\ORACLEXE\STANDBYREDO04.log') SIZE 200M;
alter database add standby logfile THREAD 1 group 5 ('D:\ORACLEXE\STANDBYREDO05.log') SIZE 200M;
alter database add standby logfile THREAD 1 group 6 ('D:\ORACLEXE\STANDBYREDO06.log') SIZE 200M;
alter database add standby logfile THREAD 1 group 7 ('D:\ORACLEXE\STANDBYREDO07.log') SIZE 200M;

-- Example with two members:
-- alter database add standby logfile THREAD 1 group 5 ('D:\ORACLEXE\STANDBYREDO01A.log','D:\ORACLEXE\STANDBYREDO01B.log') SIZE 200M;

-- Example for ASM:
-- alter database add standby logfile THREAD 1 group 7 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 200M;

Note: in my case the standby redo size (50M) differed from the primary redo size (200M), so I also corrected that while creating the new standby redo logs.
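To catch a size mismatch up front, the online and standby redo log sizes can be compared in one query (a sketch; standby redo logs must be at least as large as the largest online redo log):

```sql
-- Compare online vs standby redo log sizes in MB.
SELECT 'online'  AS log_type, group#, bytes/1024/1024 AS mb FROM v$log
UNION ALL
SELECT 'standby' AS log_type, group#, bytes/1024/1024 AS mb FROM v$standby_log
ORDER BY 1, 2;
```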

6. Now verify the thread numbers and the standby redo logs:

SELECT thread#, group#, sequence#, bytes, archived, status FROM v$standby_log order by thread#, group#;

7. Restart the Data Guard recovery:

alter database recover managed standby database disconnect from session;

Sunday, 29 September 2024

Database Restore In Different Server With Different File Location

 

##########################################################################

RMAN Backup Restore on Different Server with Different file Location

##########################################################################

BACKUP SCRIPT:

#!/bin/bash
export ORACLE_SID=t1sipmdm
export ORACLE_HOME=/u01/oracle/product/12.1.0/db_1
export PATH=$PATH:$ORACLE_HOME/bin

cd /exp_backup/rman_28sep2024
mkdir `date +%d%b%Y`
export todaydir=`date +%d%b%Y`
export timestamp=`date +%d%b%Y_%H%M%S`
export logfile=/exp_backup/rman_28sep2024/level0_$timestamp.out
export backup_location=/exp_backup/rman_28sep2024/$todaydir

rman target / nocatalog <<EOF
spool log to '$logfile' append
run
{
allocate channel ch1 type disk;
allocate channel ch2 type disk;
crosscheck archivelog all;
BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 0 FORMAT "$backup_location/${ORACLE_SID}_l0_%U_%T" DATABASE PLUS ARCHIVELOG;
BACKUP CURRENT CONTROLFILE FORMAT "$backup_location/controlfile_%t_dbid_%I.ctl";
release channel ch1;
release channel ch2;
}
spool log off;
exit;
EOF
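After the run it is worth scanning the spooled RMAN log for errors before trusting the backup. A minimal sketch (the function name and log path convention are my own, not part of the script above):

```shell
# Scan an RMAN log file for ORA-/RMAN- errors; returns non-zero if any found.
check_rman_log() {
  if grep -Eq 'ORA-[0-9]+|RMAN-[0-9]+' "$1"; then
    echo "errors found in $1"
    return 1
  fi
  echo "no errors in $1"
  return 0
}
```

For example: `check_rman_log /exp_backup/rman_28sep2024/level0_28Sep2024_120000.out`.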

***********************************************************************************

Part 1: With SET NEWNAME

 

Old data/ctl location: /u02/testdb

New data/ctl Location: /data/oradata

old archive location : /u01/app/oracle/arch

new Archive location : /data/arch

 

We Need Below RMAN Backups:

 

a. DB Backup

b. Archive Backup

c. Spfile and CTL file

 

Start Dummy DB from RMAN

--------------------------

 

startup nomount force;

 

Restore Spfile

-------------------

 

restore spfile from '/data/fra/spfile_431leuk3_1_1';

 

Verify Some Important Parameters and Locations

-----------------------------------------------

 

shut abort;
startup nomount;

sho parameter audit_file_dest
sho parameter control_files        --- modify if the location differs
sho parameter log_archive_dest_    --- modify in case of a DR setup
sho parameter log_archive_dest_1   --- modify if the location differs
sho parameter convert
sho parameter db_create_file_dest

 

Update Locations and Relevant Parameters

--------------------------------------------

 

alter system set log_archive_dest_state_2=defer;
alter system set log_archive_dest_1='LOCATION=/data/arch';
alter system set control_files='/data/oradata/control01.ctl','/data/oradata/control02.ctl' scope=spfile;

########### OR ##########


1) Create a pfile from the spfile on the SOURCE database and copy it to the target server.
2) Modify the control file location, archive log path, audit file path, and DB name as required for the new database.


Bounce DB and Restore CTL

---------------------------

 

shut immediate;

startup nomount;

 

restore controlfile from '/data/fra/ctl_451leuk6_1_1';

alter database mount;

 

Check Old File Locations

----------------------------

 

report schema;

 

Restore RMAN Backup

---------------------

 

catalog backuppiece '/data/fra/DB_BKP_3v1leuh3_1_1';

catalog backuppiece '/data/fra/Arc_BKP_411leujv_1_1';

 

crosscheck backup;

delete noprompt expired backup;

 

a. Generate the new file location script:

 

set echo off pages 0 feed off

spool /home/oracle/file.log
select 'set newname for datafile '||file#||' to '''||name||''';'
from v$datafile
union all
select 'set newname for tempfile '||file#||' to '''||name||''';'
from v$tempfile;
spool off

spool /home/oracle/switchfile.log
select 'switch datafile '||file#||' ;'
from v$datafile
union all
select 'switch tempfile '||file#||' ;'
from v$tempfile;
spool off

 

cat /home/oracle/file.log

cat /home/oracle/switchfile.log

 

b. Modify the file with the new location:

 

vi /home/oracle/file.log

 

:%s#/u02/testdb#/data/oradata#g
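The same substitution can be done non-interactively with sed (a sketch; the sample line mirrors what the spool script above produces):

```shell
# Rewrite the old datafile location to the new one in a generated line.
line="set newname for datafile 1 to '/u02/testdb/system01.dbf';"
echo "$line" | sed 's|/u02/testdb|/data/oradata|g'
# prints: set newname for datafile 1 to '/data/oradata/system01.dbf';
# To edit the spool file in place instead:
#   sed -i 's|/u02/testdb|/data/oradata|g' /home/oracle/file.log
```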


c. Create the RMAN script:

run
{
ALLOCATE CHANNEL C1 TYPE DISK;
ALLOCATE CHANNEL C2 TYPE DISK;
set newname for datafile 1 to '/data/oradata/system01.dbf';   # new datafile location
set newname for datafile 3 to '/data/oradata/sysaux01.dbf';
set newname for datafile 4 to '/data/oradata/undotbs01.dbf';
set newname for datafile 7 to '/data/oradata/users01.dbf';
set newname for tempfile 1 to '/data/oradata/temp01.dbf';
set until scn = 16573413867832;
restore database;
switch datafile 1;
switch datafile 3;
switch datafile 4;
switch datafile 7;
switch tempfile 1;
recover database;
}

 

Confirm TEMPFile Location and Update if needed

-------------------------------------------------

 

select name from v$tempfile;

 

select 'alter database rename file '''||name||
       ''' to '||'''/data/oradata/'||substr(name,15,20)||''';'
from v$tempfile;

alter database rename file '/u02/testdb/temp01.dbf' to '/data/oradata/temp01.dbf';

 

Update logfile locations

---------------------------

 

select member from v$logfile;

 

alter database drop standby logfile group 11;

alter database drop standby logfile group 12;

alter database drop standby logfile group 13;

 

select 'alter database rename file '''||member||
       ''' to '||'''/data/oradata/'||substr(member,15,20)||''';'
from v$logfile;

or

select 'alter database rename file '''||member||''' to '||
       '''/data/oradata/'||substr(member,instr(member,'/',-1)+1,
       instr(substr(member,instr(member,'/',-1)+1),'.')-1)||'.log'' ;' file_name
from v$logfile;
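The instr/substr expression above only extracts the file name from the full path and prefixes the new directory. The same idea in shell, with a sample member path (not read from the real database):

```shell
# Build the new path by keeping only the file name from the old one.
member="/u02/testdb/redo01.log"                    # sample value
newname="/data/oradata/$(basename "$member")"
echo "alter database rename file '$member' to '$newname';"
```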

 

select member from v$logfile;

 

 

Open DB

---------------

 

alter database open resetlogs;

 

 

Confirm DB and Datafiles Status

-----------------------------------

 

select name,open_mode from v$database;

select name from v$datafile;

select name from v$tempfile;

select member from v$logfile;

 

Done !!!!

How to rename a data file of an Oracle database or move it to a different location


Goal: the data file gets a different name and/or a different location (e.g. another partition or directory).


Procedure:

(1)

Shut down the Ex Libris application.

(2)

Log on to the server where the Oracle database of the Ex Libris application is running as the UNIX user that owns the Oracle software (e.g. UNIX user oracle).

(3)

Shut down the Oracle database by entering the following commands as UNIX user oracle:


$ echo $ORACLE_SID

(make sure that the value of this variable specifies the correct Oracle database)

$ sqlplus / as sysdba

SQL> shutdown immediate

SQL> exit

(4)

Copy the data file to the new name / location at the operating system level (as UNIX user oracle).

(5)

Start the database and bring it into status MOUNTED by entering the following commands as UNIX user oracle:

$ sqlplus / as sysdba

SQL> startup mount

(6)

Rename the data file within the Oracle database by entering the following SQL statement:


SQL> ALTER DATABASE RENAME FILE

'/<full_path_of_old_location>/<old_datafile_name.dbf>'

TO

'/<full_path_of_new_location>/<new_datafile_name.dbf>';

(7)

Bring the database into status OPEN by entering the following SQL statement:


SQL> ALTER DATABASE OPEN;

(8)

Verify that the data file is at the correct location and has the correct name by entering the following SQL statement:


SQL> SELECT * FROM V$DBFILE;

(9)

Start the Ex Libris application.

(10)

Remove the data file from the old location at the operating system level.

Datafile Backup By RMAN


run {
allocate channel d1 type disk;
allocate channel d2 type disk;
allocate channel d3 type disk;
allocate channel d4 type disk;
backup format
'/exp_backup/rman_28sep2024/28Sep2024/datafile_bckup_%d_%t_%U.bus'
datafile 115,114,110,94,79,60,41,40,37,26;
}
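The datafile numbers passed to BACKUP ... DATAFILE can be looked up first (a sketch; the join maps each file to its tablespace):

```sql
-- List datafile numbers with their tablespace and path.
SELECT f.file#, t.name AS tablespace, f.name AS file_name
FROM   v$datafile f, v$tablespace t
WHERE  f.ts# = t.ts#
ORDER BY f.file#;
```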


Tuesday, 10 September 2024

Upgrade the Database Time Zone File Using the DBMS_DST Package

 

Check Current Time Zone Version

The V$TIMEZONE_FILE view displays the zone file version being used by the database.

SELECT * FROM v$timezone_file;

FILENAME                VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_26.dat              26          0

SQL>

This can also be queried from the REGISTRY$DATABASE table. It's worth reading Doc ID 1255474.1 before using this though.

SELECT tz_version FROM registry$database;

TZ_VERSION
----------
        26

SQL>

It can also be queried from the DATABASE_PROPERTIES view.

COLUMN property_name FORMAT A30
COLUMN property_value FORMAT A20

SELECT property_name, property_value
FROM   database_properties
WHERE  property_name LIKE 'DST_%'
ORDER BY property_name;

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ --------------------
DST_PRIMARY_TT_VERSION         26
DST_SECONDARY_TT_VERSION       0
DST_UPGRADE_STATE              NONE

SQL>

The latest available version of the timezone file is displayed using the GET_LATEST_TIMEZONE_VERSION function in the DBMS_DST package.

SELECT DBMS_DST.get_latest_timezone_version
FROM   dual;

GET_LATEST_TIMEZONE_VERSION
---------------------------
                         31

SQL>

We can see there is a difference between the current and latest version of the time zone file, so we should consider performing an upgrade.
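Both values can be fetched in a single query (a convenience sketch combining the two lookups shown above):

```sql
-- Current vs latest available time zone file version.
SELECT (SELECT version FROM v$timezone_file) AS current_version,
       DBMS_DST.get_latest_timezone_version  AS latest_version
FROM   dual;
```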

Prepare for the Upgrade

The prepare phase is where you check the impact of a time zone file upgrade, including the tables that will be affected by the upgrade. This phase is optional, but it makes sense to at least check what is going to happen.

Use the BEGIN_PREPARE procedure, passing in the file version you want to upgrade to. In this case we are selecting the latest version.

DECLARE
  l_tz_version PLS_INTEGER;
BEGIN
  l_tz_version := DBMS_DST.get_latest_timezone_version;

  DBMS_OUTPUT.put_line('l_tz_version=' || l_tz_version);
  DBMS_DST.begin_prepare(l_tz_version);
END;
/

We can now check the upgrade we are going to attempt. Notice the DST_SECONDARY_TT_VERSION column is now populated.

COLUMN property_name FORMAT A30
COLUMN property_value FORMAT A20

SELECT property_name, property_value
FROM   database_properties
WHERE  property_name LIKE 'DST_%'
ORDER BY property_name;

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ --------------------
DST_PRIMARY_TT_VERSION         26
DST_SECONDARY_TT_VERSION       31
DST_UPGRADE_STATE              PREPARE

SQL>

Empty the default tables that hold the affected tables list and errors. If you are using custom tables, created with the CREATE_AFFECTED_TABLE, CREATE_ERROR_TABLE, and CREATE_TRIGGER_TABLE procedures, then empty those instead.

TRUNCATE TABLE sys.dst$affected_tables;
TRUNCATE TABLE sys.dst$error_table;

Find tables affected by the upgrade. Depending on your use of TIMESTAMP WITH TIME ZONE columns, you might not have any.

EXEC DBMS_DST.find_affected_tables;

Check the results of the call.

SELECT * FROM sys.dst$affected_tables;
SELECT * FROM sys.dst$error_table;

When you've identified the affected tables and determined you are happy to continue, you can end the prepare phase.

EXEC DBMS_DST.end_prepare;

Upgrade the Time Zone File (non-CDB)

Individual tables or schemas can be upgraded, but in this example we use UPGRADE_DATABASE procedure to upgrade all affected tables in the database.

Put the database into upgrade mode. This is not necessary from 21c onward. See the note at the bottom of this article.

SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;

Begin the upgrade to the latest version.

SET SERVEROUTPUT ON
DECLARE
  l_tz_version PLS_INTEGER;
BEGIN
  SELECT DBMS_DST.get_latest_timezone_version
  INTO   l_tz_version
  FROM   dual;

  DBMS_OUTPUT.put_line('l_tz_version=' || l_tz_version);
  DBMS_DST.begin_upgrade(l_tz_version);
END;
/

Restart the database. This is not necessary from 21c onward. See the note at the bottom of this article.

SHUTDOWN IMMEDIATE;
STARTUP;

Perform the upgrade of the database time zone file.

SET SERVEROUTPUT ON
DECLARE
  l_failures   PLS_INTEGER;
BEGIN
  DBMS_DST.upgrade_database(l_failures);
  DBMS_OUTPUT.put_line('DBMS_DST.upgrade_database : l_failures=' || l_failures);
  DBMS_DST.end_upgrade(l_failures);
  DBMS_OUTPUT.put_line('DBMS_DST.end_upgrade : l_failures=' || l_failures);
END;
/

The {CDB|DBA|ALL|USER}_TSTZ_TABLES views display the tables that are processed by the time zone file upgrade, and their current upgrade status. The following examples show how they could be used for a CDB and a non-CDB database.

-- CDB
COLUMN owner FORMAT A30
COLUMN table_name FORMAT A30

SELECT con_id,
       owner,
       table_name,
       upgrade_in_progress
FROM   cdb_tstz_tables
ORDER BY 1,2,3;

-- Non-CDB
COLUMN owner FORMAT A30
COLUMN table_name FORMAT A30

SELECT owner,
       table_name,
       upgrade_in_progress
FROM   dba_tstz_tables
ORDER BY 1,2;

Once the upgrade is complete, check the time zone file version being used.

SELECT * FROM v$timezone_file;

FILENAME                VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_31.dat              31          0

SQL>


COLUMN property_name FORMAT A30
COLUMN property_value FORMAT A20

SELECT property_name, property_value
FROM   database_properties
WHERE  property_name LIKE 'DST_%'
ORDER BY property_name;

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ --------------------
DST_PRIMARY_TT_VERSION         31
DST_SECONDARY_TT_VERSION       0
DST_UPGRADE_STATE              NONE

SQL>

Upgrade the Time Zone File (Multitenant)

In a multitenant environment, the time zone file upgrade must be performed in all containers. If you follow the example of the non-CDB instance, it will only be upgraded in the root container. What's more, any new PDBs created from the seed will also use the old time zone file. To solve this we run the upgrade in the root container and all PDBs using the "catcon.pl" Perl script.

 This example performs all the steps in all containers, without doing any investigation, and assumes everything will work with no additional intervention. You may not want to make this assumption on a database you care about. You can perform the manual steps described above, including the investigation during the prepare phase and dealing with upgrade failures, for each container individually.

Create a script called "/tmp/upgrade_tzf.sql" with the following contents. The shutdown and startup operations are not necessary from 21c onward. See the note at the bottom of this article.

-- Pre 21c only.
SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;

SET SERVEROUTPUT ON
DECLARE
  l_tz_version PLS_INTEGER;
BEGIN
  SELECT DBMS_DST.get_latest_timezone_version
  INTO   l_tz_version
  FROM   dual;

  DBMS_OUTPUT.put_line('l_tz_version=' || l_tz_version);
  DBMS_DST.begin_upgrade(l_tz_version);
END;
/

-- Pre-21c only.
SHUTDOWN IMMEDIATE;
STARTUP;

SET SERVEROUTPUT ON
DECLARE
  l_failures   PLS_INTEGER;
BEGIN
  DBMS_DST.upgrade_database(l_failures);
  DBMS_OUTPUT.put_line('DBMS_DST.upgrade_database : l_failures=' || l_failures);
  DBMS_DST.end_upgrade(l_failures);
  DBMS_OUTPUT.put_line('DBMS_DST.end_upgrade : l_failures=' || l_failures);
END;
/

Now we run the script in all containers using the "catcon.pl" Perl script. The "-n 1" flag is important. The script does a shutdown and startup. If the script is run in parallel, some of the PDBs will be processed at the same time as the root container, so their upgrade will be interrupted by the root container shutdown/startup. Running with a single worker keeps things sequential.

$ORACLE_HOME/perl/bin/perl \
    -I$ORACLE_HOME/perl/lib \
    -I$ORACLE_HOME/rdbms/admin \
    $ORACLE_HOME/rdbms/admin/catcon.pl \
    -n 1 \
    -l /tmp/ \
    -b upgrade_tzf \
    /tmp/upgrade_tzf.sql

An alternative is to do the root container on its own, then do all the other PDBs in parallel by excluding the root container.

# Root only.
$ORACLE_HOME/perl/bin/perl \
    -I$ORACLE_HOME/perl/lib \
    -I$ORACLE_HOME/rdbms/admin \
    $ORACLE_HOME/rdbms/admin/catcon.pl \
    -l /tmp/ \
    -b upgrade_tzf \
    -c 'CDB$ROOT' \
    /tmp/upgrade_tzf.sql

# Root excluded.
$ORACLE_HOME/perl/bin/perl \
    -I$ORACLE_HOME/perl/lib \
    -I$ORACLE_HOME/rdbms/admin \
    $ORACLE_HOME/rdbms/admin/catcon.pl \
    -l /tmp/ \
    -b upgrade_tzf \
    -C 'CDB$ROOT' \
    /tmp/upgrade_tzf.sql

Check the "upgrade_tzf*.log" files for the result of the upgrade in each container.

Getting the Latest Time Zone File

You can find the latest time zone file for your database in the MOS note on DST patches.

The note is quite long and confusing, but the patches are listed in the section "C.1.d) DST patches list". Each link takes you to a further note describing the patch and providing the relevant download links. The patches are applied using OPatch in the normal way, but this only makes the new time zone file available. Once the patch has been applied you can go through the process described above to apply it to a database.

21c Update

From Oracle 21c onward we can issue the following command to allow time zone files to be updated without switching the database to upgrade mode. This allows the time zone file to be upgraded without any downtime.

alter system set timezone_version_upgrade_online=true;

With this in place, follow the procedure above, but remove all database shutdown and startup operations.


Hope this helps. Regards Tim...

Open standby database in Readonly state in Active dataguard

 

Open standby database in Readonly state in Active dataguard

Following are the steps involved to open the DR environment in Read-only mode.

1. Check the status of dataguard.

select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------
ORCL      MOUNTED

2. Cancel the media recovery on Standby database:

alter database recover managed standby database cancel;

3. Open the database in READONLY MODE:
Note: Database is already in Mount state.

ALTER DATABASE OPEN READ ONLY;

4. Start media recovery with real-time log apply on Standby Database:

alter database recover managed standby database using current logfile disconnect from session;

5. Verify the Standby database status:

select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
ORCL      READ ONLY WITH APPLY

--Check MRP process is running
select process,status,sequence# from v$managed_standby;

 
PROCESS   STATUS        SEQUENCE#
--------- ------------ ----------
ARCH      CONNECTED             0
MRP0      APPLYING_LOG       2134 --->>>>  MRP PROCESS 
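Beyond the process view, the transport and apply lag can also be checked on the standby (a sketch using the documented V$DATAGUARD_STATS view; values are interval strings):

```sql
-- How far the standby is behind the primary.
SELECT name, value
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');
```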

Wednesday, 28 August 2024

ORA-01111: Name For Data File Is Unknown – Rename To Correct File


Errors in file /u01/oracle/diag/rdbms/mussaib_stdby_2/mussaib/trace/mussaib_mrp0_525556.trc:
ORA-01111: name for data file 133 is unknown - rename to correct file
ORA-01110: data file 133: '/u01/oracle/product/19.3.0/db_2/dbs/UNNAMED00133'

ORA-01157: cannot identify/lock data file 133 - see DBWR trace file

ORA-01111: name for data file 133 is unknown - rename to correct file
ORA-01110: data file 133: '/u01/oracle/product/19.3.0/db_2/dbs/UNNAMED00133'
2024-08-T08:19:15.196925-04:00
Background Media Recovery process shutdown (mussaib)


STANDBY:

select * from v$recover_file where error like '%FILE%';

select name from v$datafile where name like '%UNNAMED%';


PRIMARY:

select file#, name from v$datafile where file#=133;

STANDBY:

show parameter standby

alter system set standby_file_management='MANUAL';

alter database create datafile '/u01/oracle/product/19.3.0/db_2/dbs/UNNAMED00133' as '/u09/oradata/mussaib/data/undotbs03.dbf' size 31G;

alter system set standby_file_management='AUTO';

START MRP:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
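After recovery restarts, it is worth confirming that file 133 now has its real name and no longer appears as needing recovery (a verification sketch, run on the standby):

```sql
-- The file should show its real path and no longer be listed in v$recover_file.
SELECT file#, name, status FROM v$datafile WHERE file# = 133;
SELECT * FROM v$recover_file WHERE file# = 133;
```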