BACKUP & RECOVERY Archives - DBACLASS

How to duplicate database using RMAN backup set

We can use the active duplication method to clone a database from a live, running database, but sometimes we may need to clone a database from an RMAN backup set instead. The example below covers the steps for that.

STEPS:

1. Take full rman backup from source db

If you already have a recent, valid full backup, you can use that instead of taking a fresh backup.
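To check whether such a backup already exists (an optional check, not part of the original steps), you can list the available full backups of the database first:

RMAN> list backup of database summary;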

RMAN>  configure controlfile autobackup format for device type disk to '/archive/RMAN_BKP/%F';

new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/archive/RMAN_BKP/%F';
new RMAN configuration parameters are successfully stored

RMAN> run
 {
 allocate channel c1 type disk format '/archive/RMAN_BKP/%I-%Y%M%D-%U';
 backup  as compressed backupset incremental level 0  check logical database plus archivelog;
 release channel c1 ;
 }



allocated channel: c1
channel c1: SID=23 device type=DISK

Starting backup at 19-SEP-21
current log archived
channel c1: starting compressed archived log backup set
channel c1: specifying archived log(s) in backup set
input archived log thread=1 sequence=457 RECID=453 STAMP=1083592832
input archived log thread=1 sequence=458 RECID=454 STAMP=1083632441
input archived log thread=1 sequence=459 RECID=455 STAMP=1083664917
input archived log thread=1 sequence=460 RECID=456 STAMP=1083665067
channel c1: starting piece 1 at 19-SEP-21
channel c1: finished piece 1 at 19-SEP-21
piece handle=/archive/RMAN_BKP/1307774653-20210919-0409eqlb_1_1 tag=TAG20210919T100427 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:15
Finished backup at 19-SEP-21

Starting backup at 19-SEP-21
channel c1: starting compressed incremental level 0 datafile backup set
channel c1: specifying datafile(s) in backup set
input datafile file number=00002 name=/oradata/clsprod/SND_MAST_DATA_01.dbf
input datafile file number=00008 name=/oradata/clsprod/SND_TRANS_DATA_01.dbf
input datafile file number=00009 name=/oradata/clsprod/mbas_01.dbf
input datafile file number=00010 name=/oradata/clsprod/sm_ts_01.dbf
input datafile file number=00005 name=/oradata/clsprod/ts_prm_1.dbf
input datafile file number=00003 name=/oradata/clsprod/sysaux01.dbf
input datafile file number=00001 name=/oradata/clsprod/system01.dbf
input datafile file number=00004 name=/oradata/clsprod/undotbs01.dbf
input datafile file number=00007 name=/oradata/clsprod/users01.dbf
channel c1: starting piece 1 at 19-SEP-21


channel c1: finished piece 1 at 19-SEP-21
piece handle=/archive/RMAN_BKP/1307774653-20210919-0509eqlq_1_1 tag=TAG20210919T100442 comment=NONE
channel c1: backup set complete, elapsed time: 00:02:45
Finished backup at 19-SEP-21

Starting backup at 19-SEP-21
current log archived
channel c1: starting compressed archived log backup set
channel c1: specifying archived log(s) in backup set
input archived log thread=1 sequence=461 RECID=457 STAMP=1083665247
channel c1: starting piece 1 at 19-SEP-21
channel c1: finished piece 1 at 19-SEP-21
piece handle=/archive/RMAN_BKP/1307774653-20210919-0609eqr0_1_1 tag=TAG20210919T100727 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:01
Finished backup at 19-SEP-21

Starting Control File and SPFILE Autobackup at 19-SEP-21
piece handle=/archive/RMAN_BKP/c-1307774653-20210919-00 comment=NONE
Finished Control File and SPFILE Autobackup at 19-SEP-21

released channel: c1

2. Copy the backup pieces to the target host: If you are duplicating the database on a different host, copy the backup pieces to the target host.


[oracle@RMAN_BKP]$ ls -ltr
total 1120808
-rw-r----- 1 oracle oinstall 155184640 Sep 19 10:04 1307774653-20210919-0409eqlb_1_1
-rw-r----- 1 oracle oinstall 981581824 Sep 19 10:07 1307774653-20210919-0509eqlq_1_1
-rw-r----- 1 oracle oinstall    218624 Sep 19 10:07 1307774653-20210919-0609eqr0_1_1
-rw-r----- 1 oracle oinstall  10715136 Sep 19 10:07 c-1307774653-20210919-00

[oracle@RMAN_BKP]$ scp * oracle@target-host.com:/archive/RMAN_BKP/


3. Prepare pfile for the new db:

The new db_name is ckssdev.

cd $ORACLE_HOME/dbs

vi initckssdev.ora

*.audit_file_dest='/u01/app/oracle/admin/ckssdev/adump'
*.audit_trail='db'
*.compatible='19.0.0'
*.control_files='/u01/app/oracle/oradata/ckssdev/control01.ctl','/u01/app/oracle/oradata/ckssdev/control02.ctl'
*.db_block_size=8192
*.db_name='ckssdev'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=ckssdevXDB)'
*.log_archive_dest_1='LOCATION=/u01/app/arch/ckssdev'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=300
*.pga_aggregate_target=45174m
*.processes=1500
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=135520m
*.undo_tablespace='UNDOTBS1'
*.db_create_file_dest='/u01/app/oracle/oradata/ckssdev/'
*.db_create_online_log_dest_1='/u01/app/oracle/oradata/ckssdev/'

 

NOTE – db_create_file_dest -> all datafiles will be restored to this location.
       db_create_online_log_dest_1 -> all online redo log files will be created/restored in this location.

 

4. Create required directory structure


mkdir -p /u01/app/oracle/oradata/ckssdev/
mkdir -p /u01/app/arch/ckssdev
mkdir -p /u01/app/oracle/admin/ckssdev/adump

5. Start the database in NOMOUNT stage (with ORACLE_SID set to ckssdev):

SQL> startup nomount
ORACLE instance started.

Total System Global Area 2.1475E+10 bytes
Fixed Size                 19524688 bytes
Variable Size            2550136832 bytes
Database Buffers         1.8858E+10 bytes
Redo Buffers               47583232 bytes
SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

6. Run duplicate command

 

SYNTAX -> DUPLICATE DATABASE TO <DB_NEW_NAME> BACKUP LOCATION '<BACKUP_LOC>' NOFILENAMECHECK;

 

[oracle@dbhost-2 dbs]$ rman auxiliary /

Recovery Manager: Release 19.0.0.0.0 - Production on Wed Sep 1 14:11:55 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to auxiliary database: ckssdev (not mounted)

RMAN> DUPLICATE DATABASE TO ckssdev
BACKUP LOCATION '/archive/RMAN_BKP/'
NOFILENAMECHECK;2> 3>

Starting Duplicate Db at 01-SEP-21
searching for database ID
found backup of database ID 2028559627

contents of Memory Script:
{
   sql clone "create spfile from memory";
}
executing Memory Script

sql statement: create spfile from memory

contents of Memory Script:
{
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area   21474835536 bytes

Fixed Size                    19524688 bytes
Variable Size               2550136832 bytes
Database Buffers           18857590784 bytes
Redo Buffers                  47583232 bytes

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''CLS21COM'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   sql clone "alter system set  db_unique_name =
 ''ckssdev'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   shutdown clone immediate;
   startup clone force nomount
   restore clone primary controlfile from  '/u01/app/CLS_DONTDELETE/c-2028559627-20210901-00';
   alter clone database mount;
}
executing Memory Script

sql statement: alter system set  db_name =  ''CLS21COM'' comment= ''Modified by RMAN duplicate'' scope=spfile

sql statement: alter system set  db_unique_name =  ''ckssdev'' comment= ''Modified by RMAN duplicate'' scope=spfile

Oracle instance shut down

Oracle instance started

Total System Global Area   21474835536 bytes

Fixed Size                    19524688 bytes
Variable Size               2550136832 bytes
Database Buffers           18857590784 bytes
Redo Buffers                  47583232 bytes

Starting restore at 01-SEP-21
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=997 device type=DISK

channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/app/oracle/oradata/ckssdev/control01.ctl
output file name=/u01/app/oracle/oradata/ckssdev/control02.ctl
Finished restore at 01-SEP-21

database mounted
released channel: ORA_AUX_DISK_1
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=997 device type=DISK
allocated channel: ORA_AUX_DISK_2
channel ORA_AUX_DISK_2: SID=1281 device type=DISK
allocated channel: ORA_AUX_DISK_3
channel ORA_AUX_DISK_3: SID=1423 device type=DISK
allocated channel: ORA_AUX_DISK_4
channel ORA_AUX_DISK_4: SID=1563 device type=DISK
duplicating Online logs to Oracle Managed File (OMF) location
duplicating Datafiles to Oracle Managed File (OMF) location

contents of Memory Script:
{
   set until scn  42773496;
   set newname for clone datafile  1 to new;
   set newname for clone datafile  2 to new;
   set newname for clone datafile  3 to new;
   set newname for clone datafile  4 to new;
   set newname for clone datafile  5 to new;
   set newname for clone datafile  6 to new;
   set newname for clone datafile  7 to new;
   restore
   clone database
   ;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME


Starting restore at 01-SEP-21
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
using channel ORA_AUX_DISK_4

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00011 to /u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_ecm_cbu_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00012 to /u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_ecm_cbu__%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/CLS_DONTDELETE/2028559627-20210901-0307vpjo_1_1
channel ORA_AUX_DISK_2: starting datafile backup set restore
channel ORA_AUX_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_2: restoring datafile 00002 to /u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_2: restoring datafile 00009 to /u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_CLS_cbu_%u_.dbf
channel ORA_AUX_DISK_2: restoring datafile 00010 to /u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_CLS_cbu__%u_.dbf
channel ORA_AUX_DISK_2: reading from backup piece /u01/app/CLS_DONTDELETE/2028559627-20210901-0207vpjo_1_1
channel ORA_AUX_DISK_3: starting datafile backup set restore
channel ORA_AUX_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_3: restoring datafile 00003 to /u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_3: restoring datafile 00005 to /u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_CLS_ebu_%u_.dbf
channel ORA_AUX_DISK_3: restoring datafile 00006 to /u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_CLS_ebu__%u_.dbf
channel ORA_AUX_DISK_3: reading from backup piece /u01/app/CLS_DONTDELETE/2028559627-20210901-0407vpjo_1_1
channel ORA_AUX_DISK_4: starting datafile backup set restore
channel ORA_AUX_DISK_4: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_4: restoring datafile 00004 to /u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_users_%u_.dbf
channel ORA_AUX_DISK_4: restoring datafile 00007 to /u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_ecm_ebu_%u_.dbf
channel ORA_AUX_DISK_3: restored backup piece 1
channel ORA_AUX_DISK_3: restore complete, elapsed time: 00:03:06
channel ORA_AUX_DISK_4: piece handle=/u01/app/CLS_DONTDELETE/2028559627-20210901-0507vpjo_1_1 tag=TAG20210901T135815
channel ORA_AUX_DISK_2: restore complete, elapsed time: 00:03:46
Finished restore at 01-SEP-21

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=13 STAMP=1082125032 file name=/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_system_jlyr2q12_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=14 STAMP=1082125032 file name=/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_sysaux_jlyr2q0o_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=15 STAMP=1082125032 file name=/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_undotbs1_jlyr2q1k_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=16 STAMP=1082125032 file name=/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_users_jlyr2q24_.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=17 STAMP=1082125032 file name=/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_CLS_ebu_jlyr2py1_.dbf
datafile 6 switched to datafile copy

contents of Memory Script:
{
   set until scn  42773496;
   recover
   clone database
    delete archivelog
   ;
}
executing Memory Script

executing command: SET until clause

Starting recover at 01-SEP-21
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
using channel ORA_AUX_DISK_4

starting media recovery

channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=939
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/CLS_DONTDELETE/2028559627-20210901-0607vplp_1_1
channel ORA_AUX_DISK_1: piece handle=/u01/app/CLS_DONTDELETE/2028559627-20210901-0607vplp_1_1 tag=TAG20210901T135921
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/u01/app/arch/ckssdev/1_939_1075569611.dbf thread=1 sequence=939
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/arch/ckssdev/1_939_1075569611.dbf RECID=1 STAMP=1082125035
media recovery complete, elapsed time: 00:00:01
Finished recover at 01-SEP-21
Oracle instance started

Total System Global Area   21474835536 bytes

Fixed Size                    19524688 bytes
Variable Size               2550136832 bytes
Database Buffers           18857590784 bytes
Redo Buffers                  47583232 bytes

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''ckssdev'' comment=
 ''Reset to original value by RMAN'' scope=spfile";
   sql clone "alter system reset  db_unique_name scope=spfile";
}
executing Memory Script

sql statement: alter system set  db_name =  ''ckssdev'' comment= ''Reset to original value by RMAN'' scope=spfile

sql statement: alter system reset  db_unique_name scope=spfile
Oracle instance started

Total System Global Area   21474835536 bytes

Fixed Size                    19524688 bytes
Variable Size               2550136832 bytes
Database Buffers           18857590784 bytes
Redo Buffers                  47583232 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "ckssdev" RESETLOGS ARCHIVELOG
  MAXLOGFILES     16
  MAXLOGMEMBERS      3
  MAXDATAFILES      100
  MAXINSTANCES     8
  MAXLOGHISTORY      292
 LOGFILE
  GROUP     1  SIZE 200 M ,
  GROUP     2  SIZE 200 M ,
  GROUP     3  SIZE 200 M
 DATAFILE
  '/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_system_jlyr2q12_.dbf'
 CHARACTER SET AL32UTF8


contents of Memory Script:
{
   set newname for clone tempfile  1 to new;
   switch clone tempfile all;
   catalog clone datafilecopy  "/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_sysaux_jlyr2q0o_.dbf",
 "/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_undotbs1_jlyr2q1k_.dbf",
 "/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_users_jlyr2q24_.dbf",
 "/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_CLS_ebu_jlyr2py1_.dbf",
 "/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_CLS_ebu__jlyr2q08_.dbf",
 "/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_ecm_ebu_jlyr2pyn_.dbf",
   switch clone datafile all;
}
executing Memory Script

executing command: SET NEWNAME

renamed tempfile 1 to /u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_temp_%u_.tmp in control file

cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_sysaux_jlyr2q0o_.dbf RECID=1 STAMP=1082125074
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_undotbs1_jlyr2q1k_.dbf RECID=2 STAMP=1082125074
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_users_jlyr2q24_.dbf RECID=3 STAMP=1082125074
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_CLS_ebu_jlyr2py1_.dbf RECID=4 STAMP=1082125074
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_CLS_ebu__jlyr2q08_.dbf RECID=5 STAMP=1082125074
cataloged datafile copy

datafile 2 switched to datafile copy
input datafile copy RECID=1 STAMP=1082125074 file name=/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_sysaux_jlyr2q0o_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=1082125074 file name=/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_undotbs1_jlyr2q1k_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=3 STAMP=1082125074 file name=/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_users_jlyr2q24_.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=4 STAMP=1082125074 file name=/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_CLS_ebu_jlyr2py1_.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=5 STAMP=1082125074 file name=/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_CLS_ebu__jlyr2q08_.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=6 STAMP=1082125074 file name=/u01/app/oracle/oradata/ckssdev/ckssdev/datafile/o1_mf_ecm_ebu_jlyr2pyn_.dbf
datafile 8 switched to datafile copy


contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

database opened
Cannot remove created server parameter file
Finished Duplicate Db at 01-SEP-21
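
As an optional sanity check (not shown in the original output), you can confirm the clone is open with the new name:

SQL> select name, open_mode from v$database;

This should return CKSSDEV with OPEN_MODE as READ WRITE.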

Useful flashback related commands

Below is a collection of useful flashback-related commands.

1. How to check whether flashback is enabled or not:

select flashback_on from v$database;

2. Enable flashback in database:

--- make sure database is in archivelog mode:

alter system set db_recovery_file_dest_size=10G scope=both;
alter system set db_recovery_file_dest='/dumparea/FRA/B2PMT3' scope=both;
alter database flashback on;

3. Disable flashback in database:

alter database flashback off;

4. Create flashback restore point :

create restore point FLASHBACK_PREP guarantee flashback database;

5. Find the list of restore points:

-- From SQL prompt:
SQL> select * from v$restore_point;

-- From RMAN prompt:
RMAN>LIST RESTORE POINT ALL;

6. Drop restore point:

drop restore point FLASHBACK_PREP;

7. Flashback database to restore point:

--- Below are the steps for flashback database to a guaranteed restore point;

1. Get the restore point name:

SQL> select NAME,time from v$restore_point;

NAME                                                            TIME
--------------------------------          -----------------------------------------------
FLASHBACK_PREP                             21-MAR-17 03.41.33.000000000 PM

2. Shutdown database and start db in Mount stage:

shutdown immediate;
startup mount;

3. flashback db to restore point:

flashback database to restore point FLASHBACK_PREP;

4. Open with resetlog:

alter database open resetlogs;

8. Flashback query as of timestamp:

SELECT * FROM DBACLASS.EMP AS OF TIMESTAMP
TO_TIMESTAMP('2017-01-07 10:00:00', 'YYYY-MM-DD HH24:MI:SS');


SELECT * FROM DBACLASS.EMP AS OF TIMESTAMP SYSDATE -1/24;

9. Flashback database to particular SCN or timestamp:

shutdown immediate;
startup mount;
--FLASHBACK DATABASE TO SCN 202381;                -- use this for a particular SCN
--FLASHBACK DATABASE TO TIMESTAMP (SYSDATE-1/24);  -- use this to flash back one hour
--FLASHBACK DATABASE TO TIMESTAMP to_timestamp('2018-03-11 16:00:00', 'YYYY-MM-DD HH24:MI:SS');  -- use this for a specific timestamp

alter database open resetlogs;

10. Flashback a table from recyclebin:

-- First check whether the table exists in the recyclebin:

SELECT object_name, original_name, createtime FROM recyclebin WHERE original_name='INT_ADMIN_EMP';

 -- restore the table as same name:
FLASHBACK TABLE int_admin_emp TO BEFORE DROP;

-- Restore that table to a new name:

FLASHBACK TABLE int_admin_emp TO BEFORE DROP 
   RENAME TO int2_admin_emp;

11. Get flash recovery area usage info:

SELECT * FROM V$FLASH_RECOVERY_AREA_USAGE; 
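
Related to this (not part of the original list), the overall recovery area size and usage can be checked with:

select name, space_limit/1024/1024 limit_mb, space_used/1024/1024 used_mb from v$recovery_file_dest;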


12. How far can we flashback:

--How Far Back Can We Flashback To (Time)

select to_char(oldest_flashback_time,'dd-mon-yyyy hh24:mi:ss') "Oldest Flashback Time"
from v$flashback_database_log;

--How Far Back Can We Flashback To (SCN)

col oldest_flashback_scn format 99999999999999999999999999
select oldest_flashback_scn from v$flashback_database_log;

COMPRESSION in datapump oracle

The COMPRESSION parameter is used with EXPDP to compress the generated dump file.

NOTE – To use the COMPRESSION parameter with ALL or DATA_ONLY, we need the Oracle Advanced Compression license.
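
A quick (hedged) way to check whether the Advanced Compression option is even available in your edition is to query v$option:

select parameter, value from v$option where parameter = 'Advanced Compression';

Note that this only shows whether the option is installed in the edition, not whether it is licensed.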

 

COMPRESSION
Reduce the size of a dump file.
Valid keyword values are: ALL, DATA_ONLY, [METADATA_ONLY] and NONE.

In the demo below, we compare the dump size with and without the COMPRESSION parameter.

 

Get the dump size without the COMPRESSION parameter:

cat exp_compress.par

dumpfile=dbaclass_wo_compression.dmp
logfile=dbaclass.log
directory=EXPDIR
tables=DBACLASS.OBJECT_LIST

 expdp parfile=exp_compress.par

Export: Release 12.1.0.2.0 - Production on Mon Nov 19 12:56:14 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA parfile=exp_compress.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 29 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "DBACLASS"."OBJECT_LIST"                    24.69 MB  219456 rows ---- >>> around 25 MB
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /export/home/oracle/dbaclass_wo_compression.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Mon Nov 19 12:56:19 2018 elapsed 0 00:00:03

So, without compression, the size is around 25 MB. Let’s do the export with compression.

EXPORT WITH COMPRESSION=ALL

 cat exp_compress.par

dumpfile=dbaclass_compression.dmp
logfile=dbaclass.log
directory=EXPDIR
tables=DBACLASS.OBJECT_LIST
compression=ALL
 expdp parfile=exp_compress.par

Export: Release 12.1.0.2.0 - Production on Mon Nov 19 12:56:48 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA parfile=exp_compress.par
Estimate in progress using BLOCKS method...
 Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 29 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "DBACLASS"."OBJECT_LIST"                    2.922 MB  219456 rows --- >> around 3 MB
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /export/home/oracle/dbaclass_compression.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Mon Nov 19 12:56:56 2018 elapsed 0 00:00:05

We can see it came down from around 25 MB to about 3 MB :). It really saves a lot of disk space.

However, there is a tradeoff. Use of compression will increase the time required for the export job.

 

                 W/O COMPRESSION          WITH COMPRESSION
SPACE REQUIRED   25 MB                    3 MB
TIME TAKEN       3 seconds                5 seconds
CPU USAGE        Normal                   A bit more than w/o compression
LICENSE?         No additional license    Advanced Compression license required

 

COMPRESSION_ALGORITHM:

We can control the compression level using the COMPRESSION_ALGORITHM parameter. A HIGH compression setting will use more CPU and also increase the export job time.

 

COMPRESSION_ALGORITHM
Specify the compression algorithm that should be used.
Valid keyword values are: [BASIC], LOW, MEDIUM and HIGH.

 

PARFILE WILL LOOK AS BELOW:

 cat exp_compress.par

dumpfile=dbaclass_compression.dmp
logfile=dbaclass.log
directory=EXPDIR
tables=DBACLASS.OBJECT_LIST
compression=ALL
COMPRESSION_ALGORITHM=HIGH

REUSE_DUMPFILES parameter in EXPDP

If we try to export to a dump file whose name already exists in that directory, we will get an error like ORA-27038: created file already exists.

ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/export/home/oracle/dbaclass_estim.dmp"
ORA-27038: created file already exists
Additional information: 1

So if the requirement is to overwrite the existing dumpfile, the REUSE_DUMPFILES parameter can be used with EXPDP.

REUSE_DUMPFILES
Overwrite destination dump file if it exists [NO].

DEMO:

PARFILE WITH REUSE_DUMPFILES=Y

cat exp_reusedmp.par

dumpfile=dbaclass_estim.dmp
logfile=dbaclass.log
directory=EXPDIR
tables=DBACLASS.OBJECT_LIST
REUSE_DUMPFILES=Y

At this point, we already have the dumpfile dbaclass_estim.dmp, so the EXPDP job should overwrite it.

expdp parfile=exp_reusedmp.par

Export: Release 12.1.0.2.0 - Production on Mon Nov 19 12:53:54 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA parfile=exp_reusedmp.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 29 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "DBACLASS"."OBJECT_LIST"                    24.69 MB  219456 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /export/home/oracle/dbaclass_estim.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Mon Nov 19 12:54:01 2018 elapsed 0 00:00:03

As expected, we did not receive the error ORA-27038: created file already exists.

SAMPLE parameter in EXPDP to export subset of data

Let's say we have a very big table and, for testing purposes, we need only a subset of the data, say 10 percent of the rows. In that case the SAMPLE parameter can be used in EXPDP. We explain its use in the demo below.

SAMPLE
Percentage of data to be exported.

DEMO:

SQL> select count(*) from DBACLASS.OBJECT_LIST;

  COUNT(*)
----------
    219456

PARFILE WITH SAMPLE OF 10 PERCENT.

 cat exp_sample.par

dumpfile=dbaclass_estim.dmp
logfile=dbaclass.log
directory=EXPDIR
tables=DBACLASS.OBJECT_LIST
sample=10
expdp parfile=exp_sample.par
Export: Release 12.1.0.2.0 - Production on Mon Nov 19 12:52:01 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01": /******** AS SYSDBA parfile=exp_sample.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 2.899 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "DBACLASS"."OBJECT_LIST" 2.472 MB 21890 rows --- >>>
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
/export/home/oracle/dbaclass_estim.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Mon Nov 19 12:52:07 2018 elapsed 0 00:00:04

It exported 21890 rows, which is roughly 10 percent of the actual number of rows in the table (219456).
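
The sample percentage can also be scoped to a specific table, per the documented SAMPLE syntax (a variant not shown in the demo above):

sample=DBACLASS.OBJECT_LIST:10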

 

 

 

 

 

ESTIMATE REQUIRED DISK SPACE FOR EXPORT USING estimate_only

The ESTIMATE_ONLY parameter can be used with the EXPDP command to estimate the disk space required for an export job, without doing the actual export. So when we are not sure how big the export dump will be, it is better to use this option first, before generating the dump.

By default, ESTIMATE_ONLY=N.

DEMO:

PARFILE WITH ESTIMATE OPTION:

cat exp_estimate.par

logfile=dbaclass.log
directory=EXPDIR
tables=DBACLASS.OBJECT_LIST
ESTIMATE_ONLY=Y

EXPDP job:

 expdp parfile=exp_estimate.par

Export: Release 12.1.0.2.0 - Production on Mon Nov 19 12:49:59 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA parfile=exp_estimate.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
.  estimated "DBACLASS"."OBJECT_LIST"                       29 MB ---- >>>>>>>>>>> 
Total estimation using BLOCKS method: 29 MB
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Mon Nov 19 12:50:03 2018 elapsed 0 00:00:02

As shown, the estimate is 29 MB.
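
If the table's optimizer statistics are up to date, the estimate can instead be based on them by adding the documented ESTIMATE parameter (an assumed variant, not part of the original demo):

logfile=dbaclass.log
directory=EXPDIR
tables=DBACLASS.OBJECT_LIST
ESTIMATE_ONLY=Y
ESTIMATE=STATISTICS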

FLASHBACK parameter in DATAPUMP(EXPDP)

We can use the flashback parameters (FLASHBACK_SCN or FLASHBACK_TIME) in expdp to take a consistent export of a table as of a specific point in time. The flashback mechanism depends upon undo data, so if we try to export the table as of a point for which data is no longer available in UNDO, the export will fail (typically with an ORA-01555 "snapshot too old" error).

DEMO:

SQL> select count(*) from DBACLASS.OBJECT_LIST;
COUNT(*)
----------
109728

SQL> SET numwidth 20

SQL> SELECT dbms_flashback.get_system_change_number FROM DUAL;

GET_SYSTEM_CHANGE_NUMBER
------------------------
11837958118968 -- >>> We will use this SCN value in expdp

Let’s insert some new data:

SQL> insert into DBACLASS.OBJECT_LIST select * from DBACLASS.OBJECT_LIST;

109728 rows created.

SQL> commit;

SQL> select count(*) from DBACLASS.OBJECT_LIST;

  COUNT(*)
----------
    219456
	
	

Now prepare PARFILE with FLASHBACK_SCN: ( Use the SCN value from step 1)

cat exp_flash.par

dumpfile=dbaclass.dmp
logfile=dbaclass.log
directory=EXPDIR
tables=DBACLASS.OBJECT_LIST
flashback_scn=11837958118968

expdp parfile=exp_flash.par

Export: Release 12.1.0.2.0 - Production on Mon Nov 19 12:40:59 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA parfile=exp_flash.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 29 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "DBACLASS"."OBJECT_LIST"                    12.35 MB  109728 rows --- >>>> the row count as of the SCN, not 219456
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /export/home/oracle/dbaclass.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Mon Nov 19 12:41:38 2018 elapsed 0 00:00:31


We were able to take an expdp of the table as of a specific SCN.

Similarly, we can use FLASHBACK_TIME also:

Parfile with FLASHBACK_TIME:

 
cat exp_flash.par

dumpfile=dbaclass.dmp
logfile=dbaclass.log
directory=EXPDIR
tables=DBACLASS.OBJECT_LIST
FLASHBACK_TIME="TO_TIMESTAMP('19-11-2018 12:43:26', 'DD-MM-YYYY HH24:MI:SS')"

 expdp parfile=exp_flash.par

Export: Release 12.1.0.2.0 - Production on Mon Nov 19 12:47:36 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA parfile=exp_flash.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 29 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "DBACLASS"."OBJECT_LIST"                    12.35 MB  109728 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /export/home/oracle/dbaclass.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Mon Nov 19 12:47:41 2018 elapsed 0 00:00:04
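
As a side note (an assumption based on documented expdp behaviour, not shown in the demo above), FLASHBACK_TIME also accepts SYSTIMESTAMP, which makes the export consistent as of the moment the job starts:

dumpfile=dbaclass.dmp
logfile=dbaclass.log
directory=EXPDIR
tables=DBACLASS.OBJECT_LIST
flashback_time=systimestamp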


CONTENT parameter in datapump

The CONTENT parameter is used in expdp or impdp to load/unload data only, metadata only, or both.

CONTENT
Specifies data to unload.
Valid keyword values are: [ALL], DATA_ONLY and METADATA_ONLY.

 

CONTENT=ALL is the default value

CONTENT=METADATA_ONLY:

It will export only the metadata, i.e. the skeleton/DDL of the object/schema. It won't export any of the data/rows.

dumpfile=emp_enc1.dmp 
logfile=emp_enc.log 
directory=VEN 
tables=DBATEST.EMP_TAB 
CONTENT=METADATA_ONLY
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA dumpfile=emp_enc1.dmp logfile=emp_enc.log directory=VEN tables=DBATEST.EMP_TAB CONTENT=METADATA_ONLY
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded

CONTENT=DATA_ONLY:

It will export only the ROWS/DATA of the tables, excluding the DDL (i.e. the metadata).

dumpfile=emp_enc2.dmp 
logfile=emp_enc.log  
directory=VEN 
tables="DBATEST"."EMP_TAB" 
CONTENT=DATA_ONLY
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA dumpfile=emp_enc2.dmp logfile=emp_enc.log directory=VEN tables=DBATEST.EMP_TAB CONTENT=DATA_ONLY
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 23 MB
. . exported "DBATEST"."EMP_TAB"                         19.16 MB  175340 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:

Before importing the dump into another database, make sure that the table already exists in that database, as the export was taken with DATA_ONLY and does not contain the DDL. Otherwise the import will fail with an error as shown below.

Here we tried to load this dump into a database where the table doesn't exist.

select table_name from dba_tables where table_name='EMP_TAB';
 
no rows selected

Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_FULL_01":  /******** AS SYSDBA dumpfile=emp_enc2.dmp logfile=emp_enc.log directory=VEN
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-39034: Table TABLE_DATA:"DBATEST"."EMP_TAB" does not exist. ---- >>>>>>>>  ERROR 
Job "SYS"."SYS_IMPORT_FULL_01" completed with 1 error(s) at Tue Jan 24 14:45:01 2017 elapsed 0 00:00:54

So it failed.

NOTE – Never use TABLE_EXISTS_ACTION=REPLACE when importing a DATA_ONLY dump: the dump contains no DDL, so the existing table would be dropped with nothing to recreate it from.
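
One way to avoid the problem altogether (a sketch assuming the metadata-only dump emp_enc1.dmp from the earlier step has been copied to the target) is to create the table from the metadata-only dump first, then load the rows from the data-only dump. The log file names here are hypothetical:

impdp dumpfile=emp_enc1.dmp logfile=imp_meta.log directory=VEN
impdp dumpfile=emp_enc2.dmp logfile=imp_data.log directory=VEN CONTENT=DATA_ONLY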

Shell script to delete old archives using RMAN

If the requirement is to delete old archived logs automatically (without backing them up first), the shell script below can be configured in crontab.

Prepare the shell script:

cat rman_arch_del.sh

#!/bin/bash
export ORACLE_HOME=/oracle/app/oracle/product/12.1.0.2.0
export ORACLE_SID=PARIS12C
export PATH=$ORACLE_HOME/bin:$PATH

delBackup () {
rman log=/home/oracle/arch_del.log << EOF
connect target /
DELETE noprompt ARCHIVELOG ALL COMPLETED BEFORE 'sysdate-1';
CROSSCHECK ARCHIVELOG ALL;
DELETE EXPIRED ARCHIVELOG ALL;
exit
EOF
}
# Main

delBackup

Now configure in crontab:

00 22 * * * /u01/app/oracle/rman_arch_del.sh > /tmp/rmanarch.log
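
Make sure the script is executable before cron tries to run it (the path is assumed to match the crontab entry above):

chmod +x /u01/app/oracle/rman_arch_del.sh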

Query clause in oracle datapump expdp

The QUERY clause can be used in expdp or impdp to export/import a subset of the data, or data matching specific conditions.

DEMO:

Export a dump of the table emp_tab WHERE created > sysdate -40. A filter can be added on any column depending upon the requirement.

SQL> select count(*) from "DBACLASS"."EMP_TAB" WHERE created > sysdate -40;

  COUNT(*)
----------
      1384

Create a parfile with query clause:

cat expdp_query.par

dumpfile=test.dmp
logfile=test1.log
directory=TEST
tables=dbaclass.EMP_TAB
QUERY=dbaclass.EMP_TAB:"WHERE created > sysdate -40"

Now run the expdp command with the parfile. We can see that 1384 rows are exported.

expdp parfile=expdp_query.par
 
Export: Release 12.1.0.2.0 - Production on Mon Jan 23 14:52:07 2017
 
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
 
Username: / as sysdba
 
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA parfile=expdp_query.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 29 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "DBACLASS"."EMP_TAB"                        199.4 KB    1384 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /export/home/oracle/test.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Mon Jan 23 14:53:02 2017 elapsed 0 00:00:23

The same type of condition can be used in IMPDP as well.
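
For example, a hedged sketch of an impdp parfile reusing the names from the export above (the log file name and the TABLE_EXISTS_ACTION setting are assumptions):

cat impdp_query.par

dumpfile=test.dmp
logfile=imp_test1.log
directory=TEST
tables=dbaclass.EMP_TAB
QUERY=dbaclass.EMP_TAB:"WHERE created > sysdate -40"
TABLE_EXISTS_ACTION=APPEND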
