The Dutch Prutser's Blog

By: Harald van Breederode


Now available: Oracle Certified Master 12c Upgrade exam

Posted by Harald van Breederode on September 13, 2015

The Oracle Database 12c Certified Master upgrade exam was recently released, and those who wish to take it can do so on October 9th, December 22nd, or December 29th in Utrecht, The Netherlands. Hope to see you soon ;-)
-Harald



Cross-platform Transportable Tablespaces made easy

Posted by Harald van Breederode on January 4, 2015

Back in Oracle8i the Transportable Tablespace feature was introduced to make it convenient to transport a large amount of data between databases. In Oracle10g this useful feature was enhanced with cross-platform support, which allowed a tablespace, or set of tablespaces, to be transported between databases deployed on different hardware platforms, even between platforms with a different endian format.

Until now, the procedure to transport a tablespace set across platforms was as follows:

  1. Place the tablespace set in read-only mode on the source database.
  2. Verify that the tablespace set is self-contained.
  3. Export the tablespace set’s metadata from the source database using the EXPDP utility.
  4. Convert the datafile(s) to the target platform endian format using RMAN on the source database server.
  5. Copy both the metadata dumpset and the (converted) datafile(s) to the target database server.
  6. Convert the datafile(s) to the target database endian format using RMAN on the target database server.
  7. Import the metadata dumpset into the target database using the IMPDP utility.
  8. Place the tablespace set in read-write mode on the target database.

Note: You perform either step 4 or step 6, but not both.
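For reference, the conversion in step 4 (on the source server) or step 6 (on the target server) is done with the RMAN CONVERT command. A minimal sketch of both variants, using the platform names from this post; the file paths are hypothetical:

```sql
RMAN> # Step 4: convert on the SOURCE server, writing a copy in the
RMAN> # target platform's endian format:
RMAN> convert tablespace xtransport
2> to platform 'Solaris[tm] OE (64-bit)'
3> format '/tmp/xtransport_%U';

RMAN> # Step 6 alternative: convert on the TARGET server after copying
RMAN> # the unconverted datafile over:
RMAN> convert datafile '/tmp/xtransport01.dbf'
2> from platform 'Linux x86 64-bit'
3> format '/u01/db/v121/data/%U';
```

Note that CONVERT produces datafile image copies, not backupsets, which is exactly the limitation the 12c feature below removes.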

As the list shows, RMAN, EXPDP and IMPDP have to be used to transport tablespace(s) across platforms. Also the RMAN CONVERT statement used in either step 4 or 6 operates only on datafiles.

What if this procedure was simpler and faster?

A cool new Oracle Database 12c feature is the ability to transport tablespaces across platforms using just RMAN and (optionally compressed) RMAN backupsets! Let's see how this works by transporting a tablespace from Oracle Linux to Oracle Solaris.

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE    12.1.0.2.0      Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

The above output shows that we are running Oracle Database 12c version 12.1.0.2.0 on Linux. Let's see which platforms we can transport data to:

SQL> select platform_id, platform_name, endian_format
  2  from v$transportable_platform order by platform_id;

PLATFORM_ID PLATFORM_NAME                                      ENDIAN_FORMAT
----------- -------------------------------------------------- --------------
          1 Solaris[tm] OE (32-bit)                            Big
          2 Solaris[tm] OE (64-bit)                            Big
          3 HP-UX (64-bit)                                     Big
          4 HP-UX IA (64-bit)                                  Big
          5 HP Tru64 UNIX                                      Little
          6 AIX-Based Systems (64-bit)                         Big
          7 Microsoft Windows IA (32-bit)                      Little
          8 Microsoft Windows IA (64-bit)                      Little
          9 IBM zSeries Based Linux                            Big
         10 Linux IA (32-bit)                                  Little
         11 Linux IA (64-bit)                                  Little
         12 Microsoft Windows x86 64-bit                       Little
         13 Linux x86 64-bit                                   Little
         15 HP Open VMS                                        Little
         16 Apple Mac OS                                       Big
         17 Solaris Operating System (x86)                     Little
         18 IBM Power Based Linux                              Big
         19 HP IA Open VMS                                     Little
         20 Solaris Operating System (x86-64)                  Little
         21 Apple Mac OS (x86-64)                              Little

Quite a list I would say. Now we will create a tablespace with a few objects in it so we have something to transport.

SQL> create tablespace xtransport;

Tablespace created.

SQL> connect sh/sh
Connected.
SQL> create table t1 tablespace xtransport as select * from customers;

Table created.

SQL> create index i1 on t1(cust_email) tablespace xtransport;

Index created.

Creating a transportable backupset

All we have to do to transport tablespace(s) using the new functionality is create a so-called transportable backupset on the source database and restore it on the destination database. However, the tablespace(s) to transport must be made read-only before calling RMAN.

SQL> alter tablespace xtransport read only;

Tablespace altered.
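RMAN runs the self-containment check for us (as we will see below), but it can also be verified up front with the DBMS_TTS package. A quick pre-check sketch, run as a suitably privileged user; an empty TRANSPORT_SET_VIOLATIONS means the set is self-contained:

```sql
SQL> execute dbms_tts.transport_set_check('XTRANSPORT', TRUE);

PL/SQL procedure successfully completed.

SQL> select * from transport_set_violations;

no rows selected
```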

To create a transportable backupset we can use either the BACKUP FOR TRANSPORT or the BACKUP TO PLATFORM RMAN statement. The difference between the two is where the datafile conversion takes place: BACKUP FOR TRANSPORT should be used if the conversion is to be performed on the target system, and BACKUP TO PLATFORM if it is to be performed on the source system.

For this demonstration we will perform the datafile conversion on the source system using the BACKUP TO PLATFORM statement, specifying the name of the target platform as its argument. We also have to specify where RMAN should place the backupsets. RMAN will create one backupset with the datafile(s) and a second backupset with the metadata dumpset that EXPDP creates on behalf of RMAN.

The advantage of the BACKUP FOR TRANSPORT and BACKUP TO PLATFORM statements is that backupsets are used instead of datafile copies. On top of that, if desired, these backupsets can be compressed, encrypted or both. For this demo we will create a compressed transportable backupset:

$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Sun Dec 28 14:29:27 2014

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: V121 (DBID=3122213224)

RMAN> backup to platform 'Solaris[tm] OE (64-bit)' as compressed backupset
2> tablespace xtransport format '/tmp/datafile.bck'
3> datapump format '/tmp/metadata.bck';
Starting backup at 28-DEC-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=17 device type=DISK
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully

Performing export of metadata for specified tablespaces...
   EXPDP> Starting "SYS"."TRANSPORT_EXP_V121_pyAn":  
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Master table "SYS"."TRANSPORT_EXP_V121_pyAn" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TRANSPORT_EXP_V121_pyAn is:
   EXPDP>   /u01/app/oracle/product/12.1.0.2.0db/dbs/backup_tts_V121_30082.dmp
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace XTRANSPORT:
   EXPDP>   /u01/db/v121/data/V121/datafile/o1_mf_xtranspo_bb01f5z4_.dbf
   EXPDP> Job "SYS"."TRANSPORT_EXP_V121_pyAn" successfully completed at Sun Dec 28 14:33:14 2014 elapsed 0 00:02:50
Export completed

channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/db/v121/data/V121/datafile/o1_mf_xtranspo_bb01f5z4_.dbf
channel ORA_DISK_1: starting piece 1 at 28-DEC-14
channel ORA_DISK_1: finished piece 1 at 28-DEC-14
piece handle=/tmp/datafile.bck tag=TAG20141228T142931 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed full datafile backup set
input Data Pump dump file=/u01/app/oracle/product/12.1.0.2.0db/dbs/backup_tts_V121_30082.dmp
channel ORA_DISK_1: starting piece 1 at 28-DEC-14
channel ORA_DISK_1: finished piece 1 at 28-DEC-14
piece handle=/tmp/metadata.bck tag=TAG20141228T142931 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 28-DEC-14

Recovery Manager complete.

Based on the above output we can conclude that RMAN performed the following tasks:

  1. Verify that the transport set is self-contained.
  2. Export the tablespace’s metadata using EXPDP.
  3. Create a compressed backupset containing the tablespace’s datafile.
  4. Create a compressed backupset containing the metadata dumpset created in step 2.

Note: Although not explicitly visible, the actual datafile conversion took place in step 3.

RMAN created the following files for us:

$ ls -1hs /tmp/*bck
3.8M /tmp/datafile.bck
200K /tmp/metadata.bck

Normally we would transfer these two files to the target system and restore them there. However, for this demo we will continue on the same system and assume these files were created on an Oracle Solaris system. Obviously this is not what you would do in real life, but I had to come up with something to show all steps of using this new feature ;-) Because of this, we now have to drop the tablespace before we can restore it:

SQL> drop tablespace xtransport including contents and datafiles;

Tablespace dropped.

Restoring a transportable backupset

To restore a transportable backupset we must use either the RESTORE or the RESTORE FROM PLATFORM RMAN statement. Which one to use depends on whether or not the datafile(s) are already in the correct endian format. Because we are about to restore a transportable backupset containing a datafile in another endian format, we have to use the RESTORE FROM PLATFORM statement, specifying the source platform name as its argument.

Because we are restoring a tablespace that doesn't yet belong to the target database, we have to notify RMAN about it by using the FOREIGN TABLESPACE clause. By specifying the TO NEW clause we ask RMAN to assign a new OMF filename to the datafile it is about to restore. Finally we have to tell RMAN where it can find both the backupset containing the datafile and the backupset containing the metadata dumpset. So, here we go:

$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Sun Dec 28 14:38:55 2014

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: V121 (DBID=3122213224)

RMAN> restore from platform 'Solaris[tm] OE (64-bit)'
2> foreign tablespace xtransport to new
3> from backupset '/tmp/datafile.bck'
4> dump file from backupset '/tmp/metadata.bck';
Starting restore at 28-DEC-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=103 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring all files in foreign tablespace XTRANSPORT
channel ORA_DISK_1: reading from backup piece /tmp/datafile.bck
channel ORA_DISK_1: restoring foreign file 5 to /u01/db/v121/data/V121/datafile/o1_mf_xtranspo_bb01zm1x_.dbf
channel ORA_DISK_1: foreign piece handle=/tmp/datafile.bck
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring Data Pump dump file to /u01/app/oracle/product/12.1.0.2.0db/dbs/backup_tts_V121_30002.dmp
channel ORA_DISK_1: reading from backup piece /tmp/metadata.bck
channel ORA_DISK_1: foreign piece handle=/tmp/metadata.bck
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

Performing import of metadata...
   IMPDP> Master table "SYS"."TSPITR_IMP_V121_albo" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_V121_albo":  
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   IMPDP> Job "SYS"."TSPITR_IMP_V121_albo" successfully completed at Sun Dec 28 14:40:31 2014 elapsed 0 00:01:13
Import completed

Finished restore at 28-DEC-14

Recovery Manager complete.

Based on the above output we can conclude that RMAN performed the following tasks:

  1. Restore the foreign tablespace’s datafile from the compressed datafile backupset.
  2. Restore the metadata dumpset from the compressed metadata backupset.
  3. Import the tablespace metadata using IMPDP from the dumpset restored in step 2.

Note: Although not explicitly visible, the actual datafile conversion took place in step 1.

Before we wrap up, let's verify that transporting this tablespace was successful:

SQL> connect sh/sh
Connected.
SQL> select count(*) from t1;

  COUNT(*)
----------
     55500

This looks good to me ;-) In most real-life scenarios we would now place the tablespace in read-write mode.
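That final step is the mirror image of how the procedure started on the source side; as a sketch:

```sql
SQL> alter tablespace xtransport read write;

Tablespace altered.
```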

Wrapping up

One thing to keep in mind when using this cool new feature is that the tablespace(s) to be transported must be read-only during the whole procedure. If the amount of data to be transported is large, this could easily lead to data availability problems. In such a case you might want to consider using another cool new Oracle Database 12c procedure that keeps the downtime to a minimum: Incremental Cross-Platform Transportable Tablespaces. It also uses RMAN transportable backupsets but is a slightly more complicated procedure. I will discuss it in my next posting…
-Harald


Temporary undo

Posted by Harald van Breederode on July 28, 2013

Back in Oracle 8i the Global Temporary Table was introduced to reduce the overhead of storing temporary data inside the database. A Global Temporary Table stores its data in a Temporary Tablespace where the data is retained for the duration of either a single transaction or the lifetime of a session. Performing DML on a Global Temporary Table doesn't generate redo, because the data is stored in a temporary tablespace, but it does generate undo, which in turn generates redo because the undo is stored in a permanent tablespace. Since this redo isn't required for recovery purposes, wouldn't it be nice if it wasn't generated in the first place?

Oracle 12c makes it possible to store the undo generated by performing DML on global temporary tables in a temporary tablespace. This feature is called Temporary Undo and is controlled by the temp_undo_enabled database parameter, which is modifiable at the system or session level. There are two reasons for enabling this new feature:

  1. To reduce the amount of redo caused by DML on global temporary tables.
  2. To allow DML on global temporary tables in a physical standby database.
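Enabling the feature is a one-parameter change. As a sketch, temp_undo_enabled can be switched on for a single session or, assuming suitable privileges, instance-wide:

```sql
SQL> alter session set temp_undo_enabled = true;

Session altered.

SQL> alter system set temp_undo_enabled = true;

System altered.
```

One caveat worth knowing: once a session has generated temporary undo, changing the session-level setting mid-session has no effect until the next session.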

Using temporary undo to reduce the amount of redo

One reason for using temporary undo is to reduce the amount of redo caused by DML on global temporary tables. All we need to do to demonstrate this is insert a bunch of rows in a global temporary table. So, let’s get started by creating a global temporary table in an Oracle 12c database:

SQL> connect sh/sh
Connected.
SQL> create global temporary table gtt(c1 number(8), c2 char(100));

Table created.

By default the temp_undo_enabled parameter is set to false, so inserting rows in a global temporary table will generate redo as shown below:

SQL> connect sh/sh
Connected.
SQL> show parameter temp_undo_enabled

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled                    boolean     FALSE

SQL> select name, value from v$mystat natural join v$statname
  2  where name = 'redo size' or name = 'redo entries';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo entries                                                              2
redo size                                                               720

SQL> insert into gtt select level, level from dual connect by level <= 10000;

10000 rows created.

SQL> select name, value from v$mystat natural join v$statname
  2  where name = 'redo size' or name = 'redo entries';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo entries                                                            287
redo size                                                             63064

Based on this output we can derive that inserting 10,000 rows generated 285 redo entries (287 - 2) with a total of roughly 62 KBytes of redo (63,064 - 720 = 62,344 bytes).

Let’s repeat this test with temp_undo_enabled set to true and see what happens:

SQL> connect sh/sh
Connected.
SQL> alter session set temp_undo_enabled = true;

Session altered.

SQL> select name, value from v$mystat natural join v$statname
  2  where name = 'redo size' or name = 'redo entries';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo entries                                                              2
redo size                                                               720

SQL> insert into gtt select level, level from dual connect by level <= 10000;

10000 rows created.

SQL> select name, value from v$mystat natural join v$statname
  2  where name = 'redo size' or name = 'redo entries';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo entries                                                              3
redo size                                                               992

This time inserting 10,000 rows generated only one redo entry (3 - 2) containing 272 bytes of redo (992 - 720). Hopefully this is what you expected ;-)
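To confirm that the undo really went to the temporary tablespace, the V$TEMPUNDOSTAT view (new in 12c) keeps interval statistics on temporary undo activity. A query sketch; the column selection is illustrative:

```sql
SQL> select begin_time, end_time, txncount, undoblkcnt
  2  from v$tempundostat order by begin_time;
```

A non-zero UNDOBLKCNT for the interval covering our insert shows temporary undo blocks being consumed.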

Temporary undo and a physical standby database

Another reason for using temporary undo is to allow DML on global temporary tables in a physical standby database opened read-only. To demonstrate this we need an Oracle 12c Data Guard environment as shown below:

DGMGRL> show configuration;

Configuration - Animals

  Protection Mode: MaxPerformance
  Databases:
  dogs  - Primary database
    sheep - Physical standby database 

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database sheep 

Database - sheep

  Role:              PHYSICAL STANDBY
  Intended State:    APPLY-ON
  Transport Lag:     0 seconds (computed 1 second ago)
  Apply Lag:         0 seconds (computed 1 second ago)
  Apply Rate:        19.00 KByte/s
  Real Time Query:   ON
  Instance(s):
    v1210

Database Status:
SUCCESS

Temporary undo is automatically enabled when connected to a physical standby database, so there is no need to set the temp_undo_enabled parameter. Let's see what happens if we insert rows into a global temporary table in a physical standby database opened read-only:

SQL> connect sh/sh@sheep
Connected.
SQL> select name, value from v$mystat natural join v$statname
  2  where name = 'redo size' or name = 'redo entries';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo entries                                                              0
redo size                                                                 0

SQL> insert into gtt select level, level from dual connect by level <= 10000;

10000 rows created.

SQL> select name, value from v$mystat natural join v$statname
  2  where name = 'redo size' or name = 'redo entries';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo entries                                                              0
redo size                                                                 0 

Wow, we can insert data into a read-only opened database without generating redo! This used to be impossible in earlier releases of Oracle. The output below was captured on an Oracle 11g Data Guard environment:

SQL> connect sh/sh@sheep11g
Connected.

SQL> insert into gtt select level, level from dual connect by level <= 10000;
insert into gtt select level, level from dual connect by level <= 10000
            *
ERROR at line 1:
ORA-16000: database open for read-only access

In conclusion

Temporary undo stores the undo generated by DML on global temporary tables in a temporary tablespace, thereby eliminating the need to generate redo. This also makes it possible to perform DML on global temporary tables in a physical standby database opened read-only.

Temporary undo is good for performance in general and Data Guard performance in particular: DML on global temporary tables in the physical standby database does not generate redo, and DML on global temporary tables in the primary database generates less redo to be transmitted over the network to the standby database(s).
-Harald
