The Dutch Prutser's Blog

By: Harald van Breederode

  • Disclaimer

    The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

Dropping interval partitions

Posted by Harald van Breederode on January 11, 2010

One of the nice Oracle11g new features is interval partitioning, which is an extension of range partitioning. The advantage of interval partitioning over range partitioning is that new partitions are created automatically whenever rows are inserted that don't belong in an existing partition.

The question, however, is how to get rid of old partitions. This isn't as straightforward as one might expect. I struggled with this myself, and after receiving questions from friends I decided that it was time to do some research; that is what this posting is all about.

Let’s get started by creating a one month interval partitioned table with three predefined range partitions and continue by looking at some metadata in the data dictionary.

SQL> create table ipart
  2  (time_id date
  3  ,cust_id number(4)
  4  ,amount_sold number(5))
  5  partition by range(time_id)
  6  interval(NUMTOYMINTERVAL(1,'month'))
  7  (partition m01cy2010 values less than (to_date('01-feb-2010','DD-MON-YYYY'))
  8  ,partition m02cy2010 values less than (to_date('01-mar-2010','DD-MON-YYYY'))
  9  ,partition m03cy2010 values less than (to_date('01-apr-2010','DD-MON-YYYY')));

Table created.

SQL> select partition_name, interval, high_value
  2  from dba_tab_partitions where table_name = 'IPART'
  3  order by partition_position;

PARTITION_NAME INT HIGH_VALUE
-------------- --- ------------------------------------------------------------
M01CY2010      NO  TO_DATE(' 2010-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
M02CY2010      NO  TO_DATE(' 2010-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
M03CY2010      NO  TO_DATE(' 2010-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N

Note: The high_value column is truncated for formatting reasons.

The above output shows the three partitions we've just created, and the interval=no column indicates that they are range partitions rather than the interval partitions one might expect. It is important to remember that interval partitioning is an extension of range partitioning. The next step is to insert a few rows which don't belong in the existing partitions, to trigger the creation of new partitions.

SQL> insert into ipart values('01-apr-2010',1,2);

1 row created.

SQL> insert into ipart values('11-jun-2010',1,2);

1 row created.

SQL> select partition_name, interval, high_value
  2  from dba_tab_partitions where table_name = 'IPART'
  3  order by partition_position;

PARTITION_NAME INT HIGH_VALUE
-------------- --- ------------------------------------------------------------
M01CY2010      NO  TO_DATE(' 2010-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
M02CY2010      NO  TO_DATE(' 2010-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
M03CY2010      NO  TO_DATE(' 2010-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
SYS_P193       YES TO_DATE(' 2010-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
SYS_P194       YES TO_DATE(' 2010-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N 

The above output shows that two new partitions were created and that they are interval partitions, as indicated by the interval=yes column. Because these partitions were created automatically, Oracle assigned system-generated names to them.
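
Because these system-generated names are hard to predict, Oracle11g also allows such a partition to be referenced by a value it contains, using the extended “partition for ()” syntax. A minimal sketch (output omitted) that selects the rows from whichever partition holds the 11-jun-2010 row:

select *
from   ipart partition for (to_date('11-jun-2010','DD-MON-YYYY'));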

The boundary between the highest range partition and the lowest interval partition is known as the transition point. Partitions above the transition point are created automatically by Oracle whenever needed; we cannot add them manually using the “alter table add partition” command. However, in order to support partition exchange, a workaround is available: locking a not-yet-existing partition, identified with the new “partition for ()” clause, forces its creation. To add the partition for July 2010 we use:

SQL> lock table ipart partition
  2  for (to_date('01-jul-2010','DD-MON-YYYy')) in share mode;

Table(s) Locked.

By querying the data dictionary we can verify that the desired partition was indeed created by the above SQL statement:

SQL> select partition_name, interval, high_value
  2  from dba_tab_partitions where table_name = 'IPART'
  3  order by partition_position;

PARTITION_NAME INT HIGH_VALUE
-------------- --- ------------------------------------------------------------
M01CY2010      NO  TO_DATE(' 2010-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
M02CY2010      NO  TO_DATE(' 2010-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
M03CY2010      NO  TO_DATE(' 2010-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
SYS_P193       YES TO_DATE(' 2010-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
SYS_P194       YES TO_DATE(' 2010-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
SYS_P195       YES TO_DATE(' 2010-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
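
As an aside, partitions above the transition point cannot be created with a conventional “alter table add partition” statement; an attempt along the lines of the sketch below is expected to be rejected with ORA-14760 (ADD PARTITION is not permitted on Interval partitioned objects). The partition name and boundary are illustrative only:

alter table ipart add partition m09cy2010
  values less than (to_date('01-oct-2010','DD-MON-YYYY'));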

Now that we have a couple of range and interval partitions, we can start dropping them:


SQL> alter table ipart drop partition m01cy2010;

Table altered.

SQL> alter table ipart drop partition
  2  for (to_date('01-feb-2010','DD-MON-YYYy'));

Table altered.

SQL> alter table ipart drop partition
  2  for (to_date('01-mar-2010','DD-MON-YYYy'));
alter table ipart drop partition
*
ERROR at line 1:
ORA-14758: Last partition in the range section cannot be dropped

The first two drop partition statements ran successfully, but the third one returned an ORA-14758 error indicating that we cannot drop the last partition in the range section of an interval partitioned table. The partition in question is the last remaining one below the transition point, as shown below:

SQL> select partition_name, interval, high_value
  2  from dba_tab_partitions where table_name = 'IPART'
  3  order by partition_position;

PARTITION_NAME INT HIGH_VALUE
-------------- --- ------------------------------------------------------------
M03CY2010      NO  TO_DATE(' 2010-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
SYS_P193       YES TO_DATE(' 2010-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
SYS_P194       YES TO_DATE(' 2010-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
SYS_P195       YES TO_DATE(' 2010-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N

The only way to get rid of this partition is to move the transition point up. This can be achieved by merging the two partitions around the transition point into a new partition. However, merging two partitions copies all rows from both partitions into a new partition, possibly generating a large amount of redo. Since the intention is to drop the oldest partition anyway, we can safely truncate it before merging, thus eliminating half the number of rows that have to be copied, assuming both partitions hold an equal number of rows to begin with.

SQL> alter table ipart truncate partition
  2  for (to_date('01-mar-2010','DD-MON-YYYy'));

Table truncated.

SQL> alter table ipart merge partitions
  2  for (to_date('01-mar-2010','DD-MON-YYYy')),
  3  for (to_date('01-apr-2010','DD-MON-YYYy'));

Table altered.

SQL> select partition_name, interval, high_value
  2  from dba_tab_partitions where table_name = 'IPART'
  3  order by partition_position;

PARTITION_NAME INT HIGH_VALUE
-------------- --- ------------------------------------------------------------
SYS_P196       NO  TO_DATE(' 2010-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
SYS_P194       YES TO_DATE(' 2010-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
SYS_P195       YES TO_DATE(' 2010-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N

The output above shows that the transition point has moved up as a result of merging the two partitions, one of which was truncated first. But despite the fact that one of the partitions being merged is empty, Oracle still copies all remaining rows into a new partition! Thus this way of dropping partitions from an interval partitioned table is quite expensive. If possible, one should truncate both partitions before merging them to prevent any rows from being copied.
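
For example, had the rows in both partitions around the transition point been disposable, a sketch of that approach, using the same “partition for” dates as above (output omitted), would be:

alter table ipart truncate partition for (to_date('01-mar-2010','DD-MON-YYYY'));
alter table ipart truncate partition for (to_date('01-apr-2010','DD-MON-YYYY'));
alter table ipart merge partitions
  for (to_date('01-mar-2010','DD-MON-YYYY')),
  for (to_date('01-apr-2010','DD-MON-YYYY'));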

Another way to move the transition point up is to temporarily convert the interval partitioned table into a regular range partitioned table. This converts all existing interval partitions into range partitions, as demonstrated below:

SQL> alter table ipart set interval ();

Table altered.

SQL> alter table ipart set interval (NUMTOYMINTERVAL(1,'month'));

Table altered.

SQL> select partition_name, interval, high_value
  2  from dba_tab_partitions where table_name = 'IPART'
  3  order by partition_position;

PARTITION_NAME INT HIGH_VALUE
-------------- --- ------------------------------------------------------------
SYS_P196       NO  TO_DATE(' 2010-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
SYS_P194       NO  TO_DATE(' 2010-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N
SYS_P195       NO  TO_DATE(' 2010-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'N

The query output shows that all partitions are now range partitions because the table was temporarily converted from interval partitioning to range partitioning. This is an effective way to move the transition point up to the maximum, but it requires a downtime slot: during the short period in which the table is range partitioned, no new partition will be created when someone inserts a row that doesn't belong in an existing partition; such an insert will simply fail.
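
For instance, while the interval clause is disabled, an insert beyond the highest partition boundary is expected to fail with ORA-14400 (inserted partition key does not map to any partition) instead of triggering the creation of a new partition. A sketch, using an illustrative date above the current high value of 01-aug-2010:

-- only while the table is (temporarily) range partitioned:
insert into ipart values (to_date('15-sep-2010','DD-MON-YYYY'), 1, 2);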

In conclusion, dropping partitions from an interval partitioned table isn't as straightforward as one might expect and may require the transition point to be moved up. Moving the transition point up can be achieved either by merging partitions or by temporarily converting the interval partitioned table into a range partitioned table.
-Harald

Posted in Oracle | 8 Comments »

Book review: Oracle Data Guard 11g Handbook

Posted by Harald van Breederode on November 27, 2009

I just finished reading the Oracle Data Guard 11g Handbook, written by Larry Carpenter, Joseph Meeks, Charles Kim, Bill Burke, Sonya Carothers, Joydip Kundu, Michael Smith and Nitin Vengurlekar. In 544 pages you will learn everything you need to know about Data Guard.

This well-written book begins with a lengthy introduction to the Data Guard architecture, followed by a guided tour through the initial implementation of a physical standby database. This forms the basis for the remaining 12 chapters, which dive into redo processing, the logical standby database, the Data Guard broker, using Grid Control, switchovers and failovers, Active Data Guard, the Snapshot Standby, client failover, deployment architectures, backup and recovery, monitoring, and troubleshooting.

This book contains a valuable section on optimizing redo transport and clearly explains how to configure both Oracle Net and the underlying operating system to optimize Data Guard network traffic. The coverage of the Data Guard broker goes beyond everything I have read before, clearly explaining how the broker works and which background processes, besides the familiar DMON process, actually make up the broker.

Despite the fact that I teach Data Guard on a regular basis, I learned a lot by reading this absolutely fabulous book! This is a MUST READ for everyone involved in architecting, setting up and maintaining a Data Guard environment. I rate this book with 6 out of 5 stars. Well done Larry cs ;-)
-Harald

Posted in Oracle | 8 Comments »

Explaining the number of Consistent Gets

Posted by Harald van Breederode on November 6, 2009

Last week I received an email from a friend, who wishes to remain anonymous, with the question why Oracle needed 8 consistent gets to perform a full table scan on a table where all the rows are stored in just one data block. There are several possibilities that can cause this and that is what this posting is all about: Explaining the number of Consistent Gets.

Verifying the claim

Let me first demonstrate that the claim made by my friend is indeed true. I start by setting the STATISTICS_LEVEL parameter to ALL to enable the collection of “Plan Execution Statistics” needed by DBMS_XPLAN to be able to report the number of consistent gets. Next I execute a query to fetch the data followed by a call to DBMS_XPLAN to get and format the execution plan.

SQL> alter session set statistics_level = all;

Session altered.

SQL> select * from foo;

        C1
----------
         1
....
....
        64

64 rows selected.

SQL> select * from table(dbms_xplan.display_cursor( -
> format=>'basic -rows iostats last -rows'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select * from foo

Plan hash value: 1245013993

---------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |     64 |00:00:00.01 |       8 |
|   1 |  TABLE ACCESS FULL| FOO  |      1 |     64 |00:00:00.01 |       8 |
---------------------------------------------------------------------------

We can clearly see that there are 64 rows in table FOO and that Oracle performed 8 consistent gets to fulfill my request for data. The question is why 8, and not something less, assuming all 64 rows are indeed stored in the same data block. Making assumptions is risky, so let's verify the one data block assumption:

SQL> select blocks from dba_tables where table_name = 'FOO';

    BLOCKS
----------
         1

SQL> select count(distinct dbms_rowid.rowid_block_number(rowid)) from foo;

COUNT(DISTINCTDBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID))
---------------------------------------------------
                                                  1

The above shows that, according to the data dictionary, all data is indeed stored in one data block, but this information could be stale; hence the verification on the actual data itself. We now know for sure that all rows are indeed stored in one data block. But there is more to a table than data blocks, and maybe that is causing the somewhat high number of consistent gets we saw earlier. I recall from memory that Oracle needs 3 consistent gets to perform a full table scan on an empty table, but before jumping to conclusions I had better verify my memory.

SQL> create table bar as select * from foo where 1=0;

Table created.

SQL> select * from bar;

no rows selected

SQL> select * from table(dbms_xplan.display_cursor( -
> format=>'basic -rows iostats last -rows'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select * from bar

Plan hash value: 4224476444

---------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |      0 |00:00:00.01 |       3 |
|   1 |  TABLE ACCESS FULL| BAR  |      1 |      0 |00:00:00.01 |       3 |
---------------------------------------------------------------------------

The above confirms that Oracle indeed used three consistent gets to scan the empty table. This leaves us with 5 consistent gets to fetch the 64 rows. The question remains: why 5 gets?

The arraysize

When a database client executes a query, it fetches rows until there is nothing left to fetch. This fetching is performed in batches, and the number of rows fetched in one batch is known as the arraysize or fetchsize, depending on the programming environment. The arraysize can be displayed in SQL*Plus using the ‘show arraysize’ command as shown below:

SQL> show array
arraysize 15

By default SQL*Plus uses an arraysize of 15, which means that each fetch requests a batch of 15 rows. Thus fetching 64 rows requires 5 batches. This is the key to why Oracle needed 8 consistent gets to scan the 64-row table: 5 were used to fetch the actual rows and the other 3 were accounted for above.
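
Put differently, the total in this example can be estimated as 3 + CEIL(64 / arraysize): 3 + CEIL(64/15) = 3 + 5 = 8 consistent gets, 3 + CEIL(64/35) = 3 + 2 = 5, and 3 + CEIL(64/70) = 3 + 1 = 4, which matches the numbers shown here and in the examples that follow.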

We can easily show the effect of the arraysize by raising it from 15 to 35 using the ‘set arraysize’ command. This reduces the number of batches from 5 to 2, resulting in a total of 5 consistent gets, as demonstrated below:

SQL> set array 35
SQL> select * from foo;

        C1
----------
         1
....
....
        64

64 rows selected.

SQL> select * from table(dbms_xplan.display_cursor( -
> format=>'basic -rows iostats last -rows'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select * from foo

Plan hash value: 1245013993

---------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |     64 |00:00:00.01 |       5 |
|   1 |  TABLE ACCESS FULL| FOO  |      1 |     64 |00:00:00.01 |       5 |
---------------------------------------------------------------------------

If we double the arraysize from 35 to 70, all rows will be fetched in just one batch, resulting in the lowest possible number of consistent gets (in this example).

SQL> set array 70
SQL> select * from foo;

        C1
----------
         1
....
....
        64

64 rows selected.

SQL> select * from table(dbms_xplan.display_cursor( -
> format=>'basic -rows iostats last -rows'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select * from foo

Plan hash value: 1245013993

---------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |     64 |00:00:00.01 |       4 |
|   1 |  TABLE ACCESS FULL| FOO  |      1 |     64 |00:00:00.01 |       4 |
---------------------------------------------------------------------------

The arraysize is quite important when it comes to performance tuning and a good discussion about it can be found in Making Friends written by fellow OakTable member Cary Millsap.

The above answers my friend's question. Before wrapping up, there are a few things to watch out for when it comes to explaining the number of consistent gets.

The aggregate trap

The first thing to be aware of is what I call the “aggregate trap”. The arraysize controls the batch size between the Oracle database server and its client, and does not affect SQL operations which are running completely inside the server. The classic example is an SQL aggregation function as shown below:

SQL> select sum(c1) from foo;

   SUM(C1)
----------
      2080

SQL> select * from table(dbms_xplan.display_cursor( -
> format=>'basic -rows iostats last -rows'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select sum(c1) from foo

Plan hash value: 1342139204

----------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |      1 |00:00:00.01 |       3 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |00:00:00.01 |       3 |
|   2 |   TABLE ACCESS FULL| FOO  |      1 |     64 |00:00:00.01 |       3 |
----------------------------------------------------------------------------

Although Oracle performed a full table scan on the 64-row table, it needed only 3 consistent gets. Because the SUM function ran entirely inside the Oracle server process and only the single result row was sent to the client, the arraysize has no impact.

The Read Consistency Trap

Another thing to watch out for is known as the Read Consistency Trap, which causes the number of consistent gets to go up whenever the Oracle database server has to apply undo records to a data block to make it read consistent. Each undo record that gets applied increases the number of consistent gets by one as illustrated below:

SQL> show array
arraysize 70
SQL> select * from foo;

        C1
----------
         1
....
....
        64

64 rows selected.

SQL> select * from table(dbms_xplan.display_cursor( -
> format=>'basic -rows iostats last -rows'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select * from foo

Plan hash value: 1245013993

---------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |     64 |00:00:00.01 |      69 |
|   1 |  TABLE ACCESS FULL| FOO  |      1 |     64 |00:00:00.01 |      69 |
---------------------------------------------------------------------------

Suddenly there are 69 consistent gets instead of 5, as shown previously, using an arraysize of 70. The extra 64 consistent gets are caused by applying 64 undo records to make the data block read consistent. The 64 undo records were created in another session by updating each row in a PL/SQL loop (one row at a time) without issuing a COMMIT.
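
For reference, the uncommitted changes in the other session could be produced with a loop along these lines (a sketch, not necessarily the exact code that was used):

begin
  for i in 1 .. 64 loop
    update foo set c1 = c1 where c1 = i; -- one row at a time, one undo record per row
  end loop;
  -- deliberately no COMMIT, so the querying session must apply these
  -- undo records to obtain a read-consistent image of the data block
end;
/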

A little quiz to wrap-up

The final example shows yet another number of consistent gets for retrieving the same 64 rows using an arraysize of 70:

SQL> show array
arraysize 70
SQL> select * from foo;

        C1
----------
         1
....
....
        64

64 rows selected.

SQL> select * from table(dbms_xplan.display_cursor( -
> format=>'basic -rows iostats last -rows'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select * from foo

Plan hash value: 1245013993

---------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |     64 |00:00:00.01 |      24 |
|   1 |  TABLE ACCESS FULL| FOO  |      1 |     64 |00:00:00.01 |      24 |
---------------------------------------------------------------------------

If you can explain why there are 24 consistent gets instead of just 4 please leave a comment below and I will let you know if you are correct ;-)
-Harald

Posted in Oracle | 17 Comments »

 