The Dutch Prutser's Blog

By: Harald van Breederode

  • Disclaimer

    The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

A few words on Index Compression

Posted by Harald van Breederode on April 5, 2010

For the last couple of months I have been quite busy delivering Oracle11g New Features courses, and one of the topics covered is table compression. To introduce this topic I ask my students in which version the first compression feature was introduced. Strangely enough, almost no one seems to know the correct answer, which is Oracle8i, where index compression was introduced. Because table compression is very much like index compression, I explain to my students what index compression is all about, and quite often they are shocked to discover that such an important feature is missing from their skill set. So it is about time that I post something about index compression.

Index compression works by eliminating duplicated column values in the index leaf blocks, thereby allowing more index entries to be stored per leaf block. To demonstrate this feature I will start by creating a non-compressed index on a few columns of the CUSTOMERS table in the SH sample schema:

SQL> connect sh/sh
Connected.
SQL> create index i1 on customers(cust_first_name,cust_last_name);

Index created.

SQL> select leaf_blocks from user_indexes where index_name = 'I1';

LEAF_BLOCKS
-----------
        193
 

As shown above, the newly created index consists of a total of 193 leaf blocks. To get an estimate of the percentage of leaf blocks that can be saved by compressing the index, as well as the optimal compression factor, we can use the information Oracle stores in the INDEX_STATS view after validating the index, as shown below:

SQL> validate index i1;

Index analyzed.

SQL> select opt_cmpr_count,opt_cmpr_pctsave from index_stats;

OPT_CMPR_COUNT OPT_CMPR_PCTSAVE
-------------- ----------------
             2               47

The above output reveals that the optimal compression factor for this index is 2 and that compressing it will most likely reduce the number of leaf blocks by 47%. So, let’s rebuild the index with compression enabled and verify the estimate:

SQL> alter index i1 rebuild compress 2;

Index altered.

SQL> select leaf_blocks from user_indexes where index_name = 'I1';

LEAF_BLOCKS
-----------
        101

The above shows that the number of leaf blocks in the index is now 101. We started with 193 leaf blocks and were expecting a 47% reduction in the number of leaf blocks. 47% of 193 is 91, and 193 minus 91 equals 102. So Oracle estimated the reduction pretty well, I would say ;-) Executing the validate index command again confirms that 2 is indeed the optimal compression factor and that no further leaf blocks can be eliminated because the index is already compressed, as shown below:

SQL> validate index i1;

Index analyzed.

SQL> select OPT_CMPR_COUNT,OPT_CMPR_PCTSAVE from index_stats;

OPT_CMPR_COUNT OPT_CMPR_PCTSAVE
-------------- ----------------
             2                0
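
As a quick cross-check, the data dictionary also tells us whether an index is compressed and with which prefix length. Below is a minimal sketch, assuming the I1 index from above; the COMPRESSION and PREFIX_LENGTH columns of USER_INDEXES should report ENABLED and 2 for the index we just rebuilt. Note that the index could also have been created compressed right away with CREATE INDEX … COMPRESS 2, instead of rebuilding it afterwards.

SQL> select compression, prefix_length from user_indexes where index_name = 'I1';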

Hopefully this little post is enough to trigger you to dive into the world of compressed indexes! For a much more thorough discussion of index compression, see Richard Foote’s fabulous Oracle blog (part 1, part 2, part 3 and part 4).
-Harald

Posted in Oracle | 10 Comments »

Why does the size of my ORACLE_HOME increase?

Posted by Harald van Breederode on February 21, 2010

Recently I discovered that the size of an ORACLE_HOME for a given release varies from machine to machine, although the machines are of the same architecture and run the same operating system. A small difference in size can be explained by the fact that one ORACLE_HOME was re-linked in the past while the other wasn’t, but this time the difference was more than 1 GByte. Digging a bit further revealed a sub-directory called .patch_storage which, as the name implies, is used to store patch-related data.

I found NOTE:550522.1 on MyOracleSupport, which describes what this directory is used for and why it is suddenly so big. It turns out that recent versions of OPatch, installed via patch 6880880, not only store the patched object files (several kilobytes in size) in $ORACLE_HOME/.patch_storage, but also the libraries (several tens of megabytes in size) to which the affected object files belong. This is to make rolling back installed patches as quick as possible, so in effect it is a space-for-time trade-off. Because OPatch doesn’t clean up the .patch_storage directory when files are no longer needed, for instance after a more recent patch gets installed, the size of an ORACLE_HOME increases without notice.

The good news is that recent versions of OPatch have an option to clean up the .patch_storage directory manually if one doesn’t like this space-for-time optimization.

Let’s have a look at the current disk usage before cleaning up my .patch_storage directory.

$ cd $ORACLE_HOME
$ du -hs .patch_storage
1.2G    .patch_storage

The above shows that I currently have 1.2 GByte in the .patch_storage directory. This is an Oracle 11g Release 1 (11.1.0.7.0) ORACLE_HOME with the latest PSU and a few interim patches applied.
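
Before cleaning up it can be useful to see which patches are actually responsible for the space usage. Each sub-directory of .patch_storage is named after a patch ID with a timestamp, so a quick per-directory summary, sorted by size, points out the biggest consumers. A minimal sketch, assuming the same ORACLE_HOME as above:

$ du -sk $ORACLE_HOME/.patch_storage/* | sort -n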

The opatch util cleanup command is used to clean up the .patch_storage directory, but before doing so let’s have a look at the OPatch help output for this command.

$ opatch util cleanup -help
Invoking OPatch 11.2.0.1.1

Oracle Interim Patch Installer version 11.2.0.1.1
Copyright (c) 2009, Oracle Corporation.  All rights reserved.

UTIL session

DESCRIPTION
     This utility cleans up 'restore.sh,make.txt' files and 'rac,scratch,backup' 
     directories of the.patch_storage directory of Oracle Home.If -ps option is used, 
     then, it cleans the above specified areas only for that patch, else for all 
     patches under ORACLE_HOME/.patch_storage. You will be still able to
     rollback patches after this cleanup.

SYNTAX
opatch util cleanup  [-invPtrLoc <Path to oraInst.loc> ]
                     [-jre <LOC> ] [-oh <ORACLE_HOME> ]
                     [-silent] [-report]
                     [-ps <patch ID with time stamp>, this will
                       be located under ORACLE_HOME/.patch_storage/]

OPTIONS   
       -invPtrLoc
              Used to locate the oraInst.loc file. Needed when the
              installation used the -invPtrLoc flag. This should be
              the path to the oraInst.loc file.

       -jre
              This option tells OPatch to use JRE (java) from the
              specified location instead of the default location
              under Oracle Home. Both -jdk and -jre options cannot
              be specified together. OPatch will display error in
              that case.
            
       -oh
              The oracle home to work on. This takes precedence over
              the environment variable ORACLE_HOME.
         
       -ps
              This option is used to specify the Patch ID with timestamp.
              This Patch ID with timestamp should be the same as in
              .patch_storage directory.

              A directory by this name will be present under
              ORACLE_HOME/.patch_storage. If this directory is specified
              and is valid, then the contents specified in the description
              will be cleaned up only for this patch. Otherwise, all patch
              related directories will be acted upon by this utility.

      -silent 
              In silent mode, the cleanup always takes place.

      -report 
              Prints the operations without actually executing them.


OPatch succeeded.

With this knowledge we can proceed with the actual clean up as shown below:

$ opatch util cleanup
Invoking OPatch 11.2.0.1.1

Oracle Interim Patch Installer version 11.2.0.1.1
Copyright (c) 2009, Oracle Corporation.  All rights reserved.

UTIL session

Oracle Home       : /u01/app/oracle/product/11.1.0
Central Inventory : /u01/app/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 11.2.0.1.1
OUI version       : 11.1.0.7.0
OUI location      : /u01/app/oracle/product/11.1.0/oui
Log file location : /u01/app/oracle/product/11.1.0/cfgtoollogs/opatch/opatch2010-02-20_15-27-15PM.log

Patch history file: /u01/app/oracle/product/11.1.0/cfgtoollogs/opatch/opatch_history.txt

Invoking utility "cleanup"
OPatch will clean up 'restore.sh,make.txt' files and 'rac,scratch,backup' directories.
You will be still able to rollback patches after this cleanup.
Do you want to proceed? [y|n]
y
User Responded with: Y
Size of directory "/u01/app/oracle/product/11.1.0/.patch_storage" before cleanup is 1256452888 bytes.
Size of directory "/u01/app/oracle/product/11.1.0/.patch_storage" after cleanup is 142265128 bytes.

UtilSession: Backup area for restore has been cleaned up. For a complete list of files/directories
deleted, Please refer log file.

OPatch succeeded.

OPatch started by displaying the amount of disk usage in the .patch_storage directory before asking for permission to perform the actual clean up. Once the clean up is finished OPatch reports the new amount of disk usage. We can easily verify that OPatch indeed freed up a fair amount of disk space by running the du -hs command.

$ du -hs .patch_storage
144M    .patch_storage
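
For scripted maintenance the confirmation prompt can be avoided. Based on the help output shown earlier, something along the following lines should work; -report only prints what would be done, while -silent performs the cleanup without asking for confirmation:

$ opatch util cleanup -report     # only report what would be cleaned up
$ opatch util cleanup -silent     # clean up without the confirmation prompt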
 

In short: running opatch util cleanup after installing patches prevents your ORACLE_HOME from eating up too much disk space, without losing the ability to roll back those patches.
-Harald

Posted in Oracle | 6 Comments »

How to setup a private DNS for your virtual cluster

Posted by Harald van Breederode on January 26, 2010

One of the challenges I faced recently was building a virtual cluster based on Oracle 11g Release 2 on top of Oracle Enterprise Linux (OEL) running inside VMware Server. Although I have an existing virtual Oracle 11g Release 1 cluster, I decided to build a new one in order to be able to teach both versions of the Oracle University RAC courses. Also, my existing virtual cluster runs OEL4 and for 11gR2 I needed OEL5. I’ll save the story about building the actual cluster for another posting, because I need to perform additional research first, but I’d like to share my solution for getting around the DNS requirements without making changes to the Oracle corporate DNS server.

Background information

Oracle 11gR2 Clusterware has many new features and two of them require adding resource records to DNS. The features in question are:

  • Single Client Access Name (SCAN)
  • Grid Naming Service (GNS)

Adding resource records to DNS is something that is handled by network administrators in most organizations. However, I consider this a DBA2.0 skill and I think you should gain knowledge in this area if you don’t already have it. A good start would be reading DNS and BIND, published by O’Reilly.

Before continuing I will introduce some basic DNS vocabulary. The Domain Name System (DNS) is a client-server architecture whose purpose is to translate names into IP addresses and vice versa. The server is called the DNS server and the client is called the resolver. The most well-known implementation is the Berkeley Internet Name Domain (BIND) software, in which the nameserver process is called “named”. As the name suggests, DNS is based on domains, which are divided into administrative regions called zones. A nameserver can be either the master or a slave for a zone. The data for a zone, called resource records, is stored in so-called zone files, which are maintained on the master nameserver for that particular zone. Slave servers retrieve their zone file from the master server in a process known as a zone transfer. The BIND nameserver is configured by the /etc/named.conf file, and the zone files are usually stored in /var/named.
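
To make this vocabulary a bit more concrete, here is a purely illustrative sketch of two resource records as they might appear in zone files, using a hostname and IP address from the cluster described below: an A record translating a name into an IP address, and the matching PTR record for the reverse translation.

; A record in the example.com zone file
el5n1      IN  A    192.168.40.141
; PTR record in the 40.168.192.in-addr.arpa zone file
141        IN  PTR  el5n1.example.com.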

Making changes to the Oracle corporate DNS server for my virtual cluster is neither likely to happen nor advisable, and therefore I decided to run a DNS server inside my virtual cluster. This sounds easier than it actually is. The problem is that nameservers are linked to form a tree: the nameserver above me links to my nameserver, and maybe mine links to a nameserver underneath me. This linking between nameservers is called delegation. Since I am trying to avoid adding resource records to the Oracle corporate nameservers, there will be no delegation to my nameserver from the Oracle nameservers. This isn’t a problem as long as clients who wish to connect to my cluster talk to my nameserver and not to the Oracle nameservers. Another problem is that, because of firewalls, my nameserver cannot talk to nameservers outside the corporate Oracle network, which means that my nameserver can only resolve names within my own zones. Therefore my nameserver should forward any request that it cannot resolve by itself to another nameserver that can.
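
In BIND terms this forwarding setup ends up in the options block of named.conf. Below is a minimal sketch of what the relevant part might look like, using the forwarder address that also appears in the h2n configuration further down; the file I actually used was generated by h2n.

options {
        directory "/var/named";
        forwarders { 192.168.40.2; };
        forward only;
};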

The implementation

The DNS implementation that I created for my virtual cluster has the following features:

  • I use example.com as my domain.
  • The master DNS server for example.com will be on my first RAC node.
  • A slave DNS server for example.com will be on my second RAC node.
  • All DNS lookups that fall outside example.com will be forwarded.
  • Each node uses its local nameserver as its primary.
  • Each node uses the other node’s nameserver as its secondary.
  • DNS security will not be implemented.

The first step in implementing the above setup is to install the bind rpm, using either the rpm or the yum command.
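
On OEL5 this typically boils down to something like the following; the exact package names may vary per release, and bind-utils merely provides handy client tools such as host and dig:

$ sudo yum install bind bind-utils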

The next step is to configure the DNS server. Without the right tool this can be quite challenging. I found h2n to be the best tool to handle this job for me. Basically, h2n generates all DNS configuration and zone files based on /etc/hosts or any other file with a similar structure. Mine is called hosts.dns and is shown below:

192.168.40.141  el5n1.example.com
192.168.40.142  el5n2.example.com
#
192.168.40.101  el5n1-vip.example.com
192.168.40.102  el5n2-vip.example.com
#
192.168.40.110  el5n-cluster-scan.example.com
192.168.40.111  el5n-cluster-scan.example.com
192.168.40.112  el5n-cluster-scan.example.com
#
192.168.180.135 el5n1-priv.example.com
192.168.180.136 el5n2-priv.example.com

My RAC nodes are called el5n1 and el5n2, and the IP addresses for the public network interface are shown on lines 1 and 2. Their Virtual IP addresses are on lines 4 and 5. Lines 7, 8 and 9 have the SCAN IP addresses. Finally lines 11 and 12 show the private interface IP addresses.

I created a ‘dns’ sub-directory in my home directory and stored the above file there to act as my DNS setup area.

The next step is to let h2n generate the DNS configuration and zone files for me based on the above hosts file. I stored the required h2n command line arguments in a .conf file called h2n.conf, to prevent myself from entering them over and over again.

-H hosts.dns
-d example.com
-n 192.168.40
-n 192.168.180
-u root@example.com
-W /var/named
-M
-y
+O forwarders { 192.168.40.2; };
+O forward only;
-s el5n1
-s el5n2
-z 192.168.40.141

Line 1 tells h2n which hosts file to use, line 2 specifies the domain name, and lines 3 and 4 specify the subnets in use. Line 5 specifies who to contact if something is wrong with my DNS setup. Line 6 specifies the working directory for the nameserver. Lines 7 and 8 suppress the generation of MX records and select a particular serial number format. Lines 9 and 10 are options that h2n places in the named.conf file. Lines 11, 12 and 13 tell h2n which machines act as nameservers and that I want h2n to generate config files for a slave nameserver that transfers its zones from the master DNS server at the specified IP address.

Now that I have the files to drive h2n, the actual generation of the DNS configuration and zone files is straightforward.

$ h2n
Initializing new database files...
Reading host file `hosts.dns'...
Writing database files...
Generating boot and conf files...
Checking NS, MX, and other RRs for various improprieties...
Done.

There is one file h2n cannot generate and that is db.cache, which I downloaded and stored in my DNS setup directory. The generated named.conf file should be copied to /etc and the db.* files to /var/named. The setup is now complete and the next step is to start the nameserver and to check its status.

$ sudo service named start
Starting named: [  OK  ]

$ sudo service named status
number of zones: 4
debug level: 0
xfers running: 0
xfers deferred: 0
soa queries in progress: 0
query logging is OFF
recursive clients: 0/1000
tcp clients: 0/100
server is up and running
named (pid  8656) is running...

The above output shows that my nameserver started successfully and is running normally. The next step is to make the nameserver start automatically each time the operating system boots, using the chkconfig command as shown below:

$ sudo chkconfig named on

$ sudo chkconfig --list named
named           0:off   1:off   2:on    3:on    4:on    5:on    6:off

The DNS server configuration is now complete, but before I can test my nameserver I first need to configure the resolver. This step involves editing /etc/resolv.conf and adding the IP addresses of my two nameservers as nameserver directives. To save typing fully qualified host names I use the search directive to specify my local domain name.

search example.com
nameserver 192.168.40.141
nameserver 192.168.40.142

The final step is to test my master nameserver by resolving several names and IP addresses inside my own domain and by resolving something outside my own domain to test the forwarding part of my setup.

$ host el5n1
el5n1.example.com has address 192.168.40.141

$ host 192.168.40.141
141.40.168.192.in-addr.arpa domain name pointer el5n1.example.com.

$ host el5n-cluster-scan
el5n-cluster-scan.example.com has address 192.168.40.110
el5n-cluster-scan.example.com has address 192.168.40.111
el5n-cluster-scan.example.com has address 192.168.40.112

$ host prutser.wordpress.com
prutser.wordpress.com is an alias for lb.wordpress.com.
lb.wordpress.com has address 72.233.2.58
lb.wordpress.com has address 72.233.2.59
lb.wordpress.com has address 74.200.243.251
lb.wordpress.com has address 74.200.243.253
lb.wordpress.com has address 76.74.254.123
lb.wordpress.com has address 76.74.255.123

According to the above output my master DNS server is working fine and I can now proceed to configure the slave DNS server on my second RAC node. All configuration and zone files for the slave nameserver were already generated by h2n, and all I needed to do was copy conf.sec.save to /etc/named.conf, and db.cache and db.127.0.0 to /var/named, on my second RAC node. Before starting the slave nameserver I needed to add ENABLE_ZONE_WRITE=yes to the /etc/sysconfig/named file to allow it to save the transferred zone files to disk. The steps to start the slave nameserver manually and to automate its startup after a reboot are the same as for the master nameserver; see the sketch below. The /etc/resolv.conf is of course different because it needs the IP address of the master nameserver.
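
Put together, the slave setup roughly comes down to the following commands on the second node; this is just a sketch of the steps described above and assumes the h2n output files have already been copied to the current directory:

$ sudo cp conf.sec.save /etc/named.conf
$ sudo cp db.cache db.127.0.0 /var/named
$ echo 'ENABLE_ZONE_WRITE=yes' | sudo tee -a /etc/sysconfig/named
$ sudo service named start
$ sudo chkconfig named on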

Wrapping up

To add a new host to my DNS configuration all I need to do is add the hostname and IP address to the hosts.dns file and let h2n generate the zone files for me. Thereafter I need to copy the new zone files to /var/named on my first RAC node and tell named to re-read the zone files from disk using the rndc command. To ease this process I wrote the following Makefile, which does all the work for me after I edit the hosts.dns file.

named.conf:     hosts.dns h2n.conf
        h2n
        sudo cp named.conf /etc
        sudo cp db.* /var/named
        sudo rndc reload

The DNS configuration described above allows me to run my own nameservers inside my virtual RAC cluster, giving me the possibility to explore Oracle 11gR2 Clusterware. As a bonus, I gained a little bit of experience in setting up and maintaining DNS.
-Harald

Posted in Linux, Oracle | 17 Comments »

 