The Dutch Prutser's Blog

By: Harald van Breederode

  • Disclaimer

    The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

Clustered ASM and RAC password file maintenance

Posted by Harald van Breederode on January 14, 2011

A recurring question during the Grid Infrastructure and RAC courses I teach is “How do you manage Oracle password files in a clustered environment?”. The answer isn’t as straightforward as you might think because there are significant differences between ASM and RAC (i.e. clustered database) environments. Additionally, recent releases changed the password file maintenance procedures. The goal of this posting is to shed some light on these matters.

Password file maintenance prior to 11gR2

Before discussing the recent changes, let us first see how Oracle password file maintenance is performed in a clustered environment. An Oracle password file is created by means of the orapwd utility, is named orapw$ORACLE_SID and is stored in the $ORACLE_HOME/dbs directory. Because an $ORACLE_HOME is usually not shared in a clustered environment, this results in each instance having its own password file. This may lead to inconsistencies between the password files.
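As a sketch of this per-node setup, the loop below prints the password file path each instance would use; the ORACLE_HOME path and the SIDs are illustrative assumptions, and the actual orapwd invocation (shown as a comment) must of course be run on each node with a real Oracle installation:

```shell
# Sketch: per-node password files prior to 11gR2. The ORACLE_HOME path
# and the SIDs are hypothetical; adjust them to your environment.
ORACLE_HOME=/u01/app/oracle/product/11.1.0/asm_1
for ORACLE_SID in +ASM1 +ASM2; do
  # On the node running this instance you would execute something like:
  #   orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=oracle force=y
  echo "$ORACLE_HOME/dbs/orapw$ORACLE_SID"
done
```

Because each node writes its own orapw$ORACLE_SID file, nothing keeps the two files in sync.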

Before demonstrating such possible inconsistencies, let us first take a look at what things look like when everything is as it should be:

SQL> connect sys/oracle@asm1 as sysasm
Connected.
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
PL/SQL Release 11.1.0.7.0 - Production
CORE    11.1.0.7.0      Production
TNS for Linux: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production

SQL> select * from gv$pwfile_users order by inst_id;

   INST_ID USERNAME                       SYSDB SYSOP SYSAS
---------- ------------------------------ ----- ----- -----
         1 SYS                            TRUE  TRUE  TRUE
         2 SYS                            TRUE  TRUE  TRUE

The output above, captured on a clustered ASM instance, shows that this is an Oracle 11.1.0.7.0 environment and that each instance has a password file entry for the SYS account. What happens when we add another user to the password file and grant him the SYSASM system privilege?

SQL> create user harald identified by prutser;

User created.

SQL> grant sysasm to harald;

Grant succeeded.

As shown above, the user harald was created successfully and granted the SYSASM system privilege without problems. The question, however, is whether this password file change was performed on each node.

SQL> select * from gv$pwfile_users order by inst_id;

   INST_ID USERNAME                       SYSDB SYSOP SYSAS
---------- ------------------------------ ----- ----- -----
         1 SYS                            TRUE  TRUE  TRUE
         1 HARALD                         FALSE FALSE TRUE
         2 SYS                            TRUE  TRUE  TRUE

It is clear that the password file was only updated on the cluster node at which the change was executed.

So not sharing Oracle password files between the nodes in a cluster will result in inconsistencies between the password files if a change is made at only one of the associated instances.

In order to avoid this, you can either store the password file on a shared filesystem and create symbolic links from $ORACLE_HOME/dbs to this shared filesystem, or execute the commands that update the password file at each instance, as shown below:

SQL> connect sys/oracle@asm2 as sysasm
Connected.
SQL> create user harald identified by prutser;

User created.

SQL> grant sysasm to harald;

Grant succeeded.

Now that the user harald has also been created and granted the SYSASM system privilege on the second instance, everything should be consistent again.

SQL> select * from gv$pwfile_users order by inst_id;

   INST_ID USERNAME                       SYSDB SYSOP SYSAS
---------- ------------------------------ ----- ----- -----
         1 SYS                            TRUE  TRUE  TRUE
         1 HARALD                         FALSE FALSE TRUE
         2 SYS                            TRUE  TRUE  TRUE
         2 HARALD                         FALSE FALSE TRUE

As long as we remember to execute password file maintenance commands on each and every instance in a clustered environment, the password files remain consistent as shown below:

SQL> drop user harald;

User dropped.

SQL> connect sys/oracle@asm1 as sysasm
Connected.
SQL> drop user harald;

User dropped.

SQL> select * from gv$pwfile_users order by inst_id;

   INST_ID USERNAME                       SYSDB SYSOP SYSAS
---------- ------------------------------ ----- ----- -----
         1 SYS                            TRUE  TRUE  TRUE
         2 SYS                            TRUE  TRUE  TRUE

Password file maintenance in 11gR2

Wouldn’t it be nice if a password file change on one node would automatically be propagated to all other nodes within the same cluster?

Yes, it would be nice indeed, and this is implemented in Oracle 11gR2. So let's have a look at an 11.2.0.1.0 clustered environment:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

SQL> select * from gv$pwfile_users order by inst_id;

   INST_ID USERNAME                       SYSDB SYSOP SYSAS
---------- ------------------------------ ----- ----- -----
         1 SYS                            TRUE  TRUE  TRUE
         2 SYS                            TRUE  TRUE  TRUE

This output was taken from a clustered ASM environment with local password files stored in $ORACLE_HOME/dbs and shows that each instance has one password file entry for the SYS account. Now see what happens when a new user is created and granted the SYSASM system privilege:

SQL> create user harald identified by prutser;

User created.

SQL> grant sysasm to harald;

Grant succeeded.

The creation of a new user followed by granting it a system privilege succeeds without warnings, but this time the password file change is automatically propagated to the other node as shown below:

SQL> select * from gv$pwfile_users order by inst_id;

   INST_ID USERNAME                       SYSDB SYSOP SYSAS
---------- ------------------------------ ----- ----- -----
         1 SYS                            TRUE  TRUE  TRUE
         1 HARALD                         FALSE FALSE TRUE
         2 SYS                            TRUE  TRUE  TRUE
         2 HARALD                         FALSE FALSE TRUE

Isn’t this a really cool feature? One might ask, though, what happens if updating a remote password file fails. This can easily be answered by removing the password file on one node and triggering a password file update from another node. Let's give it a try:

SQL> alter user harald identified by Prutser;
alter user harald identified by Prutser
                                *
ERROR at line 1:
ORA-15306: ASM password file update failed on at least one node

It is clear that updating the password file failed on at least one node, but it isn’t clear on which node. How do we figure this out? First, let's check the description of the ORA-15306 error message:

$ oerr ora 15306
15306, 00000, "ASM password file update failed on at least one node"
// *Cause:  A CREATE USER, ALTER USER, DROP USER, GRANT, or REVOKE
//          command failed on at least one node of the Automatic Storage
//          Management (ASM) cluster.
// *Action: Check the ASM alert logs for more information.

So you need to check the alert log to find out on which node (or nodes) the password file update failed:

$ tail -3 alert_+ASM1.log
Thu Jan 13 14:43:51 2011
SUCCESS: ASM password file update succeeded on node 0
ERROR: ASM password file update failed on node 1

This shows that the password file update failed on node 1. Note: the nodes are numbered 0 and 1, while the instances are named +ASM1 and +ASM2 respectively.
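Rather than reading through the whole alert log, a grep for the failure message narrows things down quickly. The snippet below recreates the log lines shown above in a temporary file for demonstration; the real file is alert_+ASM1.log, and its location depends on your diagnostic destination settings:

```shell
# Recreate the relevant alert log lines in a temporary file, then grep
# for failed remote password file updates.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Thu Jan 13 14:43:51 2011
SUCCESS: ASM password file update succeeded on node 0
ERROR: ASM password file update failed on node 1
EOF
grep 'ASM password file update failed' "$LOG"
```

The matching line names the node on which the update failed.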

If updating all password files succeeds without errors, this is what you will see in the alert log:

$ tail -3 alert_+ASM1.log
Thu Jan 13 14:44:19 2011
SUCCESS: ASM password file update succeeded on node 0
SUCCESS: ASM password file update succeeded on node 1

It goes without saying that updating password files automatically on remote nodes isn’t restricted to adding new entries, but also works for deleting (or changing) entries as shown below:

SQL> drop user harald;

User dropped.

SQL> select * from gv$pwfile_users order by inst_id;

   INST_ID USERNAME                       SYSDB SYSOP SYSAS
---------- ------------------------------ ----- ----- -----
         1 SYS                            TRUE  TRUE  TRUE
         2 SYS                            TRUE  TRUE  TRUE

RAC password file maintenance

Unfortunately, this recent change in clustered password file maintenance is currently only implemented for clustered ASM and not (yet?) for RAC.

New password file naming conventions

Another change made in Oracle 11g Release 2 is a new password file naming convention. In prior releases the password file name was based on $ORACLE_SID including the node suffix, but since Oracle 11g Release 2 the password file name is based on $ORACLE_SID without the node suffix: orapw+ASM instead of orapw+ASM1 for an ASM instance, or orapwORCL instead of orapwORCL_1 for a RAC instance. Note, however, that this change is only implemented for RAC databases that are policy managed. Policy-managed instances have an underscore between the instance name and the node suffix.
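As a rough sketch, the 11gR2 rule for a policy-managed instance amounts to stripping the underscore-delimited node suffix before prefixing orapw. ORCL_1 is a hypothetical instance name here, and the derivation below is inferred from the convention described above:

```shell
# Derive the shared password file name from a policy-managed instance
# name by stripping the underscore-delimited node suffix.
INSTANCE=ORCL_1              # hypothetical policy-managed instance name
DB_NAME=${INSTANCE%_*}       # strip the node suffix, leaving ORCL
echo "orapw$DB_NAME"         # prints orapwORCL
```

An admin-managed instance name like ORCL1 has no underscore, which is why the old per-instance naming still applies there.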

In summary

For ASM releases prior to 11gR2 password files should either be placed on a shared filesystem (and symlinked from $ORACLE_HOME/dbs) or password file changes should be performed manually on all instances in order to keep the password file contents synchronized on all nodes.
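The shared-filesystem option above can be sketched as follows; the temporary directories merely stand in for a real shared filesystem and for each node's $ORACLE_HOME/dbs, and the password file itself would normally be created with orapwd rather than touch:

```shell
# Sketch of the shared password file approach for pre-11gR2 ASM.
SHARED_DIR=$(mktemp -d)      # stands in for the shared filesystem
NODE1_DBS=$(mktemp -d)       # stands in for node 1's $ORACLE_HOME/dbs
NODE2_DBS=$(mktemp -d)       # stands in for node 2's $ORACLE_HOME/dbs

touch "$SHARED_DIR/orapw+ASM"    # normally created with orapwd
ln -s "$SHARED_DIR/orapw+ASM" "$NODE1_DBS/orapw+ASM1"
ln -s "$SHARED_DIR/orapw+ASM" "$NODE2_DBS/orapw+ASM2"

ls -l "$NODE1_DBS" "$NODE2_DBS"
```

Because both symlinks resolve to the same file, a change made through either instance is immediately visible to the other.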

For ASM releases from 11gR2 onwards there is no need to place the password files on a shared filesystem because password file changes are automatically propagated to remote instances.

For RAC environments password files should either be placed on a shared filesystem (and symlinked from $ORACLE_HOME/dbs) or password file changes should be performed manually on all instances in order to keep the password file contents synchronized on all nodes.

Hopefully you enjoyed reading this article and learned something new ;-)
-Harald


8 Responses to “Clustered ASM and RAC password file maintenance”

  1. Job Oprel said

    Harald, thanks for this post. In addition to this: I tested this feature in an 11.2.0.2 environment at home, and also in this version it is only implemented for clustered ASM and not yet for the RAC database. It’s a pity. Kind regards, Job.

    • Harald van Breederode said

      Hi Job,

      Thanx for the confirmation. Hopefully this will be added in a future release or maybe adding support to store password files inside ASM is a better solution. We’ll see what the future will bring us.
      -Harald

  2. jianing Shu said

    I have a problem accessing the ASM instances in our RAC cluster. Nodes 1 and 2 can be accessed, but nodes 3 and 4 cannot. I have changed the password file as you describe, but it still does not work.

    In Grid control, when I try to access asm it will ask me the password. After I insert the password as sysasm, I can access node 1 and node 2, however no matter how I try to access 3 and 4 it doesn’t work.

    Can you help this?

    • Harald van Breederode said

      Hi Jianing,

      There could be dozens of reasons why you have this problem. Unfortunately I am unable to help you figure out what is wrong in your setup due to time constraints. I suggest that you closely inspect your setup and check if there are any differences between the nodes that work and the ones that don’t, or you can log a service request with Oracle Support.
      -Harald

  3. [...] will propagate to all cluster nodes automatically, a fact I have first seen mentioned on the Dutch Prutser’s weblog, Harald van [...]

  4. Chandu said

    Hi

    This is really a nice feature, and good information. I will try to find out from the Oracle documentation how this is done; I would like to know how this propagation is implemented and how it works in the background.
    If you find out, you can post it here.
    Thanks

  5. Nice post Harald. Can you send some supporting documentation or demonstration for your claim that password file naming convention for RAC database includes “orapwORCL instead of orapwORCL1 for a RAC instance.” My testing indicates this is not true and I want to be sure I am not missing something.

    • Harald van Breederode said

      Hi Seth,

      Many thanx for your question. The password file name change is only implemented for policy based managed databases. Policy based managed databases have an underscore between the database name and the node number. I.e. ORCL_1 instead of ORCL1. This wasn’t completely clear in the posting. I have updated the posting to reflect this change better. Thanx for catching this.
      -Harald
