
ASMCMD Wrapper … For the Frequent ASM administrator December 6, 2011

Posted by tamnau in 11g, General.

In the pre-11g days of ASM, I was hesitant to use ASMCMD (the command-line utility for performing ASM tasks). Perhaps it was a combination of its limited early functionality and my comfort with connecting to the ASM instance via SQL*Plus to perform administrative tasks. In 11g, however, I have grown to like ASMCMD and have found it more useful than I first envisaged.

In 11g, some of the new functionality now offered through ASMCMD includes the ability to perform:

  • startup and shutdown of the ASM instance
  • backup (md_backup) and restore (md_restore) of ASM metadata
  • list open files (lsof)
  • verify disk usage (du)
  • change/alter (chdg) disks in disk groups

For a comprehensive list of ASMCMD commands, refer to the Oracle 11.2 Database Storage Administrator’s Guide.

With more frequent use of ASMCMD, I grew tired of switching between the ASM and DB environment settings. As a result, I created an ASM wrapper script that calls asmcmd (i.e. it works when ASM is mounted or running) while preserving your DB environment settings.
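The idea behind such a wrapper is simple: remember the current DB environment, point the environment at the ASM instance, run asmcmd with the supplied arguments, then restore what was saved. A minimal sketch is below; the "+ASM1" SID and the direct variable swap (rather than oraenv) are my assumptions, and the real sos_asm.sh may well do more.

```shell
# Minimal sketch of an asmcmd wrapper that preserves the DB environment.
# "+ASM1" is an assumed ASM instance name; the real sos_asm.sh may differ
# (e.g. it may derive the ASM SID and ORACLE_HOME via oraenv).
asmcmd_wrapped() {
  saved_sid=$ORACLE_SID       # remember the current DB environment
  ORACLE_SID=+ASM1            # switch to the ASM instance
  export ORACLE_SID
  asmcmd "$@"                 # run the requested asmcmd command
  rc=$?
  ORACLE_SID=$saved_sid       # restore the DB environment
  export ORACLE_SID
  return $rc
}
```

With this in place, `asmcmd_wrapped md_backup …` runs against ASM and leaves ORACLE_SID pointing at your database afterwards.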

Examples of using this script are provided below.

Change to the directory where the script sos_asm.sh was downloaded and set the executable permission accordingly:

[oracle@lnx01] cd /home/oracle/scripts
[oracle@lnx01] chmod +x sos_asm.sh

Example 1:  Using the sos_asm.sh script, I performed an ASM metadata backup via asmcmd while preserving the database environment settings for the database called BLUE.

[oracle@lnx01] export ORACLE_SID=BLUE; . oraenv
[oracle@lnx01] cd /home/oracle/scripts
[oracle@lnx01] ./sos_asm.sh md_backup /home/oracle/asm_backup/dgbk_DG_DATA -G DG_DATA
[oracle@lnx01] sqlplus / as sysdba
SQL*Plus: Release 11.2.0.2.0 Production on Wed Jul 20 23:09:31 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select sys_context('USERENV','DB_NAME') DB from dual;
DB
-----------
BLUE


Cloning a RAC Database May 20, 2010

Posted by tamnau in General, RAC, RMAN.

Cloning a RAC database may appear somewhat challenging, but it is really just cloning the database as if it were a single instance and then enabling it as a cluster database after the duplication.

In the scenario below, I will demonstrate how this can be achieved by cloning a RAC database and having it managed by the same cluster. Please note that the following was tested on version 11.1 RAC with ASM.

The RAC configuration used for this example:

[Figure: example configuration for cloning a RAC database]

NOTE:  Control file copies, online redo log copies, and archived redo log files can be stored in the Flash Recovery Area (FRA) disk group.

1. Log onto source RAC database:

[oracle@lnx01] export ORACLE_SID=SALT1; . oraenv
[oracle@lnx01] sqlplus / as sysdba
SQL> create pfile='/app/oracle/scripts/clone/initPEPPER.ora' from spfile;

2. Edit /app/oracle/scripts/clone/initPEPPER.ora and change the appropriate values.
Remove the following entries:

<SOURCE_INSTANCE1>.__db_cache_size=7650410496
<SOURCE_INSTANCE2>.__db_cache_size=7583301632
<SOURCE_INSTANCE1>.__java_pool_size=67108864
<SOURCE_INSTANCE2>.__java_pool_size=67108864
<SOURCE_INSTANCE1>.__large_pool_size=67108864
<SOURCE_INSTANCE2>.__large_pool_size=67108864
<SOURCE_INSTANCE1>.__oracle_base='/app/oracle'
<SOURCE_INSTANCE2>.__oracle_base='/app/oracle'
<SOURCE_INSTANCE1>.__pga_aggregate_target=6643777536
<SOURCE_INSTANCE2>.__pga_aggregate_target=6643777536
<SOURCE_INSTANCE1>.__sga_target=9462349824
<SOURCE_INSTANCE2>.__sga_target=9462349824
<SOURCE_INSTANCE1>.__shared_io_pool_size=0
<SOURCE_INSTANCE2>.__shared_io_pool_size=0
<SOURCE_INSTANCE1>.__shared_pool_size=1409286144
<SOURCE_INSTANCE2>.__shared_pool_size=1476395008
<SOURCE_INSTANCE1>.__streams_pool_size=134217728
<SOURCE_INSTANCE2>.__streams_pool_size=134217728
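Rather than deleting these lines by hand, the instance-specific double-underscore entries can be stripped in one pass. A small sketch with sed (the helper name is mine; the pfile path is the one created in step 1):

```shell
# strip_autotuned: drop the auto-tuned "<instance>.__param=..." lines
# from a pfile, writing a cleaned copy alongside the original.
strip_autotuned() {
  sed '/\.__/d' "$1" > "$1.clean"
}

# Usage (path from step 1):
# strip_autotuned /app/oracle/scripts/clone/initPEPPER.ora
```

Review the .clean file before renaming it over the original.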

3. Adjust the memory target accordingly for your environment:

*.memory_target=2G

4. Set spfile parameter:

*.spfile='+DG_DATA/PEPPER/spfilePEPPER.ora'

5. Disable the cluster_database parameter:

*.cluster_database=FALSE

6. Create init parameter under dbs:

[oracle@lnx01] export ORACLE_HOME=/app/oracle/product/11.1
[oracle@lnx01] cd $ORACLE_HOME/dbs
[oracle@lnx01] echo "SPFILE='+DG_DATA/PEPPER/spfilePEPPER.ora'" > $ORACLE_HOME/dbs/initPEPPER.ora

7. Edit initPEPPER.ora and add the following convert parameters:

*.log_file_name_convert=('+DG_DATA/SALT','+DG_DATA/PEPPER')
*.db_file_name_convert=('+DG_DATA/SALT','+DG_DATA/PEPPER')

8. Create the directories:

a) From one node only, create the ASM path for the PEPPER database:

[oracle@lnx01] export ORACLE_SID=+ASM1; . oraenv
[oracle@lnx01] asmcmd mkdir +DG_DATA/PEPPER
[oracle@lnx01] asmcmd mkdir +DG_DATA/PEPPER/brsadata

b) From one node only, create the OS path for PEPPER admin directories:

[oracle@lnx01] mkdir -p /app/oracle/admin/PEPPER/adump
[oracle@lnx01] mkdir -p /app/oracle/diag/rdbms/PEPPER/PEPPER1/cdump
[oracle@lnx01] mkdir -p /app/oracle/diag/rdbms/PEPPER/PEPPER1/trace

c) Repeat step b on node 2 and other nodes in RAC:

[oracle@lnx02] mkdir -p /app/oracle/admin/PEPPER/adump
[oracle@lnx02] mkdir -p /app/oracle/diag/rdbms/PEPPER/PEPPER2/cdump
[oracle@lnx02] mkdir -p /app/oracle/diag/rdbms/PEPPER/PEPPER2/trace
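With more than two nodes, steps b and c get repetitive. The per-node mkdir commands can be generated instead; this sketch (the gen_dir_cmds name and node list are mine) prints them for review rather than executing them:

```shell
# gen_dir_cmds: print the admin-directory mkdir commands for each node,
# numbering the instance (PEPPER1, PEPPER2, ...) by position in the list.
# The output is printed for review; run each block on its node (e.g. via ssh).
gen_dir_cmds() {
  i=1
  for node in "$@"; do
    echo "# $node"
    echo "mkdir -p /app/oracle/admin/PEPPER/adump"
    echo "mkdir -p /app/oracle/diag/rdbms/PEPPER/PEPPER${i}/cdump"
    echo "mkdir -p /app/oracle/diag/rdbms/PEPPER/PEPPER${i}/trace"
    i=$((i+1))
  done
}

gen_dir_cmds lnx01 lnx02
```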

9. Create a TNS entry for the target auxiliary database and place it in tnsnames.ora. Add these entries on all nodes in the cluster:

PEPPER =
 (DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx01-vip)(PORT = 1521))
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx02-vip)(PORT = 1521))
 (CONNECT_DATA =
 (SERVICE_NAME = PEPPER)
 )
 )

LISTENERS_PEPPER =
 (ADDRESS_LIST =
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx01-vip)(PORT = 1521))
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx02-vip)(PORT = 1521))
 )

LISTENER_PEPPER1 =
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx01-vip)(PORT = 1521))

LISTENER_PEPPER2 =
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx02-vip)(PORT = 1521))

10. If no listener.ora exists, create one as follows:

# Node 1
 LISTENER_lnx01 =
 (DESCRIPTION_LIST =
 (DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx01-vip)(PORT = 1521)(IP = FIRST))
 (ADDRESS = (PROTOCOL = TCP)(HOST = 207.23.2.122)(PORT = 1521)(IP = FIRST))
 )
 )
# Node 2
 LISTENER_lnx02 =
 (DESCRIPTION_LIST =
 (DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = lnx02-vip)(PORT = 1521)(IP = FIRST))
 (ADDRESS = (PROTOCOL = TCP)(HOST = 207.23.2.126)(PORT = 1521)(IP = FIRST))
 )
 )

11. Create an SPFILE from the parameter file, and start the auxiliary instance:

[oracle@lnx01] export ORACLE_HOME=/app/oracle/product/11.1
[oracle@lnx01] export ORACLE_SID=PEPPER
[oracle@lnx01] sqlplus /nolog
SQL> CONNECT / as sysdba
SQL> CREATE SPFILE='+DG_DATA/PEPPER/spfilePEPPER.ora'
FROM PFILE='/app/oracle/scripts/clone/initPEPPER.ora';
SQL> STARTUP NOMOUNT;

12. Then run the following RMAN commands to duplicate the database:

[oracle@lnx01] rman target sys/<password>@SALT auxiliary /
RMAN> DUPLICATE TARGET DATABASE TO PEPPER;

13. Add an entry for the target auxiliary database to the oratab file (located under /etc or /var/opt/oracle):

a) In the node 1 oratab file, add:

PEPPER1:/app/oracle/product/11.1:N

b) In the node 2 oratab file, add:

PEPPER2:/app/oracle/product/11.1:N

14. Enable the cluster_database parameter in the target auxiliary database, from one node only:

[oracle@lnx01] export ORACLE_HOME=/app/oracle/product/11.1
[oracle@lnx01] export ORACLE_SID=PEPPER
[oracle@lnx01] sqlplus / as sysdba
SQL> alter system set cluster_database=TRUE scope=spfile;
SQL> shutdown immediate;

15. Rename the init file and copy it across to the second node:

[oracle@lnx01] export ORACLE_SID=PEPPER1; . oraenv
[oracle@lnx01] mv $ORACLE_HOME/dbs/initPEPPER.ora $ORACLE_HOME/dbs/initPEPPER1.ora
[oracle@lnx01] scp $ORACLE_HOME/dbs/initPEPPER1.ora \
lnx02:/app/oracle/product/11.1/dbs/initPEPPER2.ora

16. Start the database:

SQL> startup

17. Add the second thread of online redo logs and enable that thread:

SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 5 '+DG_DATA' SIZE 250M REUSE;
SQL> ALTER DATABASE ADD LOGFILE MEMBER '+DG_REDO' TO GROUP 5;
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 6 '+DG_REDO' SIZE 250M REUSE;
SQL> ALTER DATABASE ADD LOGFILE MEMBER '+DG_DATA' TO GROUP 6;
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 7 '+DG_REDO' SIZE 250M REUSE;
SQL> ALTER DATABASE ADD LOGFILE MEMBER '+DG_DATA' TO GROUP 7;
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 8 '+DG_REDO' SIZE 250M REUSE;
SQL> ALTER DATABASE ADD LOGFILE MEMBER '+DG_DATA' TO GROUP 8;
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 2;

18. Stop all instances of the target auxiliary database on each node.

a) From node 1 perform the following:

[oracle@lnx01] export ORACLE_SID=PEPPER1; . oraenv
[oracle@lnx01] sqlplus / as sysdba
SQL> shutdown immediate;

b) Then do the same for the other instances in the cluster (from node 2):

[oracle@lnx02] export ORACLE_SID=PEPPER2; . oraenv
[oracle@lnx02] sqlplus / as sysdba
SQL> shutdown immediate;

19. Register the new database and its instances with the cluster via srvctl:

[oracle@lnx01] srvctl add database -d PEPPER -o /app/oracle/product/11.1
[oracle@lnx01] srvctl modify database -d PEPPER -p '+DG_DATA/PEPPER/spfilePEPPER.ora' -s open
[oracle@lnx01] srvctl add instance -d PEPPER -i PEPPER1 -n lnx01
[oracle@lnx01] srvctl add instance -d PEPPER -i PEPPER2 -n lnx02

20. Start the RAC database:

[oracle@lnx01] srvctl start database -d PEPPER

21. Verify that the RAC database is now registered with the cluster.

For pre-11.2 run:

crs_stat -t

Or for 11.2 run:

crsctl stat res -t