
Wednesday, February 6, 2013

Collect Diagnostic Data in 11gR2 RAC Using the diagcollection.pl Script.


All the log files related to the Clusterware processes can be found under the "$GRID_HOME/log"
directory, so if there is a problem with the Clusterware you can check
the log files there.
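
For example, in 11gR2 each node keeps its Clusterware logs in a per-node subdirectory
of that location, including the Clusterware alert log and one directory per daemon
(crsd, cssd, evmd and so on). A quick way to look around on my test node (adjust the
grid home and hostname for your own installation):

# List the per-node log directories and follow the Clusterware alert log
[grid@rac1 ~]$ ls /u01/app/11.2.0/grid/log/rac1
[grid@rac1 ~]$ tail -f /u01/app/11.2.0/grid/log/rac1/alertrac1.log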


But mining data from all of these log files by hand is tedious. To make this
easier, 11gR2 provides the "diagcollection.pl" script, which collects the
diagnostic data for you.

Run the diagcollection.pl script as the root user to collect diagnostic information 
from an Oracle Clusterware installation. The diagnostics provide additional information
so that Oracle Support Services can resolve problems. Run this script from the 
operating system prompt as follows, where CRS_home is the home directory of your Oracle 
Clusterware installation:

# Invoke this utility as the root user.
# CRS_home/bin/diagcollection.pl --collect

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/diagcollection.pl --collect
Production Copyright 2004, 2008, Oracle.  All rights reserved
Cluster Ready Services (CRS) diagnostic collection tool
The following CRS diagnostic archives will be created in the local directory.
crsData_rac1_20130206_0017.tar.gz -> logs,traces and cores from CRS home. 
Note: core files will be packaged only with the --core option.
ocrData_rac1_20130206_0017.tar.gz -> ocrdump, ocrcheck etc
coreData_rac1_20130206_0017.tar.gz -> contents of CRS core files in text format

osData_rac1_20130206_0017.tar.gz -> logs from Operating System
Collecting crs data
/bin/tar: log/rac1/cssd/ocssd.log: file changed as we read it
/bin/tar: log/rac1/ctssd/octssd.log: file changed as we read it
Collecting OCR data
Collecting information from core files
No corefiles found
Collecting OS logs

[root@rac1 ~]# ll
total 14720
-rw------- 1 root root     2946 Aug 23 21:21 anaconda-ks.cfg
-rw-r--r-- 1 root root 14785218 Feb  6 00:17 crsData_rac1_20130206_0017.tar.gz
drwxr-xr-x 4 root root     4096 Feb  3 16:21 Desktop
-rw-r--r-- 1 root root    42129 Aug 23 21:20 install.log
-rw-r--r-- 1 root root     5259 Aug 23 21:20 install.log.syslog
-rw-r--r-- 1 root root    10960 Feb  6 00:18 ocrData_rac1_20130206_0017.tar.gz
-rw-r--r-- 1 root root   171884 Feb  6 00:18 osData_rac1_20130206_0017.tar.gz
-rwxr-x--- 1 root root      696 Jan 26 14:07 root.sh.racnode1.AFTER_INSTALL
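
Note from the collection output above that core file contents are packaged only when
the "--core" option is used, so pass it together with "--collect" if you need them.
The archives themselves are ordinary gzipped tarballs, so you can also list or unpack
one locally before sending it to Oracle Support. A quick sketch using the archive
name from the run above:

# Re-run the collection including core file contents
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/diagcollection.pl --collect --core

# List the contents of a collected archive before uploading it
[root@rac1 ~]# tar -tzf crsData_rac1_20130206_0017.tar.gz | head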

To clean up the collected archives, use the "--clean" option.

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/diagcollection.pl --clean
Production Copyright 2004, 2008, Oracle.  All rights reserved
Cluster Ready Services (CRS) diagnostic collection tool
Cleaning up tar and gzip files
Done


[root@rac1 ~]# ll
total 88
-rw------- 1 root root  2946 Aug 23 21:21 anaconda-ks.cfg
drwxr-xr-x 5 root root  4096 Feb  6 00:19 cfgtoollogs
drwxr-xr-x 4 root root  4096 Feb  3 16:21 Desktop
drwxr-xr-x 2 root root  4096 Feb  6 00:19 install
-rw-r--r-- 1 root root 42129 Aug 23 21:20 install.log
-rw-r--r-- 1 root root  5259 Aug 23 21:20 install.log.syslog
drwxr-xr-x 3 root root  4096 Feb  6 00:19 log
-rwxr-x--- 1 root root   696 Jan 26 14:07 root.sh.racnode1.AFTER_INSTALL



Oracle RAC 11gR2 Policy Managed Database Creation using DBCA.

Policy-managed databases depend on server pools in order to work.


Starting from 11gR2, Oracle RAC has three kinds of server pools.

1) Free Server Pool - This pool contains servers that are not assigned to
   any other server pool.
2) Generic Server Pool - This pool contains pre-11gR2 databases as well as
   administrator-managed databases.
3) User-Created Pool - This server pool is created by the user and is used for running
   policy-managed databases (see the srvctl sketch after this list).
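
If you want to create the server pool yourself before running DBCA (DBCA can also
create it for you during the database creation, which is what I do below), here is a
minimal sketch assuming the 11gR2 srvctl syntax and a hypothetical pool name "mypool":

# Create a server pool with a minimum of 1 and a maximum of 2 servers
[grid@rac1 ~]$ srvctl add srvpool -g mypool -l 1 -u 2

# Verify the pool definition and see which servers each pool currently holds
[grid@rac1 ~]$ srvctl config srvpool -g mypool
[grid@rac1 ~]$ crsctl status serverpool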




I have a three-node test cluster, which will be used for creating the
policy-managed database.

# Make sure that the clusterware services are running on all the nodes
# before starting with the installation.

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

Also, make sure that you have enough servers available in the Free pool.

[grid@rac1 ~]$ crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=rac1 rac2 rac3

NAME=Generic
ACTIVE_SERVERS=
 

In my test cluster I did not have any servers in the Free pool, as they were being
used in the Generic pool by the administrator-managed database that I had
created earlier.

So I had to drop that database in order to get the servers back into the Free pool.
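
For reference, dropping an existing database can also be scripted with DBCA in silent
mode instead of using the GUI. A sketch with a placeholder database name "olddb" and
placeholder SYS credentials (check the exact options against "dbca -help" on your
release):

# Drop an administrator-managed database to release its servers back into the Free pool
[oracle@rac1 ~]$ dbca -silent -deleteDatabase -sourceDB olddb -sysDBAUserName sys -sysDBAPassword <sys_password>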


Start with the database creation.

[root@rac1 ~]# xhost +
access control disabled, clients can connect from any host
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ dbca
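
I use the DBCA GUI for the creation, but it can also be scripted in silent mode.
Below is a rough sketch of an equivalent policy-managed creation; the policy-managed
options, template name, disk group and passwords are as I recall them from the 11.2
"dbca -help" output and from my environment, so verify them on your release before
using this:

[oracle@rac1 ~]$ dbca -silent -createDatabase \
    -templateName General_Purpose.dbc \
    -gdbName dell \
    -policyManaged -createServerPool -serverPoolName myserverpool -cardinality 2 \
    -storageType ASM -diskGroupName DATA \
    -sysPassword <sys_password> -systemPassword <system_password>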


In the server pool creation step I set the cardinality to "2".
This means that, out of the three nodes, my database will run on only two nodes
at a time.

[grid@rac1 ~]$ crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=rac1

NAME=Generic
ACTIVE_SERVERS=

NAME=ora.myserverpool
ACTIVE_SERVERS=rac2 rac3

[oracle@rac1 ~]$ srvctl status database -d dell
Instance dell_1 is running on node rac2
Instance dell_2 is running on node rac3
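
If you later want the database to run on all three nodes, you do not have to recreate
it; growing the server pool is enough, because with policy management the instances
follow the pool. A sketch assuming the 11gR2 srvctl syntax (the unqualified name
"myserverpool" corresponds to ora.myserverpool above):

# Raise the maximum size of the pool so a third server (and instance) can join it
[grid@rac1 ~]$ srvctl modify srvpool -g myserverpool -l 1 -u 3

# Check where the instances are running after the pool grows
[oracle@rac1 ~]$ srvctl status database -d dell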