
Oracle 10g RAC on ESXi3 using SLES9 SP5 – Part 8

February 16th, 2009

Oracle Clusterware Installation

Install the xntpd service and configure it.
You can use the YaST management console to do so.
It is extremely important that both nodes are configured to use an NTP server and that their clocks are kept regularly synchronised.
Any difference in time between the nodes can leave you with an inoperable cluster.
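
For reference, the same configuration can also be done from the command line. A minimal sketch (the time server 192.168.128.1 is only an example; point it at your own NTP source):

root@rac01:~# echo "server 192.168.128.1" >> /etc/ntp.conf
root@rac01:~# chkconfig xntpd on
root@rac01:~# rcxntpd restart
root@rac01:~# ntpq -p

Repeat the same on rac02 and use ntpq -p on both nodes to verify that they synchronise against the same source.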

1.1 Copy the cpio.gz file to the first node and extract the contents of the cpio archive


# gunzip 10201_clusterware_linux_x86_64.cpio.gz
# cpio -idmv < 10201_clusterware_linux_x86_64.cpio

1.2 Install the cvuqdisk RPM required by cluvfy [Cluster Verification Utility]


# cd /u01/clusterware/rpm
# rpm -iv cvuqdisk-1.0.1-1.rpm

Copy the rpm file to the second node and perform the same installation on rac02.
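
One way to do this from rac01, assuming root SSH access to rac02 is allowed, is:

root@rac01:~# scp /u01/clusterware/rpm/cvuqdisk-1.0.1-1.rpm root@rac02:/tmp/
root@rac01:~# ssh root@rac02 rpm -iv /tmp/cvuqdisk-1.0.1-1.rpm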

1.3 Run the cluvfy utility as the oracle user


oracle@rac01:~>cd /u01/clusterware/cluvfy
oracle@rac01:~>export CV_NODE_ALL=rac01.searchdomain,rac02.searchdomain
oracle@rac01:~>./runcluvfy.sh stage -post hwos -n all -verbose

This will check the reachability of all nodes.
Let me remind you once more to retest SSH connectivity between the hosts using each of their hostnames:


oracle@rac01:~>ssh rac02
oracle@rac01:~>ssh rac02-priv
oracle@rac01:~>ssh rac02.searchdomain
oracle@rac01:~>ssh rac02-priv.searchdomain

You must be sure that you can log in between the nodes and that all the different hostnames have been added to the SSH known_hosts configuration.
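
A small loop like the one below makes it easy to confirm that every hostname variant works without a password or host key prompt (adjust the list of names to your own environment, and run the equivalent from rac02 against the rac01 names):

oracle@rac01:~> for h in rac02 rac02-priv rac02.searchdomain rac02-priv.searchdomain; do ssh $h date; done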

1.4 Log in to KDE/GNOME and start the Clusterware installer from a terminal.

As root run the rootpre.sh script that is located in the rootpre subdirectory of the clusterware installation folder.

You should copy this script to rac02 and run it there as well.
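
Assuming the software was staged under /u01/clusterware as in step 1.1, the whole sequence could look like this:

root@rac01:~# /u01/clusterware/rootpre/rootpre.sh
root@rac01:~# scp -r /u01/clusterware/rootpre root@rac02:/tmp/
root@rac02:~# /tmp/rootpre/rootpre.sh
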
After you have done this, it’s time to start the GUI by executing the runInstaller command from the clusterware subdirectory.


oracle@rac01:~>/u01/clusterware/runInstaller

This will ask you whether you have run the rootpre.sh script as root on all nodes before continuing.

Answer with ‘y’

On the welcome screen select Next

Specify Inventory directory and credentials.

Enter the full path of the inventory directory:

/u01/app/oracle/oraInventory

Specify Operating System group name:

oinstall

Select Next

Specify Home Details.


Name: OraCrs10g_home
Path: /u01/crs1020

Select Next

Product-Specific Prerequisite Checks.

The overall result should be “Passed”

Click the Next button to proceed.

Cluster configuration.

Cluster name: crs1 [here you type the name of the Oracle cluster]

You will see only one node [rac01] available.

I suggest you edit the node and make sure you use hostname.searchdomain for the public, private and virtual host names, such as:


Public Node Name Private Node Name Virtual Node Name
rac01.test.soteks.org rac01-priv.test.soteks.org rac01-vip.test.soteks.org

Then add the second node as:

Public Node Name Private Node Name Virtual Node Name
rac02.test.soteks.org rac02-priv.test.soteks.org rac02-vip.test.soteks.org

Click Next.
Edit the Network Interfaces that are not marked correctly.

In my case I had to edit the eth0 and change it from Private to Public where eth1 was already selected as Private.

Interface Name    Subnet           Interface Type
eth0              192.168.128.0    Public
eth1              10.10.128.0      Private
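
If you are not sure which subnet belongs to which interface, a quick look on either node will tell you:

oracle@rac01:~> /sbin/ifconfig eth0
oracle@rac01:~> /sbin/ifconfig eth1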

Select Next to continue the installation.

Oracle Cluster Registry File

I use External redundancy due to the LUN configuration of the ESXi host.
Specify OCR Location: /dev/raw/raw4

Click Next to continue.

Voting Disk File.

Specify Voting Disk Location

External Redundancy
/dev/raw/raw5

Click Next to continue.
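
Before moving on it is worth double-checking on both nodes that the raw devices chosen for the OCR and the voting disk are actually bound (a quick sanity check, assuming the raw bindings from the earlier parts of this series):

root@rac01:~# raw -qa
root@rac01:~# ls -l /dev/raw/raw4 /dev/raw/raw5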

Summary and install
Click Install..

When the installation is done on the local and remote nodes, you should get to the “Execute Configuration Scripts” window.


root@rac01:~#/u01/app/oracle/oraInventory/orainstRoot.sh

root@rac02:~#/u01/app/oracle/oraInventory/orainstRoot.sh

root@rac01:~#/u01/crs1020/root.sh

Wait for the script to finish successfully before starting it on rac02, and make sure you run it on rac01 first!
The script takes care of the OCR raw device and adds the CRS services.

root@rac02:~#/u01/crs1020/root.sh

Wait for the script to finish its execution, then go back to the GUI window and click OK.

2. Configuration Assistants

If any of the listed configuration assistant scripts fail you must review the logs located in /u01/app/oracle/oraInventory/logs and troubleshoot before you retry the failed step.
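
The quickest way to find the relevant log is to sort the directory by modification time and look at the newest file, typically an installActions*.log (exact file names vary with the installation timestamp):

root@rac01:~# ls -lt /u01/app/oracle/oraInventory/logs | head
root@rac01:~# tail -100 /u01/app/oracle/oraInventory/logs/installActions*.log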


I always have an issue with the VIP node application check failing. The error message is:

Checking existence of VIP node application (required)
If this check fails, as is to be expected here, you should configure the VIP addresses manually.
Leave the GUI window as it is, since we are about to retry the configuration assistants, and run the following command in a terminal (make sure the oracle user has been added to the /etc/sudoers file first; see the example after the command):


oracle@rac01:~>sudo /u01/crs1020/bin/vipca
Password:[rootpassword here]
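
If sudo refuses the command, an /etc/sudoers entry along these lines (added with visudo; a minimal example only, tighten it to your own policy) is enough:

oracle  ALL=(ALL) ALL

Note that the SUSE default sudoers typically ships with “Defaults targetpw”, which is why sudo asks for the root password above; without it, sudo would ask for the oracle user’s own password instead.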

The VIP Configuration Assistant window will launch, and you have to configure the VIP network.

The hostname will be predefined, so you only have to add the IP alias and IP address.
I have used the following configuration:

hostname IP alias VIP IP VIP subnet
rac01-vip rac01-vip.searchdomain 192.168.128.161 192.168.128.0
rac02-vip rac02-vip.searchdomain 192.168.128.162 192.168.128.0

If the above information is in your /etc/hosts, it should be detected automatically after you fill in the VIP IP address for rac01.
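
For reference, the matching /etc/hosts entries on both nodes would look something like this (using the addresses from the table above):

192.168.128.161   rac01-vip.searchdomain   rac01-vip
192.168.128.162   rac02-vip.searchdomain   rac02-vip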

The VIPCA will then verify and configure the IP addresses on both nodes.

Once the VIPCA finishes successfully, exit it, go back to the Clusterware Configuration Assistant GUI window and retry the failed component.

Once it finishes successfully you will get to the End Of Installation window, where you click Exit.

If all tasks have completed successfully you should have the Clusterware installed and configured on both nodes!
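
A quick way to confirm this from either node is to query the freshly installed CRS stack (paths as used throughout this guide):

oracle@rac01:~> /u01/crs1020/bin/crsctl check crs
oracle@rac01:~> /u01/crs1020/bin/crs_stat -t
oracle@rac01:~> /u01/crs1020/bin/olsnodes -n

crsctl should report CSS, CRS and EVM as healthy, crs_stat -t should show the VIP, GSD and ONS node applications ONLINE on both nodes, and olsnodes should list rac01 and rac02.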

  1. evgeniuzz
    October 11th, 2009 at 00:58 | #1

    Hi!
    Your manual is very useful!
    But on page 8 I ran orainstRoot.sh and then root.sh, and when I went to the second node I got this message:
    /bin/cp: cannot create regular file `/dev/raw/raw1`; no such device or address

    I’m using VMware Server 2.0.1. /dev/raw/raw1 is the OCR in my plan.

    If you can, please help.

  2. November 9th, 2009 at 10:09 | #2

    Hello Evgeniuzz,

    Accept my sincere apology for the delay in my reply. I have been quite busy lately, but that is not an excuse. Are you still experiencing this issue? If you have fixed it, could you please share the details so other people can also benefit from your solution?

    Kind regards,

    — St0ma

  3. December 16th, 2009 at 05:22 | #3

    Hello Evgeniuzz,

    Sorry for the late reply! I believe that upon a reboot your second node did not attach the raw devices properly, and this is the reason why you ran into the issue you mentioned. I bet that you have already resolved it!
    Cheers
    — S
