
Oracle 10g RAC on ESXi3 using SLES9 SP5 – Part 2

February 13th, 2009

Network and hostname configuration of the nodes

Once you have converted (cloned) the first virtual machine that you installed, you have a mirrored copy of it.
This means that the hostnames of the two machines will be identical, and this has to be fixed.
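
Since the clone still carries the rac01 hostname, it has to be renamed before it joins the cluster. You can do this through YaST, but as a minimal sketch (assuming the second node is to be called rac02.searchdomain, as in the table below), on SLES the persistent hostname lives in /etc/HOSTNAME:

# on the cloned node, as root - rename it to rac02
echo "rac02.searchdomain" > /etc/HOSTNAME    # persistent across reboots
hostname rac02                               # set it for the running session
hostname -f                                  # verify the fully qualified name once /etc/hosts is updated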

Each node must have at least two network adapters: one for the public network and one for the private interconnect. In addition, the interface names associated with the network adapters for each network must be the same on all nodes.

For the public network, each adapter must support TCP/IP. For the private interconnect on Linux, the adapters must support UDP. Gigabit Ethernet or an equivalent is recommended.
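
To confirm that the interface names and link settings match on both nodes, you can run something along these lines on each node (ethtool is assumed to be installed; eth0 as the public and eth1 as the private NIC are the names used throughout this series):

ifconfig -a | grep eth          # list the interfaces and their MAC addresses
ethtool eth0 | grep Speed       # negotiated speed of the public NIC
ethtool eth1 | grep Speed       # negotiated speed of the private interconnect NIC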

Each node requires an IP address and a hostname, registered either in DNS or in the /etc/hosts file, for each public network interface.

Each node also needs one unused virtual IP address and an associated VIP name, registered in DNS or in the /etc/hosts file.

The virtual IP address must be in the same subnet as the associated public IP address. In the configuration below, for example, the public address 192.168.128.151 and the VIP 192.168.128.161 both sit in the 192.168.128.x network.

For the configuration, I take the following steps.

First, boot rac01 (the original virtual machine that you installed).
Once you log in as root in GNOME or KDE, whichever you chose to install, start YaST.
Edit the network configuration of the first network card from the top of the list.
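
If you prefer not to click through the control centre, the network card module can also be started directly; as a rough sketch:

yast2 lan       # graphical network card configuration
# or, on a text console:
yast lan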

This network card should be part of the public network segment that you previously created in the ESXi configuration.
You can always verify this by comparing the MAC address shown in YaST with the one in the virtual machine settings for rac01 in ESXi.
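
A quick way to read the MAC addresses from inside the guest, so that you can compare them with the adapters listed in the rac01 settings, is for example:

ip link show | grep -A1 eth         # the MAC address is on the link/ether line
# or per interface:
cat /sys/class/net/eth0/address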

Once you are positive that you have selected the correct NIC, apply a static IP address, subnet mask, hostname, search domain, and routing for each network interface.
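
For reference, the settings that YaST writes end up under /etc/sysconfig/network/. A sketch of what the public interface of rac01 could look like is shown below; on SLES9 the file name contains the full MAC address (shortened here with xx), and the /24 netmask and the gateway address are only assumptions for illustration, so adjust them to your own network:

# /etc/sysconfig/network/ifcfg-eth-id-00:0c:29:xx:xx:xx   (public NIC of rac01)
BOOTPROTO='static'
IPADDR='192.168.128.151'
NETMASK='255.255.255.0'
STARTMODE='onboot'

# /etc/sysconfig/network/routes   (default gateway - adjust to your network)
default 192.168.128.1 - -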

The IP addresses you use should be reachable and routed within the ESXi host. It is recommended that the public network and the private interconnect are routed separately for performance and resiliency reasons.

Here is my configuration; replace searchdomain with whatever your search domain actually is:

Hostname            Private Hostname          VIP Hostname             Public IP        Private IP      VIP IP
rac01.searchdomain  rac01-priv.searchdomain   rac01-vip.searchdomain   192.168.128.151  10.10.128.151   192.168.128.161
rac02.searchdomain  rac02-priv.searchdomain   rac02-vip.searchdomain   192.168.128.152  10.10.128.152   192.168.128.162

The private IP address is used exclusively for the cluster interconnect; no other traffic should be present on this interface once it is configured properly.
The public interface allows external connections to the server for general administration and configuration.

After the Clusterware installation you can configure clients to use the VIP name or IP address of a node. If a node fails, its virtual IP address fails over to a surviving node that is still reachable and functioning.
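
As an illustration of what that buys you, a client-side tnsnames.ora entry can list both VIPs, so a failed connection attempt simply moves on to the surviving node. This is only a sketch, and the RACDB service name is an assumption, since the database itself is not created until later in this series:

RACDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac01-vip.searchdomain)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac02-vip.searchdomain)(PORT = 1521))
      (LOAD_BALANCE = yes)
      (FAILOVER = on)
    )
    (CONNECT_DATA =
      (SERVICE_NAME = RACDB)
    )
  )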

I will edit the /etc/hosts file on both nodes and make sure each copy lists all the nodes of the cluster, as follows:

#Public network ip/hostname of the oracle RAC

192.168.128.151 rac01.searchdomain rac01
192.168.128.152 rac02.searchdomain rac02

#Private network ip/hostname of the oracle RAC

10.10.128.151 rac01-priv.searchdomain rac01-priv
10.10.128.152 rac02-priv.searchdomain rac02-priv

#VIP network ip/hostname of the oracle RAC

192.168.128.161 rac01-vip.searchdomain rac01-vip
192.168.128.162 rac02-vip.searchdomain rac02-vip
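
Before moving on, it is worth verifying that every name resolves and answers from both nodes. A quick check could look like this, run on rac01 and then on rac02 (the VIPs are left out because they only answer after Clusterware brings them up):

for name in rac01 rac02 rac01-priv rac02-priv; do
    getent hosts $name                              # confirm /etc/hosts resolution
    ping -c 1 $name > /dev/null && echo "$name reachable"
done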

Next, we will deal with the hardware and package requirements for the Oracle Clusterware installation.
