
Oracle 10g RAC on ESXi3 using SLES9 SP5 – Part 7

February 15th, 2009

Configuring and Using Raw Partitions for the Oracle Shared Storage

For my cluster I will use raw partitions on virtual disks that are shared between the nodes on the ESXi host.
First I will identify my needs for shared disks, and then I will create and partition them accordingly.

After a high-level review of my requirements I came up with the following list of required files:

asm01.vmdk = 6GB [ORADATA]
asm02.vmdk = 2GB [Application data]
asm03.vmdk = 4GB [FLASH]
ocr.vmdk = 256MB [Cluster Registry]
voting.vmdk = 40MB [Voting disk]
spfile.vmdk = 16MB [Parameter configuration]

If your requirements are different, simply adjust the sizes accordingly when creating the vmdk files on the ESXi host.

1. Creating the shared disks

Log in to the ESXi host as root and create a folder to store the shared Oracle RAC disks, then create the disks in the designated folder.
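
To create the folder (assuming the datastore and folder names used in the commands below):

mkdir /vmfs/volumes/working.lun.1/RACASM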

These are the commands to use when creating the vmdk files on the ESXi host:


/sbin/vmkfstools -c 6G -d eagerzeroedthick /vmfs/volumes/working.lun.1/RACASM/asm01.vmdk
/sbin/vmkfstools -c 2G -d eagerzeroedthick /vmfs/volumes/working.lun.1/RACASM/asm02.vmdk
/sbin/vmkfstools -c 4G -d eagerzeroedthick /vmfs/volumes/working.lun.1/RACASM/asm03.vmdk
/sbin/vmkfstools -c 256M -d eagerzeroedthick /vmfs/volumes/working.lun.1/RACASM/ocr.vmdk
/sbin/vmkfstools -c 40M -d eagerzeroedthick /vmfs/volumes/working.lun.1/RACASM/voting.vmdk
/sbin/vmkfstools -c 16M -d eagerzeroedthick /vmfs/volumes/working.lun.1/RACASM/spfile.vmdk
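
You can quickly verify that the virtual disks were created with the expected sizes (a simple sanity check, using the same path as above):

ls -l /vmfs/volumes/working.lun.1/RACASM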

After you have created the virtual disks you have to shut down the Linux nodes and attach the new disks.

2. Adding the disks to the nodes in ESXi

Important Note: You must remove all snapshots of the virtual machines before you follow the next instructions.
Otherwise you will most probably get the following error message when configuring the second hard disk controller in ESXi:

Invalid Configuration for device ‘1’

http://communities.vmware.com/thread/188180

Using the VMware Infrastructure Client, connect to the ESXi host and edit the settings of each node.

Add new hard disk >>
Use an existing virtual disk >> Browse to the vmdk files that you created in the previous step and select asm01.vmdk.
Virtual Device >> Select SCSI (1:0). This adds a new controller in addition to the existing one that handles the OS disks.
Select Finish and you will be back at the Virtual Machine Properties screen.

Perform the same action and add all the other vmdk files on virtual devices (1:1), (1:2), (1:3), (1:4) and (1:5) before you select OK or do anything else.

After adding all the disks, and before you click OK, you should edit the SCSI Controller settings.

On the New SCSI Controller change the SCSI Bus Sharing from the current None to Virtual!
Also make sure the controller type is LSI Logic!

If you plan to share the disks between nodes running on different ESX hosts you can select Physical. I will go with Virtual.

Double-check your configuration and make sure you have added all shared disks on the new controller, SCSI (1:x).
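
For reference, after these steps the relevant entries in each node's .vmx file should look roughly like the sketch below (the file name reflects the path from step 1; the remaining disks follow the same pattern on scsi1:1 through scsi1:5):

scsi1.present = "true"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi1:0.present = "true"
scsi1:0.fileName = "/vmfs/volumes/working.lun.1/RACASM/asm01.vmdk"
scsi1:0.deviceType = "scsi-hardDisk"

This is only a sketch of what the VI Client writes for you; there is no need to edit the .vmx file by hand.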

3. Using the raw partitions

Once you have booted the systems you can check for the new devices with fdisk.
You have to create partitions on them using fdisk.
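
A typical fdisk session looks roughly like this (a sketch for /dev/sdc; create one primary partition spanning the whole disk and repeat for each shared device):

rac01:~ # fdisk /dev/sdc
# at the fdisk prompt:
#   n        - create a new partition
#   p        - make it a primary partition
#   1        - partition number 1
#   <Enter>  - accept the default first cylinder
#   <Enter>  - accept the default last cylinder (use the whole disk)
#   w        - write the partition table and exit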

Here is the list of devices that I have created.

Disk /dev/sdc: 6442 MB, 6442450944 bytes
255 heads, 63 sectors/track, 783 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 783 6289416 83 Linux

Disk /dev/sdd: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 1 261 2096451 83 Linux

Disk /dev/sde: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sde1 1 522 4192933+ 83 Linux

Disk /dev/sdf: 134 MB, 134217728 bytes
64 heads, 32 sectors/track, 128 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System
/dev/sdf1 1 128 131056 83 Linux

Disk /dev/sdg: 33 MB, 33554432 bytes
64 heads, 32 sectors/track, 32 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System
/dev/sdg1 1 32 32752 83 Linux

Disk /dev/sdh: 10 MB, 10485760 bytes
64 heads, 32 sectors/track, 10 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System
/dev/sdh1 1 10 10224 83 Linux

The fdisk operation needs to be done on only one of the nodes, whereas the addition of the raw devices needs to be done on all nodes.
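
Note that the remaining nodes will not see the new partitions until their kernels re-read the partition tables. A reboot does the job; alternatively, something like the following should work on each remaining node (a sketch on my side, assuming the device names are the same on all nodes; rac02 is just an example hostname):

rac02:~ # for dev in sdc sdd sde sdf sdg sdh; do blockdev --rereadpt /dev/$dev; done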

Some of you will notice a difference between the sizes of some disks above and the sizes I listed previously for the /sbin/vmkfstools command.

Stick to my recommendations for the /sbin/vmkfstools command when creating the shared disks in ESXi.

Now it’s time to add the disks as raw devices in SuSE Linux Enterprise Server.

For this purpose you have to edit the file /etc/raw.

The example in the file is:

# example:
# --------
# raw1:hdb1
#
# this means: bind /dev/raw/raw1 to /dev/hdb1

I am adding the devices in the following order:


#/etc/raw
raw1:sdc1
raw2:sdd1
raw3:sde1
raw4:sdf1
raw5:sdg1
raw6:sdh1

Then I start the raw service:

rac01:~ # /etc/rc.d/raw start
bind /dev/raw/raw1 to /dev/sdc1… done
bind /dev/raw/raw2 to /dev/sdd1… done
bind /dev/raw/raw3 to /dev/sde1… done
bind /dev/raw/raw4 to /dev/sdf1… done
bind /dev/raw/raw5 to /dev/sdg1… done
bind /dev/raw/raw6 to /dev/sdh1… done

Checking the status of the raw devices:


rac01:~ # raw -qa
/dev/raw/raw1: bound to major 8, minor 33
/dev/raw/raw2: bound to major 8, minor 49
/dev/raw/raw3: bound to major 8, minor 65
/dev/raw/raw4: bound to major 8, minor 81
/dev/raw/raw5: bound to major 8, minor 97
/dev/raw/raw6: bound to major 8, minor 113
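
You can cross-check these numbers against the block devices themselves; /dev/sdc1, for example, is major 8, minor 33:

rac01:~ # ls -l /dev/sd[cdefgh]1
# the two numbers printed before the date are the major and minor device numbers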

Adding /etc/rc.d/raw to the boot configuration

rac03:~ # chkconfig raw on
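
To confirm the service is registered for the default runlevels:

rac03:~ # chkconfig --list raw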

I usually test with a reboot, since there are sometimes surprises even when I have taken care of all the minor details.

For the OCR raw device you have to adjust the ownership and permissions. In the current case this is the raw4 device:

rac01:~ # chown root:dba /dev/raw/raw4
rac01:~ # chmod 640 /dev/raw/raw4

For each additional device the ownership and permissions should be oracle:oinstall and 660:

rac01:~ # chown oracle:oinstall /dev/raw/raw1
rac01:~ # chown oracle:oinstall /dev/raw/raw2
rac01:~ # chown oracle:oinstall /dev/raw/raw3
rac01:~ # chown oracle:oinstall /dev/raw/raw5
rac01:~ # chown oracle:oinstall /dev/raw/raw6
rac01:~ # chmod 660 /dev/raw/raw1
rac01:~ # chmod 660 /dev/raw/raw2
rac01:~ # chmod 660 /dev/raw/raw3
rac01:~ # chmod 660 /dev/raw/raw5
rac01:~ # chmod 660 /dev/raw/raw6
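
Keep in mind that the ownership and permissions of the /dev/raw devices may be reset after a reboot. One simple way to reapply them (an approach of my own, relying on the fact that SLES runs /etc/init.d/boot.local at startup) is to append the same commands there:

# appended to /etc/init.d/boot.local
chown root:dba /dev/raw/raw4
chmod 640 /dev/raw/raw4
chown oracle:oinstall /dev/raw/raw1 /dev/raw/raw2 /dev/raw/raw3 /dev/raw/raw5 /dev/raw/raw6
chmod 660 /dev/raw/raw1 /dev/raw/raw2 /dev/raw/raw3 /dev/raw/raw5 /dev/raw/raw6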

4. Creating directories and mapping raw devices

rac01:~ # mkdir -p /u01/app/oracle/product/10.2.0/db_1/oradata/databasename
rac01:~ # chown -R oracle:oinstall /u01/app/oracle
rac01:~ # chmod -R 775 /u01/app/oracle
rac01:~ # mkdir /u01/crs1020
rac01:~ # chown root:oinstall /u01/crs1020
rac01:~ # chmod 775 /u01/crs1020

Congratulations, you are done with the system configuration needed before installing Clusterware!

In the next article, expect the Clusterware installation!

  1. December 4th, 2009 at 09:30

    You cite using the command line in ESXi. I am running ESXi, and from everything I read you cannot use any command-line tools in ESXi, only in ESX. Am I wrong?

  2. December 15th, 2009 at 14:17

    Hello Jayson,

    ESXi supports command-line access through SSH on the default port 22. By default only the root user has SSH access. With some additional work you can grant this privilege to other ESXi users, but it is revoked upon a reboot (the configuration is stored only temporarily and is removed when the ESXi host reboots).
    I tend to use the VMware VI Toolkit, which gives you a very advanced set of operations through the API provided by VMware.
    Cheers – S
