September 9th, 2009
st0ma
I noticed an IP conflict today on a Windows box hosted on ESXi. In Event Viewer, under System, I checked the MAC address of the system trying to hijack my IP address. I wanted a quick and dirty way to find out whether this MAC address belongs to one of the existing virtual machines on the ESXi host or to something outside it, since there is quite a number of machines on the host.
Here is what I did…
I opened the VMware VI Toolkit. After connecting I tried some commands that I use regularly, such as get-vm and get-vmguest. When those turned up nothing interesting, I checked the VI Toolkit Cmdlets Reference document.
And there it was… the perfect command for what I wanted…
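The command itself is behind the Read more link, but as a hedged sketch of the idea: the VI Toolkit's Get-NetworkAdapter cmdlet exposes a MacAddress property per adapter, so a lookup could look roughly like this (the MAC address below is a placeholder, and this assumes an already-connected session):

```
# Hedged sketch: list each VM next to its adapters' MAC addresses and
# filter for the conflicting one (placeholder MAC):
Get-VM | Get-NetworkAdapter | Select-Object Parent, MacAddress |
    Where-Object { $_.MacAddress -eq "00:50:56:aa:bb:cc" }
```

If nothing matches, the offending machine is most likely outside the ESXi host.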
Read more…
In some rare cases you may notice resources allocated to virtual machines that don't appear to be running.
esxtop
can help you find this out.
Using vm-support you can identify the world ID of the virtual machine, and with the same command you can then generate some support logs and abort the virtual machine.
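As a sketch of the procedure (options hedged from memory of the ESX 3.x service-console version of vm-support; the world id is a placeholder):

```
vm-support -x             # list the world ids of the running virtual machines
vm-support -X <world_id>  # collect debugging output for, and then abort, that VM
```

This is a last resort for a stuck VM; the -X run can take a while since it gathers logs before killing the world.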
Read more…
How to extract the IP addresses in use by the Guests in our VMware ESX host?
This was the question that I asked myself yesterday, when I had to go over a long list of IPs and check whether any of them was in use by a guest operating system on our ESX server. I immediately thought of the VMware VI Toolkit and all the nice commands I had seen there, but none came to mind right away. A colleague was swifter and used Python to find the matches after copying all the IPs manually from the Infrastructure Client, but since I had a few spare minutes today, I decided to solve this one and post the answer. Here it is:
[VI Toolkit] C:\Program Files\VMware\Infrastructure\VIToolkitForWindows> get-vmguest -vm (get-vm *) | select IPAddress
IPAddress
———
{192.168.128.110}
{192.168.128.113}
{}
{192.168.128.127}
{}
{}
{192.168.128.125}
{}
{}
{192.168.128.175}
{}
{}
{}
{}
{192.168.128.186}
{192.168.128.153, 192.168.128.163, 10.10.128.153}
{}
{192.168.128.154, 192.168.128.164, 10.10.128.154}
{192.168.128.102}
{192.168.128.196}
{192.168.128.236}
{192.168.128.213}
{192.168.128.103}
{192.168.128.254}
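To go one step further and match the collected addresses against the list you were given, something along these lines should work (the input file path is hypothetical):

```
# Collect every guest IP in use, then keep only the IPs from the list
# that appear among them (C:\ips-to-check.txt is a placeholder path):
$inUse = Get-VM * | Get-VMGuest | ForEach-Object { $_.IPAddress }
$toCheck = Get-Content C:\ips-to-check.txt
$toCheck | Where-Object { $inUse -contains $_ }
```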
Read more…
1. ESXi 3.5 Extending the VMDK file [Virtual Machine Hard Disk]
The main reason for this was that I needed more space for two more Oracle databases on a SLES10 Linux system. The partition mounted on /u01 was initially created as 21GB, but I quickly depleted it with three Oracle 10g databases that took more than 17GB, and the remaining space was not sufficient for the two new databases I had to create.
Before extending the VMDK file I connected to each separate instance and ran the “shutdown immediate” command as sysdba.
Then I stopped the listener, dbconsole and isqlplus, and once I confirmed that no Oracle-related processes were left on the system I shut the machine down with “shutdown -h now”.
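Roughly, that shutdown sequence looks like this (run as the oracle user, with ORACLE_HOME set and ORACLE_SID exported per instance; the SID below is a placeholder):

```
# Repeat for each instance, setting ORACLE_SID accordingly:
export ORACLE_SID=orcl1
sqlplus / as sysdba <<'EOF'
shutdown immediate
exit
EOF

# Then stop the listener, Enterprise Manager dbconsole and iSQL*Plus:
lsnrctl stop
emctl stop dbconsole
isqlplusctl stop
```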
In order to extend an existing hard drive attached to a virtual machine you have to make sure no snapshots of the virtual machine exist. I know this is quite inconvenient considering the risky operation you are about to perform, but there is a workaround. (The workaround is not in this post, so please let me know if you are interested, or simply search for it. There is a good chance I will write an article on that topic.)
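With the guest powered off and no snapshots present, the actual grow can be done with vmkfstools on the ESXi console; as a sketch (the datastore path is hypothetical, and note that -X takes the new total size, not the increment):

```
vmkfstools -X 40G /vmfs/volumes/datastore1/sles10/sles10.vmdk
```

After that you still have to repartition inside the guest before the filesystem can use the new space.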
Read more…
Categories: Database, ESX, ESXi, Linux, Oracle, SLES, VI, VMware Tags: ESXi, fdisk, Linux, Oracle, SLES, VI, vmdk, VMware
February 20th, 2009
st0ma
When I finished the Clusterware install in the lab environment I continued with the database software itself, but it turned out that VMware ESXi Update 3 was required! The kernel version of SLES failed verification, and after upgrading the kernel through YaST both machines were unusable since they would no longer boot.
This is a blocker: I can't continue with the how-to until it is resolved.
In Part 9 of the Oracle RAC how-to we successfully completed the installation of the Oracle Clusterware services on the shared storage for the two SUSE Linux Enterprise Server 9 SP5 nodes.
Read more…
Categories: Database, ESX, ESXi, Linux, Oracle, SLES, VI, VMware Tags: Database, ESX, ESXi, Kernel, Oracle, VMware
February 18th, 2009
st0ma
VMware VI toolkit (for Windows)
1. Overview and download
The VMware VI Toolkit for Windows allows you to script, administer and manage your virtual infrastructure from the command line on your Windows machine. The VI Toolkit requires Microsoft PowerShell to run. If you haven't had a chance to download these two applications yet, here are the download links:
Download and install Microsoft PowerShell
http://www.microsoft.com/windowsserver2003/technologies/management/powershell/default.mspx
Read more…
February 16th, 2009
st0ma
Verify the Oracle Clusterware Installation
With the introduction of Oracle RAC 10g, cluster management is controlled by the evmd, ocssd and crsd processes.
Run the ps command on both nodes to make sure that the processes are running.
rac01:/u01/clusterware/cluvfy # ps -ef |grep d.bin
root 4694 1 0 Feb13 ? 00:00:00 /u01/crs1020/bin/crsd.bin reboot
oracle 5242 4692 0 Feb13 ? 00:00:00 /u01/crs1020/bin/evmd.bin
oracle 5344 5326 0 Feb13 ? 00:00:00 /u01/crs1020/bin/ocssd.bin
root 20078 10946 0 09:44 pts/1 00:00:00 grep d.bin
Next you should check the /etc/inittab file, which is processed whenever the runlevel changes:
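On a standard 10g CRS install the relevant inittab entries look roughly like this (hedged from memory; the exact init script paths depend on your CRS home):

```
rac01:~ # grep init. /etc/inittab
h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null
```

The respawn action is what brings the daemons back if they die, so these lines must be present on both nodes.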
Read more…
Categories: Database, ESX, ESXi, Linux, Oracle, SLES, SSH, VMware Tags: Clusterware, ESXi, Linux, Oracle, SLES, VMware
February 16th, 2009
st0ma
Oracle Clusterware Installation
Install the xntpd service and configure it.
You can use the YaST management console to do so.
It is extremely important that both nodes are configured to use an NTP server and that they are synchronized regularly.
Any time difference between the nodes can result in an inoperable cluster.
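Outside of YaST, a minimal sketch of the same setup from the command line could be (the NTP server address is a placeholder):

```
# Point the node at a common NTP server and enable xntpd (SLES):
echo "server 192.168.128.1" >> /etc/ntp.conf
rcxntpd restart        # SLES init script for the xntpd service
chkconfig xntpd on     # start the service at boot
```

Repeat on both nodes, pointing them at the same server.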
1. Copy the cpio.gz file to the first node, gunzip it, and extract the contents of the cpio archive
#gunzip 10201_clusterware_linux_x86_64.cpio.gz
#cpio -idmv < 10201_clusterware_linux_x86_64.cpio
Read more…
Categories: Database, ESX, ESXi, Linux, Oracle, SLES, SSH, VMware Tags: Clusterware, Database, ESXi, Linux, Oracle, SLES, VIPCA
February 15th, 2009
st0ma
Configuring and Using Raw Partitions for the Oracle Shared Storage
For the purpose of my cluster I will use raw partitions that are shared disks on the ESXi host.
First I will identify my needs for shared disks and then create and format them accordingly.
After some high level overview of my requirements I have created the following list of required files:
asm01.vmdk = 6GB [ ORADATA ]
asm02.vmdk = 2GB [Application data]
asm03.vmdk = 4GB [FLASH]
ocr.vmdk = 256MB [Cluster Registry]
voting.vmdk = 40MB [Voting disk]
spfile.vmdk = 16MB [Parameter configuration]
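As a sketch, the list above can be created up front with vmkfstools on the ESXi console (the datastore path is hypothetical). Shared RAC disks should be eagerly zeroed so both nodes can attach them safely:

```
vmkfstools -c 6G   -d eagerzeroedthick /vmfs/volumes/datastore1/shared/asm01.vmdk
vmkfstools -c 2G   -d eagerzeroedthick /vmfs/volumes/datastore1/shared/asm02.vmdk
vmkfstools -c 4G   -d eagerzeroedthick /vmfs/volumes/datastore1/shared/asm03.vmdk
vmkfstools -c 256M -d eagerzeroedthick /vmfs/volumes/datastore1/shared/ocr.vmdk
vmkfstools -c 40M  -d eagerzeroedthick /vmfs/volumes/datastore1/shared/voting.vmdk
vmkfstools -c 16M  -d eagerzeroedthick /vmfs/volumes/datastore1/shared/spfile.vmdk
```

The disks then get attached to both virtual machines on a separate SCSI controller.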
Read more…
Categories: Database, ESX, ESXi, Linux, Oracle, SLES, VMware Tags: ESXi, Linux, Oracle, RAC, raw devices, shared storage, SLES, vmdk
February 14th, 2009
st0ma
Linux OS Parameters
Here is the list of the required kernel parameters for Clusterware and Oracle Database 10g:

Parameter                   | Value                                                                | File
----------------------------|----------------------------------------------------------------------|----------------------------------------
semmsl semmns semopm semmni | 250 32000 100 128                                                    | /proc/sys/kernel/sem
shmmax                      | The lower of 4 GB minus 1 byte and half the physical memory, in bytes | /proc/sys/kernel/shmmax
shmmni                      | 4096                                                                 | /proc/sys/kernel/shmmni
shmall                      | 2097152                                                              | /proc/sys/kernel/shmall
file-max                    | 65536                                                                | /proc/sys/fs/file-max
ip_local_port_range         | Minimum: 1024  Maximum: 65000                                        | /proc/sys/net/ipv4/ip_local_port_range
rmem_default                | 262144                                                               | /proc/sys/net/core/rmem_default
rmem_max                    | 4194304                                                              | /proc/sys/net/core/rmem_max
wmem_default                | 262144                                                               | /proc/sys/net/core/wmem_default
wmem_max                    | 4194304                                                              | /proc/sys/net/core/wmem_max
In order to check the values in your system use the sysctl command.
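For example, the current settings can be read either by sysctl key name (the keys mirror the /proc paths in the table, e.g. kernel.sem, kernel.shmmax, fs.file-max) or straight from /proc:

```shell
# Read the current values directly from /proc;
# `sysctl kernel.sem kernel.shmmax fs.file-max` reports the same values by key.
cat /proc/sys/kernel/sem
cat /proc/sys/kernel/shmmax
cat /proc/sys/fs/file-max
```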
You will probably get the following results from the default kernel configuration:
Read more…
Categories: Database, ESX, ESXi, Linux, Oracle, SLES, VMware Tags: ESXi, Linux, Oracle, SLES, sysctl, VMware