[IQUG] MPX on VM question

Steve Shen sshen at sscinc.com
Fri Aug 4 08:02:27 MST 2017


Hi Chris,

Thank you very much for the detailed information. It is very helpful.

Kind regards,
Steve
From: Baker, Chris [mailto:c.baker at sap.com]
Sent: Friday, August 04, 2017 10:45 AM
To: Steve Shen; iqug at iqug.org
Subject: Re: MPX on VM question

Steve,

The following links should help, although some of the information is dated (IQ 15.4).
https://blogs.sap.com/2013/06/06/sap-sybase-iq-multiplex-running-on-vmware-vsphere-validation-and-test-results-complete/
https://archive.sap.com/documents/docs/DOC-42439
https://archive.sap.com/documents/docs/DOC-58998

The major difference for IQ 16 would be the memory changes for -iqlm, as well as the new ability to use shared-nothing multiplex (IQ DAS), which obviously allows removal of the VSAN layer of VMware (or similar technology).  I.e. as long as you map the devices correctly from storage to VM on the host, you can use shared-disk MPX; otherwise you can go with a simpler disk configuration and use DAS for the MPX.

My suggestion would be to mount the storage partitions to the host -> map each storage partition to a block device on the guest -> create raw devices from the block devices -> create symlinks on the guest -> use the symlinks for the IQ files.  Ensure your storage mapping to the host allows concurrent RW by the guests (the whitepapers mention what is required for VMware licensing, etc. to allow this, but if you use a different technology, then your mileage may vary).
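A minimal sketch of that chain for one main device, assuming illustrative names (/dev/sdb1, /dev/raw/raw1 and an ./iqdev directory are examples, not values from this thread; the raw binding needs root and is skipped when the block device is absent):

```shell
BLOCK=/dev/sdb1          # block device the storage partition appears as (example)
RAW=/dev/raw/raw1        # raw device to bind to it (example)
LINK=./iqdev/main1.iq    # short symlink, named the same on every guest (example)

mkdir -p ./iqdev
# Bind the raw device to the block device (root required; skipped if absent):
if [ -b "$BLOCK" ]; then
    raw "$RAW" "$BLOCK"
fi
# Point a short, stable symlink at the raw device; IQ is then given the
# absolute path of this symlink, e.g. in CREATE DBSPACE:
ln -sf "$RAW" "$LINK"
echo "IQ device path: $LINK"
```

The point of the symlink layer is that the (long, host-specific) raw device path never appears in the IQ catalog; only the short, identically-named link does.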

Use the 'iqheader' utility to confirm the correct mounts at each symlink - remember, the absolute path is what is used in the MPX.  Symlinks help a lot here by shortening the path, and they can be named the same on each guest (except for IQ TEMP devices, which must be uniquely named with a different device logical name across the MPX for local devices).
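That check can be scripted per guest; a sketch, assuming the example /iqdev layout above and that iqheader is on the PATH (compare the reported header details across nodes to confirm each guest sees the same device behind the same name):

```shell
# Walk every IQ symlink on this guest and dump the device header.
checked=0
for link in /iqdev/*.iq; do
    [ -e "$link" ] || continue          # glob did not match, or stale link
    echo "== $link -> $(readlink -f "$link")"
    iqheader "$link"                    # IQ utility: prints the device header
    checked=$((checked + 1))
done
echo "devices checked: $checked"
```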

Obviously, if you use DAS, then you can worry less about shared RW access to the IQ MAIN devices.

The primary thing is that the VM needs 1:1 mapping for memory and CPU resources (which leaves manageability as the only real reason for using a VM, since dataservers are never 99% idle the way desktops are).  For me this is the critical issue with virtualization - if you want performance, you need to provide the resources and not be cheap.

Network is also an issue here.  Obviously, you would not run 2 IQ VMs on the same host (it would be pointless - you might as well just run a bigger simplex VM).  So you will have an interconnect.  With IQ MPX that can be on the same network as the client connections or a 'private' network.  If you choose to use separate connections/ports for the communications between the MPX instances (e.g. to pass intermediate results instead of shared temp), then you will want to configure this over a separate NIC on each host.  Again - using virtual networking to carve out a different network on the same host NIC (for use by the MPX interconnect) may be 'simple' but in the end just maps to the same physical connection, so can affect performance and would require additional work on the IQ side for no net benefit.

Just be aware of the physical topology, not just the virtual configuration available, if you want performance from your VM-based IQ MPX configuration, and provide resources properly.

One of the cool things about putting IQ on a VM when this was tested was that the host was mounted to the SAN with all the drivers, but the VM only needed to be presented SCSI devices for the IQ devices.  The VM did not need any drivers.  This can make management/migration with something like vMotion simpler, as long as the devices are mounted the same on the host and presented the same to the guest as SCSI devices.

Attached are some example RHEL .rules files (edit them and place them with the other rules in /etc/udev/rules.d on each guest) for managing the mapping of the block devices to raw devices by the Linux udev system.  This ensures that the raw devices are created before the IQ server starts.  The 60... file creates the raw devices from the block devices (instead of using the 'rawservice').  The 90... file changes the ownership to that of the user that runs the IQ server.  You do not need to change the ownership of the block devices from 'root'.
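[The attachment does not survive in the archive.  A generic sketch of the two kinds of rules described, using standard udev raw-device syntax - the device names, file names and the 'sybase' owner are placeholders, not the contents of the original attachment:]

```
# 60-raw.rules (example) -- create raw devices from block devices at boot:
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"

# 90-raw-perms.rules (example) -- give the IQ server's OS user access;
# the block devices themselves can stay owned by root:
ACTION=="add", KERNEL=="raw[0-9]*", OWNER="sybase", GROUP="sybase", MODE="0660"
```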

For SuSE, you may only have the rawservice to create the devices.  The 90... rule will re-fire after the raw devices are available and change the ownership.  For both RHEL and SuSE, you may have to add the 'raw' module to the Linux kernel to properly manage raw devices: run the 'raw -qa' command, and if you get an error, load the raw module using modprobe, or rebuild the kernel with the raw module built in so it is available at boot time.
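That check is a one-liner; a sketch (run as root on the guest - in an environment without the raw utility or module this just reports what is missing):

```shell
# Confirm the kernel's raw-device interface is available.
if raw -qa >/dev/null 2>&1; then
    status="raw interface available"
else
    # Either the 'raw' utility is absent or the module is not loaded:
    modprobe raw 2>/dev/null && status="raw module loaded" \
        || status="raw module missing: modprobe raw, or rebuild the kernel with it"
fi
echo "$status"
```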

HTH
Chris

Chris Baker | Database Engineering Evangelist | CERT | PI HANA Platform Data Management | SAP
T +1 416-226-7033 | M +1 647-224-2033 | TF +1 866-716-8860
SAP Canada Inc. 445 Wes Graham Way, Waterloo, N2L 6R2
c.baker at sap.com | www.sap.com

https://sap.na.pgiconnect.com/I826572
Conference tel: 1-866-312-7353,,9648565377#

From: iqug-bounces at iqug.org [mailto:iqug-bounces at iqug.org] On Behalf Of Steve Shen
Sent: Friday, August 4, 2017 10:03 AM
To: iqug at iqug.org
Subject: [IQUG] IQUG Digest, Vol 54, Issue 6

Hi all,

Originally I proposed to the Unix SAs that we use two physical Linux machines to build an IQ multiplex using raw partitions, but they had no hardware available.

My UNIX SAs suggested building two Linux hosts on VMware for my POC testing.  If that is feasible, they will not need to procure new hardware.

Will it work to do the multiplex testing with raw partitions on two VMware-based Linux hosts?  Please advise.

Thank you very much.

All the best,
Steve Shen

t: (646) 827-2102

This email with all information contained herein or attached hereto may contain confidential and/or privileged information intended for the addressee(s) only. If you have received this email in error, please contact the sender and immediately delete this email in its entirety and any attachments thereto.

