[IQUG] IQUG Digest, Vol 47, Issue 10

Steve Shen sshen at sscinc.com
Wed Nov 23 08:59:49 MST 2016


Hi all,

I want to update you and conclude this issue. The root cause was a single SQL statement that caused the IQ server to appear hung.

It had nothing to do with any of the database options or the server configurations.

Thank you all.

Kind regards,

Steve Shen

t: (646) 827-2102

-----Original Message-----
From: iqug-bounces at iqug.org [mailto:iqug-bounces at iqug.org] On Behalf Of iqug-request at iqug.org
Sent: Thursday, November 03, 2016 2:11 PM
To: iqug at iqug.org
Subject: IQUG Digest, Vol 47, Issue 10

Send IQUG mailing list submissions to
        iqug at iqug.org

To subscribe or unsubscribe via the World Wide Web, visit
        http://iqug.org/mailman/listinfo/iqug
or, via email, send a message with subject or body 'help' to
        iqug-request at iqug.org

You can reach the person managing the list at
        iqug-owner at iqug.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of IQUG digest..."


Today's Topics:

   1. Re: IQUG Digest, Vol 47, Issue 9 (Steve Shen)


----------------------------------------------------------------------

Message: 1
Date: Thu, 3 Nov 2016 18:10:02 +0000
From: Steve Shen <sshen at sscinc.com>
To: "iqug at iqug.org" <iqug at iqug.org>
Subject: Re: [IQUG] IQUG Digest, Vol 47, Issue 9
Message-ID:
        <0C03FF7E7FA66E41A61525750FD6533966B977AC at YKT1EMXPRD1.globeop.com>
Content-Type: text/plain; charset="iso-8859-1"

The O/S is Solaris. It's not AIX.

In addition to the failures in creating tables, many table loads also failed.  The IQ was definitely up and running.

Based on these failures, SAP Technical Support recommended the wrong course of action on the incident that I created.

Regards,

Steve Shen

Steve Shen
SS&C Technologies Inc.
Associate Director DBA

t: (646) 827-2102  |  f: (646) 827-1850
sshen at sscinc.com  |  www.sscinc.com
Follow us: Twitter  |  Facebook  |  LinkedIn


-----Original Message-----
From: iqug-bounces at iqug.org [mailto:iqug-bounces at iqug.org] On Behalf Of iqug-request at iqug.org
Sent: Thursday, November 03, 2016 1:27 PM
To: iqug at iqug.org
Subject: IQUG Digest, Vol 47, Issue 9



Today's Topics:

   1. Re: IQUG Digest, Vol 46, Issue 33 (Bhandari, Shashikant)


----------------------------------------------------------------------

Message: 1
Date: Thu, 3 Nov 2016 17:24:55 +0000
From: "Bhandari, Shashikant" <shashikant.bhandari at sap.com>
To: Steve Shen <sshen at sscinc.com>, "iqug at iqug.org" <iqug at iqug.org>
Subject: Re: [IQUG] IQUG Digest, Vol 46, Issue 33
Message-ID:
        <518bdf364e6f483290f9718073726945 at USPHLE13US14.global.corp.sap>
Content-Type: text/plain; charset="iso-8859-1"

Hi Steve,

   The pstack command would not cause the error you reported. In place of IQ-PID you supplied the actual IQ server process ID, correct?

   If I am not mistaken you are on AIX, right?  On AIX the command is "procstack", not "pstack". The script you will get from support will run properly on ANY Unix platform. I think the earlier script you received had an error, which caused the command not to run.

   The error below says that, while connecting, the client could not find the database TARGET, which probably indicates that the server is not running.

  Regards

Shashikant Bhandari
Shashi.Bhandari at sapns2.com
http://www.sapns2.com
Office: +1 301 896 1427
Please consider the impact on the environment before printing this e-mail.

-----Original Message-----
From: Steve Shen [mailto:sshen at sscinc.com]
Sent: Thursday, November 3, 2016 12:55 PM
To: iqug at iqug.org; Bhandari, Shashikant <shashikant.bhandari at sap.com>
Subject: RE: IQUG Digest, Vol 46, Issue 33

Hi Shashikant,

I have to keep you informed of the following:

I ran "pstack IQ-PID >OUTPUT_FILE" early today. The production application ran into issues at the exact time when I ran the command:

All of a sudden, the application failed to create tables. Let me share the complete detailed messages with all of you:

"
UDA-SQL-0031 Unable to access the "TARGET" database. Check that the connection parameters to the database are configured correctly. For example, ensure that the data source connection contains the signon information, such as a password, to connect to the database.

UDA-SQL-0129 Invalid login information was detected by the underlying database.

[Sybase][ODBC Driver][SQL Anywhere]Database server not found

[PROGRESS   - 08:21:29] SQL Node 3 'CreateTable'; reported the following:

DM-DBM-0306 UDA driver error connecting to 'TARGET'.
"

I am waiting for SAP Technical Support to come up with an answer.

Do you know what could have caused this impact?

Regards,

Steve Shen

-----Original Message-----
From: iqug-bounces at iqug.org [mailto:iqug-bounces at iqug.org] On Behalf Of iqug-request at iqug.org
Sent: Monday, October 31, 2016 3:18 PM
To: iqug at iqug.org
Subject: IQUG Digest, Vol 46, Issue 33



Today's Topics:

   1. Re: IQUG Digest, Vol 46, Issue 26 (Mumy, Mark)


----------------------------------------------------------------------

Message: 1
Date: Mon, 31 Oct 2016 19:17:36 +0000
From: "Mumy, Mark" <mark.mumy at sap.com>
To: "Bhandari, Shashikant" <shashikant.bhandari at sap.com>, Steve Shen
        <sshen at sscinc.com>, "iqug at iqug.org" <iqug at iqug.org>
Subject: Re: [IQUG] IQUG Digest, Vol 46, Issue 26
Message-ID: <726F32DB-93EA-4618-9B2F-F64F6FD5B92F at sap.com>
Content-Type: text/plain; charset="utf-8"

Having more sweeper threads will only help if 1) you are starved for sweeper threads and 2) your IO subsystem can perform more random IO than IQ can currently push.  I don't know that I've seen data to support either of those, so this is not all that surprising.

As with anything related to IQ performance, you need to first determine what the hardware can support.  I don't think that's been done here.  Do a baseline with the original storage, then rerun on the new stuff.  That will tell you if the SSDs are actually any faster.  It will also tell you what the limit will be for IQ or any other software using that storage.

Can you run a tool like "fio" on your IQ storage to gauge the IO throughput?  Best to test random access, as sequential access is generally not how IQ behaves.  Running these tests will isolate the storage and eliminate IQ from the picture.  If your storage throughput is low or the access times are high, then the issue isn't with IQ but rather within the end-to-end hardware system.
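A baseline along the lines Mark describes could start from a fio job file like the following sketch; the block size, runtime, and file path are assumptions to adapt to your own dbspace layout:

```
; iq-randread.fio -- hypothetical random-read baseline for IQ storage
[global]
ioengine=posixaio   ; portable on Solaris; libaio is Linux-only
direct=1            ; bypass the filesystem cache
rw=randread
bs=128k             ; assumption: match your IQ page size
runtime=60
time_based=1

[iq-main-store]
filename=/iqdata/main_store.iq   ; hypothetical path to an IQ dbfile or raw device
size=10g
```

Running "fio iq-randread.fio" first against the old storage and then against the SSDs gives directly comparable IOPS and latency numbers for the two.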

Mark

Mark Mumy
Customer Innovation and Enterprise Platform |  SAP M +1 347-820-2136 | E mark.mumy at sap.com
My Blogs: https://blogs.sap.com/author/markmumy/

https://sap.na.pgiconnect.com/I825063
Conference tel: 18663127353,,8035340905#


From: Shashikant Bhandari <shashikant.bhandari at sap.com>
Date: Monday, October 31, 2016 at 11:56
To: Steve Shen <sshen at sscinc.com>, "iqug at iqug.org" <iqug at iqug.org>, Mark Mumy <mark.mumy at sap.com>
Subject: RE: IQUG Digest, Vol 46, Issue 26

Hi Steve,

    What is the EXACT ESD of IQ 15.4? Also, if this happens again, can you please take at least 3 pstacks, about 2 minutes apart, and send me those 3 files?

   pstack IQ_SERVER_PID  > pstack_##.out
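Shashikant's request can be scripted; this is a sketch in dry-run form (the PID is a placeholder, and on AIX the tool is procstack rather than pstack -- remove the echo and uncomment the sleep to actually capture):

```shell
# Sketch: three stack snapshots of the IQ server, ~2 minutes apart.
IQ_PID=${IQ_PID:-12345}          # placeholder: replace with the real iqsrv PID
STACK_CMD=${STACK_CMD:-pstack}   # use procstack on AIX
for i in 1 2 3; do
  echo "$STACK_CMD $IQ_PID > pstack_0${i}.out"   # drop 'echo' to actually capture
  # sleep 120                                    # wait ~2 minutes between snapshots
done
```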

Regards

Shashikant Bhandari
Shashi.Bhandari at sapns2.com
http://www.sapns2.com
Office: +1 301 896 1427
Please consider the impact on the environment before printing this e-mail.

From: iqug-bounces at iqug.org [mailto:iqug-bounces at iqug.org] On Behalf Of Steve Shen
Sent: Monday, October 31, 2016 12:44 PM
To: iqug at iqug.org; Mumy, Mark <mark.mumy at sap.com>
Subject: Re: [IQUG] IQUG Digest, Vol 46, Issue 26

Hi Mark and all,

I increased "Sweeper_Threads_Percent" from 10 to 20 last Saturday.  I was hoping it would flush the dirtied temporary cache faster, but I ran into the same "like hang" performance issue again this morning. The issue lasted 6 minutes, from 08:27 EDT to 08:33 EDT today.

I have to look into the feasibility of changing "Wash_Area_Buffers_Percent". The technical manual advises consulting SAP Technical Support before modifying this database option. I really doubt that changing this option will help resolve the issue. I am assuming this one is static, but it could be allocated dynamically.

Based on the spreadsheet data below, can you kindly identify which of the following were the major culprit(s)?
1.      IQ Resource Governor waiting?  Was it waiting because of being limited by "-iqgovern 84" or something else?
2.      The number of active concurrent operations admitted by the IQ resource governor? I set "-iqgovern" to 84 with 40 CPU cores, but it did not seem to limit the maximum number of active concurrent operations; the maximum was 121.  Was the IQ Resource Governor waiting for the dirty temporary cache to be flushed and cleaned?  There were still 100 GB of temporary cache available during those 6 minutes.  Does the formula for setting "-iqgovern", (2 * # of CPUs + 10) or (2 * # of CPUs + 4), really produce the optimal number?
3.      The percentage of temporary cache pages dirtied?  Could it be that it was waiting for the IQ Resource Governor?
4.      Other Versions? It is very normal for data to be selected and then modified.
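For reference, the two "-iqgovern" sizing formulas mentioned in point 2 work out as follows for the 40 cores in question (plain shell arithmetic, nothing IQ-specific); the reported peak of 121 admitted operations is well above both values:

```shell
# -iqgovern sizing formulas, evaluated for 40 CPU cores
CPUS=40
echo "2*CPUS+10 = $((2 * CPUS + 10))"   # prints 90
echo "2*CPUS+4  = $((2 * CPUS + 4))"    # prints 84, the value in use
```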

I am sharing the spreadsheet data that I collected today.  Please share your insights with me if you know where to fix the performance issue.  Thank you.

Descriptions by time of day, 2016-10-31:

Time  | RG-wait | Active-ops | LOADs | Temp-dirty % | Other-vers | Main-dirty %
8:20  |     0   |      18    |   0   |      0.53    |      7     |     0.02
8:21  |     0   |      17    |   0   |      0.28    |      6     |     0.02
8:22  |     0   |       5    |   0   |      0.23    |      3     |     0.01
8:23  |     0   |       4    |   0   |      0.13    |      2     |     0.01
8:24  |     0   |      33    |   0   |      3.16    |      3     |     0.01
8:25  |     0   |      55    |   0   |      5.92    |      8     |     0.01
8:26  |     0   |      75    |   0   |      7.98    |     14     |     0.01
8:27  |    18   |     104    |   1   |     11.34    |     16     |     0.01
8:28  |    26   |     118    |   1   |     13.08    |     15     |     0.01
8:29  |    34   |     120    |   1   |     13.20    |     15     |     0.01
8:30  |    41   |     120    |   1   |     13.26    |     15     |     0.01
8:31  |    49   |     121    |   3   |     13.45    |     15     |     0.01
8:32  |    61   |     120    |   3   |     13.30    |     15     |     0.01
8:33  |     0   |      12    |   0   |      0.18    |     10     |     0.01
8:34  |     0   |      21    |   0   |      0.23    |      8     |     0.02
8:35  |     0   |       8    |   0   |      0.01    |      5     |     0.02

Legend:
  RG-wait      = Number of operations waiting for IQ resource governor
  Active-ops   = Number of active concurrent operations admitted by IQ resource governor
  LOADs        = Number of active LOAD TABLE statements
  Temp-dirty % = Percentage of temporary cache pages dirtied
  Other-vers   = Other Versions
  Main-dirty % = Percentage of main cache pages dirtied
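As a quick sanity check, the hang window can be pulled out of data like the above mechanically; this sketch (over an abbreviated copy of the resource-governor waiting counts) flags every minute with a non-zero wait:

```shell
# Flag the minutes where operations were queued behind the IQ resource governor.
# Columns: time, RG-wait count (abbreviated sample of the data above).
awk '$2 > 0 { print $1, "waiting:", $2 }' <<'EOF'
8:26 0
8:27 18
8:28 26
8:29 34
8:30 41
8:31 49
8:32 61
8:33 0
EOF
```

This prints only the 08:27-08:32 minutes, matching the window the developers reported.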


Kind regards,

Steve Shen

t: (646) 827-2102


-----Original Message-----
From: iqug-bounces at iqug.org [mailto:iqug-bounces at iqug.org] On Behalf Of iqug-request at iqug.org
Sent: Thursday, October 27, 2016 9:13 AM
To: iqug at iqug.org
Subject: IQUG Digest, Vol 46, Issue 26


Today's Topics:

   1. Re: IQUG Digest, Vol 46, Issue 18 (Steve Shen)


----------------------------------------------------------------------

Message: 1
Date: Thu, 27 Oct 2016 13:13:01 +0000
From: Steve Shen <sshen at sscinc.com>
To: "'Mumy, Mark'" <mark.mumy at sap.com>, "iqug at iqug.org" <iqug at iqug.org>
Subject: Re: [IQUG] IQUG Digest, Vol 46, Issue 18
Message-ID:
        <0C03FF7E7FA66E41A61525750FD6533966B4E3EF at YKT1EMXPRD1.globeop.com>
Content-Type: text/plain; charset="utf-8"

Hi Mark and all,

Good morning!

You made a very good point: they are static parameters, but they are allocated dynamically.

That explains why I did not observe any change in thread usage even after I doubled the default values from 144 to 288 for both "Max_IQ_Threads_Per_Connection" and "Max_IQ_Threads_Per_Team" last weekend.

Since there was no way for the applications to generate the same workloads in UAT as in production, there was no way to test the changes and their potential impacts in UAT. Nor could our developers reproduce the weekdays' performance issues over weekends.

So I will try to change one or two parameters each weekend. I will update you next week on whether the "like hang" performance issue is resolved.

Thank you very much for your feedback.

All the best,

Steve

t: (646) 827-2102


-----Original Message-----
From: Mumy, Mark [mailto:mark.mumy at sap.com]
Sent: Thursday, October 27, 2016 7:50 AM
To: Steve Shen; iqug at iqug.org
Subject: Re: [IQUG] IQUG Digest, Vol 46, Issue 18

I don't know that those are going to help at all.  While the percentages are static, the thread allocation is not.  We can use up to 10% or 20% of the threads for sweeper/washer, per cache (main/temp).  That's on the order of 130-260 threads each: 130 to sweep main, 130 to sweep temp, 260 to wash main, and 260 to wash temp.  Unless you are seeing a "used" spike by that much, we are likely not starved for threads there.

In fact, with a move to faster storage you should see a drop in prefetch, wash, and sweeper threads as the IOs will now be faster which will free up the threads quicker.

Mark


Mark Mumy

Customer Innovation and Enterprise Platform |  SAP

M +1 347-820-2136 | E mark.mumy at sap.com

My Blogs: https://blogs.sap.com/author/markmumy/



https://sap.na.pgiconnect.com/I825063

Conference tel: 18663127353,,8035340905#




On 10/26/16, 10:47, "Steve Shen" <sshen at sscinc.com> wrote:

    Hi Mark,

    The same performance issue happened again today.

    I am guessing that the IQ would have to be recycled to make the changes on "Sweeper_Threads_Percent" and "Wash_Area_Buffers_Percent" effective. I could not recycle the production IQ server until this weekend.

    I also verified the default values for these two database options based on "SYSOPTIONDEFAULTS" in version 15.4.x:
            1. The default value of "Wash_Area_Buffers_Percent" was 20%. It's likely that this was changed to 10% in version 16.x.
            2. The default value of "Sweeper_Threads_Percent" was 10%.

    So I will know for sure whether the issue was fixed or not next week after I recycle the server this weekend.

    Thank you very much.

    Kind regards,

    Steve

    -----Original Message-----
    From: Mumy, Mark [mailto:mark.mumy at sap.com]
    Sent: Wednesday, October 26, 2016 8:02 AM
    To: Steve Shen; iqug at iqug.org
    Subject: Re: [IQUG] IQUG Digest, Vol 46, Issue 18

    OK, so it isn't thread starvation.  You have 2341 threads with 317, roughly, held in reserve for connections and other SA stuff.  Then you have 950, roughly, in use.  That leaves you with 1074 free threads available for processing.  You could also take your "ThrNumFree" and subtract the ThrReserved and get to the same number.
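Mark's arithmetic, spelled out with the rounded figures from his message:

```shell
# Threads available for query processing = total - reserved - in use.
TOTAL=2341      # ThrNumThreads
RESERVED=317    # ThrReserved (held back for connections and other SA work)
USED=950        # NumThrUsed, roughly
echo "available for processing: $((TOTAL - RESERVED - USED))"   # prints 1074
# Cross-check via ThrNumFree - ThrReserved; close to the same, given rounding:
echo "cross-check: $((1394 - 317))"
```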

    Mark


    Mark Mumy

    Customer Innovation and Enterprise Platform |  SAP

    M +1 347-820-2136 | E mark.mumy at sap.com

    My Blogs: https://blogs.sap.com/author/markmumy/



    https://sap.na.pgiconnect.com/I825063

    Conference tel: 18663127353,,8035340905#




    On 10/25/16, 14:00, "Steve Shen" <sshen at sscinc.com> wrote:

        Hi Mark,

        I just restored both "SWEEPER_THREADS_PERCENT" and "WASH_AREA_BUFFER_PERCENT" early this morning. I am unsure whether these two options are static or dynamic. Were they effective right away?

        As requested, I am providing you with the following information from executing sp_iqsysmon on October 14th:

        sybase at ykt1siquat27: /home/sybase/log/SIQCGSPRD ==> grep -i ThreadLimit cognosdw.174-sysmon.2016-10-14-08*
        cognosdw.174-sysmon.2016-10-14-08-24:ThreadLimit              2342
        cognosdw.174-sysmon.2016-10-14-08-25:ThreadLimit              2342
        cognosdw.174-sysmon.2016-10-14-08-26:ThreadLimit              2342
        cognosdw.174-sysmon.2016-10-14-08-27:ThreadLimit              2342
        cognosdw.174-sysmon.2016-10-14-08-28:ThreadLimit              2342
        cognosdw.174-sysmon.2016-10-14-08-29:ThreadLimit              2342
        cognosdw.174-sysmon.2016-10-14-08-30:ThreadLimit              2342
        cognosdw.174-sysmon.2016-10-14-08-31:ThreadLimit              2342
        cognosdw.174-sysmon.2016-10-14-08-32:ThreadLimit              2342
        cognosdw.174-sysmon.2016-10-14-08-32:ThreadLimit              2342

        sybase at ykt1siquat27: /home/sybase/log/SIQCGSPRD ==> grep -i ThrNumThreads cognosdw.174-sysmon.2016-10-14-08*
        cognosdw.174-sysmon.2016-10-14-08-24:ThrNumThreads            2341    ( 100.0 %)
        cognosdw.174-sysmon.2016-10-14-08-25:ThrNumThreads            2341    ( 100.0 %)
        cognosdw.174-sysmon.2016-10-14-08-26:ThrNumThreads            2341    ( 100.0 %)
        cognosdw.174-sysmon.2016-10-14-08-27:ThrNumThreads            2341    ( 100.0 %)
        cognosdw.174-sysmon.2016-10-14-08-28:ThrNumThreads            2341    ( 100.0 %)
        cognosdw.174-sysmon.2016-10-14-08-29:ThrNumThreads            2341    ( 100.0 %)
        cognosdw.174-sysmon.2016-10-14-08-30:ThrNumThreads            2341    ( 100.0 %)
        cognosdw.174-sysmon.2016-10-14-08-31:ThrNumThreads            2341    ( 100.0 %)
        cognosdw.174-sysmon.2016-10-14-08-32:ThrNumThreads            2341    ( 100.0 %)
        cognosdw.174-sysmon.2016-10-14-08-32:ThrNumThreads            2341    ( 100.0 %)

        sybase at ykt1siquat27: /home/sybase/log/SIQCGSPRD ==> grep -i ThrReserved cognosdw.174-sysmon.2016-10-14-08*
        cognosdw.174-sysmon.2016-10-14-08-24:ThrReserved              315     (  13.5 %)
        cognosdw.174-sysmon.2016-10-14-08-25:ThrReserved              317     (  13.5 %)
        cognosdw.174-sysmon.2016-10-14-08-26:ThrReserved              315     (  13.5 %)
        cognosdw.174-sysmon.2016-10-14-08-27:ThrReserved              317     (  13.5 %)
        cognosdw.174-sysmon.2016-10-14-08-28:ThrReserved              317     (  13.5 %)
        cognosdw.174-sysmon.2016-10-14-08-29:ThrReserved              317     (  13.5 %)
        cognosdw.174-sysmon.2016-10-14-08-30:ThrReserved              317     (  13.5 %)
        cognosdw.174-sysmon.2016-10-14-08-31:ThrReserved              317     (  13.5 %)
        cognosdw.174-sysmon.2016-10-14-08-32:ThrReserved              316     (  13.5 %)
        cognosdw.174-sysmon.2016-10-14-08-32:ThrReserved              291     (  12.4 %)

        sybase at ykt1siquat27: /home/sybase/log/SIQCGSPRD ==> grep -i ThrNumFree cognosdw.174-sysmon.2016-10-14-08*
        cognosdw.174-sysmon.2016-10-14-08-24:ThrNumFree               1394    (  59.5 %)
        cognosdw.174-sysmon.2016-10-14-08-25:ThrNumFree               1396    (  59.6 %)
        cognosdw.174-sysmon.2016-10-14-08-26:ThrNumFree               1394    (  59.5 %)
        cognosdw.174-sysmon.2016-10-14-08-27:ThrNumFree               1396    (  59.6 %)
        cognosdw.174-sysmon.2016-10-14-08-28:ThrNumFree               1396    (  59.6 %)
        cognosdw.174-sysmon.2016-10-14-08-29:ThrNumFree               1396    (  59.6 %)
        cognosdw.174-sysmon.2016-10-14-08-30:ThrNumFree               1396    (  59.6 %)
        cognosdw.174-sysmon.2016-10-14-08-31:ThrNumFree               1396    (  59.6 %)
        cognosdw.174-sysmon.2016-10-14-08-32:ThrNumFree               1394    (  59.5 %)
        cognosdw.174-sysmon.2016-10-14-08-32:ThrNumFree               1370    (  58.5 %)

        sybase at ykt1siquat27: /home/sybase/log/SIQCGSPRD ==> grep -i NumThrUsed cognosdw.174-sysmon.2016-10-14-08*
        cognosdw.174-sysmon.2016-10-14-08-24:NumThrUsed               948     (  40.5 %)
        cognosdw.174-sysmon.2016-10-14-08-25:NumThrUsed               946     (  40.4 %)
        cognosdw.174-sysmon.2016-10-14-08-26:NumThrUsed               948     (  40.5 %)
        cognosdw.174-sysmon.2016-10-14-08-27:NumThrUsed               946     (  40.4 %)
        cognosdw.174-sysmon.2016-10-14-08-28:NumThrUsed               946     (  40.4 %)
        cognosdw.174-sysmon.2016-10-14-08-29:NumThrUsed               946     (  40.4 %)
        cognosdw.174-sysmon.2016-10-14-08-30:NumThrUsed               946     (  40.4 %)
        cognosdw.174-sysmon.2016-10-14-08-31:NumThrUsed               946     (  40.4 %)
        cognosdw.174-sysmon.2016-10-14-08-32:NumThrUsed               948     (  40.5 %)
        cognosdw.174-sysmon.2016-10-14-08-32:NumThrUsed               972     (  41.5 %)

        Thank you.

        Regards,

        Steve

        t: (646) 827-2102

        -----Original Message-----
        From: Mumy, Mark [mailto:mark.mumy at sap.com]
        Sent: Tuesday, October 25, 2016 12:22 PM
        To: Steve Shen; iqug at iqug.org
        Subject: Re: [IQUG] IQUG Digest, Vol 46, Issue 18

        Both the thread options are in IQ 15 and IQ 16.  The defaults should be 10 for each.  10% of all threads.  And that is for each pool.  Please look at the Threads section of my IQ 16 sizing guide as it explains all of this in more detail.

        http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/c0836b4f-429d-3010-a686-c35c73674180?QuickLink=index&overridelayout=true&58385785468058


        Also note that when "Free Threads" equals "Reserved Threads" the only threads that are left are those kept back for connection use. Consequently, there will be no threads left over for parallel operations.  This will then drive performance down and can cause resource governor build-up.  That's the point, though.  If things run slow, you don't want to put more workload on IQ and cause the issue to spin out of control.

        Can you send the entire thread section from each interval?  Need to see these:
          ThreadLimit
          ThrNumThreads
          ThrReserved
          ThrNumFree
          NumThrUsed

        Mark


        Mark Mumy

        Customer Innovation and Enterprise Platform |  SAP

        M +1 347-820-2136 | E mark.mumy at sap.com

        My Blogs: https://blogs.sap.com/author/markmumy/



        https://sap.na.pgiconnect.com/I825063

        Conference tel: 18663127353,,8035340905#




        On 10/24/16, 10:34, "Steve Shen" <sshen at sscinc.com> (via iqug-bounces at iqug.org) wrote:

            Hi all,

            I want to update you the following:

            1. Increasing both "Max_IQ_Threads_Per_Connection" and "Max_IQ_Threads_Per_Team" from 144 to 288 last weekend did not help resolve the "like hang" performance issue.

            2. The IQ was still waiting for IQ Resource Governor today.

            3. I made a new discovery today about the "Percentage of temporary cache pages dirtied" metric in the output files from executing sp_iqstatistics:
                    3.1 The users did not experience the performance issues when the values of this metric stayed below 10%.
                    3.2 The users ran into the "like hang" performance issues whenever the values of this metric exceeded 10%.
                    3.3 Is this the "catalog cache", the "data cache", or a combination of both?

            4. I am assuming that the IQ Resource Governor was waiting for the dirtied temporary cache pages to be flushed out and become clean.

            I am still on version 15.4.x. I noticed many "TRUNCATE TABLE", "CREATE TABLE", "LOAD TABLE" and SELECT statements happening concurrently.

            I could no longer find "SWEEPER_THREADS_PERCENT" and "WASH_AREA_BUFFER_PERCENT" among the database options.

            Less than 30% of the total "Temp Buffers" were used.

            What IQ start-up parameters or database options can I change to reduce the "Percentage of temporary cache pages dirtied" more quickly?

            Thanks and regards,

            Steve

            t: (646) 827-2102


            -----Original Message-----
            From: iqug-bounces at iqug.org [mailto:iqug-bounces at iqug.org] On Behalf Of iqug-request at iqug.org
            Sent: Friday, October 21, 2016 3:00 PM
            To: iqug at iqug.org
            Subject: IQUG Digest, Vol 46, Issue 18


            Today's Topics:

               1. Re: IQUG Digest, Vol 46, Issue 16 (Steve Shen)


            ----------------------------------------------------------------------

            Message: 1
            Date: Fri, 21 Oct 2016 16:17:26 +0000
            From: Steve Shen <sshen at sscinc.com>
            To: "'iqug at iqug.org'" <iqug at iqug.org>, "'Rittenhouse, David'" <d.rittenhouse at sap.com>
            Subject: Re: [IQUG] IQUG Digest, Vol 46, Issue 16
            Message-ID:
                    <0C03FF7E7FA66E41A61525750FD6533966B177B6 at YKT1EMXPRD1.globeop.com>
            Content-Type: text/plain; charset="iso-8859-1"

            Based on one of the prior discussions with Mr. Chris Baker, IQ allocates threads dynamically.

            So I really did not understand the significance of changing the following two database options from 144 to a higher value:

                    6. So the following two database options using the default values seemed under-allocated:
                            6.1 Max_IQ_Threads_Per_Connection = 144;
                            6.2 Max_IQ_Threads_Per_Team = 144;

            It's also a mystery to most Sybase DBAs why the default value is 144.

            Thanks and regards,

            Steve Shen

            t: (646) 827-2102

            -----Original Message-----
            From: Steve Shen
            Sent: Friday, October 21, 2016 10:05 AM
            To: 'iqug at iqug.org'; 'Rittenhouse, David'
            Cc: 'Mumy, Mark'; 'Baker, Chris'
            Subject: RE: IQUG Digest, Vol 46, Issue 16

            Hi David and all,

            I have observed the following so far today:

            1. Developers reported the "like-hang" performance state occurring between 08:26 EDT and 08:32 EDT today.

            2. The changes in "Number of operations waiting for IQ resource governor" are provided below:
                    2.1 08:26 EDT - 0;
                    2.2 08:27 EDT - 12;
                    2.3 08:28 EDT - 24;
                    2.4 08:29 EDT - 29;
                    2.5 08:30 EDT - 36;
                    2.6 08:31 EDT - 46;
                    2.7 08:32 EDT - 52;
                    2.8 08:33 EDT - 0;

            3. The changes in "Number of IQ threads in use" are listed below:
                    3.01 08:23 EDT - 945;
                    3.02 08:24 EDT - 1075;
                    3.03 08:25 EDT - 944;
                    3.04 08:26 EDT - 1077;
                    3.05 08:27 EDT - 944;
                    3.06 08:28 EDT - 945;
                    3.07 08:29 EDT - 945;
                    3.08 08:30 EDT - 945;
                    3.09 08:31 EDT - 945;
                    3.10 08:32 EDT - 945;
                    3.11 08:33 EDT - 944;

            4. The changes in "Number of IQ threads free" are also provided below:
                    4.01 08:23 EDT - 1397;
                    4.02 08:24 EDT - 1267;
                    4.03 08:25 EDT - 1398;
                    4.04 08:26 EDT - 1265;
                    4.05 08:27 EDT - 1398;
                    4.06 08:28 EDT - 1397;
                    4.07 08:29 EDT - 1397;
                    4.08 08:30 EDT - 1397;
                    4.09 08:31 EDT - 1397;
                    4.10 08:32 EDT - 1395;
                    4.11 08:33 EDT - 1398;

            5. Some of the tables being loaded had more than 300 columns; the widest had 369 columns.
                    All the indexes being loaded during those 5 minutes used only the default FP index.

            6. So the following two database options, left at their default values, seemed under-allocated:
                    6.1 Max_IQ_Threads_Per_Connection = 144;
                    6.2 Max_IQ_Threads_Per_Team = 144;

            If I bump up the values of "Max_IQ_Threads_Per_Connection" and "Max_IQ_Threads_Per_Team" to 400, are there any potential pitfalls?

            Is it a good idea to change the value so drastically, from 144 to 400, for these two database options?

            Please advise.  Thank you.

            Kind regards,

            Steve Shen

            t: (646) 827-2102

            -----Original Message-----
            From: Steve Shen
            Sent: Thursday, October 20, 2016 9:57 PM
            To: iqug at iqug.org; 'Rittenhouse, David'; Baker, Chris
            Subject: RE: IQUG Digest, Vol 46, Issue 16

            Hi David,

            Thank you very much for sharing your expertise.

            I had 2343 threads defined at IQ start-up time, based on the formula "60*(min(numCores,4)) + 50*(numCores - 4) + numConnections + 3".
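As a sketch, that start-up formula reproduces the 2343 figure if numConnections was around 300 (the connection count is back-solved here, not stated in the thread):

```shell
# IQ start-up thread formula: 60*min(numCores,4) + 50*(numCores-4) + numConnections + 3
CORES=40
CONNS=300   # assumption, back-solved from the reported total of 2343
MINC=$(( CORES < 4 ? CORES : 4 ))
echo "threads = $(( 60 * MINC + 50 * (CORES - 4) + CONNS + 3 ))"   # prints threads = 2343
```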

            The number of free IQ threads dropped from 1381 to 1166 during the "like hang" interval. In other words, the number of IQ threads in use increased by 215.

            I left both "Max_IQ_Threads_Per_Connection" and "Max_IQ_Threads_Per_Team" at the default value, 144. The technical manuals did not seem to recommend that DBAs bump up the values of these two database options.

            The daily workloads differ every day, but you made good suggestions that I can pursue next:
            1. I will monitor the thread changes minute by minute.
            2. I will also look into whether there are wide loads with many columns and indexes.

            Thank you again.

            Kind regards,

            Steve Shen

            t: (646) 827-2102

            -----Original Message-----
            From: iqug-bounces at iqug.org<mailto:iqug-bounces at iqug.org> [mailto:iqug-bounces at iqug.org] On Behalf Of iqug-request at iqug.org<mailto:iqug-request at iqug.org>
            Sent: Thursday, October 20, 2016 7:36 PM
            To: iqug at iqug.org<mailto:iqug at iqug.org>
            Subject: IQUG Digest, Vol 46, Issue 16

            Send IQUG mailing list submissions to
                    iqug at iqug.org<mailto:iqug at iqug.org>

            To subscribe or unsubscribe via the World Wide Web, visit
                    http://iqug.org/mailman/listinfo/iqug
            or, via email, send a message with subject or body 'help' to
                    iqug-request at iqug.org<mailto:iqug-request at iqug.org>

            You can reach the person managing the list at
                    iqug-owner at iqug.org<mailto:iqug-owner at iqug.org>

            When replying, please edit your Subject line so it is more specific than "Re: Contents of IQUG digest..."


            Today's Topics:

               1. Re: IQUG Digest, Vol 46, Issue 13 (Rittenhouse, David)


            ----------------------------------------------------------------------

            Message: 1
            Date: Thu, 20 Oct 2016 23:35:57 +0000
            From: "Rittenhouse, David" <d.rittenhouse at sap.com<mailto:d.rittenhouse at sap.com>>
            To: Steve Shen <sshen at sscinc.com<mailto:sshen at sscinc.com>>, "iqug at iqug.org<mailto:iqug at iqug.org>" <iqug at iqug.org<mailto:iqug at iqug.org>>,
                    "Baker, Chris" <c.baker at sap.com<mailto:c.baker at sap.com>>
            Subject: Re: [IQUG] IQUG Digest, Vol 46, Issue 13
            Message-ID:
                    <80acf7e58681486aa4883a6f98ebc420 at DEWDFE13DE04.global.corp.sap<mailto:80acf7e58681486aa4883a6f98ebc420 at DEWDFE13DE04.global.corp.sap>>
            Content-Type: text/plain; charset="us-ascii"

            Hi Steve,

            Check if there was a wide load (table with many columns and/or indexes) happening at the time in question...   (fact tables the usual suspect..)

            A wide load will try to grab a thread for every index in the table (plus a handful of other threads) - I have seen a situation like this cause "hanging-like" behaviour - especially if you or one of your team has "fully parallelised" a wide load to maximise its use of threads.
            That would account for the sudden drop in available threads (and also in temp buffers if there are a lot of HG indexes in the load..) A sudden drop in available threads will cause a behaviour such as you describe...

            How many threads does the IQ process have ?  (either in your -iqmt setting, or by default it will be reported to the stdout file on startup...)

            It might be a good idea to run a thread monitor report during one of the time periods you expect this "hanging" to happen and see if you can catch it as it happens, e.g.:
                iq utilities main into iq_dummy start monitor '-threads -file_suffix threads.txt -append -interval 20'
            In this monitor report look at the ratio of threads in use to threads free (threads in use = ThreadLimit - ThrNumFree).
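            The threads-in-use arithmetic is simple; as a toy calculation (the column names come from the -threads monitor report, the figures from earlier in this thread):

```python
# Threads in use derived from the -threads monitor counters.
def threads_in_use(thread_limit, thr_num_free):
    return thread_limit - thr_num_free

# 2343 total threads, 1166 free at the low point reported in this thread:
used = threads_in_use(2343, 1166)
print(used)                   # 1177
print(round(used / 2343, 2))  # 0.5 -- roughly half the pool in use
```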

            Mark has a great blurb on threads in his IQ Sizing Guide...

            It could be you need to manually add additional worker threads (via -iqmt) or decrease the amount each connection/team can grab in one go....  (controlled by MAX_IQ_THREADS_PER_CONNECTION and MAX_IQ_THREADS_PER_TEAM - it is standard practice to bump up these values if you have intentionally tuned a wide load to use more threads.)

            HTH,

            David Rittenhouse
            Senior Consultant, SAP Data & Technology Services Service & Support SAP (UK) Limited Clockhouse Place, Bedfont Road, Feltham, TW14 8HD Middlesex, UK
            E:  d.rittenhouse at sap.com
            M: +44 (0) 7899 948 295
            www.sap.com/uk<http://www.sap.com/uk>
            Please consider the environment before printing this email.




            From: iqug-bounces at iqug.org<mailto:iqug-bounces at iqug.org> [mailto:iqug-bounces at iqug.org] On Behalf Of Steve Shen
            Sent: 19 October 2016 17:07
            To: iqug at iqug.org<mailto:iqug at iqug.org>; Baker, Chris <c.baker at sap.com<mailto:c.baker at sap.com>>
            Subject: Re: [IQUG] IQUG Digest, Vol 46, Issue 13

            Hi Chris,

            I am providing you with some answers to your questions.

            Q1: Can you give more details about your migration from SAN to SSD - devices, IQ storage, etc?
            A1: It was transparent to me because I use soft-linked files pointing to the raw partitions. The UNIX SA used SnapCopy to copy from the old SAN to the new SSD LUNs.

            Q2: Where are the SSD devices located?  How are they connected to the IQ server (controller)?
            A2: The SSD devices are located in our data center. I am waiting for a UNIX SA to give me further details.

            Q3: How many SSD devices are there compared to SAN devices (did you reduce the number of 'spindles')?
            A3: I am waiting for a UNIX SA to give me further details.

            Q4: What is the throughput of the SSD devices compared to the SAN devices (MB/sec)?
            A4: The SSD outperformed the SAN by roughly 60% to 75% in terms of raw I/O, so I had no reason to doubt the SSD performance.

            Q5: Filesystem or RAW?
            A5: RAW

            Q6: What else may be sharing the SSD devices? and were the same applications/systems sharing SAN disks/partitions?
            A6: We did not have the luxury of dedicated SAN or SSD storage for IQ or ASE; the devices were all shared among servers.

            Q7: What is the CPU utilization on the node at the time of the hangs (idle, busy and system usage)?
            A7: OS CPU usage was not high; it was in the single digits.

            I did observe the following during those 5 minutes of almost hang state:

            1. The CPU usage was relatively low, in the single digits.
            2. The Disk I/Os were relatively low.
            3. The Network I/Os were relatively low.
            4. The Main Buffers were always 100% used.
            5. The Running Processes were rapidly increasing from around 20 at 08:20 EDT to around 90 at 08:27 EDT.
            6. The Temp Buffers were increasing from around 100 GB at 08:20 EDT to 115 GB at 08:27 EDT. I still had 10 GB of Temp Buffer unused at the peak time.
            7. The Threads jumped from 980 at 08:20:00 to 1200 at 08:20:30 and to 980 at 08:21:00 EDT. Then it increased from 980 at 08:21 EDT to 1230 at 08:27 EDT.
            8. The hang state disappeared at 08:27 EDT and afterwards.

            It seemed to me that the IQ Resource Governor was waiting for threads to be fully allocated, and the thread allocations in turn were waiting for temp buffers to become available. Is this possible?

            If it's possible, how can I speed up the Temp Buffer allocation and the Thread allocation?

            Thanks and regards,

            Steve Shen

            t: (646) 827-2102

            -----Original Message-----
            From: iqug-bounces at iqug.org [mailto:iqug-bounces at iqug.org] On Behalf Of iqug-request at iqug.org
            Sent: Wednesday, October 19, 2016 9:49 AM
            To: iqug at iqug.org
            Subject: IQUG Digest, Vol 46, Issue 13



            Today's Topics:

               1. Re: IQUG Digest, Vol 46, Issue 12 (Baker, Chris)


            ----------------------------------------------------------------------

            Message: 1
            Date: Wed, 19 Oct 2016 13:48:23 +0000
            From: "Baker, Chris" <c.baker at sap.com>
            To: Steve Shen <sshen at sscinc.com>, "Mumy, Mark" <mark.mumy at sap.com>,
                    "'iqug at iqug.org'" <iqug at iqug.org>
            Subject: Re: [IQUG] IQUG Digest, Vol 46, Issue 12
            Message-ID:
                    <6d8a5894c9dd4079ab3ef1e823c18673 at DEWDFE13DE02.global.corp.sap>
            Content-Type: text/plain; charset="us-ascii"

            Steve,

            Before looking at IQ, let's back up and look at the I/O performance you had with the SAN compared to SSD.

            Can you give more details about your migration from SAN to SSD - devices, IQ storage, etc?
            Where are the SSD devices located?  How are they connected to the IQ server (controller)?
            How many SSD devices are there compared to SAN devices (did you reduce the number of 'spindles')?
            What is the throughput of the SSD devices compared to the SAN devices (MB/sec)?
            Filesystem or RAW?
            What else may be sharing the SSD devices? and were the same applications/systems sharing SAN disks/partitions?
            What is the CPU utilization on the node at the time of the hangs (idle, busy and system usage)?

            Chris

            Chris Baker | Platform Architect | STIG | Customer Innovation & Enterprise Platform | SAP
            T +1 416-226-7033 | M +1 647-224-2033 | TF +1 866-716-8860
            SAP Canada Inc. 4120 Yonge Street, Suite 600, Toronto, M2P 2B8
            c.baker at sap.com | www.sap.com

            https://sap.na.pgiconnect.com/I826572
            Conference tel: 1-866-312-7353,,9648565377#

            From: Steve Shen [mailto:sshen at sscinc.com]
            Sent: Wednesday, October 19, 2016 9:27 AM
            To: Baker, Chris <c.baker at sap.com>; Mumy, Mark <mark.mumy at sap.com>; 'iqug at iqug.org' <iqug at iqug.org>
            Cc: Steve Shen <sshen at sscinc.com>
            Subject: IQUG Digest, Vol 46, Issue 12


            Subject: IQ Governor at version 15.4.x



            Hello Chris, Mark and all,



            I ran into "hang"-like performance situations almost daily after migrating the IQ storage from SAN to SSD. Each episode lasted around 5 or 6 minutes.



            Based on one of the cron jobs that executes "sp_iqstatistics" every 10 minutes, I got the following numbers from the output file at 08:25 EDT on 2016-10-12 (last Wednesday):

            1. Number of operations waiting for IQ resource governor = 74;

            2. Number of active concurrent operations admitted by IQ resource governor = 150;

            3. Users reported an almost-hung state on the IQ server for around 6 minutes, between 08:20 EDT and 08:26 EDT on 2016-10-12.



            sybase at hrs1siqprd27: /home/sybase/log/SIQCGSPRD ==> grep 'OperationsWaiting' SIQCGSPRD_sp_iqstatistics.sql_2016-10-12-08-25.log

            28  OperationsWaiting                    Number of operations waiting for IQ resource governor                      74

            sybase at hrs1siqprd27: /home/sybase/log/SIQCGSPRD ==> grep 'OperationsActive' SIQCGSPRD_sp_iqstatistics.sql_2016-10-12-08-25.log

            29  OperationsActive                     Number of active concurrent operations admitted by IQ resource governor   150

            30  OperationsActiveLoadTableStatements  Number of active LOAD TABLE statements                                      3


            Notes: (1)  The numbers 28, 29 and 30 are values of the "stat_num" column.

                   (2)  The number 74 is the value of "Number of operations waiting for IQ resource governor".

                   (3)  The number 150 is the value of "Number of active concurrent operations admitted by IQ resource governor".

                   (4)  I got zero for "Number of operations waiting for IQ resource governor" in all the other outputs scheduled before and after 08:25 EDT.



            I set "-iqnumbercpus 40" in the IQ start-up configuration file. This number is derived from executing "cpuinfo":

            sybase at hrs1siqprd27: /home/sybase/log/SIQCGSPRD ==> cpuinfo

            License hostid:        31c911a5

            Detected 80 logical processor(s), 40 core(s), in 4 chip(s)



            So I set "-iqgovern 90" in the IQ start-up configuration file, based on the formula (40 CPUs * 2 + 10).



            I expected the "Number of active concurrent operations admitted by IQ resource governor" to be less than or equal to 90, but it was 150. Was the IQ Resource Governor doing its job of enforcing the number of concurrent operations?



            I did not expect the production IQ server to be waiting on the IQ resource governor, but it obviously was.



            Am I over-allocating or under-allocating the value for "-iqgovern"?



            I saw technical documents recommending a reduced value to improve performance for a high number of concurrent connections, using the other formula (40 CPUs * 2 + 4). Do I have a third choice other than setting it to 90 or 84?  Please share your expertise with me.
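            For reference, the two -iqgovern rules of thumb quoted in this thread work out as follows (the function names are mine; the formulas are the ones quoted above):

```python
# The two -iqgovern sizing heuristics quoted in this thread.
def iqgovern_general(num_cpus):
    return 2 * num_cpus + 10   # general guideline

def iqgovern_high_concurrency(num_cpus):
    return 2 * num_cpus + 4    # guideline for many concurrent connections

print(iqgovern_general(40))            # 90
print(iqgovern_high_concurrency(40))   # 84
```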



            Thank you.



            Kind regards,



            Steve Shen

            SS&C Technologies Inc.

            Associate Director DBA



            t: (646) 827-2102

            sshen at sscinc.com  |  www.sscinc.com

            Follow us: Twitter  |  Facebook  |  LinkedIn

            This email with all information contained herein or attached hereto may contain confidential and/or privileged information intended for the addressee(s) only. If you have received this email in error, please contact the sender and immediately delete this email in its entirety and any attachments thereto.

            ------------------------------

            _______________________________________________
            IQUG mailing list
            IQUG at iqug.org
            http://iqug.org/mailman/listinfo/iqug

            End of IQUG Digest, Vol 46, Issue 13
            ************************************













End of IQUG Digest, Vol 47, Issue 10
************************************

