
TOPIC: e-mail alert: xe_wrapper: COMMAND: xe host-list hostname=IT1XENSLAVE1 --minimal

e-mail alert: xe_wrapper: COMMAND: xe host-list hostname=IT1XENSLAVE1 --minimal 3 months 1 week ago #1834

Today we received an alarm message from our HA-Lizard two-node no-SAN cluster, which has been running HA-Lizard 2.1.4 on XenServer 6.5 SP1 for a couple of years without any problems, except for last Monday and now this message:
xe_wrapper: COMMAND: xe host-list hostname=IT1XENSLAVE1 --minimal Has reached the maximum allowable time of 10 seconds. Killing all processes now!
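To judge whether the alert points at a slow xapi rather than at HA-Lizard itself, it may help to time the same query by hand. A minimal sketch, assuming the 10-second limit from the alert and the hostname it names:

```shell
# Run the same query HA-Lizard's xe_wrapper timed out on.
# On a healthy pool this normally returns in well under a second;
# `timeout 10` mirrors the 10-second limit mentioned in the alert.
time timeout 10 xe host-list hostname=IT1XENSLAVE1 --minimal \
  || echo "xe did not answer within 10 seconds"
```

If this regularly takes several seconds, xapi (or the storage beneath it) is the thing to look at, not the wrapper that kills the call.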

What does this message mean?

Is it the beginning of something going wrong?

Last Monday, all four running Windows Server VMs on the primary host suddenly froze, after nearly one year of XenServer and HA-Lizard uptime. Only a hard shutdown of the host brought down the VMs and, in the end, the primary host itself.

After a reboot, everything came back without a problem. HA-Lizard had not reported any problem, nor had XenServer itself alerted us.

The only sign of a problem was visible in the daemon.log of the primary XenServer:
Jul  8 05:33:37 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|1||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:33:43 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|2||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:33:43 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|2||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:33:51 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|1||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:34:22 IT1XENMASTER1 last message repeated 10 times
Jul  8 05:35:36 IT1XENMASTER1 last message repeated 18 times
Jul  8 05:35:37 IT1XENMASTER1 last message repeated 3 times
Jul  8 05:35:43 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|2||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:35:43 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|2||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:35:51 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|1||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:36:22 IT1XENMASTER1 last message repeated 10 times
Jul  8 05:37:07 IT1XENMASTER1 last message repeated 13 times
Jul  8 05:37:08 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__PdoPauseDataPath:Target[0] : Waiting for 3 Submitted requests 
Jul  8 05:37:08 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__PdoPauseDataPath:Target[2] : Waiting for 8 Submitted requests 
Jul  8 05:37:08 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__PdoPauseDataPath:Target[0] : Waiting for 11 Submitted requests 
Jul  8 05:37:09 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__PdoPauseDataPath:Target[0] : Waiting for 15 Submitted requests 
Jul  8 05:37:22 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|1||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:37:22 IT1XENMASTER1 last message repeated 3 times
Jul  8 05:37:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__PdoPauseDataPath:Target[0] : Waiting for 1 Submitted requests 
Jul  8 05:37:37 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|1||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:37:37 IT1XENMASTER1 last message repeated 3 times
Jul  8 05:37:43 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|2||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:37:43 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|2||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:37:52 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|1||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:37:53 IT1XENMASTER1 last message repeated 3 times
Jul  8 05:38:02 IT1XENMASTER1 tapdisk[4047]: tapdisk-syslog: 17 messages, 1622 bytes, xmits: 18, failed: 0, dropped: 0
Jul  8 05:38:02 IT1XENMASTER1 tapdisk[13894]: tapdisk-syslog: 16 messages, 1464 bytes, xmits: 17, failed: 0, dropped: 0
Jul  8 05:38:02 IT1XENMASTER1 tapdisk[15321]: tapdisk-syslog: 17 messages, 1642 bytes, xmits: 18, failed: 0, dropped: 0
Jul  8 05:38:02 IT1XENMASTER1 tapdisk[20681]: tapdisk-syslog: 16 messages, 1464 bytes, xmits: 17, failed: 0, dropped: 0
Jul  8 05:38:02 IT1XENMASTER1 tapdisk[23161]: tapdisk-syslog: 16 messages, 1464 bytes, xmits: 17, failed: 0, dropped: 0
Jul  8 05:38:04 IT1XENMASTER1 tapdisk[13729]: tapdisk-syslog: 16 messages, 1463 bytes, xmits: 17, failed: 0, dropped: 0
Jul  8 05:38:04 IT1XENMASTER1 tapdisk[14065]: tapdisk-syslog: 16 messages, 1474 bytes, xmits: 17, failed: 0, dropped: 0
Jul  8 05:38:04 IT1XENMASTER1 tapdisk[15480]: tapdisk-syslog: 16 messages, 1464 bytes, xmits: 17, failed: 0, dropped: 0
Jul  8 05:38:04 IT1XENMASTER1 tapdisk[20891]: tapdisk-syslog: 16 messages, 1464 bytes, xmits: 17, failed: 0, dropped: 0
Jul  8 05:38:04 IT1XENMASTER1 tapdisk[23512]: tapdisk-syslog: 16 messages, 1462 bytes, xmits: 17, failed: 0, dropped: 0
Jul  8 05:38:04 IT1XENMASTER1 tapdisk[23683]: tapdisk-syslog: 16 messages, 1464 bytes, xmits: 17, failed: 0, dropped: 0
Jul  8 05:38:07 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|1||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:38:38 IT1XENMASTER1 last message repeated 10 times
Jul  8 05:38:53 IT1XENMASTER1 last message repeated 5 times
Jul  8 05:39:01 IT1XENMASTER1 tapdisk[4047]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:39:01 IT1XENMASTER1 tapdisk[13894]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:39:01 IT1XENMASTER1 tapdisk[15321]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:39:01 IT1XENMASTER1 tapdisk[20681]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:39:01 IT1XENMASTER1 tapdisk[23161]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:39:04 IT1XENMASTER1 tapdisk[13729]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:39:04 IT1XENMASTER1 tapdisk[14065]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:39:04 IT1XENMASTER1 tapdisk[15480]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:39:04 IT1XENMASTER1 tapdisk[20891]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:39:04 IT1XENMASTER1 tapdisk[23512]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:39:04 IT1XENMASTER1 tapdisk[23683]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:39:07 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|1||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:39:38 IT1XENMASTER1 last message repeated 10 times
Jul  8 05:39:38 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|1||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:39:43 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|2||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:39:43 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|2||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:39:52 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|1||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:39:53 IT1XENMASTER1 last message repeated 3 times
Jul  8 05:40:01 IT1XENMASTER1 tapdisk[4047]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:40:01 IT1XENMASTER1 tapdisk[13894]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:40:01 IT1XENMASTER1 tapdisk[15321]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:40:01 IT1XENMASTER1 tapdisk[20681]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:40:01 IT1XENMASTER1 tapdisk[23161]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:40:04 IT1XENMASTER1 tapdisk[13729]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:40:04 IT1XENMASTER1 tapdisk[14065]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:40:04 IT1XENMASTER1 tapdisk[15480]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:40:04 IT1XENMASTER1 tapdisk[20891]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:40:04 IT1XENMASTER1 tapdisk[23512]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:40:04 IT1XENMASTER1 tapdisk[23683]: tapdisk-syslog: 0 messages, 0 bytes, xmits: 0, failed: 0, dropped: 0
Jul  8 05:40:07 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|1||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:40:08 IT1XENMASTER1 last message repeated 3 times
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__PdoPauseDataPath:Target[0] : 3/3 Submitted requests left (180001 iterrations) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|PdoReset:Target[0] : backend has 3 outstanding requests after a PdoReset 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__FrontendSetState:Target[0] : ENABLED ----> CLOSING 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__FrontendSetState:Target[0] : in state CONNECTED 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[15321]: received 'sring disconnect' message (uuid = 0) 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[15321]: disconnecting domid=63, devid=768 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[15321]: 63/768, ring=0x1c4b650: disconnect from ring with 3 pending requests 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[15321]: sending 'sring disconnect rsp' message (uuid = 0) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__FrontendSetState:Target[0] : in state CLOSING 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA800B5FF890 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA800B9FC330 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA800B4F3640 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__FrontendSetState:Target[0] : CLOSING ----> CLOSED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__FrontendSetState:Target[0] : in state CLOSED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__FrontendSetState:Target[0] : CLOSED ----> ENABLED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|FrontendWriteUsage:Target[0] : DUMP NOT_HIBER PAGE 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|PdoUpdateInquiryData:Target[0] : VDI-UUID = {64491487-f6e0-4fad-ba22-036574baf16a} 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|FrontendPrepare:Target[0] : BackendId 0 (/local/domain/0/backend/vbd3/63/768) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|FrontendPrepare:Target[0] : RingFeatures  
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__FrontendSetState:Target[0] : in state PREPARED 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[15321]: received 'sring connect' message (uuid = 6) 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[15321]: connecting VBD 6 domid=63, devid=768, pool (null), evt 13 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[15321]: ring 0x1c4c210 connected 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[15321]: sending 'sring connect rsp' message (uuid = 6) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|FrontendConnect:Target[0] : VBDFeatures BARRIER  
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__FrontendSetState:Target[0] : in state CONNECTED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__FrontendSetState:Target[0] : in state ENABLED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__PdoPauseDataPath:Target[0] : Waiting for 0 Submitted requests 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__PdoPauseDataPath:Target[0] : 0/0 Submitted requests left (0 iterrations) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__FrontendSetState:Target[0] : ENABLED ----> CLOSING 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__FrontendSetState:Target[0] : in state CONNECTED 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[15321]: received 'sring disconnect' message (uuid = 0) 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[15321]: disconnecting domid=63, devid=768 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[15321]: sending 'sring disconnect rsp' message (uuid = 0) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__FrontendSetState:Target[0] : in state CLOSING 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__FrontendSetState:Target[0] : CLOSING ----> CLOSED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__FrontendSetState:Target[0] : in state CLOSED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__FrontendSetState:Target[0] : CLOSED ----> ENABLED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|FrontendWriteUsage:Target[0] : DUMP NOT_HIBER PAGE 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|PdoUpdateInquiryData:Target[0] : VDI-UUID = {64491487-f6e0-4fad-ba22-036574baf16a} 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|FrontendPrepare:Target[0] : BackendId 0 (/local/domain/0/backend/vbd3/63/768) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|FrontendPrepare:Target[0] : RingFeatures  
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__FrontendSetState:Target[0] : in state PREPARED 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[15321]: received 'sring connect' message (uuid = 6) 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[15321]: connecting VBD 6 domid=63, devid=768, pool (null), evt 13 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[15321]: ring 0x1c4ce10 connected 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[15321]: sending 'sring connect rsp' message (uuid = 6) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|FrontendConnect:Target[0] : VBDFeatures BARRIER  
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__FrontendSetState:Target[0] : in state CONNECTED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__FrontendSetState:Target[0] : in state ENABLED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-63[15621]: XENVBD|__PdoPauseDataPath:Target[1] : Waiting for 7 Submitted requests 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__PdoPauseDataPath:Target[2] : 8/8 Submitted requests left (180001 iterrations) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|PdoReset:Target[2] : backend has 8 outstanding requests after a PdoReset 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__FrontendSetState:Target[2] : ENABLED ----> CLOSING 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__FrontendSetState:Target[2] : in state CONNECTED 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[13894]: received 'sring disconnect' message (uuid = 0) 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[13894]: disconnecting domid=60, devid=5632 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[13894]: 60/5632, ring=0xae2a10: disconnect from ring with 8 pending requests 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[13894]: sending 'sring disconnect rsp' message (uuid = 0) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__FrontendSetState:Target[2] : in state CLOSING 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__PdoCleanupSubmittedReqs:Target[2] : SubmittedReq 0xFFFFFA800E70AB50 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__PdoCleanupSubmittedReqs:Target[2] : SubmittedReq 0xFFFFFA800FAFB7C0 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__PdoCleanupSubmittedReqs:Target[2] : SubmittedReq 0xFFFFFA800E4386B0 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__PdoCleanupSubmittedReqs:Target[2] : SubmittedReq 0xFFFFFA8010D0A390 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__PdoCleanupSubmittedReqs:Target[2] : SubmittedReq 0xFFFFFA800D811810 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__PdoCleanupSubmittedReqs:Target[2] : SubmittedReq 0xFFFFFA8011EB6010 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__PdoCleanupSubmittedReqs:Target[2] : SubmittedReq 0xFFFFFA800EDA2B50 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__PdoCleanupSubmittedReqs:Target[2] : SubmittedReq 0xFFFFFA800DE39010 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__FrontendSetState:Target[2] : CLOSING ----> CLOSED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__FrontendSetState:Target[2] : in state CLOSED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__FrontendSetState:Target[2] : CLOSED ----> ENABLED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|FrontendWriteUsage:Target[2] : NOT_DUMP NOT_HIBER NOT_PAGE 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|PdoUpdateInquiryData:Target[2] : VDI-UUID = {64cf2367-2cb5-4bf1-a314-e80a9c69a4ca} 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|FrontendPrepare:Target[2] : BackendId 0 (/local/domain/0/backend/vbd3/60/5632) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|FrontendPrepare:Target[2] : RingFeatures REMOVABLE 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__FrontendSetState:Target[2] : in state PREPARED 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[13894]: received 'sring connect' message (uuid = 9) 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[13894]: connecting VBD 9 domid=60, devid=5632, pool (null), evt 14 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[13894]: ring 0xae3610 connected 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[13894]: sending 'sring connect rsp' message (uuid = 9) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|FrontendConnect:Target[2] : VBDFeatures BARRIER  
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__FrontendSetState:Target[2] : in state CONNECTED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__FrontendSetState:Target[2] : in state ENABLED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-60[14232]: XENVBD|__PdoPauseDataPath:Target[0] : Waiting for 3 Submitted requests 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__PdoPauseDataPath:Target[0] : 11/11 Submitted requests left (180001 iterrations) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|PdoReset:Target[0] : backend has 11 outstanding requests after a PdoReset 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__FrontendSetState:Target[0] : ENABLED ----> CLOSING 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__FrontendSetState:Target[0] : in state CONNECTED 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[23161]: received 'sring disconnect' message (uuid = 0) 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[23161]: disconnecting domid=62, devid=768 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[23161]: 62/768, ring=0x1f29e10: disconnect from ring with 11 pending requests 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[23161]: sending 'sring disconnect rsp' message (uuid = 0) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__FrontendSetState:Target[0] : in state CLOSING 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFE000EAC05B50 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFE000EB403010 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFE000EA4C3010 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFE000EB402010 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFE000EAC0C010 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFE000EB415010 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFE000EB408010 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFE000EAC11010 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFE000EAC00010 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFE000EC801010 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFE000EAC0D690 -> FAILED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__FrontendSetState:Target[0] : CLOSING ----> CLOSED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__FrontendSetState:Target[0] : in state CLOSED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__FrontendSetState:Target[0] : CLOSED ----> ENABLED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|FrontendWriteUsage:Target[0] : DUMP NOT_HIBER PAGE 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|PdoUpdateInquiryData:Target[0] : VDI-UUID = {401c9b95-6cfa-45db-b9f6-a07dae3507fd} 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|FrontendPrepare:Target[0] : BackendId 0 (/local/domain/0/backend/vbd3/62/768) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|FrontendPrepare:Target[0] : RingFeatures  
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__FrontendSetState:Target[0] : in state PREPARED 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[23161]: received 'sring connect' message (uuid = 2) 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[23161]: connecting VBD 2 domid=62, devid=768, pool (null), evt 21 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[23161]: ring 0x1f2aa10 connected 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[23161]: sending 'sring connect rsp' message (uuid = 2) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|FrontendConnect:Target[0] : VBDFeatures BARRIER  
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__FrontendSetState:Target[0] : in state CONNECTED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__FrontendSetState:Target[0] : in state ENABLED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__PdoPauseDataPath:Target[0] : Waiting for 0 Submitted requests 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__PdoPauseDataPath:Target[0] : 0/0 Submitted requests left (0 iterrations) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__FrontendSetState:Target[0] : ENABLED ----> CLOSING 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__FrontendSetState:Target[0] : in state CONNECTED 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[23161]: received 'sring disconnect' message (uuid = 0) 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[23161]: disconnecting domid=62, devid=768 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[23161]: sending 'sring disconnect rsp' message (uuid = 0) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__FrontendSetState:Target[0] : in state CLOSING 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__FrontendSetState:Target[0] : CLOSING ----> CLOSED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__FrontendSetState:Target[0] : in state CLOSED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__FrontendSetState:Target[0] : CLOSED ----> ENABLED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|FrontendWriteUsage:Target[0] : DUMP NOT_HIBER PAGE 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|PdoUpdateInquiryData:Target[0] : VDI-UUID = {401c9b95-6cfa-45db-b9f6-a07dae3507fd} 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|FrontendPrepare:Target[0] : BackendId 0 (/local/domain/0/backend/vbd3/62/768) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|FrontendPrepare:Target[0] : RingFeatures  
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__FrontendSetState:Target[0] : in state PREPARED 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[23161]: received 'sring connect' message (uuid = 2) 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[23161]: connecting VBD 2 domid=62, devid=768, pool (null), evt 21 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[23161]: ring 0x1f2b610 connected 
Jul  8 05:40:09 IT1XENMASTER1 tapdisk[23161]: sending 'sring connect rsp' message (uuid = 2) 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|FrontendConnect:Target[0] : VBDFeatures BARRIER  
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__FrontendSetState:Target[0] : in state CONNECTED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__FrontendSetState:Target[0] : in state ENABLED 
Jul  8 05:40:09 IT1XENMASTER1 qemu-dm-62[23821]: XENVBD|__PdoPauseDataPath:Target[1] : Waiting for 4 Submitted requests 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__PdoPauseDataPath:Target[0] : 15/15 Submitted requests left (180001 iterrations) 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|PdoReset:Target[0] : backend has 15 outstanding requests after a PdoReset 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__FrontendSetState:Target[0] : ENABLED ----> CLOSING 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__FrontendSetState:Target[0] : in state CONNECTED 
Jul  8 05:40:10 IT1XENMASTER1 tapdisk[4047]: received 'sring disconnect' message (uuid = 0) 
Jul  8 05:40:10 IT1XENMASTER1 tapdisk[4047]: disconnecting domid=61, devid=768 
Jul  8 05:40:10 IT1XENMASTER1 tapdisk[4047]: 61/768, ring=0x1fae650: disconnect from ring with 15 pending requests 
Jul  8 05:40:10 IT1XENMASTER1 tapdisk[4047]: sending 'sring disconnect rsp' message (uuid = 0) 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__FrontendSetState:Target[0] : in state CLOSING 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA8005664B50 -> FAILED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA80022FC1D0 -> FAILED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA800211C610 -> FAILED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA80022421D0 -> FAILED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA800263BA00 -> FAILED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA8005342B50 -> FAILED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA8005B014F0 -> FAILED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA8005BE3B50 -> FAILED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA80024C6B50 -> FAILED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA8001E74B50 -> FAILED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA80020E91D0 -> FAILED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA80026AA2F0 -> FAILED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA8002A10010 -> FAILED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA8005CF7010 -> FAILED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA8002A641F0 -> FAILED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__FrontendSetState:Target[0] : CLOSING ----> CLOSED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__FrontendSetState:Target[0] : in state CLOSED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__FrontendSetState:Target[0] : CLOSED ----> ENABLED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|FrontendWriteUsage:Target[0] : DUMP NOT_HIBER PAGE 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|PdoUpdateInquiryData:Target[0] : VDI-UUID = {8690747a-19fe-4ece-8cfe-7e7ade991308} 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|FrontendPrepare:Target[0] : BackendId 0 (/local/domain/0/backend/vbd3/61/768) 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|FrontendPrepare:Target[0] : RingFeatures  
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__FrontendSetState:Target[0] : in state PREPARED 
Jul  8 05:40:10 IT1XENMASTER1 tapdisk[4047]: received 'sring connect' message (uuid = 4) 
Jul  8 05:40:10 IT1XENMASTER1 tapdisk[4047]: connecting VBD 4 domid=61, devid=768, pool (null), evt 13 
Jul  8 05:40:10 IT1XENMASTER1 tapdisk[4047]: ring 0x1faf210 connected 
Jul  8 05:40:10 IT1XENMASTER1 tapdisk[4047]: sending 'sring connect rsp' message (uuid = 4) 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|FrontendConnect:Target[0] : VBDFeatures BARRIER  
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__FrontendSetState:Target[0] : in state CONNECTED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__FrontendSetState:Target[0] : in state ENABLED 
Jul  8 05:40:10 IT1XENMASTER1 qemu-dm-61[4203]: XENVBD|__PdoPauseDataPath:Target[0] : Waiting for 1 Submitted requests 
Jul  8 05:40:22 IT1XENMASTER1 mpathalert: [debug|IT1XENMASTER1|1||mscgen] mpathalert=>xapi [label="(XML)"];
Jul  8 05:40:23 IT1XENMASTER1 last message repeated 3 times
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__PdoPauseDataPath:Target[0] : 1/1 Submitted requests left (180001 iterrations) 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|PdoReset:Target[0] : backend has 1 outstanding requests after a PdoReset 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[0] : ENABLED ----> CLOSING 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[0] : in state CONNECTED 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20681]: received 'sring disconnect' message (uuid = 0) 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20681]: disconnecting domid=1, devid=768 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20681]: 1/768, ring=0x1140a10: disconnect from ring with 1 pending requests 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20681]: sending 'sring disconnect rsp' message (uuid = 0) 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[0] : in state CLOSING 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__PdoCleanupSubmittedReqs:Target[0] : SubmittedReq 0xFFFFFA8005351010 -> FAILED 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[0] : CLOSING ----> CLOSED 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[0] : in state CLOSED 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[0] : CLOSED ----> ENABLED 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|FrontendWriteUsage:Target[0] : DUMP NOT_HIBER PAGE 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|PdoUpdateInquiryData:Target[0] : VDI-UUID = {899dce60-7187-4dcc-b1fc-6b30525dc26c} 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|FrontendPrepare:Target[0] : BackendId 0 (/local/domain/0/backend/vbd3/1/768) 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|FrontendPrepare:Target[0] : RingFeatures  
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[0] : in state PREPARED 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20681]: received 'sring connect' message (uuid = 0) 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20681]: connecting VBD 0 domid=1, devid=768, pool (null), evt 13 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20681]: ring 0x1142410 connected 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20681]: sending 'sring connect rsp' message (uuid = 0) 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|FrontendConnect:Target[0] : VBDFeatures BARRIER  
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[0] : in state CONNECTED 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[0] : in state ENABLED 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__PdoPauseDataPath:Target[0] : Waiting for 0 Submitted requests 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__PdoPauseDataPath:Target[0] : 0/0 Submitted requests left (0 iterrations) 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|NotifierDpc:Target[0] : Paused, 0 outstanding 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[0] : ENABLED ----> CLOSING 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[0] : in state CONNECTED 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20681]: received 'sring disconnect' message (uuid = 0) 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20681]: disconnecting domid=1, devid=768 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20681]: sending 'sring disconnect rsp' message (uuid = 0) 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[0] : in state CLOSING 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[0] : CLOSING ----> CLOSED 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[0] : in state CLOSED 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[0] : CLOSED ----> ENABLED 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|FrontendWriteUsage:Target[0] : DUMP NOT_HIBER PAGE 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|PdoUpdateInquiryData:Target[0] : VDI-UUID = {899dce60-7187-4dcc-b1fc-6b30525dc26c} 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|FrontendPrepare:Target[0] : BackendId 0 (/local/domain/0/backend/vbd3/1/768) 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|FrontendPrepare:Target[0] : RingFeatures  
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[0] : in state PREPARED 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20681]: received 'sring connect' message (uuid = 0) 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20681]: connecting VBD 0 domid=1, devid=768, pool (null), evt 13 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20681]: ring 0x1143010 connected 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20681]: sending 'sring connect rsp' message (uuid = 0) 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|FrontendConnect:Target[0] : VBDFeatures BARRIER  
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[0] : in state CONNECTED 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[0] : in state ENABLED 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__PdoPauseDataPath:Target[1] : Waiting for 0 Submitted requests 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__PdoPauseDataPath:Target[1] : 0/0 Submitted requests left (0 iterrations) 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[1] : ENABLED ----> CLOSING 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[1] : in state CONNECTED 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: received 'sring disconnect' message (uuid = 0) 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: disconnecting domid=1, devid=832 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: sending 'sring disconnect rsp' message (uuid = 0) 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[1] : in state CLOSING 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[1] : CLOSING ----> CLOSED 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[1] : in state CLOSED 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[1] : CLOSED ----> ENABLED 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|FrontendWriteUsage:Target[1] : NOT_DUMP NOT_HIBER NOT_PAGE 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|PdoUpdateInquiryData:Target[1] : VDI-UUID = {0db4e757-7377-4d17-80f1-b682c8854049} 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|FrontendPrepare:Target[1] : BackendId 0 (/local/domain/0/backend/vbd3/1/832) 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|FrontendPrepare:Target[1] : RingFeatures REMOVABLE 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[1] : in state PREPARED 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: received 'sring connect' message (uuid = 1) 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: connecting VBD 1 domid=1, devid=832, pool (null), evt 14 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: ring 0x2273650 connected 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: sending 'sring connect rsp' message (uuid = 1) 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 0: bad number of segments in request (0) 
Jul  8 05:40:37 IT1XENMASTER1 last message repeated 9 times
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:guest_copy2: 1/832, ring=0x2273650: req 7449327995607622699: failed to grant-copy segment 0: -1 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_parse_request: 1/832, ring=0x2273650: req 7449327995607622699: failed to copy from guest: Input/output error 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 0: bad number of segments in request (0) 
Jul  8 05:40:37 IT1XENMASTER1 last message repeated 20 times
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 4294901760: bad number of segments in request (0) 
Jul  8 05:40:37 IT1XENMASTER1 last message repeated 9 times
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 7449327999876310017: invalid request type 43 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 4294901760: bad number of segments in request (0) 
Jul  8 05:40:37 IT1XENMASTER1 last message repeated 30 times
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 7449327999876310059: bad number of segments in request (164) 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 4294901760: bad number of segments in request (0) 
Jul  8 05:40:37 IT1XENMASTER1 last message repeated 30 times
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 7449327999876310017: invalid request type 43 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 4294901760: bad number of segments in request (0) 
Jul  8 05:40:37 IT1XENMASTER1 last message repeated 30 times
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 7449327999876310059: bad number of segments in request (164) 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 4294901760: bad number of segments in request (0) 
Jul  8 05:40:37 IT1XENMASTER1 last message repeated 30 times
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 7449327999876310017: invalid request type 43 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 4294901760: bad number of segments in request (0) 
Jul  8 05:40:37 IT1XENMASTER1 last message repeated 30 times
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 7449327999876310059: bad number of segments in request (164) 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 4294901760: bad number of segments in request (0) 
Jul  8 05:40:37 IT1XENMASTER1 last message repeated 30 times
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 7449327999876310017: invalid request type 43 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 4294901760: bad number of segments in request (0) 
Jul  8 05:40:37 IT1XENMASTER1 last message repeated 20 times
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|FrontendConnect:Target[1] : VBDFeatures BARRIER  
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 4294901760: bad number of segments in request (0) 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[1] : in state CONNECTED 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 4294901760: bad number of segments in request (0) 
Jul  8 05:40:37 IT1XENMASTER1 qemu-dm-1[21216]: XENVBD|__FrontendSetState:Target[1] : in state ENABLED 
Jul  8 05:40:37 IT1XENMASTER1 tapdisk[20891]: tap-err:tapdisk_xenblkif_make_vbd_request: 1/832, ring=0x2273650: req 4294901760: bad number of segments in request (0) 

Today the alert e-mail message was received, so I am getting a bit nervous.

Any hint about what is going on and what we should check or do would be appreciated.
BR Andreas


e-mail alert: xe_wrapper: COMMAND: xe host-list hostname=IT1XENSLAVE1 --minimal 3 months 1 week ago #1835

  • Salvatore Costantino
The root cause of the HA-Lizard alert is that the Xen API (XAPI) took more than 10 seconds to respond to an API call. This points to an underlying problem: XAPI or the host (dom0) may have been temporarily blocking due to a storage issue, as your daemon.log snippet suggests.
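For illustration, the watchdog behaviour can be sketched with coreutils `timeout` (an assumption for clarity only — HA-Lizard's actual xe_wrapper implementation may differ):

```shell
#!/bin/sh
# Hypothetical sketch of the xe_wrapper watchdog: give the xe call a
# 10-second budget; coreutils timeout exits with 124 when it kills the child.
budget=10
timeout "$budget" xe host-list hostname=IT1XENSLAVE1 --minimal
if [ $? -eq 124 ]; then
    echo "xe call exceeded ${budget}s - XAPI or dom0 is blocking"
fi
```

If a plain `xe host-list` run by hand from dom0 also takes many seconds, the problem is in XAPI/dom0 rather than in HA-Lizard itself.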

I would recommend:

- Check all of your storage hardware components.

- Since your VMs are all Windows servers, they can consume a lot of disk resources at times; multiple servers may have been indexing at the same time. Look at the XenServer performance logs for the VMs at the time the alert was triggered, particularly the disk usage for each VM, and check whether there was an increase in disk activity across your VMs when this happened.

- Make sure the dom0 disk is not full. That can easily happen on a 6.5 pool, which uses the older 4 GB root partition that tends to fill up.

- Check the SMART readings for the disks in case there are early signs of failure. If you don't have a convenient way of checking disk health, we have shipped a script with iscsi-ha versions 2.14+, but given the age of your setup, you are likely on an older version. A copy of the script is attached; you can run it on each host to check whether any of your disks show early signs of failure.
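As a rough sketch, the dom0-disk and SMART points can be checked from the dom0 console like this (smartctl assumes smartmontools is available; the device names are examples, and disks behind a hardware RAID controller usually need vendor passthrough options such as `-d megaraid,N`):

```shell
#!/bin/sh
# 1) dom0 root usage - the XenServer 6.5-era 4 GB partition fills easily
use=$(df -P / | awk 'NR==2 { gsub(/%/,""); print $5 }')
echo "dom0 root filesystem: ${use}% used"
[ "$use" -ge 90 ] && echo "WARNING: clean /var/log and leftover patch files"

# 2) SMART health per physical disk (example device names; adjust for
#    your controller, e.g. smartctl -d megaraid,N on LSI-based RAID)
if command -v smartctl >/dev/null 2>&1; then
    for dev in /dev/sda /dev/sdb; do
        smartctl -H "$dev"
    done
else
    echo "smartctl not installed - install smartmontools or use the attached script"
fi
```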


e-mail alert: xe_wrapper: COMMAND: xe host-list hostname=IT1XENSLAVE1 --minimal 3 months 1 week ago #1836

Dear Salvatore,

thank you for your quick reply. The script showed no errors on either host. XenServer is installed on a hardware RAID1 with two SSDs, and the SR is on a hardware RAID0 with five 600 GB 10K enterprise 12Gb/s SAS disks. The Fujitsu PRIMERGY RX2530 M1 server also reports no SMART errors.

Could I check the DRBD/iSCSI volume for logical errors?

The VMs are not creating a heavy load: one DC, one print and file server, one Exchange server, and one terminal server, serving 30 users and 5 power users.
Strangely, the XenServer performance data from before the incident date is no longer visible in XenCenter.


Last edit: by ajmind.

e-mail alert: xe_wrapper: COMMAND: xe host-list hostname=IT1XENSLAVE1 --minimal 3 months 1 week ago #1837

  • Salvatore Costantino
I am not sure whether it's possible to check the iSCSI volume for errors the same way as a disk. I would guess not, since logical layers stacked on the physical block devices generally rely on the hardware to handle errors, bad sectors, etc. transparently.
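That said, DRBD itself does offer an online consistency check of the replicated device, which compares the two replicas block by block. This assumes a `verify-alg` (e.g. crc32c) is configured for the resource, and the resource name `r0` below is only an example — substitute the name from your iscsi-ha/DRBD configuration:

```shell
# Run on one node; compares local and peer replicas online
drbdadm verify r0

# Progress is visible here (re-run to follow it)
cat /proc/drbd

# Any mismatched blocks are reported in the kernel log when the run ends
dmesg | grep -i 'out of sync'
```

This checks replica consistency, not filesystem integrity inside the VMs, but it is a cheap way to rule out silent divergence between the two nodes.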


e-mail alert: xe_wrapper: COMMAND: xe host-list hostname=IT1XENSLAVE1 --minimal 3 months 1 week ago #1838

I will move all VMs to the slave and prepare the master to go offline, then check the disks from the RAID hardware controller. That may tell me more about whether the disks have a problem.
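For reference, the drain itself can be done with standard xe commands (a sketch, not a tested procedure — and HA-Lizard should be put into maintenance per its documentation first so the drain is not mistaken for a failure):

```shell
#!/bin/sh
# Drain the pool master before taking it offline for disk checks.
# Host name is taken from this thread; adjust as needed.
host_uuid=$(xe host-list name-label=IT1XENMASTER1 --minimal)
xe host-disable uuid="$host_uuid"    # stop new VMs from starting here
xe host-evacuate uuid="$host_uuid"   # live-migrate resident VMs to the slave
xe host-list params=name-label,enabled   # confirm the host is now disabled
```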


e-mail alert: xe_wrapper: COMMAND: xe host-list hostname=IT1XENSLAVE1 --minimal 3 months 1 week ago #1839

  • Salvatore Costantino
If you haven't done so already, I would suggest you first check your XenServer performance graphs for the VMs and hosts during the time of the event. It is plausible that the issue was triggered by multiple Windows servers performing disk-bound tasks that caused the dom0 kernel to start blocking.
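If the historical graphs are gone, as noted earlier in the thread, dom0 disk pressure can still be sampled live with nothing but /proc (field 10 of /proc/diskstats is sectors written, per the kernel's iostats documentation):

```shell
#!/bin/sh
# Sample sectors written per block device over a 5-second window to spot
# write bursts in dom0 without needing sysstat/iostat installed.
awk '{ print $3, $10 }' /proc/diskstats | sort > /tmp/ds1
sleep 5
awk '{ print $3, $10 }' /proc/diskstats | sort > /tmp/ds2
join /tmp/ds1 /tmp/ds2 |
    awk '{ d = $3 - $2; if (d > 0) printf "%-12s %d sectors written\n", $1, d }'
```

Devices backing the busy tapdisks should stand out immediately if one VM is hammering the SR while the others freeze.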
