
TOPIC:

HA-Lizard on XCP-ng 8.2 in 2021. Progress of my deployment 2 weeks 5 days ago #2437

Is anyone using HA-Lizard in 2021? I am implementing it on two XCP-ng 8.2 hosts and everything is going well, although the tests are not over yet. However, I was unable to create the SR using XCP-ng Center. I get the error: "The SR could not be connected because the driver gfs2 was not recognized". Yet it is possible to see that the driver is installed: the command yum search gfs2 finds it (the search also turns up glusterfs, and gives the correct package name). The command yum install <full name I don't remember> --enablerepo=base,updates downloads the package from the base repository without permanently enabling that repository in the default XCP-ng settings (the XCP-ng documentation guides you through this). Finally, the command mkfs.gfs2 -V shows that it is installed, but it still doesn't work. Based on various forums, I suppose it is a problem with XCP-ng Center, because creating and connecting the SR via the graphical screen directly on the server (or through xsconsole) worked perfectly. When done on the primary, the same SR appeared immediately on the secondary.
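In case it helps anyone, the equivalent CLI route looks roughly like the sketch below. This follows the Citrix/XCP-ng documentation for the (experimental) GFS2 SR driver; everything in angle brackets is a placeholder, and the device-config keys should be double-checked against xe sm-list on your own build:

# First check that the gfs2 SM driver is actually registered with the toolstack:
xe sm-list | grep -A2 gfs2
# Then create the shared SR over iSCSI (placeholder values throughout):
xe sr-create type=gfs2 name-label="gfs2-sr" shared=true \
    device-config:provider=iscsi \
    device-config:ips=<target_ip> \
    device-config:iqns=<target_iqn> \
    device-config:ScsiIds=<scsi_id>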
Now the war continues. With all the configuration done on an open internet network, I need to change the IP of the management card to an IP on my internal network and perform some reboots to see how the system behaves. This part is causing some problems, but they will be overcome.

The idea is to use only 2 nodes sharing local storage; we don't have the resources to invest in a SAN, and that is exactly the functionality HA-Lizard proposes to provide. The tests continue: we are currently simulating failures and documenting what to do when they happen.

When switching the IP from the open network to the closed network on both nodes simultaneously, the master works fine (after a restart) but the slave loses all network connections. Once the master is up, it is necessary to perform an emergency reset of the network settings on the slave and reboot it. When it comes back, its bond has to be recreated (through XCP-ng Center) and then synchronization resumes. Even so, the VMs never stopped working on the master. We will still improve this procedure by doing it in maintenance mode, one node at a time.
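For the record, the reconfiguration itself can be driven with xe from the host console; here is a rough sketch with placeholder UUIDs and addresses (xe-reset-networking is the emergency reset referred to above):

# Find the management PIF, then point it at the internal network (placeholders):
xe pif-list management=true params=uuid,device,IP
xe pif-reconfigure-ip uuid=<pif_uuid> mode=static \
    IP=192.168.10.11 netmask=255.255.255.0 gateway=192.168.10.1
xe host-management-reconfigure pif-uuid=<pif_uuid>
# If a node loses all connectivity anyway, run the emergency reset on its console:
xe-reset-networking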



Our structure is small: one Windows Server 2019 (AD, DHCP, DNS...), an old Red Hat box, an Ubuntu machine for an intranet page, and a Windows Server 2016 for some applications. All are accessed 24 hours a day, but by FEW users. So we are betting on this solution. We are hopeful.

I'll keep you informed.

thanks


HA-Lizard on XCP-ng 8.2 in 2021. Progress of my deployment 2 weeks 4 days ago #2438

  • Salvatore Costantino
Thank you for your post. The default build of a 2-node hyperconverged cluster utilizes iSCSI to expose a DRBD block device. It seems you are altering that somewhat by going with Gluster. Please let us know how that works for you and share the steps in case others are interested in a Gluster-backed SR.
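For readers who have not seen one, a 2-node DRBD resource definition looks roughly like the sketch below. This is for illustration only; the hostnames, devices and addresses are placeholders, not what the noSAN installer actually writes:

resource ha-disk {
    protocol C;                      # synchronous replication between the nodes
    on node1 {
        device    /dev/drbd0;        # block device that gets exposed over iSCSI
        disk      /dev/sda3;         # local backing partition
        address   10.10.10.1:7789;   # dedicated replication link
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   10.10.10.2:7789;
        meta-disk internal;
    }
}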
Thanks


HA-Lizard on XCP-ng 8.2 in 2021. Progress of my deployment 2 weeks 4 days ago #2439

First of all, I would like to say that it is a pleasure to talk to the father of HA-Lizard. I didn't actually implement Gluster. I have no way of proving it, but everything suggests it is a failure of XCP-ng Center (and I am using the most current one), because creating the SR directly on the server screen worked perfectly. Perhaps in XOCE the error would not occur. Today I intend to do more stress tests.
Let's go.


HA-Lizard on XCP-ng 8.2 in 2021. Progress of my deployment 2 weeks 3 days ago #2440

Hi Guys,

Salvatore... I have exactly the same issue with XCP-ng 7.6

Just in the process of provisioning a second deployment now.

My original deployment (still in place) is 2.1.4 from a couple of years back. That went well... same XCP-ng version, but I never saw this message back then.

Installation Procedure:
Deploy 2x XCP-ng (with /dev/sda set as storage)
Run 2.1.7 NOSAN installer (let it convert local storage)

Installer runs into trouble here...
# Installing HA-Lizard High Availability Component
# /tmp/halizard_tmp_/ha-lizard-2.3.0/ha-lizard.init: line 5: /etc/ha-lizard/ha-lizard.conf: No such file or directory

See attachment for install shell output

Update:
Ran the 2.1.4 installer on the same system without issue, but I had to wipe the /tmp/halizard_tmp_ folder first, or the 2.1.4 installer gave the same error.
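In other words, the stale temp directory left behind by the failed run has to go before retrying (a sketch; the path is the one from the error message above):

# Remove leftovers from the failed installer run before retrying:
rm -rf /tmp/halizard_tmp_
# Then rerun the noSAN installer of your choice.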

Cheers
Nathan

HA-Lizard on XCP-ng 8.2 in 2021. Progress of my deployment 2 weeks 3 days ago #2441

  • Salvatore Costantino
We have switched entirely to RPM for new installations; that is available in the latest installer, version 2.1.9.
Can you try the latest XCP-ng with this installer? It should work smoothly.

www.halizard.org/release/noSAN-combined/...osan_installer_2.1.9


HA-Lizard on XCP-ng 8.2 in 2021. Progress of my deployment 2 weeks 2 days ago #2442

During the installation of the most current HA-Lizard on the most current XCP-ng, we also hit this same error, but the system has worked well anyway (at least so far). Salvatore said he has migrated to RPM, but when running yum update a message appears saying that the packages are disabled by default in XCP-ng. Would it be a good idea to enable this type of package (RPM) on dom0?
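For what it's worth, you can see which repositories dom0 has enabled without changing the defaults, and enable one for a single transaction only (a sketch; repository and package names vary between XCP-ng releases):

# List every repository yum knows about and its enabled/disabled state:
yum repolist all
# Install a single package from a normally-disabled repo, for this transaction only:
yum install <package> --enablerepo=base,updates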
