
HS3 high availability - is it possible and how?


    Yes, I know that HS3 is rock solid. Yes, I know that Linux is rock solid and most of you never even reboot your servers and they work stable for years. I know all that, but...

    How can I achieve high availability on an HS3 system? I guess I need 2 HS3 licenses and for sure 2 machines. But how? Is it possible at all? Does the DevTeam have any plans for that?

    #2
    Originally posted by adanchenko View Post
    Yes, I know that HS3 is rock solid. Yes, I know that Linux is rock solid and most of you never even reboot your servers and they work stable for years. I know all that, but...

    How can I achieve high availability on an HS3 system? I guess I need 2 HS3 licenses and for sure 2 machines. But how? Is it possible at all? Does the DevTeam have any plans for that?
    Possible? Yes on Linux; probably not on Windows, since HS3 runs there as a user process.

    Follow the same method used to make any other server highly available (HA), such as a database server.

    Need: a minimum of 2 servers connected to shared storage of some kind, network connections with at least one dedicated heartbeat link, and a service-monitor cluster package. An old one is Heartbeat. I'm not sure what the current Ubuntu solution may be, but Red Hat and SUSE have "Cluster Solutions" available. HS3 is the service that you would monitor: set up your failover triggers (service, network, disk, etc.), and on an event the HA service would fail the process over to the other system.

    I would recommend using a Z-Net for your Z-Wave interface to avoid the complication of USB port differences between the two systems requiring a manual reconfiguration. Alternatively, you could have 2 USB adapters and set up udev rules to force them to appear at the same location on initialization.
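
    As a minimal sketch of that udev approach (the vendor/product IDs below match a common Sigma Designs-based Z-Wave stick; verify yours with lsusb, and treat the file name and symlink as assumptions):

      # /etc/udev/rules.d/99-zwave.rules -- pin the Z-Wave stick to a stable
      # name on both nodes, so HS3 can always be pointed at /dev/zwave
      SUBSYSTEM=="tty", ATTRS{idVendor}=="0658", ATTRS{idProduct}=="0200", SYMLINK+="zwave"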

    So... is it doable? Yes. Is it easy? Not really.



      #3
      Running HS as a service on Windows, you may still be able to support an active-passive cluster with Windows Clustering, if it's still around - it's been so long since I've played in that arena. Having been a clustering implementer in a previous life with Tru64 UNIX (TruCluster) and OpenVMS clustering on Alpha, I can tell you it's expensive, and you need to weigh the cost against the benefit of an HA solution, if it's even possible. Those were great technologies - totally active-active. In fact, there was a customer running a VMS cluster continuously for 17 years without any downtime to the application. These two clustering technologies allowed you to shut down servers on the fly for maintenance while still maintaining service.
      HS3PRO 3.0.0.500 as a Fire Daemon service, Windows 2016 Server Std Intel Core i5 PC HTPC Slim SFF 4GB, 120GB SSD drive, WLG800, RFXCom, TI103,NetCam, UltraNetcam3, BLBackup, CurrentCost 3P Rain8Net, MCsSprinker, HSTouch, Ademco Security plugin/AD2USB, JowiHue, various Oregon Scientific temp/humidity sensors, Z-Net, Zsmoke, Aeron Labs micro switches, Amazon Echo Dots, WS+, WD+ ... on and on.



        #4
        Originally posted by adanchenko View Post
        Yes, I know that HS3 is rock solid. Yes, I know that Linux is rock solid and most of you never even reboot your servers and they work stable for years. I know all that, but...

        How can I achieve high availability on an HS3 system? I guess I need 2 HS3 licenses and for sure 2 machines. But how? Is it possible at all? Does the DevTeam have any plans for that?
        It appears you are inquiring about physical hardware redundancy for the HS server. That is the easy part. The hard part is configuration and stateful failover.

        There are many methods available to fail over hardware, but this is only half the battle. I do not think there is an easy way to fail over the current running state of the primary HS system to the backup system. You can set up jobs to copy the running system over to a warm backup, but I don't think you can replicate the current state of the running system.
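
        A minimal sketch of such a copy job, assuming HS3 lives in /opt/HomeSeer on both machines and the primary has passwordless SSH to a host named backup-host (path and hostname are placeholders):

          # crontab entry on the primary: mirror the HS3 directory to the warm
          # backup every 15 minutes
          */15 * * * * rsync -a --delete /opt/HomeSeer/ backup-host:/opt/HomeSeer/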



          #5
          Originally posted by drhtmal View Post

          I do not think there is an easy way to fail over the current running state of the primary HS system to the backup system. You can set up jobs to copy the running system over to a warm backup, but I don't think you can replicate the current state of the running system.
          Simplex Technology's comment about using "shared storage of some kind" would remove the need to copy the system to a warm backup, provided the HS directory points to the shared storage (e.g., an NFS mount, sketched below). I have been considering this for some time now, just haven't pulled the trigger. In this shared-storage solution, whatever is written to the HS database would live on the shared storage, so data, log, and configuration loss would be very minimal. Adding RAID to the shared storage to mitigate hard drive failure is crucial.
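
          As a minimal sketch, assuming the NAS exports /export/homeseer over NFS and HS3 is installed at /opt/HomeSeer (both paths are assumptions):

            # /etc/fstab entry on each node: put the HS3 directory on shared storage
            nas:/export/homeseer  /opt/HomeSeer  nfs  defaults,hard  0  0
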
          Len


          HomeSeer Version: HS3 Pro Edition 3.0.0.435
          Linux version: Linux homeseer Ubuntu 16.04 x86_64
          Number of Devices: 633
          Number of Events: 773

          Enabled Plug-Ins
          2.0.54.0: BLBackup
          2.0.40.0: BLLAN
          3.0.0.48: EasyTrigger
          30.0.0.36: RFXCOM
          3.0.6.2: SDJ-Health
          3.0.0.87: weatherXML
          3.0.1.190: Z-Wave



            #6
            Originally posted by lveatch View Post
            ...Adding RAID to the shared storage to mitigate hard drive failure is crucial.
            Just to play devil's advocate, what about failure of the RAID controller? Where do you draw the line?



              #7
              For a "Shared Storage" setup you will need one of the following....Now I'm writing this based upon a HS3 Linux install NOT Windows....

              NAS storage of some kind serving NFS. This will be your "Shared Storage". It is not the most ideal, but on a low budget it will work. Create a share for HomeSeer to live in and run from (a sample export is sketched below). If you have a larger budget, I prefer a real cluster filesystem like Veritas Cluster File System, Gluster, Lustre, or GPFS.
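
              As a minimal sketch of the NAS-side export, assuming a Linux-based NAS, an /export/homeseer directory, and a 192.168.1.0/24 LAN (all assumptions):

                # /etc/exports on the NAS: share the HS3 directory with the cluster nodes
                /export/homeseer  192.168.1.0/24(rw,sync,no_subtree_check)
                # reload the export table
                exportfs -ra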

              2 x servers running Linux, preferably set up identically

              Mount the NFS share on both Linux systems. Verify read/write access, and that the permissions (UID/GID) of the homeseer account, if using it rather than root, are the same across both systems (a quick check is sketched below)
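
              A quick sketch of that check, assuming the share is mounted at /opt/HomeSeer and the service account is named homeseer (both assumptions):

                # run on BOTH nodes; the UID/GID output must match for NFS permissions
                id homeseer
                # confirm the account can actually write to the shared mount
                sudo -u homeseer touch /opt/HomeSeer/.write-test && echo write OK
                sudo -u homeseer rm /opt/HomeSeer/.write-test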

              Choose a High Availability system: for free, look at Pacemaker (Linux-HA); if you have budget, look at Veritas Cluster Server, or if using Red Hat, use Red Hat Clustering
              READ/LEARN the system

              Set up the HA system and test the failover procedure.

              Install HomeSeer to the shared location and configure its startup script as a resource in the HA system, so it is started properly on failover (a Pacemaker sketch follows).
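
              As one possible sketch using Pacemaker's pcs tooling, assuming HS3 is wrapped in a systemd unit named homeseer.service present on both nodes (the unit and resource names are hypothetical; Veritas and Red Hat tools differ):

                # define the shared filesystem and the HS3 service as cluster resources
                pcs resource create hs3-fs ocf:heartbeat:Filesystem \
                    device=nas:/export/homeseer directory=/opt/HomeSeer fstype=nfs
                pcs resource create hs3 systemd:homeseer op monitor interval=30s
                # keep them on the same node, and mount the filesystem first
                pcs constraint colocation add hs3 with hs3-fs
                pcs constraint order hs3-fs then hs3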

              Test failover...
              done...
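
              A failover test can be as simple as putting the active node into standby and watching the service move (pcs syntax varies by version; node names are placeholders):

                # on the active node: force a failover, then bring the node back
                pcs cluster standby node1
                pcs status        # the hs3 resource should now be running on node2
                pcs cluster unstandby node1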

              HomeSeer will live and run, with all data/state/plugins etc., within the NAS shared storage, and the processing will happen on whichever node the system is started on; upon failure of that node, HS3 will be started on the other node. This is a VERY over-simplified description of the setup.




                #8
                Originally posted by joegr View Post

                Just to play devil's advocate, what about failure of the RAID controller? Where do you draw the line?
                HA the NAS: cluster failover to another NAS where you are providing storage replication.
                Len


                HomeSeer Version: HS3 Pro Edition 3.0.0.435
                Linux version: Linux homeseer Ubuntu 16.04 x86_64
                Number of Devices: 633
                Number of Events: 773

                Enabled Plug-Ins
                2.0.54.0: BLBackup
                2.0.40.0: BLLAN
                3.0.0.48: EasyTrigger
                30.0.0.36: RFXCOM
                3.0.6.2: SDJ-Health
                3.0.0.87: weatherXML
                3.0.1.190: Z-Wave



                  #9
                  Originally posted by lveatch View Post

                  HA the NAS: cluster failover to another NAS where you are providing storage replication.
                  Use a parallel file system like GPFS or Lustre, which are distributed/parallel systems without single points of failure. No master to fail, either. Now you're in my playground.



                    #10
                    I think it's a lot easier to bullet-proof the Linux box. There are too many details involved, e.g., the Z-Wave controller status or events staged for later execution, to do a true failover. My Ubuntu box uses SSD drives in a mirror configuration, has multiple fans, two power supplies, a big UPS, and there's a standby generator.

                    The biggest problem is a restart required because of a security update. The biggest risk is an application that fails and corrupts the system. The system is set up to restart automatically in the event of failure (sketched below), and it's a lean-mean startup. Still, there's the potential for some loss of state information. HS3 does have the capability to rerun events after a restart, which, if used judiciously, can restore most state information. It's not true high availability, but it is an adequate approach for most purposes.
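
                    For the automatic-restart piece, a minimal sketch of a systemd unit, assuming HS3 is installed in /opt/HomeSeer and launched via mono (unit name and paths are assumptions):

                      # /etc/systemd/system/homeseer.service -- restart HS3 if it dies
                      [Unit]
                      Description=HomeSeer HS3
                      Wants=network-online.target
                      After=network-online.target

                      [Service]
                      WorkingDirectory=/opt/HomeSeer
                      ExecStart=/usr/bin/mono /opt/HomeSeer/HSConsole.exe
                      Restart=always
                      RestartSec=10

                      [Install]
                      WantedBy=multi-user.target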



                      #11
                      Having redundant hardware or OS isn't the hard part. The hard part is how you get the Z-Wave USB stick to auto-switch, i.e., having HS3 able to communicate with two different USB sticks without user involvement.

