Any value in having more than 1 Hubitat hub?


    #16
    Originally posted by mik3y View Post

    Yep, there are options for sure. But as you stated, if one hub slows down, buy another hub. Why? I have HomeSeer. I can throw a ton of resources at it under ESXi. No need for multiple hubs.
    Well, a key point is being missed here. HS is not a hub; it is software that connects to field devices through interfaces (like a ZNet) or APIs from device providers, and it uses plugins to speak the different protocols. Some are built in, like the HS Z-Wave plugin (even though it can be started or stopped like any other plugin; many are free, but you still need to install them). But if you are using, say, HS3/HS4 and you don't have an interface (like a ZNet or an API) communicating with the devices and handing them off to the HS Z-Wave plugin (or another plugin) to make them useful to HS3/HS4, the software is useless. Sure, you don't need the Z-Wave plugin if you don't have Z-Wave devices, but whatever devices you want to use in HS3/HS4, how do you get them in there? Plugins.

    So look at the Hubitat (HT) hub. Currently you are using the hub, with its built-in radios, for Zigbee and Z-Wave communication to field devices (device handlers/apps play the role of HS plugins in this example). You are processing all of that through that little hub, along with all your rules; it's all in there. In HS3/HS4 your computer does that work; in HT it all happens in that little hub. You don't need a computer for HT (beyond setting it up and making changes); in fact I've done a couple of simple automation systems using an HT hub with no full-time computer connected at all (shut down your HS3/HS4 computer and see where that gets you). You can also see the same issues running the HomeTroller Zee S2, which is a lot like the HT hub: it has a Z-Wave HAT on the Pi controller so it can process Z-Wave devices, but it is limited to 5 plugins, minus Z-Wave, so say 4 plugins (others have reported using more; good for them, but I'll bet they don't have a lot of devices or events either). What happens when your S2 slows way down? You move up to HS3/HS4, the only choice. I guess there might be a way to run multiple S2s, but in this example I think the second one would be used as a ZNet for devices (not sure, just a guess), since the S2 can be used as just a ZNet (forgetting your S2 license) if you rewrite the SD card inside it.

    Again, the bottom line is that you are asking a lot from that HT hub; the more you load it up, the more processing it has to do. Don't believe me? Try an S2 with the 10 apps and 25 devices you have on the HT hub and tell me how that goes! My advice (not worth much anyway): buy another HT hub. You'll be a happy camper!



      #17
      Originally posted by Bigstevep View Post

      Never tried an S2. You make a good point. Maybe I am asking too much of the hub, and my needs require a dedicated system like I'm using today. Which is fine; my server is running 24/7 anyhow.

      If I wasn't seeing motion lighting delays and choppy web responses, I likely would never have looked elsewhere.

      I personally don't know where to draw the line between needing a hub vs. HomeSeer running on a dedicated server.

      I just want fast motion lighting!

      As of right now, I only have Maker API installed on the hub. I have a backup of Hubitat stored away in case I want it.



        #18
        Originally posted by mik3y View Post

        There's a huge difference in overall performance between a full system running on a PC and a "hub". However, keeping this within the context of motion lighting, the problem with HE is not a lack of hardware power; it's the giant overhead of the Rule Machine engine. This is well documented by HE staff and the developer of Rule Machine. It is why there's Motion Lighting, and now Simple Automation, and why many went in the direction of writing micro-apps to do small tasks very quickly. This inefficiency is also what has driven many people to use other systems for their automation. There's a huge group migrating automation to Node-RED, even die-hard HE fans. Another group moved to Home Assistant, and then there's you guys running HS and the plugin.



          #19
          Originally posted by simplextech View Post

          Yes, I've picked up on those groups.

          I put together some Node-RED events (I don't know the proper terminology). It can be done to improve automations, but it's definitely not easy on the eyes.

          Can you elaborate on what you mean by overhead? I don't fully understand. The event engine, from what I understand, is essentially the same thing. Wouldn't overhead exist for HomeSeer too, then?



            #20
            Originally posted by mik3y View Post

            The overhead is the amount of resources/time the underlying engine requires to load, process, and execute the rule. The language the underlying engine is developed in and the methodology of the automation are factors as well.

            Hubitat's Rule Machine is just another Groovy app running within the hub's resources. You have likely noticed that the first time a rule runs it's pretty slow, and when it runs again (within a time period) it is faster, but not by a huge amount. The first-time slowness is because the rule has to be read, processed, and then executed to perform the actions. The next time it's triggered, it's already in memory, so it can skip the read/process steps and go straight to execution. Being an interpreted language, though, this still takes longer than other systems. Part of the overhead in Rule Machine is part of what makes it good: the complexity that rules can express adds to the amount of data that has to be read, processed, validated, and ultimately executed upon, and this all happens live each time. This overhead (complexity, capability, whichever you want to call it) is why Simple Automation, formerly Simple Lighting, is faster: it's far, far less complex, with far less capability, so it has less data to load and process before executing. To get even faster, there are the micro-apps that have none of the complexities and only do one thing. So they are very fast.
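To make the first-run vs. cached distinction concrete, here's a rough Python sketch (Python standing in for Groovy; the rule text and the two-path "engine" are invented for illustration, not Hubitat's actual internals):

```python
# A toy "rule" as source text, the way a rule engine might store it.
RULE_SOURCE = """
if motion == "active" and lux < 50:
    action = "turn_on"
else:
    action = "none"
"""

def run_rule_uncached(motion, lux):
    # First-run path: read + parse + compile the rule text on every trigger.
    code = compile(RULE_SOURCE, "<rule>", "exec")
    scope = {"motion": motion, "lux": lux}
    exec(code, scope)
    return scope["action"]

# Cached path: the rule is compiled once and kept in memory...
_CACHED_RULE = compile(RULE_SOURCE, "<rule>", "exec")

def run_rule_cached(motion, lux):
    # ...so a trigger skips the read/compile steps and goes straight to execution.
    scope = {"motion": motion, "lux": lux}
    exec(_CACHED_RULE, scope)
    return scope["action"]
```

Both paths give the same answer; the cached one just has less to do per trigger, which is the effect being described.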

            In comparison, HS is not interpreted but is a compiled executable, with everything already read, processed, and loaded, just waiting: waiting for the trigger to simply validate conditions and execute. Very little overhead during execution. This is not saying there isn't overhead, though. HS runs on a PC, which has the whole OS and .NET framework as overhead. The main difference is that it's already booted and running, and once HS is running everything is just there, ready. HS does require more resources, which is why the rPi version has a 5-plugin limit: a purposeful limitation put in place to prevent users from loading up too many plugins, using too many resources, and killing the system. In today's world, with the Pi 4, I think the 5-plugin limit is too small; if HS really wants to compete with "hubs", it should just remove the limit and let users shoot themselves in the foot.
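The "already loaded, just waiting" model reads roughly like this in Python (the registration functions and event shape are invented for illustration; this is not HS's actual event API):

```python
# Event engine sketch: everything is registered up front at "startup",
# so a trigger only has to validate conditions and execute actions.
handlers = {}  # event type -> list of (condition, action) pairs

def on(event_type, condition, action):
    # Registration happens once, before any triggers arrive.
    handlers.setdefault(event_type, []).append((condition, action))

def fire(event_type, event):
    # Nothing to read or parse at trigger time: walk the pre-registered
    # pairs, validate each condition, and run the matching actions.
    return [action(event)
            for condition, action in handlers.get(event_type, [])
            if condition(event)]

# Registered once; the engine then sits waiting for triggers.
on("motion",
   lambda e: e["state"] == "active",
   lambda e: "light on in " + e["room"])
```

Compared with the previous sketch, the per-trigger work here is only condition checks and action calls; there is no rule text left to load.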

            Node-RED is faster than Rule Machine for very similar reasons. Even though JavaScript is interpreted, there's far less to read, process, and execute. Each Node-RED flow is generally single-purpose; visually it looks like a whole lot of stuff, but that's only visual. Taken down into code form, its processing/execution time is barely anything.
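As a rough illustration of that point: a flow that looks busy on the canvas reduces to a chain of tiny functions passing a message object along (node names invented; Python standing in for Node-RED's JavaScript):

```python
# Each "node" is a small function that takes a message dict and returns it.
def motion_node(msg):
    msg["triggered"] = msg.get("payload") == "active"
    return msg

def lux_gate(msg):
    msg["dark"] = msg.get("lux", 100) < 50
    return msg

def light_node(msg):
    msg["command"] = "on" if msg["triggered"] and msg["dark"] else "none"
    return msg

def run_flow(msg, nodes=(motion_node, lux_gate, light_node)):
    # The "wires" between nodes are just this loop.
    for node in nodes:
        msg = node(msg)
    return msg
```

Three boxes and two wires on screen; a handful of dictionary operations at runtime.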

            One thing about hubs that allow loading of drivers, apps, etc. is that they all suffer from the same thing. I'll call it the "Vera Effect", because the Vera platform was one of the first widely sold, available, and used hubs, and it was always "local" (so don't buy into the Hubitat hype of being "the first" or "the only" local hub; Vera has been out there LONG before Hubitat). Now, the Vera Effect is a very logical thing that most just ignore. You have a hub with finite processing capabilities. You give users the ability to load whatever they want... and users do just that. They load every app and driver they can, to play and tinker with, from questionable sources of questionable quality, and then the hub starts having problems and suddenly it's the hub's fault? Uh, no. It's a user-created problem of pushing the hub to do something it wasn't meant to do. Now, I do like Hubitat; it's young, growing, and very capable. Could it be better? Yup. Will it be? Yup, in time. For now, understanding the limits and how to work around them is crucial. I'm critical of systems I like and think have potential and/or are missing out on their potential, so you'll see me being critical of both HS and Hubitat. If I didn't like the systems I wouldn't bother posting on either forum. I just think one isn't listening at all and the other has too many internal issues... or maybe that's both?



              #21
              Originally posted by simplextech View Post

              Thanks for this response.

              Maybe I'm just arrogant, but don't you think the Groovy applications could process faster if run on, say, a PC or a faster hub?

              The way you described how an app is reloaded into memory when called upon is pretty standard for all computer applications. Meaning, if the hub had more RAM, the application could be held in memory longer, equating to faster load and processing times. Same thing regarding a faster processor: the better the CPU, the faster it can get through all that logic.

              And a smaller app written in pure Groovy code means less logic to process through, and it is more likely to be held in memory longer, equating to faster response times.

              As for HS, if I'm understanding correctly, once you create your event, it's compiled and ready to react. With Hubitat, each time a rule is called upon, it needs to process through the entire code from scratch, as if it doesn't know what the outcome will be: similar to a script or batch file. Hence an interpreted language?

              This could explain why Bruce has stated that an upgraded hub with better hardware would yield only a marginal increase in speed for more cost.

              Maybe Groovy isn't a language that should be considered for automation? At least not for motion lighting and applications that require instant responses; at least not an interpreted language. Then again, Node-RED runs on an interpreted language and you see faster response times. Better hardware ...



                #22
                Originally posted by mik3y View Post

                To your first question: could more hardware be added? Yes, and to a certain degree this would "hide" the current issues, but they would still exist, and because they are only hidden, not resolved, they would resurface as rules got larger and more complex.

                You answered your own question in the second half when you compared it to a script. That's pretty much what Groovy is: a scripting language that runs on top of the Java JRE. Each time the script is called it's "compiled" and executed. There's some caching that occurs, which helps performance, but apparently not enough. Perhaps the answer would be cache tuning of the system JRE? Maybe. I don't know.

                I don't think it's inherently a problem with interpreted languages. MisterHouse was written in Perl and was a great system for years. Home Assistant is written in Python and is a lean and fast system. I don't think it's specifically a Groovy problem, but I don't know; it could be.



                  #23
                  Originally posted by simplextech View Post

                  Good discussion. So you think the slowness and issues are within their built-in applications, then? Saying that throwing more hardware at it wouldn't fix the issue tells me you think it's software related and not hardware at all?



                    #24
                    Originally posted by mik3y View Post

                    I don't think it's hardware.



                      #25
                      Originally posted by simplextech View Post

                      If this is true, which is what most have expressed on their forums, it's been going on for a year.

                      I'm only using Maker API to tie back to HomeSeer. I can confirm it's software if my hub doesn't slow down to a crawl in the next few days.

                      I'm going to remove my reboot event.



                        #26
                        Originally posted by mik3y View Post

                        I don't think Maker API is processing a lot, since it is really just exposing data to the HS plugin in this case; correct, simplextech? What I'm getting at is that the Maker URL is just a link into the Hubitat, exposing devices (details, etc.) through a web endpoint for the connection. That endpoint doesn't change until you add devices and update/sync with Maker. The devices update through the already-established connection in the HS plugin when Hubitat passes those changes through Maker to the HS plugin. Am I correct in my thinking, or is it something else?
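For context, the Maker API endpoints being described are plain HTTP URLs served by the hub, along the lines of the sketch below (the hub address, app instance id, token, and sample payload here are placeholders, not from this thread; check the Maker API docs for the exact paths):

```python
import json
from urllib.parse import urlencode

HUB_IP = "192.168.1.50"      # placeholder hub address
APP_ID = 7                   # placeholder Maker API app instance id
TOKEN = "your-access-token"  # placeholder access token

def maker_url(path):
    # Maker API URLs follow the pattern /apps/api/<app id>/<path>?access_token=...
    return "http://%s/apps/api/%d/%s?%s" % (
        HUB_IP, APP_ID, path, urlencode({"access_token": TOKEN}))

# The device-list endpoint returns JSON along these lines (abbreviated sample):
SAMPLE_PAYLOAD = '[{"id": "12", "label": "Hall Motion", "type": "Generic Zigbee Motion Sensor"}]'

def device_labels(payload):
    # The consuming side (the HS plugin here) only has to parse this payload;
    # the hub did the device communication.
    return [device["label"] for device in json.loads(payload)]
```

Which supports the point above: serving and parsing a small JSON payload is light work compared to running rules.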



                          #27
                          Originally posted by Bigstevep View Post

                          That's correct.

                          So any processing is going to be related to the OS on the hub only; no other apps will be loaded. If I deem the hub stable after my testing, I would assume it's safe to point the finger at apps as the cause of the slowdowns. Which is what I already suspect, if it's software related.

