Plugin backing up after a while for device values

  • #1

    I have a home energy monitor that beacons every 1-2 s with several values. I'm slowly transitioning from the other MQTT plugin to the mcsMQTT plugin and created a handful of devices for this energy monitor. I noticed that after running for a day yesterday, the values on the devices mcsMQTT manages for the energy monitor were off (today, after 20 minutes, they were already starting to drift). I'm using an Odroid XU4 for my HS box (an "8"-core ARM board); am I overdriving the plugin on this processor? The other plugin keeps up, but I don't think it's anywhere near as advanced as yours, so it's not fair to compare them that way. Running htop, I can see the HSPI_MCSMQTT.exe process sitting around 20-30% CPU usage (which is actually less than the other MQTT plugin).

    Thanks!

    -Mike

  • #2
    Look at the Statistics tab. At the bottom it shows how deep the receive queue is, as well as the average input and output times for the queue. If the input rate is greater than the output rate, the queue will grow. Typically the max queue depth occurs when the plugin starts and the broker delivers the messages that have been waiting for the client to come online.

    I'm running an Odroid C1 on Debian DietPi and have an average processing time of 292 milliseconds, with most of my messages being JSON with 10 or so elements. htop will burst to around 10% when receiving a group of messages, and otherwise sits at 0.5% for mcsMQTT. HS3 is typically around 2% with bursts near 100%, but in general the C1 is not being taxed. mcsMQTT runs in about 20 threads, but receive processing is likely serial within a single thread, so adding cores for mcsMQTT will not help.

    In the early days of mcsMQTT much was done to improve initialization times and, in general, to understand resource utilization. This can be revisited if the opportunity exists. Originally I did not queue the received messages, and that tended to crash during startup under the broker's burst.
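The growth condition described above can be put into a quick model. This is my own illustrative sketch, not the plugin's actual code, and the numbers plugged in are just examples:

```python
# Minimal FIFO queue-depth model: constant arrival rate vs. fixed service time.
# All figures are illustrative, not measured from mcsMQTT.

def queue_depth_after(seconds, arrivals_per_sec, service_ms, start_depth=0):
    """Approximate queue depth after `seconds` of steady traffic."""
    drained_per_sec = 1000.0 / service_ms            # max messages processed per second
    growth_per_sec = arrivals_per_sec - drained_per_sec
    return max(0.0, start_depth + growth_per_sec * seconds)

# 9 msgs/s arriving against ~292 ms average processing (~3.4 msgs/s drained):
print(queue_depth_after(7200, 9, 292))   # backlog after 2 hours keeps growing
```

Once the arrival rate drops below the drain rate, the modeled depth goes back to zero, which is the recovery behavior the queue statistics should show.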

    • #3
      Thanks, appreciate the quick response. I just dropped my message reporting from the power monitor from once a second to every 5 seconds, and the queues are slowly catching up. I noticed that the other MQTT plugin was off too after all for that monitor, so it wasn't processing in real time either. I was sending 9 events per second, and that seemed to be backing it up pretty quickly (in 2 hours it was over 20k in queue depth, and both depth and max size were growing constantly). I had been spoiled, as I'm using Telegraf with InfluxDB to handle my MQTT events on a dedicated box, so it had no problem keeping up. I may adjust the energy monitor script I'm using to not send all 9 messages so frequently, since only 2 values change often; the rest can be sent in groups later.
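One way to cut the publish rate at the source is to send a value only when it changes. This is a generic sketch, not the actual monitor script; the topic name is made up and `publish` is a stand-in for a real MQTT client call (e.g. a paho-mqtt publish):

```python
# Publish a reading only when its value differs from the last one sent.
# `publish` stands in for a real MQTT client call; the topic is hypothetical.

class ChangeOnlyPublisher:
    def __init__(self, publish):
        self._publish = publish   # callable(topic, value)
        self._last = {}           # last value sent per topic

    def send(self, topic, value):
        if self._last.get(topic) != value:
            self._last[topic] = value
            self._publish(topic, value)

sent = []
pub = ChangeOnlyPublisher(lambda t, v: sent.append((t, v)))
pub.send("energy/watts", 450)
pub.send("energy/watts", 450)   # unchanged, skipped
pub.send("energy/watts", 460)
print(len(sent))                # 2 messages actually published
```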

      • #4
        FYI, I modified my power monitor script to send only 2 events per second for testing, and the other 7 events every 30 seconds or so (I'll probably stretch that further, as I don't need them that often), and it has no problem keeping up. Before, that was at minimum 9 events per second plus any other events that came in every 30-120 s, so it was obviously backing things up. The queues now look MUCH better, staying at 0 most of the time for Current Queue Depth with a max size of 65, versus the 27,000+ I saw after 2 hours today. For reference, Average Receive Milliseconds is now sitting around 330 versus the 105 I was pushing earlier.
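The split described above (2 fast values every second, the other 7 every ~30 s) can be sketched as a small scheduler. This is an illustrative sketch with made-up topic names, not the actual monitor script:

```python
# Tiered publishing: fast-changing topics on every 1 s tick, slow ones every 30 s.
SLOW_INTERVAL = 30.0  # seconds between publishes of the slow group

class TieredScheduler:
    def __init__(self, fast_topics, slow_topics):
        self.fast = list(fast_topics)
        self.slow = list(slow_topics)
        self._last_slow = None    # time the slow group was last published

    def due_topics(self, now):
        """Return topics to publish at time `now` (called once per second)."""
        due = list(self.fast)     # fast topics go out on every tick
        if self._last_slow is None or now - self._last_slow >= SLOW_INTERVAL:
            self._last_slow = now
            due += self.slow      # slow topics ride along every 30 s
        return due

sched = TieredScheduler(["power/watts", "power/amps"],
                        ["power/volts", "power/freq"])
print(sched.due_topics(0.0))   # first tick: both groups
print(sched.due_topics(1.0))   # next tick: fast topics only
```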

        • #5
          Oh, I did the math; it's a lot of messages. Besides all the sensors pushing their own values every 30-60 s, I use WeeWX for my weather station, which also dumps 30+ device statuses every 60 s. So in theory, if everything lined up, I could have 100+ events come in over the course of a second once a minute, on top of the ~10 events every second that were coming in consistently.
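The back-of-the-envelope rate works out roughly like this, using the estimates above (these counts are the poster's figures, not measurements):

```python
# Rough message-rate arithmetic from the estimates above.
steady_per_sec = 9        # energy monitor events, before throttling
weewx_per_min = 30        # WeeWX device updates, delivered once per minute

average_rate = steady_per_sec + weewx_per_min / 60
print(average_rate)       # ~9.5 msgs/s sustained, plus 100+ message bursts each minute
```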
